# Polarization-dependent beam shifts upon metallic reflection in high-contrast imagers and telescopes

R. G. van Holstein, C. U. Keller, F. Snik, S. P. Bos

Published 2023-08-21. [arXiv:2308.10940v2](http://arxiv.org/abs/2308.10940v2)
###### Abstract
Context: To directly image rocky exoplanets in reflected (polarized) light, future space- and ground-based high-contrast imagers and telescopes aim to reach extreme contrasts at close separations from the star. However, the achievable contrast will be limited by reflection-induced polarization aberrations. While polarization aberrations can be modeled with numerical codes, these computations provide little insight into the full range of effects, their origin and characteristics, and possible ways to mitigate them.
Aims: We aim to understand polarization aberrations produced by reflection off flat metallic mirrors at the fundamental level.
Methods: We used polarization ray tracing to numerically compute polarization aberrations and interpret the results in terms of the polarization-dependent spatial and angular Goos-Hänchen and Imbert-Fedorov shifts of the beam of light as described with closed-form mathematical expressions in the physics literature.
Results: We find that all four beam shifts are fully reproduced by polarization ray tracing. We study the origin and characteristics of the shifts as well as the dependence of their size and direction on the beam intensity profile, incident polarization state, angle of incidence, mirror material, and wavelength. Of the four beam shifts, only the spatial Goos-Hänchen and Imbert-Fedorov shifts are relevant for high-contrast imagers and telescopes because these shifts are visible in the focal plane and create a polarization structure in the point-spread function that reduces the performance of coronagraphs and the polarimetric speckle suppression close to the star.
Conclusions: Our study provides a fundamental understanding of the polarization aberrations resulting from reflection off flat metallic mirrors in terms of beam shifts and lays out the analytical and numerical tools to describe these shifts. The beam shifts in an optical system can be mitigated by keeping the f-numbers large and angles of incidence small. Most importantly, mirror coatings should not be optimized for maximum reflectivity, but should be designed to have a retardance close to 180\({}^{\circ}\). The insights from our study can be applied to improve the performance of SPHERE-ZIMPOL at the VLT and future telescopes and instruments such as the Roman Space Telescope, the Habitable Worlds Observatory, GMagAO-X at the GMT, PSI at the TMT, and PCS (or EPICS) at the ELT.
## 1 Introduction
To directly image rocky exoplanets in (polarized) reflected visible and near-infrared light, future space telescopes and extremely large ground-based telescopes and instruments aim to reach extreme planet-to-star contrast ratios at diffraction-limited angular separations from the star. Even though the optical systems of these high-contrast imagers will minimize scalar aberrations, the coronagraphic performance and achievable contrast will still be limited by polarization aberrations (e.g., Chipman, 1989; McGuire & Chipman, 1990; Sanchez Almeida & Martinez Pillet, 1992; McGuire & Chipman, 1994, 1995; Breckinridge et al., 2015). Polarization aberrations are minute, polarization-dependent variations of the amplitude and phase of the electromagnetic field across a beam of light that result in a polarization structure in the point-spread function (PSF). Polarization aberrations are predominantly caused by reflection off oblique and/or curved metallic mirrors and originate directly from the Fresnel reflection coefficients. The first-order polarization aberrations, that is, the sub-wavelength, polarization-dependent shifts of the beam of light, most negatively affect the achievable contrast. Because polarization aberrations are different for orthogonal polarization components of unpolarized light, adaptive optics cannot fully correct these aberrations (Breckinridge et al., 2015).
Recently, it has become clear that high-angular-resolution polarimeters are also affected by polarization aberrations. The polarization aberrations of the Gemini South telescope appear to be limiting the polarimetric contrast achieved by the Gemini Planet Imager at the smallest angular separations from the star (Millar-Blanchaer et al., 2022). Moreover, the polarimetric speckle suppression of the high-contrast imaging polarimeter SPHERE-ZIMPOL at the Very Large Telescope, which is specifically designed to search for the reflected, polarized visible light of giant exoplanets, is limited by reflection-induced, polarization-dependent beam shifts (Schmid et al., 2018). Such shifts also affect interferometric polarization measurements with the Speckle Polarimeter at the Sternberg Astronomical Institute 2.5-m telescope (Safonov et al., 2019). The beam shifts become apparent for these instruments due to the unprecedented polarimetric sensitivity and spatial resolution they achieve.
The polarization aberrations of an astronomical telescope and instrument can be numerically computed with polarization ray tracing (Breckinridge et al., 2015). First, the paths of the rays of light are traced through the optical system using geometrical
optics, but instead of the intensity, the electric field components of the rays are computed upon each reflection or transmission (e.g., Waluschka, 1989; Chipman, 1989; Yun et al., 2011, 2012). Each point in the exit pupil is then associated with a Jones matrix. In this way, the Jones pupil, which maps the changes in the electric fields between the entrance and exit pupils of the system, is calculated (Totzeck et al., 2005). Finally, the intensity in the focal plane (i.e., the PSF) is computed in the Fraunhofer approximation through spatial Fourier transforms over the Jones pupil. Several studies have used polarization ray tracing to model the polarization aberrations of proposed and future high-contrast imagers and telescopes, such as the Roman Space Telescope (Krist et al., 2017), HabEx (Davis et al., 2018; Breckinridge et al., 2018), LUVOIR (Sabatke et al., 2018; Will & Fienup, 2019), PICTURE-C (Mendillo et al., 2019), and the three extremely large telescopes (Anche et al., 2018, 2023). However, these numerical computations give little insight into the full range of aberrations, their origin and characteristics, and the relative importance of amplitude and phase effects.
Breckinridge et al. (2015) use polarization ray tracing to analyze a three-mirror system consisting of a Cassegrain telescope followed by a flat fold mirror, and find two beam-shift effects that both originate from the oblique reflection off the flat mirror. The authors find phase gradients (i.e., wavefront tilts) in the Jones pupil that have opposite directions for the linearly polarized components parallel and perpendicular to the plane of incidence of the fold mirror. In the focal plane, these gradients cause the orthogonally polarized components of the PSF to shift in opposite directions, thereby broadening the resulting PSF in intensity. Furthermore, the authors find PSF components that couple the light from one orthogonal polarization into the other. These PSF components, which they call ghost PSFs, have two peaks, one on either side of the plane of incidence.
Sub-wavelength, polarization-dependent shifts of a beam of light induced by reflection off a flat metallic mirror are also extensively described in the physics literature (for overviews, see Aiello & Woerdman, 2008; Götte & Dennis, 2012; Bliokh & Aiello, 2013). These shifts are referred to as the Goos-Hänchen (GH) and Imbert-Fedorov (IF) shifts and occur in the directions parallel and perpendicular to the plane of incidence, respectively. Both shifts are further divided into a spatial and an angular shift. The spatial shifts are displacements of the entire beam of light upon reflection, and the angular shifts refer to angular deviations of the beam upon reflection. As such, the four shifts are considered first-order corrections to the laws of geometrical optics due to diffraction within a beam of light of finite width; the Fresnel equations only apply to infinitely extended interfaces, and a correct description of light reflected off an interface must therefore take into account the finite beam size. The GH and IF shifts are derived from first principles through full diffraction calculations and are described using closed-form mathematical expressions specifying the centroid of the intensity of a reflected Gaussian beam (e.g., Aiello & Woerdman, 2007, 2008). All four shifts have been experimentally validated for metallic reflections (Merano et al., 2007; Aiello et al., 2009; Hermosa et al., 2011). Schmid et al. (2018) show in their analysis of the beam shifts of SPHERE-ZIMPOL that the spatial GH shift is likely the same as the shift arising from phase gradients in the Jones pupil as described by Breckinridge et al. (2015).
In this paper, we aim to understand polarization aberrations produced by reflection off flat metallic mirrors at the fundamental level and seek to unify the two views of the beam shifts from polarization ray tracing and full diffraction calculations in the physics literature. To this end, we determine the beam shifts from the polarization ray tracing of the reflection of a beam of light with a uniform (or top-hat) intensity profile (as applies to astronomical telescopes and instruments), and compare the resulting shifts to the spatial and angular GH and IF shifts as predicted by the closed-form expressions derived for Gaussian beams. We investigate whether the GH and IF shifts are reproduced by polarization ray tracing or whether they are additional effects that we need to take into account for astronomical instruments. In addition, we study the origin and characteristics of the shifts and determine how the size and direction of the shifts depend on the beam intensity profile, incident polarization state, angle of incidence, mirror material, and wavelength. Finally, we examine how these shifts affect the performance of high-contrast imagers and how we can mitigate them in (future) diffraction-limited astronomical telescopes and instruments.
The outline of this paper is as follows. In Sect. 2 we describe the conventions and definitions of the mathematics used throughout the paper. Subsequently, in Sect. 3, we outline the polarization ray tracing of the reflection of a beam of light off a flat metallic mirror and the determination of the beam shifts. In Sect. 4 we then explain the origin and characteristics of the spatial and angular GH and IF shifts and their relation to shifts found using polarization ray tracing. We also show the dependence of the size and direction of the shifts on the incident polarization state and angle of incidence. In Sect. 5 we investigate the polarization structure in the PSF induced by the beam shifts and the effect of the beam shifts on polarimetric measurements. In the same section we also examine the size of the beam shifts for various mirror materials and wavelengths, and discuss and refine the approaches to mitigate the beam shifts. Finally, we show a table summarizing the properties of the four beam shifts at the end of Sect. 5 and present conclusions in Sect. 6.
## 2 Conventions and definitions
In this section, we outline the conventions and definitions used throughout this paper. In the literature, the mathematical definitions underlying the descriptions of polarization aberrations and beam shifts are often incomplete and not consistent among different studies. This can lead to errors in the physical interpretation, for example with the handedness of the circular polarization or the direction of the beam shifts. We therefore describe our definitions extensively and have carefully checked our equations for consistency. As such, this paper provides a complete reference for the correct computation of the polarization aberrations and beam shifts. To enable easy comparison of our results with those from the physics literature, we use the same definitions as Aiello & Woerdman (2007), Merano et al. (2007), Aiello & Woerdman (2008), Aiello et al. (2009), and Hermosa et al. (2011). For the description of the polarization of light, these definitions are consistent with the definitions adopted by the International Astronomical Union (see e.g., Hamaker & Bregman, 1996). We present the mathematics to describe light and its polarization in Sect. 2.1 and discuss metallic reflection in Sect. 2.2.
### Polarization of light
We shall consider a monochromatic, polarized light wave propagating in the positive \(z\)-direction of a Cartesian reference frame (or basis) \(xyz\) as shown in Fig. 1. The transverse electric field components of this light wave in the vertical \(x\)- and horizontal \(y\)-directions can then be described as follows (see e.g., Born &
Wolf 2013):
\[\tilde{E}_{x}(z,t) =A_{x}\cos\left(kz-\omega t+\varphi_{x}\right)=\mathrm{Re}\left[A_{x }\mathrm{e}^{\mathrm{i}\varphi_{x}}\mathrm{e}^{\mathrm{i}(kz-\omega t)}\right], \tag{1}\] \[\tilde{E}_{y}(z,t) =A_{y}\cos\left(kz-\omega t+\varphi_{y}\right)=\mathrm{Re}\left[A _{y}\mathrm{e}^{\mathrm{i}\varphi_{y}}\mathrm{e}^{\mathrm{i}(kz-\omega t)} \right], \tag{2}\]
where \(t\) is time, \(\omega>0\) is the angular frequency, \(k=2\pi/\lambda\) is the wave number with \(\lambda\) the wavelength, \(A_{x}\) and \(A_{y}\) are the amplitudes, \(\varphi_{x}\) and \(\varphi_{y}\) are the initial phases, \(\mathrm{Re}[\dots]\) denotes the real part, and \(\mathrm{i}\) is the imaginary unit. On the right side of Eqs. (1) and (2), the factor \(\exp\left[\mathrm{i}(kz-\omega t)\right]\) only describes the propagation of the light wave. The polarization of the wave can therefore be described by a Jones vector \(\@vec{E}\):
\[\@vec{E}=\begin{bmatrix}E_{x}\\ E_{y}\end{bmatrix}=\begin{bmatrix}A_{x}\mathrm{e}^{\mathrm{i}\varphi_{x}}\\ A_{y}\mathrm{e}^{\mathrm{i}\varphi_{y}}\end{bmatrix}, \tag{3}\]
where \(E_{x}\) and \(E_{y}\) are the complex electric field components.
As an alternative way to describe the polarization, we can define a set of Stokes parameters (see Fig. 1):
\[I =E_{x}E_{x}^{*}+E_{y}E_{y}^{*} =A_{x}^{2}+A_{y}^{2} =I_{x}+I_{y}=I_{d}+I_{a}=I_{r}+I_{l}=1, \tag{4}\]
\[Q =E_{x}E_{x}^{*}-E_{y}E_{y}^{*} =A_{x}^{2}-A_{y}^{2} =I_{x}-I_{y}, \tag{5}\]
\[U =E_{x}E_{y}^{*}+E_{y}E_{x}^{*} =2A_{x}A_{y}\cos\delta =I_{d}-I_{a}, \tag{6}\]
\[V =\mathrm{i}\left(E_{x}E_{y}^{*}-E_{y}E_{x}^{*}\right) =2A_{x}A_{y}\sin\delta =I_{r}-I_{l}, \tag{7}\]
where the asterisk denotes the complex conjugate, \(\delta=\varphi_{y}-\varphi_{x}\) is the phase difference between the \(y\)- and \(x\)-components of the electric field, and \(I_{x}\) and \(I_{y}\) are the intensities of the \(x\)- and \(y\)-components of the electric field. The variables \(I_{d}\) and \(I_{a}\) are the intensities of the \(d\)- and \(a\)-components in the basis of the diagonal and antidiagonal polarizations, \(daz\), and \(I_{r}\) and \(I_{l}\) are the intensities of the \(r\)- and \(l\)-components in the basis of the right-handed and left-handed circular polarizations, \(rlz\) (see Fig. 1). Stokes \(I\) is the total intensity, positive (negative) Stokes \(Q\) describes linear polarization in the vertical \(x\)-direction (horizontal \(y\)-direction), positive (negative) Stokes \(U\) describes linear polarization in the diagonal (antidiagonal) direction, \(45^{\circ}\) counterclockwise (clockwise) from the \(x\)-direction, and positive (negative) Stokes \(V\) describes right-handed (left-handed) circular polarization. Whereas the \(xyz\)-basis is the natural basis of Stokes \(Q\), the \(daz\)- and \(rlz\)-bases are the natural bases of Stokes \(U\) and \(V\), respectively. Because we normalize the total intensity, that is, we set \(I=1\) in Eq. (4), \(Q\), \(U\), and \(V\) have values between \(1\) and \(-1\). We note that Eqs. (4)-(7) are strictly speaking only valid for 100% polarized, monochromatic light. However, for quasi-monochromatic light, whether 100% polarized, partially polarized, or unpolarized, we simply need to take the time averages over the terms in the equations.
From Eqs. (4) and (5), we can derive expressions for the intensities of the \(x\)- and \(y\)-components of the electric field:
\[I_{x} =\frac{1+Q}{2}, \tag{8}\] \[I_{y} =\frac{1-Q}{2}. \tag{9}\]
Although these two equations are simple, they are important, and we use them in all closed-form expressions for the beam shifts in Sect. 4. Finally, we assemble the Stokes parameters in a Stokes vector \(\@vec{S}\):
\[\@vec{S}=\begin{bmatrix}I\\ Q\\ U\\ V\end{bmatrix}, \tag{10}\]
and define the degree of linear polarization \(P\) (which for \(I=1\) is equal to the linearly polarized intensity) and angle of linear polarization \(\chi\) (see Fig. 1) as follows:
\[P =\sqrt{Q^{2}+U^{2}}, \tag{11}\] \[\chi =\frac{1}{2}\arctan\left(\frac{U}{Q}\right). \tag{12}\]
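These definitions are easy to check numerically. A minimal Python sketch (our own illustrative code, not part of the paper) computes the Stokes parameters of Eqs. (4)-(7) from a Jones vector and confirms the handedness convention of Fig. 1, in which \(\delta=\varphi_{y}-\varphi_{x}=+90^{\circ}\) corresponds to right-handed circular polarization (\(V=+1\)):

```python
import numpy as np

def stokes_from_jones(E):
    """Stokes parameters of a 100% polarized wave from its Jones vector;
    a direct implementation of Eqs. (4)-(7)."""
    Ex, Ey = E
    I = (Ex * np.conj(Ex) + Ey * np.conj(Ey)).real
    Q = (Ex * np.conj(Ex) - Ey * np.conj(Ey)).real
    U = (Ex * np.conj(Ey) + Ey * np.conj(Ex)).real
    V = (1j * (Ex * np.conj(Ey) - Ey * np.conj(Ex))).real
    return np.array([I, Q, U, V])

# Linear polarization at chi = 45 deg (diagonal) gives U = +1 ...
print(stokes_from_jones(np.array([1.0, 1.0]) / np.sqrt(2)))   # [1, 0, 1, 0]
# ... and E_y leading E_x by 90 deg (delta = +90 deg) gives V = +1,
# i.e., right-handed circular polarization in this convention.
print(stokes_from_jones(np.array([1.0, 1.0j]) / np.sqrt(2)))  # [1, 0, 0, 1]
```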
### Metallic reflection
Using this mathematically consistent description of light and its polarization, we can describe the reflection of light using the Fresnel equations in the geometric polarization ray-tracing approximation. We shall consider the central ray of a beam of light incident on a flat metallic mirror as shown in Fig. 2. Describing this ray as a plane electromagnetic wave, we decompose the incident electric field into the \(p\)- and \(s\)-polarized components that are parallel and perpendicular to the plane of incidence, respectively. For this central ray, the \(p\)- and \(s\)-directions correspond to the \(x\)- and \(y\)-directions, respectively. Assuming the refractive index of the incident medium (air) to be equal to \(1\), we compute the complex Fresnel reflection coefficients \(r_{p}\) and \(r_{s}\) as follows (see e.g., Born & Wolf 2013):
\[r_{p} =\frac{\hat{n}^{2}\cos\theta-\sqrt{\hat{n}^{2}-\sin^{2}\theta}}{ \hat{n}^{2}\cos\theta+\sqrt{\hat{n}^{2}-\sin^{2}\theta}}=R_{p}\mathrm{e}^{ \mathrm{i}\phi_{p}}, \tag{13}\] \[r_{s} =\frac{\cos\theta-\sqrt{\hat{n}^{2}-\sin^{2}\theta}}{\cos\theta+ \sqrt{\hat{n}^{2}-\sin^{2}\theta}} =R_{s}\mathrm{e}^{\mathrm{i}\phi_{s}}, \tag{14}\]
Figure 1: Definition of the three reference frames (or bases) and the Stokes parameters to describe the electric field components and polarization of an electromagnetic wave. The light propagates along the \(z\)-axis out of the paper toward the reader. In the \(xyz\)-basis, the \(x\)-axis (\(y\)-axis) is oriented in the vertical (horizontal) direction. In the \(daz\)-basis, the \(d\)-axis (\(a\)-axis) is oriented in the diagonal (antidiagonal) direction, at \(45^{\circ}\) counterclockwise (clockwise) from the \(x\)-axis. In the \(rlz\)-basis, \(r\) and \(l\) represent the right-handed and left-handed circularly polarized components. For each reference frame, the basis Jones vectors, expressed in the \(xyz\)-basis, are indicated. The Stokes parameters are shown in orange with the plus sign (minus sign) indicating that the Stokes parameter is positive (negative) in that direction. The angle of linear polarization \(\chi\) is defined positive for a counterclockwise rotation from the \(x\)-axis.
where \(\theta\) is the central angle of incidence (see Fig. 2) and \(\hat{n}=n+\mathrm{i}\kappa\) is the complex refractive index of the mirror material, with \(n\) and \(\kappa\) the real and imaginary parts, respectively. The amplitudes \(R_{p/s}=|r_{p/s}|\) specify the ratios of the amplitudes of the reflected and incident electric fields, while the phases \(\phi_{p/s}=\arg\left(r_{p/s}\right)\) describe the phase shifts between the reflected and incident electric fields.
Two important quantities related to the reflection coefficients are the diattenuation and the retardance, which can be considered to be the zeroth-order polarization aberrations. The diattenuation \(\epsilon\) is defined as follows:
\[\epsilon=\frac{R_{s}^{2}-R_{p}^{2}}{R_{s}^{2}+R_{p}^{2}}, \tag{15}\]
which ideally equals 0. When unpolarized light is incident on the mirror, a nonzero value of the diattenuation quantifies the amount of linearly polarized light that is created, that is, the instrumental polarization. The retardance \(\Delta\) is defined as follows:

\[\Delta=\phi_{s}-\phi_{p}, \tag{16}\]
which ideally equals 180\({}^{\circ}\). The latter value comes from the requirement that the electromagnetic wave before and after reflection is described by a right-handed triplet in terms of the electric field, the magnetic field, and the wave vector. For values other than 180\({}^{\circ}\), retardance results in the conversion of incident linearly polarized light into circularly polarized light and vice versa, that is, it produces polarimetric crosstalk.
The physics of the beam shifts as described in Sect. 4 depends on the diattenuation and retardance as well as on the gradients of the amplitude and phase of the reflection coefficients with the angle of incidence. Figure 3 shows the amplitude and phase of the reflection coefficients as a function of the angle of incidence for gold with \(\hat{n}=0.188+\mathrm{i}5.39\) at a wavelength of 820 nm, corresponding to the configuration studied in Sects. 3-5. From Fig. 3 (left) it follows that the diattenuation, which is roughly the difference between the curves of \(R_{s}\) and \(R_{p}\) (see Eq. (15)), is zero at \(\theta=0^{\circ}\), increases with increasing angle of incidence until it reaches a maximum around \(\theta=80^{\circ}\), and then decreases again to zero at \(\theta=90^{\circ}\). In Fig. 3 (right) we see that the retardance, which is the difference between the curves of \(\phi_{s}\) and \(\phi_{p}\) (see Eq. (16)), is 180\({}^{\circ}\) at \(\theta=0^{\circ}\) and remains close to this value for small values of \(\theta\). For large \(\theta\), the retardance decreases rapidly to 0\({}^{\circ}\) at \(\theta=90^{\circ}\). Fig. 3 (left and right) also show the gradients in amplitude and phase at \(\theta=45^{\circ}\) (similar to the phase gradients shown by Breckinridge et al., 2015). Whereas the amplitude gradient \(\partial R_{s}/\partial\theta\) is always positive for \(\theta>0^{\circ}\), \(\partial R_{p}/\partial\theta\) is initially negative, then becomes zero, and finally is positive for very large angles of incidence. Lastly, for \(\theta>0^{\circ}\) the phase gradients \(\partial\phi_{s}/\partial\theta\) and \(\partial\phi_{p}/\partial\theta\) are negative and positive, respectively, and monotonically decrease and increase with increasing angle of incidence.
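As a concrete illustration of Eqs. (13)-(16), the following Python sketch (our own code, not from the original analysis) evaluates the reflection coefficients, diattenuation, and retardance of gold at 820 nm; the retardance should come out close to 180\({}^{\circ}\) near normal incidence and decrease toward larger angles of incidence, as in Fig. 3:

```python
import numpy as np

nhat = 0.188 + 5.39j  # complex refractive index of gold at 820 nm (Sect. 2.2)

def fresnel(theta):
    """Complex Fresnel reflection coefficients r_p and r_s; Eqs. (13), (14)."""
    w = np.sqrt(nhat**2 - np.sin(theta)**2)
    r_p = (nhat**2 * np.cos(theta) - w) / (nhat**2 * np.cos(theta) + w)
    r_s = (np.cos(theta) - w) / (np.cos(theta) + w)
    return r_p, r_s

for deg in (0.01, 45.0, 80.0):
    r_p, r_s = fresnel(np.radians(deg))
    Rp, Rs = abs(r_p), abs(r_s)
    eps = (Rs**2 - Rp**2) / (Rs**2 + Rp**2)  # diattenuation, Eq. (15)
    # Retardance Delta = phi_s - phi_p, Eq. (16); taking np.angle of the
    # ratio avoids 2*pi wrapping issues.
    Delta = np.degrees(np.angle(r_s * np.conj(r_p))) % 360.0
    print(f"theta = {deg:5.2f} deg: R_p = {Rp:.3f}, R_s = {Rs:.3f}, "
          f"diattenuation = {eps:.4f}, retardance = {Delta:.1f} deg")
```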
## 3 Beam shifts from polarization ray tracing
In this section, we describe the polarization ray tracing of a beam of light that reflects off a (flat) metallic mirror, following the methodology outlined in Breckinridge et al. (2015), and the determination of the beam shifts that result. In Sect. 4 we compare the resulting shifts for various incident polarization states and angles of incidence to the predicted spatial and angular GH and IF shifts as derived for Gaussian beams. We determine the centroid shifts of both the focal-plane intensity (i.e., the PSF) and the intensity in the exit-pupil plane because these planes are where the spatial shifts (shifts of the complete beam) and angular shifts (angular deviations as measured from the focus) should be visible. To enable a direct comparison of our results with the experimental measurements of the GH and IF shifts by Merano et al. (2007), Aiello et al. (2009), and Hermosa et al. (2011), we consider a (practically) identical configuration to the one used in those studies: a converging, monochromatic beam of light with an f-number of 61.3 that reflects off a flat gold mirror at a wavelength of 820 nm and with a focal distance of 11.9 cm. Our configuration differs in that the beam of light is not Gaussian but has a uniform (or top-hat) intensity profile across the entrance pupil as is the case for astronomical telescopes and instruments.
As the first step in our analysis, we compute the Jones pupil that describes the electric-field response in the exit pupil upon reflection. We only describe this computation briefly here (for detailed descriptions see e.g., Waluschka, 1989; Götte & Dennis, 2012). We use the definitions as shown in Fig. 2 and decompose the beam of light into a set of rays that each can be described by a plane electromagnetic wave. For each ray, we compute the angle of incidence and, using Eqs. (13) and (14), the corresponding Fresnel reflection coefficients in the local \(p\)- and \(s\)-directions. Subsequently, we calculate the orientation of the local plane of incidence for each ray. Finally, we compute the Jones pupil as the
Figure 3: Amplitude (_left_) and phase (_right_) of the Fresnel reflection coefficients in the \(p\)- and \(s\)-directions as a function of the angle of incidence for gold with \(\hat{n}=0.188+\mathrm{i}5.39\) at a wavelength of 820 nm. The gradients in the amplitude and phase for an angle of incidence of \(45^{\circ}\) are indicated in blue for the \(p\)-direction and in red for the \(s\)-direction.
Figure 2: Schematic of the reflection of a beam of light off a flat metallic mirror with complex refractive index \(\hat{n}=n+\mathrm{i}\kappa\). The central ray of the beam hits the mirror at an angle of incidence \(\theta\) measured with respect to the normal to the surface of the mirror. The orientation of the _xyz_ reference frame before and after reflection is indicated.
set of Jones matrices describing the reflection of each ray, taking into account the orientation of the local plane of incidence and the change of sign of the \(x\)-coordinate of the ray upon reflection. The resulting Jones pupil \(J_{xyz}\), which is expressed in the _xyz_-basis, can be written as follows:
\[J_{xyz}=\begin{bmatrix}J_{xx}&J_{xy}\\ J_{yx}&J_{yy}\end{bmatrix}=\begin{bmatrix}R_{xx}\mathrm{e}^{\mathrm{i}\phi_{xx}}&R_{xy}\mathrm{e}^{\mathrm{i}\phi_{xy}}\\ R_{yx}\mathrm{e}^{\mathrm{i}\phi_{yx}}&R_{yy}\mathrm{e}^{\mathrm{i}\phi_{yy}}\end{bmatrix}, \tag{17}\]
where \(J_{xx}\) to \(J_{yy}\) are the complex Jones-pupil elements describing the contribution of the \(x\)- or \(y\)-polarized components of the incident electric field (in the entrance pupil) to the \(x\)- or \(y\)-polarized components of the reflected electric field (in the exit pupil). The amplitudes and phases of the Jones-pupil elements, which define the ratios of the amplitudes and the phase shifts of the reflected and incident electric fields, are denoted \(R_{xx}\) to \(R_{yy}\) and \(\phi_{xx}\) to \(\phi_{yy}\), respectively. The Jones pupil \(J_{xyz}\) for reflection with an angle of incidence of \(45^{\circ}\) is shown in Fig. 4 (top).
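To make this ray-by-ray construction concrete, the following Python sketch (our own illustrative code, not the authors' implementation) builds the Jones pupil of Eq. (17) for the configuration studied here. It assumes a paraxial mapping from pupil position to ray direction and adopts one common convention for the sign flip of the \(x\)-coordinate upon reflection:

```python
import numpy as np

nhat = 0.188 + 5.39j                # gold at 820 nm (Sect. 2.2)
theta0 = np.radians(45.0)           # central angle of incidence
alpha = np.arctan(1 / (2 * 61.3))   # divergence angle of the f/61.3 beam

def fresnel(theta):
    """Complex Fresnel reflection coefficients; Eqs. (13) and (14)."""
    w = np.sqrt(nhat**2 - np.sin(theta)**2)
    return ((nhat**2 * np.cos(theta) - w) / (nhat**2 * np.cos(theta) + w),
            (np.cos(theta) - w) / (np.cos(theta) + w))

# Pupil grid: each point corresponds to one ray of the converging beam,
# with a uniform (top-hat) intensity profile over the circular aperture.
N = 255
px, py = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
aperture = px**2 + py**2 <= 1

# Paraxial ray directions; the central ray propagates along +z.
d = np.stack([px * np.tan(alpha), py * np.tan(alpha), np.ones_like(px)], -1)
d /= np.linalg.norm(d, axis=-1, keepdims=True)

n = np.array([-np.sin(theta0), 0.0, -np.cos(theta0)])  # mirror normal
theta_i = np.arccos(np.clip(-d @ n, -1, 1))            # local angle of incidence

# Local s-direction (perpendicular to the local plane of incidence) and its
# rotation angle eta with respect to the s-direction of the central ray.
s = np.cross(d, n)
s /= np.linalg.norm(s, axis=-1, keepdims=True)
eta = np.arctan2(s[..., 0], -s[..., 1])

# Rotate into the local p/s frame, apply r_p and r_s, rotate back, and flip
# the sign of the x-component upon reflection (one possible convention).
r_p, r_s = fresnel(theta_i)
c, si = np.cos(eta), np.sin(eta)
J = np.zeros((N, N, 2, 2), complex)
J[..., 0, 0] = -(r_p * c**2 + r_s * si**2)
J[..., 0, 1] = -(r_p - r_s) * c * si
J[..., 1, 0] = (r_p - r_s) * c * si
J[..., 1, 1] = r_p * si**2 + r_s * c**2
J[~aperture] = 0.0

# The phase of J_xx varies approximately linearly along the x-direction:
# the wavefront tilt that produces the spatial GH shift (Sect. 4.1).
phi_xx = np.angle(J[..., 0, 0])
print(phi_xx[N // 2, -1] - phi_xx[N // 2, 0])
```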
The Jones pupil is a crucial ingredient for our understanding of the beam shifts in Sect. 4. In that context, it is useful to also express the Jones pupil in the basis of the diagonal and antidiagonal polarizations, \(daz\), and the basis of the right-handed and left-handed circular polarizations, \(r\!lz\), as defined in Fig. 1. The Jones pupils in the \(daz\)- and \(r\!lz\)-bases, \(J_{daz}\) and \(J_{r\!lz}\), are defined as follows:
\[J_{daz}=T_{daz}J_{xyz}T_{daz}^{-1}=\begin{bmatrix}R_{dd}\mathrm{e}^{\mathrm{i}\phi_{dd}}&R_{da}\mathrm{e}^{\mathrm{i}\phi_{da}}\\ R_{ad}\mathrm{e}^{\mathrm{i}\phi_{ad}}&R_{aa}\mathrm{e}^{\mathrm{i}\phi_{aa}}\end{bmatrix}, \tag{18}\]
\[J_{rlz}=T_{rlz}J_{xyz}T_{rlz}^{-1}=\begin{bmatrix}R_{rr}\mathrm{e}^{\mathrm{i}\phi_{rr}}&R_{rl}\mathrm{e}^{\mathrm{i}\phi_{rl}}\\ R_{lr}\mathrm{e}^{\mathrm{i}\phi_{lr}}&R_{ll}\mathrm{e}^{\mathrm{i}\phi_{ll}}\end{bmatrix}, \tag{19}\]
where \(R_{dd}\) to \(R_{ll}\) and \(\phi_{dd}\) to \(\phi_{ll}\) are the amplitudes and phases of the Jones-pupil elements and \({}^{-1}\) denotes the inverse of a matrix. The matrices \(T_{daz}\) and \(T_{rlz}\) describe the transformations from the \(xyz\)-basis to the \(daz\)- and \(rlz\)-bases, respectively, and are given by:
\[T_{daz}=\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\ 1&-1\end{bmatrix}, \tag{20}\]
\[T_{r\!lz}=\frac{1}{\sqrt{2}}\begin{bmatrix}1&-\mathrm{i}\\ 1&\mathrm{i}\end{bmatrix}. \tag{21}\]
The Jones pupils \(J_{daz}\) and \(J_{r\!lz}\) for reflection with an angle of incidence of \(45^{\circ}\) are shown in Fig. 4 (center) and Fig. 4 (bottom), respectively.
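It is instructive to verify numerically the statement of Sect. 2.2 that a retardance deviating from 180\({}^{\circ}\) produces polarimetric crosstalk. The short Python sketch below (our own illustrative code, adopting the \(x\)-sign-flip convention of this section so that the central-ray Jones matrix is \(\mathrm{diag}(-r_{p},r_{s})\)) applies the basis transformations of Eqs. (18)-(21):

```python
import numpy as np

T_daz = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Eq. (20)
T_rlz = np.array([[1, -1j], [1, 1j]]) / np.sqrt(2)  # Eq. (21)

def to_basis(J, T):
    """Similarity transformation of a Jones matrix (or of a whole Jones
    pupil of shape (..., 2, 2)); Eqs. (18) and (19)."""
    return T @ J @ np.linalg.inv(T)

# Central-ray Jones matrix diag(-r_p, r_s) for two toy retardances
# (amplitudes set to 1 for clarity):
for Delta_deg in (180.0, 160.0):
    r_p = 1.0
    r_s = -np.exp(1j * np.radians(Delta_deg - 180.0))  # phi_s - phi_p = Delta
    J_xyz = np.diag([-r_p, r_s])
    # For a retardance of exactly 180 deg, the rlz-representation is
    # diagonal (no coupling between the circular components); for any other
    # retardance, off-diagonal elements appear: polarimetric crosstalk.
    print(f"Delta = {Delta_deg} deg:\n", np.round(to_basis(J_xyz, T_rlz), 3))
```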
As the next step, we compute the amplitude-response matrix (ARM) specifying the electric-field response in the focal plane (expressed in the _xyz_-basis). The ARM is computed as follows:
\[ARM=\begin{bmatrix}\mathcal{F}(J_{xx})&\mathcal{F}(J_{xy})\\ \mathcal{F}(J_{yx})&\mathcal{F}(J_{yy})\end{bmatrix}=\begin{bmatrix}R^{\prime}_{xx}\mathrm{e}^{\mathrm{i}\phi^{\prime}_{xx}}&R^{\prime}_{xy}\mathrm{e}^{\mathrm{i}\phi^{\prime}_{xy}}\\ R^{\prime}_{yx}\mathrm{e}^{\mathrm{i}\phi^{\prime}_{yx}}&R^{\prime}_{yy}\mathrm{e}^{\mathrm{i}\phi^{\prime}_{yy}}\end{bmatrix}, \tag{22}\]
where \(\mathcal{F}(\dots)\) denotes the spatial Fourier transform over a Jones-pupil element and \(R^{\prime}_{xx}\) to \(R^{\prime}_{yy}\) and \(\phi^{\prime}_{xx}\) to \(\phi^{\prime}_{yy}\) denote the amplitudes and phases, respectively, of the ARM elements. By using the spatial Fourier transform for the computation of the ARM we assume that the Fraunhofer approximation to diffraction applies, which is the case for beams with absolute f-numbers larger than \(\sim\)5 (see e.g., McGuire & Chipman 1990). The ARM for reflection with an angle of incidence of \(45^{\circ}\) is shown in Fig. A.1.
Next, we calculate the point-spread matrix (PSM), which is the Mueller-matrix representation of the PSF and describes the intensity response in the focal plane for any incident Stokes vector, whether 100% polarized, partially polarized, or unpolarized. The PSM is calculated as follows:
\[PSM=C\left(ARM\otimes ARM^{*}\right)C^{-1}, \tag{23}\]
where \(\otimes\) denotes the Kronecker product, the asterisk indicates the element-wise complex conjugate, and the matrix \(C\) is given by (see e.g., Espinosa-Luna et al. 2008):
\[C=\begin{bmatrix}1&0&0&1\\ 1&0&0&-1\\ 0&1&1&0\\ 0&\mathrm{i}&-\mathrm{i}&0\end{bmatrix}. \tag{24}\]
The PSM can be written as follows:
\[PSM=\begin{bmatrix}I\to I&Q\to I&U\to I&V\to I\\ I\to Q&Q\to Q&U\to Q&V\to Q\\ I\to U&Q\to U&U\to U&V\to U\\ I\to V&Q\to V&U\to V&V\to V\end{bmatrix}, \tag{25}\]
where each element \(A\to B\) describes the contribution of the incident Stokes parameter \(A\) to the resulting Stokes parameter \(B\). The PSM for reflection with an angle of incidence of \(45^{\circ}\) is shown in Fig. 5. We note that the same PSM can also be obtained by computing the ARM (Eq. (22)) from the Jones pupil expressed in the \(daz\)- or \(rlz\)-bases and replacing the matrix \(C\) in Eqs. (23) and (24) with the appropriate matrix corresponding to those bases.
As the final step, we determine the beam shifts in the exit pupil and the focal plane. To this end, we define an incident Jones vector or Stokes vector with a uniform intensity profile and polarization state. For the determination of the shift in the exit pupil, we right-multiply the Jones pupil by the incident Jones vector to obtain the Jones vector in the pupil plane. Subsequently, we compute the intensity distribution in the pupil plane as the sum of squares of the amplitudes of the latter Jones vector. Finally, we calculate the beam shift as the offset of the centroid of the intensity distribution with respect to the beam position in the absence of diffraction and aberrations. To determine the beam shift in the focal plane, we compute the Stokes vector after reflection by right-multiplying the PSM by the incident Stokes vector. We then retrieve the intensity image from the first element of the resulting Stokes vector and determine the shift as the offset of the centroid with respect to the beam position in the absence of diffraction and aberrations.
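The full pipeline, from Jones pupil to ARM to PSM to centroid shift, can be sketched in a few functions. The Python code below (our own simplified implementation) follows Eqs. (22)-(24) and is checked on a trivially aberration-free case; for the actual beam shifts, the metallic Jones pupil of the earlier sketch would be inserted instead:

```python
import numpy as np

def arm_from_jones_pupil(J, pad=4):
    """Amplitude-response matrix, Eq. (22): a 2D Fourier transform of each
    Jones-pupil element (Fraunhofer approximation); zero padding samples
    the PSF finely."""
    N = J.shape[0]
    M = pad * N
    lo = (M - N) // 2
    arm = np.empty((M, M, 2, 2), complex)
    for a in range(2):
        for b in range(2):
            field = np.zeros((M, M), complex)
            field[lo:lo + N, lo:lo + N] = J[..., a, b]
            arm[..., a, b] = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    return arm

def psm_from_arm(arm):
    """Point-spread matrix, Eqs. (23) and (24), evaluated per pixel."""
    C = np.array([[1, 0, 0, 1],
                  [1, 0, 0, -1],
                  [0, 1, 1, 0],
                  [0, 1j, -1j, 0]])
    kron = np.einsum('...ab,...cd->...acbd', arm, arm.conj())
    kron = kron.reshape(arm.shape[:2] + (4, 4))    # Kronecker product
    return (C @ kron @ np.linalg.inv(C)).real      # Mueller matrices are real

def centroid_shift(image):
    """Intensity centroid relative to the nominal PSF center, in pixels."""
    y, x = np.indices(image.shape)
    tot = image.sum()
    return ((x * image).sum() / tot - image.shape[1] // 2,
            (y * image).sum() / tot - image.shape[0] // 2)

# Toy check: a uniform circular pupil with a perfect mirror of retardance
# 180 deg (r_p = 1, r_s = -1, so J = diag(-1, -1)) gives an unshifted Airy
# pattern for any incident Stokes vector.
N = 64
px, py = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
J = np.zeros((N, N, 2, 2), complex)
J[px**2 + py**2 <= 1] = np.diag([-1.0, -1.0])
psm = psm_from_arm(arm_from_jones_pupil(J))
stokes_out = psm @ np.array([1.0, 0.0, 0.0, 0.0])  # incident unpolarized light
print(centroid_shift(stokes_out[..., 0]))          # ~(0.0, 0.0)
```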
## 4 Explanation of beam shifts and comparison to polarization ray tracing
In this section, we explain the spatial and angular GH and IF shifts and compare them to the shifts found using polarization ray tracing. We analytically describe the four shifts using the closed-form expressions from Aiello & Woerdman (2008). These expressions are derived (see Aiello & Woerdman 2007) by decomposing an incident, uniformly polarized Gaussian beam of light into the angular spectrum of plane waves (e.g., Born & Wolf 2013) and computing the effect of the reflection on each wave. Because the plane waves are infinitely extended, the Fresnel equations can be applied without making any approximations. The decomposition into plane waves is equivalent to a Fourier transform of the electric field at the mirror interface. The resulting reflected plane waves are then integrated over, and the shift is calculated as the shift of the centroid of the intensity of the
Figure 4: Jones pupil expressed in the \(xyz\)- (_top_), \(daz\)- (_center_), and \(rlz\)-bases (_bottom_) at a wavelength of 820 nm for a converging beam of light with an f-number of 61.3 that reflects off gold at an angle of incidence of 45\({}^{\circ}\). The panels in the first and second (third and fourth) columns show the amplitude (phase) of the Jones-pupil elements. The positive \(x\)- and \(y\)-directions are upward and to the left, respectively. The values of the color maps are different among the panels. The red, orange, blue, and green borders around the panels indicate the gradients that are visible and the specific beam shifts that these gradients cause (see the legend above the top panels). Panels with two colored borders show a combination of two gradients. To reveal the gradient in the panels of \(\phi_{xy}\) and \(\phi_{yx}\), \(\pi\) has been added to the phase in the left and right halves of the pupil, respectively.
beam. The expressions depend on the Fresnel reflection coefficients at the central angle of incidence and the complex electric-field components of the incident beam. We have rewritten the expressions in terms of the more familiar Stokes parameters to make the expressions easier to understand and enable the computation of the shifts for any incident polarization state.
For each of the four shifts, which generally occur simultaneously, we explain the origin and characteristics, and analytically compute the size and direction as a function of the angle of incidence for different incident polarization states. We consider 100% linearly polarized light with angles of linear polarization \(\chi\) ranging from \(0^{\circ}\) to \(180^{\circ}\) in steps of \(22.5^{\circ}\), 100% right-handed and left-handed circularly polarized light (i.e., \(V=1\) and \(V=-1\), respectively), and unpolarized light. For these same polarization states, we numerically compute the shifts from the polarization ray tracing as outlined in Sect. 3 and compare the results to the analytical computations. We also explain the shifts using the Jones pupil and the PSM. We discuss the spatial and angular GH shifts in Sects. 4.1 and 4.2 and the spatial and angular IF shifts in Sects. 4.3 and 4.4. For easy reference, an overview of the properties of the four beam shifts is shown in Table 1 of Sect. 5.5.
### Spatial Goos-Hänchen shift
The spatial GH shift, \(X_{\rm sGH}\), is a displacement of the entire beam of light upon reflection and occurs in the plane of incidence (e.g., Goos & Hänchen 1947; Merano et al. 2007; Aiello & Woerdman 2008; Aiello et al. 2009; Götte & Dennis 2012; Bliokh & Aiello 2013). Figure 6 (top) shows a schematic with the definition of the spatial GH shift. The shift is independent of the divergence angle of the incident beam (i.e., the f-number) and does not depend on whether the reflection occurs in the focus or the converging or diverging parts of the beam. From the perspective of the plane-wave decomposition, the spatial GH shift can be understood from a 2D picture of the beam of light, looking from a direction perpendicular to the plane of incidence (i.e., the side view as shown in Fig. 6, top). Each plane wave of the beam has a different angle of incidence and therefore acquires a correspondingly different phase shift upon reflection.
Figure 5: Point-spread matrix (PSM) at a wavelength of 820 nm for a converging beam of light with an f-number of 61.3 that reflects off gold at an angle of incidence of 45\({}^{\circ}\). The panels show the central 100 μm \(\times\) 100 μm of the PSM elements. The positive \(x\)- and \(y\)-directions are upward and to the left, respectively. The gray plus signs indicate the centroids of the PSM elements in the absence of diffraction and aberrations. The values of the color maps are different among the panels.
This results in a gradient in phase over the range of angles of incidence (see Fig. 3, right). Integrating over all reflected plane waves, this then results in a shift of the entire beam parallel to the plane of incidence. The integration is equivalent to an inverse Fourier transform, which explains how a phase gradient is equivalent to a shift of the entire beam on the mirror.
The spatial GH shift can be computed as follows:
\[X_{\rm sGH}=\frac{\lambda}{2\pi}\frac{\frac{\partial\phi_{p}}{\partial\theta}R_{p}^{2}I_{x}+\frac{\partial\phi_{s}}{\partial\theta}R_{s}^{2}I_{y}}{R_{p}^{2}I_{x}+R_{s}^{2}I_{y}}, \tag{26}\]
where \(R_{p}\) and \(R_{s}\) (from Eqs. (13) and (14)) and the phase gradients \(\partial\phi_{p}/\partial\theta\) and \(\partial\phi_{s}/\partial\theta\) (see Fig. 3, right) are computed at the central angle of incidence of the beam, and \(I_{x}\) and \(I_{y}\) are the intensities of the components of the light polarized in the \(x\)- and \(y\)-direction, respectively. These intensities only depend on the incident Stokes \(Q\) and follow from Eqs. (8) and (9). The factor \(R_{p}^{2}I_{x}+R_{s}^{2}I_{y}\) in Eq. (26) is the intensity of the reflected beam and returns in the expressions of all shifts. The spatial GH shift is produced by the phase gradients, whereas \(R_{p}\) and \(R_{s}\) can be considered to be small corrections. Indeed, if we set either \(I_{x}\) or \(I_{y}\) equal to zero in Eq. (26), we obtain:
\[X_{\rm sGH,\,x/y}=\frac{\lambda}{2\pi}\frac{\partial\phi_{p/s}}{\partial\theta}, \tag{27}\]

which shows that the spatial GH shift consists of two components: \(X_{\rm sGH,\,x}\) for the light polarized in the \(x\)-direction and \(X_{\rm sGH,\,y}\) for the light polarized in the \(y\)-direction, with Eq. (26) the intensity-weighted average of the two. For incident light that is not completely polarized in the \(x\)- or \(y\)-direction (e.g., unpolar
ized light), the resulting shift is in between the shifts of the \(x\)- and \(y\)-polarized components. Figure 7 shows the spatial GH shift as a function of the angle of incidence for the various incident polarization states, as computed from Eq. (26) (curves) and as obtained from the polarization ray tracing of Sect. 3 (data points); the analytical and numerical results agree closely.
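Equation (26) is straightforward to evaluate numerically. In the Python sketch below (our own code), the phase gradients are obtained by finite differences; consistent with Fig. 3 (right), the shifts for \(Q=1\) and \(Q=-1\) come out with opposite signs:

```python
import numpy as np

nhat, lam = 0.188 + 5.39j, 820e-9   # gold at 820 nm

def fresnel(theta):
    w = np.sqrt(nhat**2 - np.sin(theta)**2)
    return ((nhat**2 * np.cos(theta) - w) / (nhat**2 * np.cos(theta) + w),
            (np.cos(theta) - w) / (np.cos(theta) + w))

def spatial_gh(theta, Q, dth=1e-7):
    """Spatial GH shift of Eq. (26), in meters; the phase gradients
    d(phi)/d(theta) are evaluated by finite differences."""
    r_p1, r_s1 = fresnel(theta)
    r_p2, r_s2 = fresnel(theta + dth)
    dphi_p = np.angle(r_p2 / r_p1) / dth   # wrap-safe phase derivative
    dphi_s = np.angle(r_s2 / r_s1) / dth
    Ix, Iy = (1 + Q) / 2, (1 - Q) / 2      # Eqs. (8) and (9)
    Rp2, Rs2 = abs(r_p1)**2, abs(r_s1)**2
    return (lam / (2 * np.pi) * (dphi_p * Rp2 * Ix + dphi_s * Rs2 * Iy)
            / (Rp2 * Ix + Rs2 * Iy))

for Q in (1.0, 0.0, -1.0):
    # Q = +1: x-polarized; Q = 0: unpolarized; Q = -1: y-polarized light.
    print(f"Q = {Q:+.0f}: X_sGH = {spatial_gh(np.radians(45.0), Q)*1e9:+.1f} nm")
```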
As can be seen from Fig. 4 (top), which shows the Jones pupil expressed in the _xyz_-basis, the spatial GH shift produces gradients in the phase of all Jones-pupil elements (blue borders). These phase gradients represent wavefront tilts in the exit pupil and as such result in shifts of the centroid of the PSF in the focal plane. This confirms the claim by Schmid et al. (2018) that the spatial GH shift is the shift that arises from the phase gradient in the \(x\)-direction in the Jones pupil as described by Breckinridge et al. (2015). However, we note that Fig. 27 of Schmid et al. (2018) suggests that the spatial GH shift is caused by both a shift on the mirror and a directional change of the beam due to a wavefront tilt induced upon reflection. This depiction is inaccurate: The spatial GH shift is a shift of the entire beam that occurs on the mirror surface, which, in the Fraunhofer approximation, can be described as a wavefront tilt in the exit pupil.
From the Jones pupil, it may seem that the spatial GH shift depends on the f-number, but this is not the case. Although a two times smaller f-number gives a two times larger phase gradient in the pupil plane, the focal distance is also two times smaller, resulting in the same shift in the focal plane. Similarly, for a diverging beam (i.e., a beam with a negative f-number) the phase gradients have the opposite sign but then the focal plane is virtual and located in front of the mirror (i.e., the focal distance is negative), again yielding the same shift. A more mathematical approach to showing the independence of the shift from the f-number is presented in Schmid et al. (2018). We note that the size of the shift (which scales with \(\lambda\), see Eq. (26)) relative to the size of the PSF (which scales with \(\lambda|F|\), with \(F\) the f-number) does depend on the f-number and is proportional to \(1/|F|\). This means that a more strongly converging or diverging beam results in a larger shift relative to the PSF.
Finally, we show that the spatial GH shift is visible in the PSM as well (see Fig. 5). As described in Sect. 3, the focal-plane shifts are determined from the intensity image constructed by right-multiplying the PSM by the incident Stokes vector. In other words, the shifts are determined from the image constructed as a linear combination of the PSM elements in the top row, weighted with the incident Stokes parameters. Whereas the \((I\!\rightarrow\!I)\)-, \((U\!\rightarrow\!I)\)-, and \((V\!\rightarrow\!I)\)-elements have their centroids shifted in the \(x\)-direction by the same small amount, the \((Q\!\rightarrow\!I)\)-element exhibits a much larger shift in this direction. For incident unpolarized light (\(Q=U=V=0\)), the shift we find is that of the \((I\!\rightarrow\!I)\)-element. On the other hand, for incident light with \(Q\) nonzero, a scaled version of the \((Q\!\rightarrow\!I)\)-element, which shows a relatively large shift, is added to or subtracted from the \((I\!\rightarrow\!I)\)-element. This results in a larger, smaller, or opposite shift compared to that of the \((I\!\rightarrow\!I)\)-element, in agreement with the curves in Fig. 7. Finally, for incident light with nonzero \(U\) and/or \(V\), scaled versions of the \((U\!\rightarrow\!I)\)- and \((V\!\rightarrow\!I)\)-elements are added to or subtracted from the \((I\!\rightarrow\!I)\)-element. However, in this case the resulting shift is the same as that for incident unpolarized light because the centroid shifts of the \((U\!\rightarrow\!I)\)- and \((V\!\rightarrow\!I)\)-elements are equal to that of the \((I\!\rightarrow\!I)\)-element.
### Angular Goos-Hänchen shift
The angular GH shift, \(\Theta_{\rm aGH}\), is an angular deviation of the beam of light upon reflection and, similar to the spatial GH shift, occurs in the plane of incidence (e.g., Aiello & Woerdman 2008; Aiello et al. 2009; Götte & Dennis 2012; Bliokh & Aiello 2013). The definition of the angular GH shift is shown in Fig. 6 (top). Similar to the spatial GH shift, the angular GH shift can be understood from a 2D picture of the beam of light. Each ray in the incident beam hits the mirror at a different angle of incidence and therefore experiences a different reflection coefficient. Over the range of angles of incidence this results in a gradient in the amplitude across the reflected beam (see Fig. 3, left), which translates into a shift of the centroid in intensity. Contrary to the spatial GH shift, the size of the angular shift depends on the divergence angle, and thus the f-number, of the incident beam. This is because a more strongly converging or diverging beam covers a larger range of angles of incidence and therefore yields a larger gradient. The angular GH shift is truly a deflection of the beam centroid as described by an angle, which is the same whether the reflection occurs in the focus or the converging or diverging part of the beam (see Fig. 6, top). The resulting physical displacement of the beam centroid vanishes in the focus and increases with distance from the focus. That the physical displacement of the beam centroid is zero in the focus can easily be understood in the Fraunhofer approximation: The amplitude gradient in the exit pupil will lead to a point-symmetric change in the PSF, which cannot change the centroid of the intensity distribution.
The angular GH shift can be computed as follows:
\[\Theta_{\rm aGH}=\frac{-\alpha^{2}}{2}\frac{R_{p}\frac{\partial R_{p}}{\partial\theta}I_{x}+R_{s}\frac{\partial R_{s}}{\partial\theta}I_{y}}{R_{p}^{2}I_{x}+R_{s}^{2}I_{y}}, \tag{28}\]
where, similar to the spatial GH shift, \(I_{x}\) and \(I_{y}\) are functions of Stokes \(Q\) (see Eqs. (8) and (9)), and \(R_{p}\), \(R_{s}\), and the amplitude gradients \(\partial R_{p}/\partial\theta\) and \(\partial R_{s}/\partial\theta\) (see Fig. 3, left) are evaluated at the central angle of incidence. The divergence angle of the beam, \(\alpha\), is given by:
\[\alpha=\arctan\left(\frac{1}{2|F|}\right), \tag{29}\]
with \(F\) the f-number of the beam. Contrary to the spatial GH shift, the angular GH shift only depends on the amplitude of the reflection coefficients, and not on the phase. The angular GH shift is produced by the amplitude gradients, whereas \(R_{p}\) and \(R_{s}\) only have a small effect. The structure of Eq. (28) is quite similar to that of Eq. (26), which describes the spatial GH shift. Indeed, when setting \(I_{x}=0\) or \(I_{y}=0\) in Eq. (28), we see that the angular GH shift also consists of two components for the light polarized in the \(x\)- and \(y\)-directions:
\[\Theta_{\rm aGH,\,x/y}=\frac{-\alpha^{2}}{2}\frac{1}{R_{p/s}}\frac{\partial R_{p/s}}{\partial\theta}. \tag{30}\]
Equation (28) therefore constitutes the intensity-weighted average of these two shifts. Finally, the physical displacement of the beam centroid at a distance \(z_{\mathrm{f}}\) from the focus of the beam is given by:
\[X_{\rm aGH}=z_{\rm f}\,\Theta_{\rm aGH}, \tag{31}\]
where \(z_{\mathrm{f}}>0\) in the diverging part of the beam and \(z_{\mathrm{f}}<0\) in the converging part. We can compute the physical displacement of the centroid of the intensity in the pupil plane by inserting \(z_{\mathrm{f}}=-f\) in Eq. (31), where \(f\) is the focal distance (\(f>0\) in a converging beam and \(f<0\) in a diverging beam).
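A corresponding numerical sketch (again our own code) evaluates Eqs. (28), (29), and (31) for the f/61.3 beam and the focal distance of 11.9 cm adopted in Sect. 3:

```python
import numpy as np

nhat = 0.188 + 5.39j   # gold at 820 nm

def fresnel(theta):
    w = np.sqrt(nhat**2 - np.sin(theta)**2)
    return ((nhat**2 * np.cos(theta) - w) / (nhat**2 * np.cos(theta) + w),
            (np.cos(theta) - w) / (np.cos(theta) + w))

def angular_gh(theta, Q, F=61.3, dth=1e-7):
    """Angular GH shift of Eq. (28), in radians; amplitude gradients by
    finite differences."""
    alpha = np.arctan(1 / (2 * abs(F)))    # divergence angle, Eq. (29)
    r_p1, r_s1 = fresnel(theta)
    r_p2, r_s2 = fresnel(theta + dth)
    Rp, Rs = abs(r_p1), abs(r_s1)
    dRp, dRs = (abs(r_p2) - Rp) / dth, (abs(r_s2) - Rs) / dth
    Ix, Iy = (1 + Q) / 2, (1 - Q) / 2      # Eqs. (8) and (9)
    return (-alpha**2 / 2 * (Rp * dRp * Ix + Rs * dRs * Iy)
            / (Rp**2 * Ix + Rs**2 * Iy))

f = 0.119  # focal distance in meters (Sect. 3)
for Q in (1.0, 0.0, -1.0):
    th = angular_gh(np.radians(45.0), Q)
    # Physical displacement of the centroid in the pupil plane follows
    # from Eq. (31) with z_f = -f.
    print(f"Q = {Q:+.0f}: Theta_aGH = {th*1e6:+.4f} microrad, "
          f"pupil displacement = {-f*th*1e9:+.2f} nm")
```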
Figure 8 shows the angular GH shift as a function of the angle of incidence for different incident polarization states as computed from Eq. (28). The figure also shows the shifts as obtained from the exit pupil (data points) using the polarization ray tracing as explained in Sect. 3. We have computed these numerical
shifts by dividing the physical displacements of the centroid in the pupil plane by the negative value of the focal distance (see Eq. (31)). Contrary to the analytically computed shifts, we have computed the numerical shifts only for 100% polarized light (i.e., not for unpolarized light) because the Jones calculus used cannot describe unpolarized or partially polarized light. Similar to the spatial GH shift, the analytical and numerical results in Fig. 8 agree closely and small deviations are only visible for very large angles of incidence. These deviations are due to the angular GH shift depending on the precise beam intensity profile and vanish when performing the polarization ray tracing for a Gaussian beam.
Figure 8 indicates that the angular GH shift is on the order of microradians for the particular configuration studied. For normal incidence, the shift is zero. The largest shifts are found for light polarized in the \(x\)-direction (i.e., for \(\chi=0^{\circ}\) and \(\chi=180^{\circ}\), or \(Q=1\)), whereas the shifts of the light polarized in the \(y\)-direction (i.e., for \(\chi=90^{\circ}\) or \(Q=-1\)) are much smaller. The curves can be understood from the amplitude gradients governing the angular GH shift as shown in Fig. 3 (left): Whereas \(\partial R_{s}/\partial\theta\) increases monotonically with increasing angle of incidence, \(\partial R_{p}/\partial\theta\) is initially negative, reaches a value of zero, and then attains large positive values. The curves in Fig. 8 follow a similar pattern as those of the spatial GH shift (see Fig. 7), with the shifts for incident light that is not 100% \(x\)- or \(y\)-polarized being an intensity-weighted average of the shifts of the \(x\)- and \(y\)-polarizations.
As shown in the \(R_{xx}\)- and \(R_{yy}\)-elements of Fig. 4 (top; red borders), the amplitude gradients associated with the angular GH shift are visible in the Jones pupil expressed in the \(xyz\)-basis. In the antidiagonal elements \(R_{xy}\) and \(R_{yx}\) these amplitude gradients also exist, but they are overshadowed by the left-right symmetric structure visible in those elements. For a diverging rather than converging beam, the amplitude gradients have opposite signs (see also Fig. 6, top). Because a diverging beam implies a negative focal distance, that is, the focal plane is virtual and located in front of the mirror, the signs of the angular shifts themselves do not change (see Eq. (31)). Finally, the angular GH shift is not visible in the PSM (Fig. 5) because it is zero in the focus.
### Spatial Imbert-Fedorov shift
The spatial IF shift, \(Y_{\rm sIF}\), is a displacement of the entire beam of light upon reflection and occurs in the direction perpendicular to the plane of incidence (e.g., Fedorov 1955; Imbert 1972; Bliokh & Bliokh 2006; Aiello & Woerdman 2008; Hermosa et al. 2011; Götte & Dennis 2012; Bliokh & Aiello 2013; Bliokh et al. 2015). A schematic with the definition of the spatial IF shift is shown in Fig. 6 (bottom). Similar to the spatial GH shift, the spatial IF shift is independent of the f-number of the beam and the position within the beam where the reflection occurs. To understand the spatial IF shift from a plane-wave decomposition, it is necessary to consider the full 3D picture (e.g., Aiello & Woerdman 2008; Bliokh & Aiello 2013). Each plane wave in the incident beam has a different (3D) propagation direction. Therefore, not only the angles of incidence (and thus the reflection coefficients) are different among the waves, but also the orientations of the local planes of incidence. These rotations of the planes of incidence induce different geometric (Berry) phases among the circularly polarized components of the waves. This results in a gradient of the geometric phases in the direction perpendicular to the plane of incidence, with the gradient having opposite sign for the right-handed and left-handed circular polarizations. Accounting for the reflection coefficients of each wave as well as the geometric phases within the reflected beam, the reflected beam is found to be shifted in the direction perpendicular to the plane of incidence when integrating over all waves.
The spatial IF shift is more easily understood in terms of conservation of total angular momentum (e.g., Bliokh & Bliokh 2006; Bliokh & Aiello 2013; Bliokh & Nori 2015; Bliokh et al. 2015). Disregarding vortex beams, the total angular momentum of a beam of light consists of the spin angular momentum (SAM) and the external orbital angular momentum. In the quantum-mechanical description of light, photons carry one of two spin states that correspond to right-handed and left-handed circular polarization. The SAM of a beam of light is a vector quantity pointing in the direction of propagation that is proportional to the difference between the number of right-handed and left-handed photons, that is, it is proportional to Stokes \(V\). The external orbital angular momentum is given by the cross product of the radius vector of the beam centroid with respect to some origin and the linear momentum of the beam, with the latter pointing in the direction of propagation. Upon reflection, the total angular momentum in the direction normal to the surface of the mirror is conserved. As a result, any change in the SAM of the beam, that is, in the circular polarization, must be compensated for by a shift of the beam in the direction perpendicular to the plane of incidence. This shift is the spatial IF shift, which is therefore considered to be a spin-orbit interaction of light.
The spatial IF shift can be calculated as follows:
\[Y_{\rm sIF}=\frac{-\lambda}{2\pi}\frac{\cot\theta}{R_{p}^{2}I_{x}+R_{s}^{2}I_{y}}\left[V\left(\frac{R_{p}^{2}+R_{s}^{2}}{2}\right)+R_{p}R_{s}\left(V\cos\Delta+U\sin\Delta\right)\right], \tag{32}\]
where \(R_{p}\), \(R_{s}\), and the retardance \(\Delta\) (see Eq. (16) and Fig. 3) are evaluated at the central angle of incidence \(\theta\), and \(\cot\theta\) is the transverse gradient of the induced geometric phase. Although the spatial IF shift has a weak dependence on Stokes \(Q\) through \(I_{x}\) and \(I_{y}\) (see Eqs. (8) and (9)), the shift depends primarily on the incident Stokes \(U\) and \(V\). So, whereas the GH shift consists of two separate shifts for light polarized in the \(x\)- and \(y\)-directions, the spatial IF shift comprises separate and opposite shifts for the diagonally and antidiagonally polarized components (because
Figure 8: Angular GH shift as a function of the angle of incidence at a wavelength of 820 nm for a beam of light with an f-number of 61.3 that reflects off gold as obtained from the closed-form expression of Eq. (28) (curves) and polarization ray tracing (data points). The shift is shown for an incident beam that is completely unpolarized, 100% linearly polarized with various angles of linear polarization \(\chi\), and 100% right-handed (\(V=1\)) or left-handed (\(V=-1\)) circularly polarized.
\(U=I_{d}-I_{a}\), see Eq. (6)) as well as for the right-handed and left-handed circularly polarized components (because \(V=I_{r}-I_{l}\), see Eq. (7)). For metallic reflections, the spatial IF shift results primarily from the retardance, whereas \(R_{p}\) and \(R_{s}\) can be considered to be small corrections. Indeed, we can simplify Eq. (32) by assuming that the incident beam is totally reflected. Setting \(R_{p}=R_{s}=1\) and inserting \(I_{x}+I_{y}=1\) (see Eq. (4)), we obtain:

\[Y_{\rm sIF}=\frac{-\lambda}{2\pi}\cot\theta\left[V\left(1+\cos\Delta\right)+U\sin\Delta\right]. \tag{33}\]
In this equation, the factor \([V(1+\cos\Delta)+U\sin\Delta]\) is proportional to the change of the SAM upon reflection, with \(V\) proportional to the incident SAM and \(-(V\cos\Delta+U\sin\Delta)\), which gives Stokes \(V\) after reflection, proportional to the SAM of the reflected beam. The spatial IF shift thus depends on the crosstalk from \(U\) to \(V\) (\(U\sin\Delta\)) and the crosstalk from \(V\) to \(U\) or even the crosstalk creating a change of handedness of the circular polarization (\(V\cos\Delta\)).
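The following Python sketch (our own code) evaluates Eq. (32); because only \(\cos\Delta\) and \(\sin\Delta\) enter, the phase wrapping of the numerically computed retardance is harmless. Consistent with the discussion above, incident \(U\)- or \(V\)-polarized light yields opposite shifts for opposite signs, and \(Q\)-polarized or unpolarized light no shift at all:

```python
import numpy as np

nhat, lam = 0.188 + 5.39j, 820e-9   # gold at 820 nm

def fresnel(theta):
    w = np.sqrt(nhat**2 - np.sin(theta)**2)
    return ((nhat**2 * np.cos(theta) - w) / (nhat**2 * np.cos(theta) + w),
            (np.cos(theta) - w) / (np.cos(theta) + w))

def spatial_if(theta, Q, U, V):
    """Spatial IF shift of Eq. (32), in meters."""
    r_p, r_s = fresnel(theta)
    Rp, Rs = abs(r_p), abs(r_s)
    Ix, Iy = (1 + Q) / 2, (1 - Q) / 2       # Eqs. (8) and (9)
    Delta = np.angle(r_s * np.conj(r_p))    # retardance, Eq. (16)
    bracket = (V * (Rp**2 + Rs**2) / 2
               + Rp * Rs * (V * np.cos(Delta) + U * np.sin(Delta)))
    return -lam / (2 * np.pi) / np.tan(theta) * bracket / (Rp**2 * Ix + Rs**2 * Iy)

theta = np.radians(45.0)
# Diagonal (U = +1) and right-handed circular (V = +1) polarization shift
# perpendicular to the plane of incidence; x-, y-, or unpolarized light
# (U = V = 0) does not shift.
for (Q, U, V) in [(0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1), (1, 0, 0), (0, 0, 0)]:
    print(f"Q,U,V = {Q:+d},{U:+d},{V:+d}: "
          f"Y_sIF = {spatial_if(theta, Q, U, V)*1e9:+.1f} nm")
```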
Figure 9 shows the spatial IF shift as a function of the angle of incidence for different incident polarization states as computed from Eq. (32). Also shown are the shifts in the focal plane (data points) as numerically determined using polarization ray tracing (see Sect. 4), which agree closely with the analytical computations. The small deviations among the results vanish when performing the polarization ray tracing with a Gaussian beam.
Figure 9 illustrates that the spatial IF shift is (somewhat) smaller than the spatial GH shift and is always smaller than the wavelength. At normal incidence, where \(\Delta=180^{\circ}\) (see Fig. 3), the spatial IF shift is zero. For nonzero angles of incidence, where \(\Delta\neq 180^{\circ}\), changes in the SAM occur for incident \(U\)- or \(V\)-polarized light, thus leading to spatial IF shifts. The spatial IF shifts are in opposite directions for opposite signs of \(U\) (e.g., for \(\chi=45^{\circ}\) and \(\chi=135^{\circ}\)) and \(V\) (for right-handed and left-handed circular polarization). The shifts initially become larger with increasing angle of incidence (because \(\Delta\) monotonically decreases), but then become smaller again for (very) large angles of incidence as \(\cot\theta\to 0\) when \(\theta\to 90^{\circ}\), resulting in no shift at \(\theta=90^{\circ}\). The spatial IF shift for \(U\) (\(\chi=45^{\circ}\) and \(\chi=135^{\circ}\)) reaches larger values than that for \(V\), with the maximum for \(U\) occurring at a smaller angle of incidence than the maximum for \(V\). The maxima of the curves are lower for partially polarized light or light with both \(Q\) and \(U\) nonzero (e.g., \(\chi=22.5^{\circ}\), \(\chi=67.5^{\circ}\), \(\chi=112.5^{\circ}\), or \(\chi=157.5^{\circ}\)). Although the light with \(\chi=22.5^{\circ}\) and \(\chi=67.5^{\circ}\) (and similarly for \(\chi=112.5^{\circ}\) and \(\chi=157.5^{\circ}\)) has the same value of \(U\), small differences in the size of the shifts occur due to the dependence on \(Q\) via \(I_{x}\) and \(I_{y}\). The curves of incident light with both \(U\) and \(V\) nonzero (not shown in Fig. 9) are combinations of the curves for the individual Stokes parameters. Finally, for unpolarized light or light polarized in the \(x\)- or \(y\)-direction (i.e., \(Q\)-polarized light), the spatial IF shift is always zero because the incident beam overall carries no SAM and no SAM can be created upon reflection.
Similar to the spatial GH shift, the spatial IF shift is expected to create gradients in phase in the Jones pupil. However, in the Jones pupil expressed in the _xyz_-basis (see Fig. 4, top), phase gradients in the \(y\)-direction are not visible. This is because the spatial IF shift primarily depends on Stokes \(U\) and \(V\) (see Eq. (33)), and therefore results from the complex linear combination of all four Jones-pupil elements in this basis. Nevertheless, a hint of a gradient in the \(y\)-direction is visible in the \(R_{xy}\)-, \(R_{yx}\)-, \(\phi_{xy}\)-, and \(\phi_{yx}\)-elements when considering that a phase difference of \(\pi\) between the left and right sides of the pupil implies that the reflection coefficients on either side have opposite signs. Actual phase gradients in the \(y\)-direction naturally appear in the Jones pupils expressed in the bases of Stokes \(U\) and \(V\), that is, in the Jones pupils expressed in the _daz_- and _rlz_-bases (see Fig. 4, center and bottom). The gradients are visible in the \(\phi_{da}\)-, \(\phi_{ad}\)-, \(\phi_{rr}\)-, and \(\phi_{ll}\)-elements (green borders). The Jones pupils also show the phase gradient in the \(x\)-direction produced by the spatial GH shift (blue borders), with the \(\phi_{da}\)- and \(\phi_{ad}\)-elements exhibiting a combination of gradients in the \(x\)- and \(y\)-directions. In Fig. 4 (center and bottom), the amplitude gradient in the \(x\)-direction due to the angular GH shift is visible as well (red borders). Lastly, we note that although the spatial IF shift does not depend on the f-number, its size relative to the PSF scales as \(1/|F|\), analogous to the spatial GH shift (see Sect. 4.1).
Finally, we show how the spatial IF shift is visible in the PSM (see Fig. 5). As explained in Sect. 4.1, the focal-plane shifts are determined from the image created as a linear combination of the PSM elements in the top row, weighted with the incident Stokes parameters. Because the \((I\!\rightarrow\!I)\)- and \((Q\!\rightarrow\!I)\)-elements are symmetric with respect to the \(x\)-axis (i.e., they are left-right symmetric in Fig. 5), no shift results for unpolarized light or light that is polarized in the \(x\)- or \(y\)-direction. The \((U\!\rightarrow\!I)\)- and \((V\!\rightarrow\!I)\)-elements on the other hand are asymmetric, with positive and negative signals on opposite sides of the \(x\)-axis. For incident light with nonzero \(U\) and/or \(V\), scaled versions of these elements are added to or subtracted from the \((I\!\rightarrow\!I)\)-element, producing a PSF with the centroid shifted in the \(y\)-direction. We note that the relative intensity of the \((U\!\rightarrow\!I)\)-element is larger than that of the \((V\!\rightarrow\!I)\)-element, in agreement with the spatial IF shift being larger for \(U\) than for \(V\) at an angle of incidence of \(45^{\circ}\) (see Fig. 9).
Figure 9: Spatial IF shift as a function of the angle of incidence for reflection off gold at a wavelength of 820 nm as obtained from the closed-form expression of Eq. (32) (curves) and polarization ray tracing (data points). The shift is shown for an incident beam of light that is completely unpolarized, 100% linearly polarized with various angles of linear polarization \(\chi\), and 100% right-handed (\(V=1\)) or left-handed (\(V=-1\)) circularly polarized. The shifts for \(\chi=67.5^{\circ}\) and \(\chi=157.5^{\circ}\) are not shown, but are very close to the shifts for \(\chi=22.5^{\circ}\) and \(\chi=112.5^{\circ}\), respectively. The colors indicate different polarization states than in Figs. 7 and 8.
### Angular Imbert-Fedorov shift
The angular IF shift, \(\Theta_{\rm aIF}\), is an angular deviation of the beam of light upon reflection directed away from the plane of incidence (e.g., Bliokh & Bliokh 2007; Aiello & Woerdman 2008; Hermosa et al. 2011; Gotte & Dennis 2012; Bliokh & Aiello 2013). The definition of the angular IF shift is shown in Fig. 6 (bottom). The angular IF shift is related to the conservation of linear momentum in the direction perpendicular to the plane of incidence, and, similar to the spatial IF shift, results from the differences in induced geometric phase across the beam. Similar to the angular GH shift, the size of the angular IF shift depends on the f-number of the incident beam and is the same whether the beam is reflected in the focus or in the converging or diverging parts of the beam. The physical displacement of the centroid of the beam is zero in the focus and increases with distance from the focus.
The angular IF shift can be calculated as follows:
\[\Theta_{\rm aIF}=\frac{\alpha^{2}}{4}\frac{\cot\theta}{R_{p}^{2}I_{x}+R_{s}^{2}I_{y}}U\left(R_{p}^{2}-R_{s}^{2}\right), \tag{34}\]
where \(R_{p}\) and \(R_{s}\) are computed at the central angle of incidence, and the divergence angle \(\alpha\) is given by Eq. (29). Similar to the angular GH shift, the angular IF shift does not depend on the phases of the reflection coefficients, but only on the amplitudes. Although the angular IF shift has small \(Q\)-dependent corrections through \(I_{x}\) and \(I_{y}\) (see Eqs. (8) and (9)), the shift depends primarily on the incident Stokes \(U\). The angular IF shift consists of separate and opposite shifts for the diagonally and antidiagonally polarized components (because \(U=I_{d}-I_{a}\), see Eq. (6)) and results primarily from the diattenuation. Indeed, if \(Q=0\), that is, \(I_{x}=I_{y}=\nicefrac{{1}}{{2}}\), Eq. (34) reduces to:
\[\Theta_{\rm aIF}=\frac{-\alpha^{2}}{2}U\epsilon\cot\theta, \tag{35}\]
with \(\epsilon\) the diattenuation from Eq. (15). Finally, the physical displacement of the centroid of the beam is given by:
\[Y_{\rm aIF}=z_{\rm f}\,\Theta_{\rm aIF}, \tag{36}\]
with \(z_{\rm f}\) the distance from the focus, similar to Eq. (31).
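A hedged companion sketch evaluates Eqs. (34) and (36), reusing `fresnel_coefficients` and `n_gold` from the sketch in Sect. 4.3. The divergence angle is approximated as \(\alpha\approx 1/(2F)\) for a beam with f-number \(F\), which is an assumption standing in for Eq. (29), so the result is indicative only.

```python
def angular_if_shift(theta, n_complex, f_number, Q=0.0, U=0.0):
    """Angular IF shift of Eq. (34); alpha ~ 1/(2F) is an assumed proxy for Eq. (29)."""
    r_p, r_s = fresnel_coefficients(n_complex, theta)
    R_p, R_s = np.abs(r_p), np.abs(r_s)
    I_x, I_y = (1.0 + Q) / 2.0, (1.0 - Q) / 2.0
    alpha = 1.0 / (2.0 * f_number)
    return (alpha**2 / 4.0) / np.tan(theta) * U * (R_p**2 - R_s**2) \
        / (R_p**2 * I_x + R_s**2 * I_y)

theta_aif = angular_if_shift(np.radians(45.0), n_gold, f_number=61.3, U=1.0)
z_f = 0.1                                  # 10 cm from the focus, for illustration
print(f"Angular IF shift: {theta_aif * 1e6:.3f} urad; "
      f"centroid displacement Y_aIF: {z_f * theta_aif * 1e9:.1f} nm")  # Eq. (36)
```

For the f-number of 61.3 considered in this section, the sketch returns a shift of a few tenths of a microradian, consistent with the sub-microradian values discussed below.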
Figure 10 shows the angular IF shift as a function of the angle of incidence for different incident polarization states as computed from Eq. (34). The shifts as obtained from the exit pupil (data points) using polarization ray tracing (see Sect. 3) are also shown. These numerical shifts are computed using Eq. (36) and are only calculated for 100% polarized light, similarly to the angular GH shifts (see Sect. 4.2). The analytical and numerical results agree closely, with the small deviations vanishing when performing the polarization ray tracing for a Gaussian beam.
Figure 10 shows that the angular IF shift is below a microradian for the particular configuration considered. For incident light with \(U\) nonzero, angular IF shifts occur that are in the opposite direction for opposite signs of \(U\). The shifts are zero for angles of incidence of \(0^{\circ}\) and \(90^{\circ}\). The shape of the curves is related to the diattenuation (roughly the difference between \(R_{s}\) and \(R_{p}\) in Fig. 3), which initially increases with increasing angle of incidence, reaches a maximum, and then decreases again to zero at \(\theta=90^{\circ}\). For incident light with \(U=0\) (i.e., \(\chi=0^{\circ}\), \(\chi=90^{\circ}\), \(\chi=180^{\circ}\), \(V=1\), \(V=-1\), or unpolarized light), the shift is zero for any angle of incidence.
Finally, the amplitude gradients in the \(y\)-direction associated with the angular IF shift are visible in the \(R_{da}\)- and \(R_{ad}\)-elements of the Jones pupil expressed in the \(daz\)-basis (see Fig. 4, center). The gradients of these elements are a combination of gradients in the \(y\)-direction and the \(x\)-direction, with the latter due to the angular GH shift (red borders). Because the angular IF shift is zero in the focus, it is not visible in the PSM.
## 5 Discussion
In Sect. 4 we explained the origin and characteristics of the spatial and angular GH and IF shifts and investigated their size and direction as a function of the angle of incidence and incident polarization state. We also showed that all four beam shifts are fully reproduced by polarization ray tracing as described in Sect. 3 and that the exact beam intensity profile (i.e., whether it is Gaussian or uniform) has a negligible effect. Of the four beam shifts, only the spatial GH and IF shifts are relevant for high-contrast imagers and telescopes because these shifts are visible in the focal plane; the angular GH and IF shifts are not important because, besides a small point-symmetric deformation of the PSF for angles of incidence close to grazing incidence (which do not occur in high-contrast imagers), they have no effect in the focus. We thus find that the polarization structure in the PSF that limits the performance of coronagraphs and the speckle suppression of polarimetric imagers is created by the spatial GH and IF shifts. We note that the effect of these shifts on high-resolution spectroscopy and astrometry of planets should generally be small. The fiber-positioning system of a high-resolution spectrograph maximizes the amount of planet light that enters the fiber, thereby automatically correcting for the beam shifts. And because the beam shifts are similar for astronomical objects at different locations on the science detector, relative astrometry is hardly affected.
In Sect. 5.1 we investigate the polarization structure in the PSF created by the spatial GH and IF shifts. Subsequently, in Sect. 5.2, we examine the effect of the spatial GH and IF shifts on
Figure 10: Angular IF shift as a function of the angle of incidence at a wavelength of 820 nm for a beam of light with an f-number of 61.3 that reflects off gold as obtained from the closed-form expression of Eq. (34) (curves) and polarization ray tracing (data points). The shift is shown for an incident beam that is completely unpolarized, 100% linearly polarized with various angles of linear polarization \(\chi\), and 100% right-handed (\(V=1\)) or left-handed (\(V=-1\)) circularly polarized. The shifts for \(\chi=67.5^{\circ}\) and \(\chi=157.5^{\circ}\) are not shown, but are very close to the shifts for \(\chi=22.5^{\circ}\) and \(\chi=112.5^{\circ}\), respectively. Except for the circular polarization, the colors used indicate the same polarization states as in Fig. 9.
polarimetric measurements. In Sect. 5.3 we then briefly discuss the size of the spatial GH and IF shifts for various mirror materials and wavelengths. After that, in Sect. 5.4, we use our understanding of the spatial GH and IF shifts to discuss and refine the approaches to mitigate the shifts. Finally, we present a table summarizing the properties of the four beam shifts in Sect. 5.5.
### Polarization structure in the PSF due to beam shifts
In this section, we investigate the polarization structure in the PSF created by the spatial GH and IF shifts. This polarization structure must be taken into account when designing the coronagraphs of high-contrast imagers that aim to detect planets in reflected light (Breckinridge et al., 2015). For our analysis, we consider the reflection off a single flat mirror at an angle of incidence of 45\({}^{\circ}\), using the same configuration as examined in Sects. 3 and 4.
The observed light of the stars around which high-contrast imagers search for planets is unpolarized or has a degree of polarization of only several percent (see e.g., Heiles, 2000). For this case of (nearly) unpolarized incident light, the Stokes vector after reflection off a flat mirror is given by the elements in the left column of the PSM in Fig. 5, that is, the \((I\!\rightarrow\!I)\)-, \((I\!\rightarrow\!Q)\)-, \((I\!\rightarrow\!U)\)-, and \((I\!\rightarrow\!V)\)-elements. These elements are the same as those in the top row of the PSM, except for the \((I\!\rightarrow\!U)\)-element, which has the opposite sign. Because the spatial GH and IF shifts follow from these top-row elements (see Sects. 4.1 and 4.3), the polarization-dependent structures visible in the Stokes vector for reflection of incident unpolarized light must be created by the spatial GH and IF shifts. In the following, we refer to the \((I\!\rightarrow\!I)\)-, \((I\!\rightarrow\!Q)\)-, \((I\!\rightarrow\!U)\)-, and \((I\!\rightarrow\!V)\)-elements as the intensity image and the \(Q\)-, \(U\)-, and \(V\)-images, respectively.
As outlined in Sect. 4.1, the spatial GH shift is described by two opposite shifts of different size for the incident light polarized in the \(x\)- and \(y\)-directions, that is, for the incident \(I_{x}\)- and \(I_{y}\)-components of the light. Because unpolarized light can be described as the sum of equal amounts of the \(I_{x}\)- and \(I_{y}\)-components (see Eqs. (4), (8), and (9)), the intensity image consists of two PSF components that are slightly shifted in opposite directions along the \(x\)-axis. As a result, the PSF in intensity is not only shifted (by 15 nm or 1.8% of the wavelength for the configuration considered; see Fig. 7, black curve), but also broadened in the \(x\)-direction. The \(Q\)-image is equal to the difference of the \(I_{x}\)- and \(I_{y}\)-components (see Eq. (5)). Due to the diattenuation (see Eq. (15)), the two components are not reflected by an equal amount. Therefore, an overall negative signal with a minimum of \(\sim\)0.9% remains in the image, which constitutes the instrumental polarization. But because the \(I_{x}\)- and \(I_{y}\)-components are also shifted in opposite directions, this instrumental-polarization signal itself also has a large shift (see also Breckinridge et al., 2015).
As explained in Sect. 4.3, the spatial IF shift is opposite for incident diagonally (\(d\)) and antidiagonally (\(a\)) polarized light (i.e., for positive and negative 100% \(U\)-polarized light) as well as for incident right-handed (\(r\)) and left-handed (\(l\)) circularly polarized light (i.e., for positive and negative 100% \(V\)-polarized light). Unpolarized light can be described as the sum of equal amounts of the \(I_{d}\)- and \(I_{a}\)-components as well as the sum of equal amounts of the \(I_{r}\)- and \(I_{l}\)-components (see Eqs. (4), (6), and (7)). Therefore, the intensity image consists of PSF components that are slightly shifted by equal amounts in opposite directions parallel to the \(y\)-axis. So although the PSF in intensity is not shifted by the spatial IF shift when the incident light is unpolarized (see Fig. 9, black curve), it is broadened in the \(y\)-direction in addition to the broadening in the \(x\)-direction (due to the spatial GH shift). The opposite shifts of the \(I_{d}\)- and \(I_{a}\)-components and the \(I_{r}\)- and \(I_{l}\)-components can also be seen in the \(U\)- and \(V\)-images, respectively, where they create asymmetric structures with positive and negative signals on opposite sides of the \(x\)-axis. For the configuration considered, these structures have values below 0.1% of the intensity (with the \(U\)-image having larger values than the \(V\)-image, as can be expected from Fig. 9). The asymmetric structures are also visible in the \(R^{\prime}_{xy}\)-, \(R^{\prime}_{yx}\)-, \(\phi^{\prime}_{xy}\)-, and \(\phi^{\prime}_{yx}\)-elements of the ARM (see Fig. A.1). Breckinridge et al. (2015) refer to these structures in the ARM as ghost PSFs (see Sect. 1). Our results therefore show that these ghost PSFs are created by the spatial IF shifts and are elliptically polarized. Finally, we note that due to the splitting of the orthogonal circular polarization states in the \(V\)-image, the spatial IF shift is often also referred to as the spin Hall effect of light (e.g., Hermosa et al., 2011; Bliokh & Aiello, 2013; Bliokh & Nori, 2015; Bliokh et al., 2015).
The PSM in Fig. 5 as calculated with polarization ray tracing includes all orders of polarization aberrations. Still, we find that the polarization structure in the PSF for the case of incident unpolarized light is adequately described by the diattenuation (i.e., the instrumental polarization) and the first-order polarization aberrations in the focus, that is, the spatial GH and IF shifts. We therefore conclude that only for curved mirrors do the higher-order polarization aberrations, such as polarization-dependent astigmatism (Breckinridge et al., 2015), come into play. For a discussion of the combined effect of a series of flat mirrors and the polarization aberrations of curved mirrors with normal incidence, we refer to Breckinridge et al. (2015).
### Effect of beam shifts on polarimetric measurements
In this section, we investigate the effect of the spatial GH and IF shifts on polarimetric measurements with high-contrast imagers. The physics literature does not describe beam shifts for the case where unpolarized light is incident on a mirror and where the reflected light is subsequently measured by a polarimeter. However, we can understand this case from our insights into the beam shifts and our results from the polarization ray tracing.
We shall consider a rotatable linear polarizer placed behind the mirror that we analyzed in Sect. 5.1. In that case, the Stokes vector incident on the polarizer is the same Stokes vector as examined in Sect. 5.1: It is equal to the left column of the PSM in Fig. 5. If we then align the transmission axis of the polarizer with the \(x\)-, \(y\)-, \(d\)-, and \(a\)-directions, we measure the \(I_{x}\)-, \(I_{y}\)-, \(I_{d}\)-, and \(I_{a}\)-components of the beam. Also, if we replace the polarizer with a right-handed or left-handed circular polarizer, we measure the \(I_{r}\)- and \(I_{l}\)-components of the beam. As a result, these six measurements are sensitive to exactly the same spatial GH and IF shifts of these components as described in Sect. 5.1. Therefore, when we compute the differences of the \(x\)- and \(y\)-, \(d\)- and \(a\)-, and \(r\)- and \(l\)-measurements, we obtain the \(Q\)-, \(U\)-, and \(V\)-images of the Stokes vector after reflection.
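These six measurements can be written compactly with standard Mueller calculus. The sketch below is a generic textbook relation (not code from this work) showing how the differences of orthogonal analyzer measurements recover the Stokes images; the function name and the angle convention are our own.

```python
import numpy as np

def linear_polarizer_intensity(I, Q, U, phi):
    """Intensity behind an ideal linear polarizer with its transmission axis
    at angle phi (rad); standard Mueller-calculus result, images as arrays."""
    return 0.5 * (I + Q * np.cos(2.0 * phi) + U * np.sin(2.0 * phi))

# Differences of orthogonal measurements recover the Stokes images:
# I_x - I_y = Q  (phi = 0 and 90 deg),  I_d - I_a = U  (phi = 45 and 135 deg),
# and, with circular analyzers, I_r - I_l = V.
```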
Because stars are generally unpolarized, polarimetric measurements strongly suppress the light from the star, thereby making the detection of planets in reflected light easier. However, the maximum gain in contrast from polarimetry is limited by the spatial GH and IF shifts and the polarization structure that they create. Although the instrumental polarization is a larger aberration, this effect is routinely subtracted in the data reduction and/or removed by using a half-wave plate in front of the optical path in current high-contrast imaging polarimeters (Witzel et al., 2011; Canovas et al., 2011; Wiktorowicz et al., 2014; Millar-Blanchaer et al., 2016; de Boer et al., 2020; van Holstein et al., 2020a,b).
To quantify the maximum gain in contrast from polarimetry as limited by the spatial GH and IF shifts, we compute the mirror-induced fractional polarization in \(Q\), \(U\), and \(V\) over the PSF. To this end, we convolve the intensity image and the \(Q\)-, \(U\)-, and \(V\)-images using a top-hat kernel with a diameter equal to the full width at half maximum of the PSF in the intensity image. This diameter is equal to the diameter of the apertures one would use to extract the fluxes of detected planets and determine the noise level in the images (e.g., Mawet et al., 2014). After convolving the images, we compute the instrumental polarization in the \(Q\)-image by dividing the total flux in the \(Q\)-image by the total flux in the intensity image. We then subtract the instrumental polarization from the \(Q\)-image by multiplying the intensity image by the instrumental polarization and subtracting the resulting image from the \(Q\)-image. Subsequently, we compute the images of the normalized Stokes parameters \(q=Q/I\), \(u=U/I\), and \(v=V/I\) by dividing the (instrumental-polarization-subtracted) \(Q\)-, \(U\)-, and \(V\)-images by the intensity image. The resulting images as well as the images of the intensity and the degree and angle of linear polarization \(P\) and \(\chi\) (see Eqs. (11) and (12)) are shown in Fig. 11.
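The procedure just described can be summarized in a short sketch. It assumes the intensity and \(Q\)-, \(U\)-, and \(V\)-images derived from the PSM are available as NumPy arrays; the function names and the kernel construction are implementation choices of ours, not code from this work.

```python
import numpy as np
from scipy.ndimage import convolve

def tophat_kernel(diameter_px):
    """Normalized circular top-hat kernel with the given diameter in pixels."""
    r = diameter_px / 2.0
    yy, xx = np.mgrid[-int(r):int(r) + 1, -int(r):int(r) + 1]
    kernel = (xx**2 + yy**2 <= r**2).astype(float)
    return kernel / kernel.sum()

def normalized_stokes_images(I_img, Q_img, U_img, V_img, fwhm_px):
    """q-, u-, and v-images with the instrumental polarization removed from q."""
    kernel = tophat_kernel(fwhm_px)
    I_c, Q_c, U_c, V_c = (convolve(im, kernel) for im in (I_img, Q_img, U_img, V_img))
    ip = Q_c.sum() / I_c.sum()        # instrumental polarization
    Q_c = Q_c - ip * I_c              # subtract IP scaled with the intensity image
    return Q_c / I_c, U_c / I_c, V_c / I_c
```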
Figure 11 (top) shows that the spatial GH and IF shifts create a significant polarization structure in the PSF. In all images, the PSF core and the Airy rings show an asymmetric structure with successive positively and negatively polarized regions. In the case of the \(u\)- and \(v\)-images, we found that these structures are created by the spatial IF shifts and identified them as the ghost PSFs described by Breckinridge et al. (2015) (see Sect. 5.1). However, by subtracting the instrumental polarization, we have revealed an even stronger asymmetric structure or ghost PSF in the \(q\)-image. In this case, the structure is produced by the spatial GH shifts and is oriented orthogonally to the structures in the \(u\)- and \(v\)-images.
Figure 11 (top) also shows that the PSF has significant fractional-polarization levels, with the largest values in the \(q\)-image and the smallest values in the \(v\)-image. The relative strengths of the fractional polarization in the \(q\)-, \(u\)-, and \(v\)-images are directly related to the relative sizes of the spatial GH and IF shifts at an angle of incidence of 45\({}^{\circ}\) (see Figs. 7 and 9). Figure 11 (bottom) indicates that the degree of linear polarization in the PSF reaches a maximum of 0.56%. Finally, we see that the angle of linear polarization rotates 180\({}^{\circ}\) when moving in a circle around the center of the PSF and that it differs by 90\({}^{\circ}\) between the inner and outer regions of the Airy rings.
The polarization structures in the \(q\)-, \(u\)-, and \(v\)-images limit the local gain in contrast achievable with polarimetry. The degree of (linear) polarization is several tenths of a percent on average; hence the average gain in contrast is a factor of \(\sim\)350 compared to the contrast in intensity, including the effects of seeing. This is because any speckles due to the seeing are also polarized at approximately this level. We stress that the exact numerical values presented in Fig. 11 are only valid for the specific configuration considered. For example, for a series of mirrors and/or beams with smaller f-numbers, the fractional-polarization levels are much higher and therefore the gain in contrast due to polarimetry is much lower.
Finally, as discussed in Sect. 1, the polarimetric speckle suppression of the high-contrast imaging polarimeter SPHERE-ZIMPOL is limited by polarization-dependent beam shifts (Schmid et al., 2018). Indeed, the structures visible in the on-sky polarimetric images of Fig. 26 of Schmid et al. (2018) agree very well with the asymmetric structures (ghost PSFs) in the \(q\)- and \(u\)-images of Fig. 11 (top). Therefore, the polarimetric contrast of SPHERE-ZIMPOL at small angular separations from the star is clearly limited by both the spatial GH and IF shifts.
### Size of beam shifts for various mirror materials and wavelengths
So far we have only considered the beam shifts for reflection off gold at a wavelength of 820 nm. Here we briefly discuss the maximum size of the spatial GH and IF shifts as a function of wavelength from the ultraviolet to the near-infrared for the three most common (bulk) mirror materials used in astronomical telescopes and instruments. We note, however, that actual mirrors in astronomical telescopes and instruments are likely to consist of a stack of thin films and so the exact sizes of the shifts will be different. To compute the shifts, we use the complex refractive indices of gold, silver, and aluminum for the range of wavelengths from Rakic et al. (1998). Figure 12 shows the spatial GH shift for \(x\)-polarized light (from Eq. (26)) and the spatial IF shift for antidiagonally polarized light (from Eq. (32)), both normalized with the wavelength, for angles of incidence \(\theta\) equal to 45\({}^{\circ}\) and 70\({}^{\circ}\).
Figure 12 shows that the spatial GH shift is larger than the spatial IF shift for all mirror materials, that the size of the shifts is always less than the wavelength, and that the shifts relative to the wavelength are larger for shorter wavelengths. Of the three materials, aluminum produces the smallest shifts, whereas gold and silver create larger shifts. For all materials and wavelengths, the spatial GH shift is smaller for \(\theta=45^{\circ}\) than for \(\theta=70^{\circ}\). The same is true for the spatial IF shift, except for the shortest wavelengths where the shift for \(\theta=45^{\circ}\) becomes larger than that for \(\theta=70^{\circ}\).
Figure 11: Images of the PSF structures visible in normalized Stokes \(q\) (without instrumental polarization), \(u\), and \(v\) (top), degree of linear polarization \(P\), angle of linear polarization \(\chi\), and intensity (bottom) at a wavelength of 820 nm for a converging beam of light with an f-number of 61.3 that reflects off gold at an angle of incidence of 45\({}^{\circ}\). The images are convolved with a top-hat kernel with a diameter equal to the full width at half maximum of the PSF in intensity. The images show the core of the PSF and the first three complete Airy rings. The positive \(x\)- and \(y\)-directions are upward and to the left, respectively.
### Mitigation of beam shifts
Breckinridge et al. (2015) provide possible approaches to mitigate polarization aberrations in optical systems, which include using beams of light with large f-numbers, keeping the angles of incidence small, and tuning the coatings of the mirrors. In this section, we discuss and refine these approaches based on our fundamental understanding of the beam shifts. Breckinridge et al. (2015) also discuss the use of possible optical devices that could compensate polarization aberrations (see also Clark & Breckinridge 2011; Sit et al. 2017; Dai et al. 2019), but a discussion of these devices is beyond the scope of this paper. We also note that Schmid et al. (2018) and Hunziker et al. (2020) are able to correct the beam shifts of SPHERE-ZIMPOL by measuring them from on-sky data. This correction significantly reduces the speckle noise at angular separations \(>\)0.6\({}^{\prime\prime}\) from the star, but residuals remain at separations \(<\)0.6\({}^{\prime\prime}\). These residuals are particularly strong for broadband data because the beam shifts are wavelength dependent and thus cannot be corrected with a simple shift for a broad wavelength range. Therefore, mitigating the beam shifts already during the optical design is the preferred approach.
The size of the spatial GH and IF shifts is independent of the f-number \(F\) of the beam of light incident on a mirror. However, as explained in Sect. 4.1, the size of these shifts relative to the size of the PSF is inversely proportional to the f-number. Therefore, to limit the effect of the beam shifts and the polarization structure they create, the absolute f-numbers of the beams falling onto the mirrors in the optical system should be large; the beams should converge or diverge slowly. In the limit of a perfectly collimated beam (\(F=\infty\)), the spatial GH and IF shifts are even negligibly small compared to the size of the PSF. Because any beam of finite extent corresponds to an angular spectrum of plane waves, the spatial GH and IF shifts still occur for a perfectly collimated beam, but the PSF is located at an infinite distance and is infinitely large. We note that magnifications in the optical system after the reflection off the mirror do not affect the size of the beam shifts relative to the PSF, because magnifications change the size of the shifts and the PSF by an equal amount.
The spatial GH and IF shifts are created by the phase gradient and the retardance of the mirror, respectively, at the central angle of incidence of the beam; the amplitudes of the reflection coefficients have only a marginal effect and are therefore not important. Hence, to minimize the spatial GH and IF shifts, the phase gradient should be kept small and the retardance should have a value close to 180\({}^{\circ}\) (see Eqs. (26) and (33)). Fortunately, the values of the phase gradient and the retardance are closely related: A retardance close to 180\({}^{\circ}\) automatically implies small phase gradients in both the \(p\)- and \(s\)-directions. Figure 3 (right) shows that this situation occurs at small angles of incidence. Therefore, to minimize the spatial GH and IF shifts, the central angle of incidence of the beams should be kept small.
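As a quick check of this design rule, one can scan the retardance and diattenuation of a coating model over the angle of incidence. The fragment below reuses `fresnel_coefficients` and `n_gold` from the sketch in Sect. 4.3; the sign conventions of Eqs. (15) and (16) are assumptions, so only the trends are meaningful.

```python
# Retardance and diattenuation versus angle of incidence for a bare gold mirror
thetas = np.radians(np.linspace(1.0, 89.0, 89))
r_p, r_s = fresnel_coefficients(n_gold, thetas)
retardance = np.degrees(np.angle(r_p / r_s)) % 360.0                  # assumed convention
diattenuation = (np.abs(r_s)**2 - np.abs(r_p)**2) \
    / (np.abs(r_s)**2 + np.abs(r_p)**2)                               # assumed sign
# Small angles of incidence keep the retardance close to 180 deg and the
# diattenuation close to zero, the regime that minimizes the spatial shifts.
```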
Keeping the f-numbers large and the central angles of incidence small may not always be possible because optical systems need to fit in a limited volume. Therefore, also the design of the coatings of the mirrors should be considered to minimize the spatial GH and IF shifts. In general, mirror coatings are optimized for large reflectivity to maximize the throughput of the optical system. However, highly reflective coatings almost always have retardances significantly different from 180\({}^{\circ}\) and therefore such coatings produce large spatial GH and IF shifts. But for high-contrast imaging, a high system throughput is of little use when one cannot attain the contrast to image exoplanets. Therefore, a paradigm shift in the design of the mirror coatings for high-contrast imagers is necessary: Rather than maximizing the reflectivity, the retardance should be optimized to have values close to 180\({}^{\circ}\) for the central angle of incidence of the mirror and the wavelength range of interest. For linear polarimeters, such a design philosophy has the added advantage that it also prevents large losses of signal due to strong polarimetric crosstalk, such as those found for the image derotators of SPHERE and SCExAO/CHARIS (de Boer et al. 2020; van Holstein et al. 2020a,b; van 't Hart et al. 2021). The larger instrumental polarization resulting from the suboptimal reflectivity is not an issue because it can be easily removed by adding a half-wave plate to the optical path or subtracting it in the data reduction.
### Table summarizing properties of beam shifts
In Table 1 we present an overview of the properties of the four beam shifts discussed in this paper. For each shift, the table shows the type and nature of the effect, the plane or direction of occurrence, the origin of the shift, the parameters that the shift depends on, the typical size, the effect in the focal plane, and whether or not the shift is important for high-contrast imaging. Table 1 therefore provides a clear summary of the beam shifts and is a useful reference to compare the effects.
Figure 12: Maximum wavelength-normalized spatial GH (_top_) and IF (_bottom_) shifts as a function of wavelength at an angle of incidence \(\theta\) of 45\({}^{\circ}\) and 70\({}^{\circ}\) for reflection off gold, silver, and aluminum. The legend in the bottom panel is valid for both panels. The shifts for gold and silver are only shown for wavelengths longer than 600 nm and 400 nm, respectively, because the reflectivity drops below 90% at shorter wavelengths.
## 6 Conclusions
We used polarization ray tracing to numerically compute the beam shifts for reflection off a flat metallic mirror and compared the resulting shifts to the closed-form expressions of the spatial and angular GH and IF shifts from the physics literature. We find that all four beam shifts are fully reproduced by polarization ray tracing. In particular, we find that the phase gradients in the Jones pupil and the ghost PSFs as described by Breckinridge et al. (2015) are produced by the spatial GH and IF shifts. We also studied the origin and characteristics of the four shifts and the dependence of their size and direction on the beam intensity profile, incident polarization state, angle of incidence, mirror material, and wavelength. An overview of the properties of the four beam shifts is shown in Table 1.
Whereas the spatial GH and IF shifts depend on the phase of the Fresnel reflection coefficients, the angular GH and IF shifts depend on the amplitude. Only the spatial GH and IF shifts are relevant for high-contrast imagers and telescopes because these shifts are visible in the focal plane. The angular GH and IF shifts on the other hand are not important because they only change the intensity distribution across the reflected beam. As such, the angular shifts have no significant effect in the focus and only create a small point-symmetric deformation of the PSF. We thus conclude that only phase aberrations are important; amplitude aberrations have an almost negligible effect.
The spatial GH and IF shifts create a polarization structure in the PSF that reduces the performance of coronagraphs. In fact, we find that the polarization structure for the case of unpolarized light incident on a flat metallic mirror is adequately described by the diattenuation (i.e., the instrumental polarization) and the spatial GH and IF shifts. The polarization structure created by the spatial GH and IF shifts can also significantly reduce the speckle suppression of polarimetric measurements, thereby limiting the maximum attainable gain in contrast from polarimetry. To mitigate the spatial GH and IF shifts in optical systems, the beams of light reflecting off the mirrors should have large f-numbers and small central angles of incidence. Most importantly, mirror coatings should not be optimized for maximum reflectivity, but should instead be designed to have a retardance close to 180\({}^{\circ}\).
Our study provides a fundamental understanding of the polarization aberrations resulting from reflection off flat metallic mirrors in terms of beam shifts. In addition, we have created the analytical and numerical tools to describe these shifts. The next step is to study the combined effect and wavelength dependence of the beam shifts of complete optical paths of (polarimetric) high-contrast imaging instruments and telescopes with multiple inclined and rotating components, including half-wave plates. In particular, we plan to use our tools to create a detailed model of the beam shifts affecting the polarimetric mode of SPHERE-ZIMPOL and enable accurate corrections of on-sky observations. The insights from our work can be applied to understand and improve the performance of many future space- and ground-based high-contrast imagers and polarimeters, such as the Roman Space Telescope, the Habitable Worlds Observatory, GMagAO-X at the Giant Magellan Telescope, PSI at the Thirty Meter Telescope, and PCS (or EPICS) at the Extremely Large Telescope.
###### Acknowledgements.
We thank Prof. Dr. Hans Martin Schmid (ETH Zurich) for providing valuable comments on the manuscript. RGvH thanks ESO for the studentship at ESO Santiago during which part of this project was performed. The research of FS and SPB leading to these results has received funding from the European Research Council under ERC Starting Grant agreement 678194 (FALCONER). This research has made use of NASA's Astrophysics Data System Bibliographic Services; Scipy, a free and open-source Python library used for scientific computing and technical computing (Virtanen et al., 2020); Astropy, a community-developed core Python package for Astronomy (Robitaille et al., 2013; Price-Whelan et al., 2018); and HCIPy, an open-source object-oriented framework written in Python for performing end-to-end simulations of high-contrast imaging instruments (Por et al., 2018).
|
2307.05276 | Unbiased Scene Graph Generation via Two-stage Causal Modeling | Despite the impressive performance of recent unbiased Scene Graph Generation
(SGG) methods, the current debiasing literature mainly focuses on the
long-tailed distribution problem, whereas it overlooks another source of bias,
i.e., semantic confusion, which makes the SGG model prone to yield false
predictions for similar relationships. In this paper, we explore a debiasing
procedure for the SGG task leveraging causal inference. Our central insight is
that the Sparse Mechanism Shift (SMS) in causality allows independent
intervention on multiple biases, thereby potentially preserving head category
performance while pursuing the prediction of high-informative tail
relationships. However, the noisy datasets lead to unobserved confounders for
the SGG task, and thus the constructed causal models are always
causal-insufficient to benefit from SMS. To remedy this, we propose Two-stage
Causal Modeling (TsCM) for the SGG task, which takes the long-tailed
distribution and semantic confusion as confounders to the Structural Causal
Model (SCM) and then decouples the causal intervention into two stages. The
first stage is causal representation learning, where we use a novel Population
Loss (P-Loss) to intervene in the semantic confusion confounder. The second
stage introduces the Adaptive Logit Adjustment (AL-Adjustment) to eliminate the
long-tailed distribution confounder to complete causal calibration learning.
These two stages are model agnostic and thus can be used in any SGG model that
seeks unbiased predictions. Comprehensive experiments conducted on the popular
SGG backbones and benchmarks show that our TsCM can achieve state-of-the-art
performance in terms of mean recall rate. Furthermore, TsCM can maintain a
higher recall rate than other debiasing methods, which indicates that our
method can achieve a better tradeoff between head and tail relationships. | Shuzhou Sun, Shuaifeng Zhi, Qing Liao, Janne Heikkilä, Li Liu | 2023-07-11T14:11:24Z | http://arxiv.org/abs/2307.05276v1 | # Unbiased Scene Graph Generation via
###### Abstract
Despite the impressive performance of recent unbiased Scene Graph Generation (SGG) methods, the current debiasing literature mainly focuses on the long-tailed distribution problem, whereas it overlooks another source of bias, _i.e._, semantic confusion, which makes the SGG model prone to yield false predictions for similar relationships. In this paper, we explore a debiasing procedure for the SGG task leveraging causal inference. Our central insight is that the Sparse Mechanism Shift (SMS) in causality allows independent intervention on multiple biases, thereby potentially preserving head category performance while pursuing the prediction of high-informative tail relationships. However, the noisy datasets lead to unobserved confounders for the SGG task, and thus the constructed causal models are always causal-insufficient to benefit from SMS. To remedy this, we propose Two-stage Causal Modeling (TsCM) for the SGG task, which takes the long-tailed distribution and semantic confusion as confounders to the Structural Causal Model (SCM) and then decouples the causal intervention into two stages. The first stage is causal representation learning, where we use a novel Population Loss (P-Loss) to intervene in the semantic confusion confounder. The second stage introduces the Adaptive Logit Adjustment (AL-Adjustment) to eliminate the long-tailed distribution confounder to complete causal calibration learning. These two stages are model agnostic and thus can be used in any SGG model that seeks unbiased predictions. Comprehensive experiments conducted on the popular SGG backbones and benchmarks show that our TsCM can achieve state-of-the-art performance in terms of mean recall rate. Furthermore, TsCM can maintain a higher recall rate than other debiasing methods, which indicates that our method can achieve a better tradeoff between head and tail relationships.
Scene graph generation, causal inference, counterfactuals, representation learning, long-tailed distribution
## 1 Introduction
Scene Graph Generation (SGG), first proposed by Johnson _et al._[1], is an emerging, critical, and challenging intermediate scene-understanding task and has received increasing attention, especially during the past few years [2, 3], due to its potential to be a bridge between computer vision and natural language processing. SGG aims to generate a structured representation of a scene that jointly describes objects and their attributes, as well as their pairwise relationships, and is typically formulated as a set of \(<\)_subject, relationship, object_\(>\) triplets. Such representations can provide a deep understanding of a scene, and thus SGG has been employed for many downstream tasks, such as image-text retrieval [1, 4], visual question answering [5, 6], visual captioning [7, 8], _etc_.
Although early SGG work has made significant progress [1, 9, 10], it tends, as discussed in [11, 12], to generate biased predictions, _i.e._, informative fine-grained relationships (_e.g._, _standing on_) are predicted as less informative coarse-grained relationships (_e.g._, _on_) due to the long-tailed distribution problem. As an example, we consider the distribution of the relationships in VG150 [13], a popular benchmark in the SGG task, which, as shown in Fig. 1 (a), clearly suffers from severe long-tailed distribution problems. The SGG model, naturally, cannot learn to represent the features of the head and tail relationships simultaneously from the skewed distribution and, hence, easily yields False Predictions (FP) on head relationships (see Fig. 1 (c)).
Many debiasing methods [14, 15, 16, 17, 18] have been proposed to overcome these biased predictions. Unlike earlier work, the primary goal of debiasing methods is to produce unbiased scene graphs. Existing debiasing methods can be roughly categorized into four groups: 1) _Resampling methods_[19, 20] upsample the tail relationships and/or downsample the head relationships to rebalance the training data distribution. 2) _Reweighting methods_[15, 17, 21, 22, 23] revise the contribution of different relationships during training, for instance, by weighting the prediction loss to strengthen the model's representation ability for the tail categories. 3) _Adjustment methods_[14, 24, 25, 26] modify the learned biased model to obtain unbiased predictions, for example, by adjusting the output logits to increase the likelihood of more informative fine-grained relationships. 4) _Hybrid methods_[27, 28, 29] combine some/all of the above methods. Although debiasing research is rather active in the SGG community, the above methods often fall short in preserving head category performance while pursuing the prediction of informative tail relationships [2, 14]. More importantly, the current debiasing methods mainly focus on a single bias, _i.e._, the long-tailed distribution problem, whereas they overlook other biases.
Unlike existing work that focuses on a single bias, in this paper we reveal that there are multiple biases in the SGG task. This stems from our observation that some of the False Predictions (FP) are clearly not caused by the long-tailed distribution bias, _e.g._, FP on tail relationships (see Fig. 1 (c)). We therefore argue that there are other biases that have not yet been observed and explored by current debiasing methods. Cognitive psychology [34] and studies on the human visual
system [35] suggest that humans struggle to distinguish similar objects. Inspired by this fact, we hypothesize that the source of the bias of FP on tail relationships is semantic confusion, which refers to two relationships sharing similar semantic information. For instance, as shown in Fig. 1 (b), both _carrying_ and _holding_ are semantic concepts composed of a person and the objects in his/her hands. To verify our premise, as shown in Fig. 1 (c), we additionally split FP on tail relationships into FP on tail-similar relationships and FP on tail-dissimilar relationships. As expected, most of the FP on tail relationships occur in tail-similar relationships. This suggests that SGG models, like humans, have difficulties in distinguishing similar relationships. As a result, we take semantic confusion as the second bias.
For the multiple biases in the SGG task, we seek causal inference [36, 37], an inference procedure that achieves impressive performance in statistics, econometrics, and epidemiology, which has also attracted significant attention in the deep learning community in recent years. Our central insight is that the Sparse Mechanism Shift (SMS) [38, 39] in causal inference allows independent intervention on multiple biases, thereby potentially preserving head category performance while pursuing higher performance on fine-grained tail relationships. Inspired by the Pearl Causal Hierarchy (PCH) [40], in particular its highest layer, counterfactuals, we pose two questions: 1) What happens if there is no semantic confusion between any two relationships in the observed data? 2) What happens if the distribution of relationships in the observed data is balanced? To answer these two counterfactual questions, we first build Structural Causal Models (SCM) [41, 42], a causal modeling method that can support counterfactual inference, based on the two observed biases as confounders. In practice, unfortunately, not all confounders for the SGG task can be observed, which means that the built SCM is causal-insufficient (see Section 3.2 for a detailed analysis). The causal-insufficiency assumption invalidates the SMS hypothesis because the variables of the SCM are entangled in this case. Put another way, when we use existing causal intervention methods to overcome the observed biases, unobserved biases could be disturbed and bring about unwanted consequences. To allow an SCM with the causal-insufficient assumption to also benefit from the SMS hypothesis, we decouple the causal interventions into two stages and, on this basis, propose a novel causal modeling method, Two-stage Causal Modeling (TsCM), tailored for the SGG task.
Our TsCM consists of two stages: 1) Stage 1, causal representation learning, where, despite the causal-insufficient assumption of the built SCM, we find that similar relationships have inherently sparse properties (see Section 3.3), and, hence, sparse perturbations and independent interventions on the semantic confusion bias are attainable. To achieve this, we propose the Population Loss (P-Loss), which intervenes in the model training process to increase the prediction gap between similar relationships, allowing the trained model to obtain a causal representation that can eliminate the semantic confusion bias. As a result, this stage disentangles the confusion bias from the variables of the built SCM, thereby obtaining a disentangled factorization. 2) Stage 2, causal calibration learning, where, thanks to the disentangled factorization obtained in stage 1, we calibrate the model's causal representation to remove the long-tailed distribution bias. Specifically, this is achieved by our proposed Adaptive Logit Adjustment (AL-Adjustment), which can adaptively learn a set of adjustment factors from the observed data for sparse perturbations and independent interventions.
In summary, the contributions of our work are three-fold:
* We thoroughly analyze the sources of bias in the biased SGG model and experimentally verify the bias, _i.e._, semantic confusion bias, ignored by current debiasing methods.
* We propose a new causal modeling framework, Two-stage Causal Modeling (TsCM), to disentangle the multiple biases from the biased SGG model. Our TsCM decouples the causal intervention into two stages. Stage
Fig. 1: The motivations of TsCM. (a) illustrates the long-tailed distribution bias. (b) shows the semantic confusion bias. (c) reports the True Predictions (TP), False Predictions (FP) on head relationships, and FP on tail relationships (further divided into two cases depending on whether the predictions are tail-similar relationships or not) of the MotifsNet [9] framework. Formally, in this paper, head relationships refer to _on_, _has_, _in_, _of_, and wearing, since they account for more than 50%, and the rest are tail relationships, but note often different grouping criteria were also adopted in the literature [30, 31, 32, 33]. For a given category, similar and dissimilar relationships are those found within and outside its population, respectively. A formal introduction to the concept of population is provided in Section 3.3.2.
1 leverages the proposed P-Loss to remove the semantic confusion bias and obtain a disentangled factorization even in the case of insufficient causality, thereby providing the causal representation that can distinguish similar relationships. Stage 2 further calibrates the causal representation to eliminate the long-tailed distribution bias by using the proposed AL-Adjustment.
* Comprehensive experiments on various SGG backbones and the popular benchmark demonstrate the state-of-the-art mean recall rate of the proposed TsCM. Furthermore, our TsCM can maintain a higher recall rate than other debiasing methods, achieving a better tradeoff between head and tail relationships.
## 2 Related works
### _Scene Graph Generation_
SGG produces a structured representation of the scene by assigning appropriate relationships to object pairs and enables a more comprehensive understanding of the scene for intelligent agents [1]. Most early works focused on employing advanced network structures, _e.g._, Convolutional Neural Networks, Recurrent Neural Networks, and Graph Neural Networks, for better feature extraction and representation [2, 43]. Despite continuous improvements in the recall rate, these methods fall into the trap of biased prediction, _i.e._, informative fine-grained relationships are predicted as less informative coarse-grained relationships. As a result, debiasing methods have attracted unprecedented attention in the SGG community in recent years. To keep focus, here we mainly review the debiasing methods for the SGG task. Existing debiasing methods can be roughly categorized into four groups as follows.
_Resampling methods_ downsample the head category relationships and/or upsample the tail ones to balance the training data distribution, and often prior knowledge, _e.g._, language priors, is taken into account, too. For instance, instead of relying on box-level annotations, SegG [19] argues that pixel-level grounding would naturally be more valuable and, hence, creates segmentation annotations for the SGG dataset with the help of auxiliary datasets. Recently, TransRwt [20] rectified the skewed distribution by creating an enhanced dataset using Internal Transfer and External Transfer, the former for transferring the coarse-grained relationships to the fine-grained ones and the latter for re-labeling the relationships that are missing annotations. However, resampling methods may lead to overfitting (oversampling) or information loss (undersampling) by altering relationship category sample distributions.
_Reweighting methods_ design debiasing loss functions to make the model pay more attention to the tail category relationships or to create advanced networks to improve the representation ability of these relationships. Some works in this group begin by extracting prior knowledge from biased distributions, _e.g._, cognitive structure in CogTree [21], predicate lattice in FGL [30], relationship probability distribution in PPDL [17], _etc._, and then combine the proposed debiasing loss functions to supervise the model training. Besides, GCL [16] presents a Stacked Hybrid-Attention network to achieve intra-modal refinement and intermodal interaction and then enhances the representation ability of tail relationships. Nonetheless, reweighting methods may result in an imbalanced focus on relationship categories and suboptimal, unstable performance due to manual or heuristic weight adjustments.
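As a generic illustration of this family (a plain frequency-weighted cross-entropy, not the specific losses of CogTree, FGL, PPDL, or the P-Loss proposed in this paper), consider the following sketch; the function name and the toy counts are ours.

```python
import torch
import torch.nn.functional as F

def frequency_weighted_ce(logits, targets, class_counts):
    """Cross-entropy whose per-class weights are inversely proportional to
    the class frequencies, so tail relationships contribute more to the loss."""
    weights = class_counts.sum() / (len(class_counts) * class_counts.float())
    return F.cross_entropy(logits, targets, weight=weights)

# Toy usage with K = 3 relationship classes
logits = torch.randn(4, 3)                    # batch of 4 predictions
targets = torch.tensor([0, 2, 1, 2])
counts = torch.tensor([900.0, 80.0, 20.0])    # skewed class counts
loss = frequency_weighted_ce(logits, targets, counts)
```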
_Adjustment methods_ adjust the output of the biased trained model to obtain unbiased predictions. The adjustment procedure can be based on prior knowledge. For example, Logit-reweight [44] uses label frequencies to adjust the logits output by the biased model. DLFE [24] considers the SGG task as a Learning from Positive and Unlabeled data (PU learning) problem, where a target PU dataset contains only positive examples and unlabeled data. However, its prior knowledge, _i.e._, the label frequencies, is obtained iteratively during the training process by the proposed Dynamic Label Frequency Estimation method. Furthermore, adjustment procedures can also be modeled by causal inference. For instance, TDE [14] first builds a causal graph for the SGG task and then draws counterfactual causality from the trained model to infer the effect from the negative bias. Note that adjustment methods increase computational complexity through post-training output adjustments and may cause a decline in the performance of other relationship categories.
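To illustrate the flavor of this family, the hedged sketch below applies a generic frequency-based logit adjustment; this is not the exact procedure of Logit-reweight [44], DLFE [24], TDE [14], or the AL-Adjustment proposed in this paper, and the temperature `tau` is our own illustrative parameter.

```python
import numpy as np

def frequency_adjusted_logits(logits, class_counts, tau=1.0):
    """Shift biased logits by the log class prior; tau sets the adjustment strength."""
    prior = class_counts / class_counts.sum()
    return logits - tau * np.log(prior + 1e-12)

# Toy example: a head relationship (900 samples) vs. a tail one (100 samples).
logits = np.array([2.0, 1.8])          # the biased model slightly prefers the head class
counts = np.array([900.0, 100.0])
print(frequency_adjusted_logits(logits, counts))   # after adjustment, the tail class wins
```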
_Hybrid methods_ combine some/all of the above techniques. HML [28] and CAME [33] first divide the long-tailed distribution into several balanced subsets. HML [28] then trains the model with coarse-grained relationships and finally learns the fine-grained categories, while CAME [33] proposes to use a mixture of experts to handle the different subsets. RTPB [29] enhances the impact of tail relationships on the training process based on a prior bias and designs a contextual encoding backbone network to improve feature extraction capabilities. However, hybrid methods increase implementation complexity, make parameter tuning more challenging, incur higher computational costs, and can suffer from performance instability due to the interplay of the combined methods.
Despite achieving impressive results, the above debiasing methods focus almost exclusively on a single bias, _i.e._, the long-tailed distribution bias, which clearly makes complete debiasing impossible. Moreover, these methods sacrifice head relationships in pursuit of tail category performance. In contrast, our method considers multiple biases and removes them using causal inference techniques. Our causal model TsCM consists of two stages covering both the reweighting and adjustment approaches. Thanks to the SMS mechanism, the two stages in our method independently intervene in different biases, whereas the different stages in existing hybrid methods only intervene in the same bias.
### _Causal Inference_
Causal analysis has achieved encouraging performance in the health, social, and behavioral sciences, _etc._, and it has also attracted increasing attention in the deep learning community in recent years, for tasks such as scene graph generation [14], out-of-distribution generalization [45], and salient object detection [46]. Compared with deep learning models, causal inference approaches can eliminate the influence of biases/confounders when making predictions [36, 47]. A typical causal inference paradigm usually starts with establishing a graphical model, _e.g._, the Structural Causal Model (SCM) [41, 42, 48], which models the dependencies of causal variables. It then intervenes in (_e.g._, via _do_ interventions [41, 47]) these variables to pursue the causal inference of interest. The models can therefore be generalized to different distributions.
It should be emphasized that the above interventions can be achieved because the causal variables satisfy the principles of sparse perturbation and independent intervention, which are the cornerstones of causal inference. The independent intervention principle in causality emphasizes that the conditional distribution of each causal variable given its causes (_i.e._, its mechanism) does not inform or influence the other mechanisms. The sparse perturbation principle states that small distribution changes tend to manifest themselves in a sparse or local way [38, 49]; it extends the independence principle and can be seen as a consequence of it [39, 48]. Benefiting from the independent intervention principle, Scherrer _et al._ [50] decompose the causal mechanism into knowledge modules, which, unlike monolithic models where the full knowledge is learned directly, enables adaptation to distribution shifts by updating only a subset of parameters. Thanks to the sparse perturbation principle, Ahuja _et al._ [49] achieve weakly supervised representation learning by perturbing the causal mechanism sparsely. Inspired by the above work and by the multiple confounders in the SGG task, the model proposed in this paper removes these confounders independently and sparsely, which allows it to preserve the performance of the head categories while pursuing debiasing.
## 3 Methods
### _Overview_
The primary goal of SGG is to model the objects existing in the scene and their pairwise relationships. Most existing works first detect the objects (_e.g._, "man", "horse") in the scene with an object detector and then recognize their pairwise relationships (_e.g._, "riding", "standing on") with a relationship classifier. The object detector extracts information about the objects, such as their bounding boxes, categories, and features. The relationship classifier then predicts a relationship for each pair of objects. Simply put, a scene graph is a set of _visual triples_ of the form \(<\)_subject, relationship, object_\(>\). Formally, let \(\mathcal{D}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{N_{\mathcal{D}}}\) denote the observed data with \(N_{\mathcal{D}}\) samples, where \(\mathbf{x}_{i}\) is the \(i\)-th image and \(\mathbf{y}_{i}\in\mathbb{R}^{N_{i}\times K}\) collects the \(N_{i}\) relationships in this sample; \(\mathbf{y}_{i,j}\) is a \(K\)-dimensional one-hot vector denoting the label of the \(j\)-th relationship in \(\mathbf{x}_{i}\). We therefore need to label the dataset \(\mathcal{D}\) with visual triples \(\{\langle(\mathbf{o}_{i}^{\text{sub}},\mathbf{b}_{i}^{\text{sub}}),\mathbf{y}_{i},(\mathbf{o}_{i}^{\text{obj}},\mathbf{b}_{i}^{\text{obj}})\rangle\}_{i=1}^{N_{\mathcal{D}}}\) to support model training, where \(\mathbf{o}_{i}^{\text{sub}},\mathbf{o}_{i}^{\text{obj}}\in\mathbb{R}^{N_{i}\times C}\) and \(\mathbf{b}_{i}^{\text{sub}},\mathbf{b}_{i}^{\text{obj}}\in\mathbb{R}^{N_{i}\times 4}\), with \(\mathbf{o}_{ij}\) and \(\mathbf{b}_{ij}\) denoting the category and bounding box of the subject or object of the \(j\)-th relationship in \(\mathbf{x}_{i}\), respectively. \(C\) and \(K\) are the numbers of object and relationship categories in the observed data, respectively. Although labeling the visual triples is very costly, early efforts have contributed a few benchmarks to the SGG community, such as Visual Genome [51], Scene Graph [1], and Open Images V4 [52]. However, SGG models trained on these datasets typically suffer from two challenges: (1) semantic confusion and (2) long-tailed distribution.
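For concreteness, the sketch below shows one plausible in-code representation of the visual triples defined above; all names and sizes are illustrative rather than taken from any released SGG codebase.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedObject:
    category: int                            # object class index in [0, C)
    box: Tuple[float, float, float, float]   # (center x, center y, width, height)

@dataclass
class VisualTriple:
    subject: DetectedObject
    obj: DetectedObject
    relationship: int                        # relationship class index in [0, K)

def one_hot(label: int, num_classes: int) -> List[int]:
    """K-dimensional one-hot encoding of a relationship label y_{i,j}."""
    v = [0] * num_classes
    v[label] = 1
    return v

# Example triple <person, standing on, road> with VG150-like sizes C=150, K=50.
triple = VisualTriple(subject=DetectedObject(0, (0.4, 0.3, 0.2, 0.6)),
                      obj=DetectedObject(1, (0.5, 0.8, 1.0, 0.4)),
                      relationship=7)
print(one_hot(triple.relationship, 50))
```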
In this work, we address the above two challenges from the perspective of causal inference. Specifically, in Section 3.2, we first treat the two challenges, _i.e._, semantic confusion and long-tailed distribution, as confounders for the standard SGG framework (see Fig. 2 (a)). Our method therefore leverages data-level confounders to model the causality of the SGG task; compared to model-level confounders [14], this makes our approach model-agnostic, _i.e._, transferable to arbitrary SGG models. We then propose the Population Loss in Section 3.3 to remove the semantic confusion confounder and obtain a disentangled factorization for the causal model (see Stage 1 in Fig. 2 (b)). Next, in Section 3.4, we propose AL-Adjustment to remove the long-tailed distribution confounder and obtain unbiased predictions (see Stage 2 in Fig. 2). Finally, in Section 3.5, we show that our method is Fisher-consistent and highlight the differences from existing statistics-based approaches.
### _Modeling structural causal model_
One can use a variety of frameworks to model the causality of a system of interest, such as Causal Graphical Models (CGM), Structural Causal Models (SCM), and Potential Outcomes (PO) [36, 41, 42, 48]. The causality modeling ability of CGM is limited since it cannot support counterfactual inference. PO works well in systems with binary treatment variables but becomes awkward when dealing with more complex treatment and outcome variables. Considering the limitations of CGM and PO, in this work we model the causality using SCM, a structural framework that contains variables, structural functions, and distributions over the variables (see Definition 1).
**Definition 1** (Structural Causal Model (SCM) [41, 42]).: _A structural causal model \((SCM)\)\(\mathcal{M}\) is a 4-tuple \(\langle\mathcal{V},\mathcal{U},\mathcal{F},P(\mathcal{U})\rangle\), where \(\mathcal{U}=\{U_{1},U_{2},\cdots,U_{n}\}\) is a set of exogenous variables; \(\mathcal{V}=\{V_{1},V_{2},\cdots,V_{n}\}\) is a set of endogenous (observed) variables; \(\mathcal{F}=\{F_{1},F_{2},\cdots,F_{n}\}\) is the set of structural functions determining \(\mathcal{V}\); \(P(\mathcal{U})\) is a distribution over the exogenous variables._
**Definition 2** (Submodel [41, 42]).: _For the SCM \(\mathcal{M}\), let \(\overline{\mathcal{V}}\) be a set of variables in \(\mathcal{V}\), and \(\overline{v}\) a particular value of \(\overline{\mathcal{V}}\). A submodel \(\mathcal{M}_{\overline{v}}\) (of \(\mathcal{M}\)) is the 4-tuple \(\mathcal{M}_{\overline{v}}=\langle\mathcal{V},\mathcal{U},\mathcal{F}_{\overline{v}},P(\mathcal{U})\rangle\), where \(\mathcal{F}_{\overline{v}}=\{F_{i}:V_{i}\notin\overline{\mathcal{V}}\}\cup\{\overline{\mathcal{V}}\leftarrow\overline{v}\}\), and all other components are preserved from \(\mathcal{M}\)._
Endogenous variables are the fundamental elements of an SCM. However, determining the variables \(\mathcal{V}\) in the SGG task is very challenging because its inputs, _i.e._, images, differ greatly from the structured units in traditional causal discovery and reasoning tasks [42, 48]. Inspired by the false predictions on head/tail relationships in Fig. 1, we propose a model-agnostic, data-level formulation that takes semantic confusion and the long-tailed distribution as the confounders. As a result, the induced submodel \(\mathcal{M}_{\bar{v}}\) in our work is \(\langle\mathcal{V},\mathcal{U},\mathcal{F}_{\bar{v}},P(\mathcal{U})\rangle\) (see Definition 2), where \(\mathcal{V}=\{X,Y,S,L\}\); \(X\) is the input (images in the SGG task), \(Y\) is the output (relationships), \(S\) is the semantic confusion confounder, and \(L\) is the long-tailed distribution confounder; \(\mathcal{U}=\{U_{X},U_{Y},U_{S},U_{L}\}\); \(\mathcal{F}_{\bar{v}}=\{F_{1},F_{2},F_{3},F_{4},F_{5}\}\); and \(P(U_{X},U_{Y},U_{S},U_{L})\) is the distribution over the exogenous variables. The SCM is shown in Fig. 3 (Biased SGG), and the structural equations are:
\[\begin{split}& S=P(S),\\ & L=P(L),\\ & X=F_{1}(L,P(L))+F_{2}(S,P(S)),\\ & Y=F_{3}(X,P(X))+F_{4}(L,P(L))+F_{5}(S,P(S)),\end{split} \tag{1}\]
Intuitively, we can directly use interventions to remove the confounders in the SCM (see Definition 3). These interventions, however, do not update \(P(\mathcal{U})\), and thus the intervention results are noisy causal effects in most cases [36, 42].
**Definition 3** (Interventions in SCM [41, 48]).: _An intervention \(\mathit{do}\left(V_{i}:=v^{\prime}\right)\) in an SCM \(\mathcal{M}\) is modeled by replacing the \(i\)-th structural equation by \(V_{i}:=v^{\prime}\), where \(v^{\prime}\) is a \(V_{i}\)-independent value._
**Definition 4** (Counterfactual in SCM [41, 48]).: _A counterfactual in an SCM \(\mathcal{M}\) is modeled by replacing the \(i\)-th structural equation by \(V_{i}:=v^{\prime}\) and updating \(P(\mathcal{U})\), where \(v^{\prime}\) has the same meaning as in Definition 3. The above counterfactual intervention induces the submodel \(\mathcal{M}^{V_{i}}\)._
**Assumption 1** (Causal-insufficient).: _The exogenous variable \(\mathcal{U}\) in \(\mathcal{M}\) satisfies that: \(P(U_{1},\ldots,U_{n})\neq P(U_{1})\times P(U_{2})\times\cdots\times P(U_{n})\)._
Fortunately, the counterfactual, the highest-level reasoning procedure of cognitive ability [36], overcomes this limitation by imagining pre/post-intervention results (see Definition 4). Note that counterfactuals are unfalsifiable since their imaginary results cannot be observed. However, well-established designs (_e.g._, the average treatment effect) in statistics, econometrics, and epidemiology can estimate counterfactuals and have proven effective. The principal difference between an intervention and a counterfactual is that the latter updates \(P(\mathcal{U})\) when manipulating the structural equations [42]. Thus, one can partially rely on the intervention technique to calculate counterfactuals. Inspired by the above facts, we leverage counterfactual inference to eliminate the semantic confusion confounder \(S\) and the long-tailed distribution confounder \(L\), and thereby obtain an unbiased SCM \(\mathcal{M}_{\bar{v}}^{S,L}\) for the SGG task. The counterfactual results can be calculated as:
\[\begin{split}\mathbb{E}[Y\mid X,do\left(S:=s\right)&,do\left(L:=l\right)]\\ &=\mathbb{E}_{S}\mathbb{E}_{L}\mathbb{E}[Y\mid X,s,l]\\ &=\sum_{s}\sum_{l}E[Y\mid X,s,l]P(s)P(l).\end{split} \tag{2}\]
where \(s\)/\(l\) is an \(S\)/\(L\)-independent value. _do_ interventions involve manipulating one or more variables to investigate causal relationships [36]; for example, \(do\left(L:=l\right)\) signifies setting the value of variable \(L\) to \(l\) and observing the outcome. Note that causal sufficiency is an essential assumption for Equation (2), _i.e._, the exogenous variables \(U_{i}\) must be jointly independent: \(P(U_{1},\ldots,U_{n})=P(U_{1})\times P(U_{2})\times\cdots\times P(U_{n})\). Under the causal sufficiency assumption, the endogenous variables \(\mathcal{V}\) in \(\mathcal{M}\) can be formulated as a causal/disentangled factorization:
\[P\left(V_{1},V_{2},\ldots,V_{n}\right)=\prod_{i=1}^{n}P\left(V_{i}\mid\text{ pa}(V_{i})\right), \tag{3}\]
where \(\text{pa}(V_{i})\) are the parents of \(V_{i}\). In the SGG task, the confounders include observed ones, such as the semantic confusion confounder and the long-tailed distribution confounder, as well as unobserved ones caused by missing and mislabeled relationships; the latter have been discussed extensively in the literature [24, 25, 27, 28, 31]. We therefore do not expect, and cannot, model a causally sufficient SCM for the SGG task due to the unobserved confounders. Accordingly, we assume that \(\mathcal{M}_{\bar{v}}\) is causal-insufficient (see Assumption 1), and thus its endogenous variables can only be formulated as an entangled factorization:
\[P\left(V_{1},V_{2},\ldots,V_{n}\right)=\prod_{i=1}^{n}P\left(V_{i}\mid V_{i+1},\ldots,V_{n}\right). \tag{4}\]
**Assumption 2** (Sparse Mechanism Shift (SMS) [38, 39]).: _Small distribution changes tend to manifest themselves sparsely or locally in the causal/disentangled factorization (see Equation (3)), that is, they should usually not affect all factors simultaneously._
Assumption 2 tells us that for a disentangled factorization, a sparse operation allows the learner to remove the confounders and even generalize to unseen distributions. Unfortunately, \(\mathcal{M}_{\bar{v}}\) is causal-insufficient since the SGG task inevitably contains unobserved confounders, and it therefore cannot directly benefit from the SMS hypothesis. In response to this challenge, we decouple causal modeling into two stages to achieve the goal of intervening in the endogenous variables sparsely:
\[\begin{split}\mathbb{E}[Y\mid X,do\left(S:=s\right),do\left(L:=l\right)]\\ =\mathbb{E}_{X}[\underbrace{\mathbb{E}[Y^{\prime}\mid X,do(S:=s)]}_{\text{stage 1}}+\underbrace{\mathbb{E}[Y\mid X,Y^{\prime},do(L:=l)]}_{\text{stage 2}}].\end{split} \tag{5}\]
where stage 1 exploits the inherent sparsity of similar relationships; even under the causal-insufficiency assumption, it can achieve sparse perturbations on variable \(S\) to remove the semantic confusion confounder while learning
Fig. 3: The proposed Structural Causal Model (SCM) for SGG. \(S\) and \(L\) are data-level confounders, _i.e._, the semantic confusion confounder and the long-tailed distribution confounder. \(X\) and \(Y^{\prime}\) are the input image and the predicted relationships, respectively.
Fig. 2: Illustrations of the standard framework and our proposed pipeline. The standard SGG framework is supervised by a cross-entropy loss, while TsCM consists of two stages: Stage 1 performs causal representation learning, which better distinguishes semantic confusion and disentangles the causal-insufficient SCM; we achieve this through the proposed Population Loss. Stage 2 leverages the proposed Adaptive Logit Adjustment to calibrate the factorization obtained from the previous stage. As a result, the adjusted logits prevent the SGG model from biasing towards the head relationships.
a disentangled representation of \(\mathcal{M}_{\bar{v}}\) at the same time (see Section 3.3). Based on the obtained disentangled factorization, stage 2 then manipulates variable \(L\) in a sparse way to remove the long-tailed distribution confounder (see Section 3.4). Both stages are sparse interventions, thereby satisfying the SMS assumption, which allows our method to achieve unbiased predictions while protecting the performance of head relationships. Specifically, stage 1, involving interventions on similar relationships only, naturally does not harm head relationships. Moreover, the adjustment mechanism in stage 2, adaptively learned from stage 1, further ensures the protection of head relationships.
### _Causal representation learning_
#### 3.3.1 Population Loss
In the SGG task, similar relationships are those with only slightly different visual and semantic features. However, existing SGG models perform poorly at discriminating these similar relationships, for instance, easily mispredicting _standing on_ as _walking on_ or vice versa. This is not surprising, as distinguishing such similar relationships is challenging even for humans. Naturally, one may be curious and imagine: would the above error still occur if _standing on_ and _walking on_ were no longer similar? While this only happens in our imagination, it can be formally calculated by the counterfactual (see Definition 4) in the causal inference paradigm:
\[P(y|x,do(S:=s)) \tag{6}\] \[\quad=P(y|x,do(S:=s_{1}))-P(y|x,do(S:=s_{0})),\]
where \((x,y)\) is a particular value of \((X,Y)\), \((X,Y)\sim\mathcal{D}\); \(s_{1}\) and \(s_{0}\) indicate that the relationship \(y\) is similar or dissimilar to other relationships, respectively. The counterfactual formulated in Equation (6) simulates the potential outcomes of the two interventions \(do(S:=s_{1})\) and \(do(S:=s_{0})\). This is valuable because one can often benefit from imagining; for instance, Einstein's thought experiments brought the Special Theory of Relativity to the world. Despite being promising, however, calculating Equation (6) directly is costly. TDE [14] is highly relevant to our work: it simulates two interventions by running inference on pre/post-processed inputs to obtain counterfactual results, but it requires two model inferences for each input, thereby introducing a considerable cost. In contrast, the Average Treatment Effect (ATE) [53] estimates counterfactuals in one shot by leveraging statistical knowledge. Thanks to its estimation efficiency, ATE is a commonly used technique in causal inference; for example, [54] explores ATE estimation with binary treatments and continuous outcomes, and [55] discusses the propensity score and when the average treatment effect is identifiable from observational data. Inspired by ATE, in this paper we use statistical knowledge from the observed data \(\mathcal{D}\) to estimate the counterfactuals:
\[\mathbb{E}[y|x,do(S:=s)] \tag{7}\] \[\quad=\mathbb{E}_{X}[\mathbb{E}(Y|X,do(S:=s_{1}))-\mathbb{E}(Y|X,do(S:=s_{0}))].\]
**Definition 5** (Population in SGG).: _Let \(y=\{y_{1},y_{2},\cdots,y_{K}\}\) be the relationship categories in the observed data \(\mathcal{D}\), and let \(\mathbf{P}_{\alpha}^{y_{i}}\) be the population of \(y_{i}\). Then, \(\mathbf{P}_{\alpha}^{y_{i}}\) is a relationship set containing the \(\alpha\) most similar relationships to \(y_{i}\)._
Formally, we first extract the knowledge \(\mathcal{P}_{\alpha}\), \(\mathcal{P}_{\alpha}=\{\mathbf{P}_{\alpha}^{y_{i}}\}_{i=1}^{K}\), from the observed data (the calculation of \(\mathcal{P}_{\alpha}\) is given in Section 3.3.2). \(\mathbf{P}_{\alpha}^{y_{i}}\) is the population of relationship \(y_{i}\), a relationship set containing the statistical knowledge of similar relationships; see Definition 5. Inspired by the penalization of head categories in [22, 23, 44], we penalize similar relationships based on the knowledge \(\mathcal{P}_{\alpha}\). Specifically, we discard the widely used cross-entropy loss \(\ell\) and supervise the SGG model \(f\) through the proposed Population Loss (P-Loss) \(\hat{\ell}\):
\[\hat{\ell}(\mathcal{P}_{\alpha},y,f(x))=\log[1+\sum_{y^{\prime}\in\mathbf{P}_{\alpha}^{y}}\frac{\pi_{y^{\prime}}}{\pi_{y}}\times e^{(f_{y^{\prime}}(x)-f_{y}(x))} \tag{8}\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\sum_{y^{\prime}\notin\mathbf{P}_{\alpha}^{y},y^{\prime}\neq y}e^{(f_{y^{\prime}}(x)-f_{y}(x))}],\]
\[\theta^{*}=\underset{\theta}{\arg\min}\,\mathbb{E}_{(x,y)\sim\mathcal{D}}[\hat{\ell}(\mathcal{P}_{\alpha},y,f(x))], \tag{9}\]
where \(\pi\) denotes the category frequencies on the observed data \(\mathcal{D}\) and \(\theta^{*}\) is the parameter set of the SGG model \(f_{\theta^{*}}\). \(x\) and \(y\) (\(y=\{y_{i}\}_{i=1}^{K}\)) are the input (_e.g._, an image) and the output relationship categories, respectively. For a relationship \(y_{i}\), the P-Loss \(\hat{\ell}\) penalizes its confusing relationships with the help of the statistical knowledge \(\pi\) and \(\mathbf{P}_{\alpha}^{y_{i}}\) extracted from the observed data \(\mathcal{D}\). The penalty term in Equation (8) can be seen as \(do(S:=s)\) in Equation (6) since it intervenes on the sparse \(\mathcal{P}_{\alpha}\) and makes the model more capable of distinguishing between similar relationships. In other words, the P-Loss can remove the confounder \(S\) in \(\mathcal{M}_{\bar{v}}\). Thus, the counterfactual can be estimated from the statistical knowledge \(\mathcal{P}_{\alpha}\) as:
\[P(y|x,do(S:=s)) =P(y|x,\mathcal{P}_{\alpha},\pi) \tag{10}\] \[=f_{\theta^{*}}(x)\,.\]
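To make Equation (8) concrete, the PyTorch sketch below implements the P-Loss as a logit-adjusted cross-entropy: writing the loss as \(\log\sum_{y^{\prime}}w_{y^{\prime}}e^{f_{y^{\prime}}(x)}-f_{y}(x)\), with \(w_{y^{\prime}}=\pi_{y^{\prime}}/\pi_{y}\) on the population of \(y\) and \(w_{y^{\prime}}=1\) elsewhere, recovers Equation (8) exactly. This is our reading of the loss under the stated assumptions, not the authors' released implementation, and all names are illustrative.

```python
import torch

def population_loss(logits, targets, populations, freqs):
    """P-Loss of Equation (8) (sketch).

    logits:      (B, K) raw relationship scores f(x)
    targets:     (B,)   ground-truth relationship indices y
    populations: (K, alpha) long tensor; populations[y] holds P_alpha^y,
                 the alpha relationships most similar to y
    freqs:       (K,)   relationship frequencies pi on the observed data
    """
    B, K = logits.shape
    # Multiplicative weights: pi_{y'}/pi_y on the population of y, 1 elsewhere.
    w = logits.new_ones(B, K)
    pop = populations[targets]                        # (B, alpha)
    ratio = freqs[pop] / freqs[targets].unsqueeze(1)  # pi_{y'}/pi_y
    w.scatter_(1, pop, ratio)
    w.scatter_(1, targets.unsqueeze(1), 1.0)          # target keeps weight 1
    # log[sum_{y'} w_{y'} e^{f_{y'}}] - f_y  ==  Equation (8)
    adjusted = logits + w.log()
    target_logit = logits.gather(1, targets.unsqueeze(1)).squeeze(1)
    return (torch.logsumexp(adjusted, dim=1) - target_logit).mean()
```

When \(\pi_{y^{\prime}}=\pi_{y}\) for all population members, the weights collapse to 1 and the loss reduces to the standard cross-entropy, matching the intuition that the penalty only acts on confusable, frequency-imbalanced pairs.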
**Assumption 3** (Similar relationships are sparse).: _Let \(y=\{y_{i}\}_{i=1}^{K}\) be the relationships in the observed data \(\mathcal{D}\). For any relationship \(y_{i}\) (\(i\in\{1,2,\cdots,K\}\)), there exist \(k\) relationships similar to it. Then, it holds that \(k\ll K\)._
Despite achieving the goal of calculating the counterfactuals, it is critical to note that under our causal-insufficiency assumption the manipulation \(do(S:=s)\) in Equation (7) may perturb other variables simultaneously, since the entangled factorization of \(\mathcal{M}_{\bar{v}}\) does not satisfy the SMS hypothesis. Fortunately, we empirically find that similar relationships possess a sparsity property (see Assumption 3). Based on our observations, relationships within an SGG dataset are often similar to a few specific relationships but not to most others. For example, _standing on_ is similar only to _on_, _walking on_, _etc._, but differs from most other relationships. Therefore, Assumption 3, which shows that similar relationships have the sparsity the SMS hypothesis highlights, guarantees that even an entangled factorization can be intervened on sparsely through the confounder \(S\). In other words, even though \(\mathcal{M}_{\bar{v}}\) is causal-insufficient, our proposed P-Loss can still intervene on \(S\) without perturbing other variables, such as the confounder \(L\).
Furthermore, we argue that \(do(S:=s)\) partially disentangles \(\mathcal{M}_{\bar{v}}\), as it removes the confounder \(S\) and allows us to obtain a better causal representation. Thus, the endogenous variables in the induced submodel \(\mathcal{M}_{\bar{v}}^{S}\) can be roughly formulated as a disentangled factorization:
\[P(X,Y,L)\doteq P(X)\times P(Y)\times P(L). \tag{11}\]
Disentangled factorization is considered to be the key to representation learning due to its potential in abstract reasoning, interpretability, generalization to unseen scenarios, _etc._ Although it has attracted significant attention, evaluating the disentangled representation is still challenging [56]. We will design experiments in the ablation study (Section 4.3) to demonstrate our disentangled claim in Equation (11).
#### 3.3.2 Calculate the relationship-populations
As a supplement to Section 3.3.1, this section shows how to calculate the relationship-populations \(\mathcal{P}_{\alpha}\). For \(\mathcal{P}_{\alpha}\), we give three assumptions (Assumptions 4-6) based on insights from causality as well as the characteristics of the relationships in the SGG task.
**Assumption 4** (Relationship-population is learner independent).: _Let \(f_{\theta_{1}}\) and \(f_{\theta_{2}}\) be SGG models parameterized by \(\theta_{1}\) and \(\theta_{2}\), respectively. Then, \(\mathbb{E}[\mathcal{P}_{\alpha}\mid f_{\theta_{1}}]=\mathbb{E}[\mathcal{P}_{\alpha}\mid f_{\theta_{2}}]\)._
**Assumption 5** (Relationship-population is distribution insensitive).: _Let \(D_{obs}^{1}\) and \(D_{obs}^{2}\) be two observed datasets. Then, \(\mathbb{E}[\mathcal{P}_{\alpha}\mid D_{obs}^{1}]=\mathbb{E}[\mathcal{P}_{\alpha}\mid D_{obs}^{2}]\)._
**Assumption 6** (Relationship-population is asymmetric).: _Let \(y_{i}\) and \(y_{j}\) be two relationships. Then, \(y_{i}\in\mathbf{P}_{\alpha}^{y_{j}}\nRightarrow y_{j}\in\mathbf{P}_{\alpha}^{y_{i}}\)._
Assumption 4 states that whether two relationships are similar is independent of the SGG model; _standing on_ and _walking on_, for instance, should share the same features no matter which model we use. In light of this, we should not involve any SGG model when calculating \(\mathcal{P}_{\alpha}\). Assumption 5 states that there is no correlation between the distribution of two relationships and their similarity. We highlight this because the SGG dataset often suffers from the long-tailed distribution issue at both the relationship and object levels, which may perturb the calculation of \(\mathcal{P}_{\alpha}\). Assumption 6 is inspired by the fact that cause and effect are directed, _i.e._, the cause determines the effect, but not vice versa.
To satisfy Assumption 4, we design a model-agnostic relationship feature extraction method. Consider two objects \(o_{i}\) and \(o_{j}\) with bounding boxes \([b_{x}^{i},b_{y}^{i},b_{h}^{i},b_{w}^{i}]\) and \([b_{x}^{j},b_{y}^{j},b_{h}^{j},b_{w}^{j}]\), where, taking \(o_{i}\) as an example, \((b_{x}^{i},b_{y}^{i})\) is the center point and \(b_{w}^{i}\) and \(b_{h}^{i}\) are the width and height. We denote the model-agnostic feature of the relationship between these two objects as \(\psi_{<o_{i},o_{j}>}\):
\[\begin{split}&[\frac{2(b_{x}^{i}+b_{x}^{j})-(b_{w}^{i}+b_{w}^{j})}{4b_{h}^{i}},\frac{2(b_{y}^{i}+b_{y}^{j})-(b_{h}^{i}+b_{h}^{j})}{4b_{h}^{i}},\\ &\frac{2(b_{x}^{i}+b_{x}^{j})+(b_{w}^{i}+b_{w}^{j})}{4b_{h}^{i}},\frac{2(b_{y}^{i}+b_{y}^{j})+(b_{h}^{i}+b_{h}^{j})}{4b_{h}^{i}},\frac{b_{h}^{j}}{b_{h}^{i}},\frac{b_{w}^{j}}{b_{w}^{i}}].\end{split} \tag{12}\]
Our proposed model-agnostic feature emphasizes the relative position between object pairs, inspired by the fact that relative position is intrinsically linked to the relationships in SGG. For example, the object pairs of _standing on_ are arranged up-down, while those of _behind_ are arranged front-back. Thanks to the numerators of Equation (12), \(\psi_{<o_{i},o_{j}>}\) is position-insensitive, as the upper left corners of all object pairs are moved to the same coordinate. Besides, the denominators of Equation (12) normalize the object pairs, ensuring that \(\psi_{<o_{i},o_{j}>}\) is scale-insensitive. The position/scale-insensitive design of our model-agnostic feature extraction overcomes the problem that the distance to the camera can make the same relationship vary greatly in appearance, thereby generalizing to unseen object pairs.
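As a worked example, the sketch below computes \(\psi_{<o_{i},o_{j}>}\) from two boxes in the (center, width, height) parameterization used above. It follows our re-typeset form of Equation (12), so the exact normalization may differ slightly from the original; the function name is ours.

```python
import numpy as np

def pair_feature(box_i, box_j):
    """Model-agnostic relationship feature psi_<oi,oj> (Equation (12), sketch).

    Boxes are (center x, center y, width, height). The first four entries
    encode the extent of the combined object pair after shifting and
    normalizing; the last two encode the relative scale of o_j w.r.t. o_i.
    """
    xi, yi, wi, hi = box_i
    xj, yj, wj, hj = box_j
    return np.array([
        (2 * (xi + xj) - (wi + wj)) / (4 * hi),
        (2 * (yi + yj) - (hi + hj)) / (4 * hi),
        (2 * (xi + xj) + (wi + wj)) / (4 * hi),
        (2 * (yi + yj) + (hi + hj)) / (4 * hi),
        hj / hi,   # relative height
        wj / wi,   # relative width
    ])

# Example: a "standing on" pair is roughly arranged up-down.
print(pair_feature((0.5, 0.3, 0.2, 0.6), (0.5, 0.8, 1.0, 0.4)))
```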
Before extracting the relationship features from the observed data with the above method, however, one problem needs to be addressed: the object-level long-tailed distribution may perturb the model-agnostic feature extraction. Consider an example with \(90\%\)\(<\)_people, standing on, road_\(>\) and \(10\%\)\(<\)_people, standing on, beach_\(>\) in the observed data. Directly fusing the features of _standing on_ will undoubtedly bias towards \(<\)_people, standing on, road_\(>\) due to its dominance in the observed data, which is detrimental to extracting the feature of _standing on_. We address this problem by extracting object-to-object level features \(\mathbf{\xi}_{y}\) and then normalizing them. Our method is inspired by Inverse Probability Weighting (IPW) [57], a bias correction method commonly used in statistics, econometrics, epidemiology, _etc._ As a result of this improvement, the proposed method satisfies Assumption 5, since it eliminates the disturbance of the distribution from the feature extraction. Specifically, \(\mathbf{\xi}_{y}\in\mathbb{R}^{C\times C\times K\times 6}\) is a four-dimensional statistic:
\[\mathbf{\xi}_{y}=\left[\begin{array}{cccc}\mathbf{\xi}_{y}^{(1,1)}&\mathbf{\xi}_{y}^{(1,2 )}&\ldots&\mathbf{\xi}_{y}^{(1,C)}\\ \ldots&\ldots&\ldots&\ldots\\ \mathbf{\xi}_{y}^{(C,1)}&\mathbf{\xi}_{y}^{(C,2)}&\ldots&\mathbf{\xi}_{y}^{(C,C)}\\ \end{array}\right], \tag{13}\]
where \(\mathbf{\xi}_{y}^{(i,j)}=\{\xi_{y_{t}}^{(i,j)}\}_{t=1}^{K}\) contains the normalized features of the relationships \(<o_{i},y_{t},o_{j}>\), calculated as:
\[\xi_{y_{t}}^{(i,j)}=\xi_{y_{t}}^{<o_{i},o_{j}>}/|\xi_{y_{t}}^{<o_{i},o_{j}>}|, \tag{14}\]
\(\xi_{y_{t}}^{<o_{i},o_{j}>}\) and \(|\xi_{y_{t}}^{<o_{i},o_{j}>}|\) are the fused features and the number of all relationships \(<o_{i},y_{t},o_{j}>\) in the observed data \(\mathcal{D}\), respectively. We then calculate the feature of each relationship; for instance, for the \(t\)-th relationship \(\xi_{y_{t}}\):
\[\xi_{y_{t}}=\sum_{i=1}^{C}\sum_{j=1}^{C}\xi_{y_{t}}^{(i,j)}/C^{2}. \tag{15}\]
For relationship-populations \(\mathcal{P}_{\alpha}\), \(\mathcal{P}_{\alpha}=\{\mathbf{P}_{\alpha}^{y_{i}}\}_{i=1}^{K}\), the population of \(y_{t}\) can be calculated as:
\[\mathbf{P}_{\alpha}^{y_{t}}=\underset{t^{\prime}\in\{1,2,\cdots,K\},\,t^{\prime}\neq t}{\operatorname{arg\,small}_{\alpha}}\left\|\xi_{y_{t}}-\xi_{y_{t^{\prime}}}\right\|, \tag{16}\]
where \(\operatorname{arg\,small}_{\alpha}\) is a computation kernel that selects similar relationships based on feature distances; Equation (16) takes the \(\alpha\) relationships with the smallest feature distance from \(\xi_{y_{t}}\). Our method guarantees that head and tail relationship categories have the same population scale. Moreover, different relationships yield different feature distances in Equation (16), resulting in asymmetric relationship-populations and thus satisfying Assumption 6.
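Putting Equations (13)-(16) together, a possible NumPy sketch for building the relationship-populations from accumulated pair features is given below. Array shapes follow the text; the aggregation details and names are our assumptions.

```python
import numpy as np

def relationship_populations(feats, counts, alpha=5):
    """Build P_alpha from accumulated pair features (Equations (13)-(16), sketch).

    feats:  (C, C, K, 6) summed pair features xi_{y_t}^{<oi,oj>}
    counts: (C, C, K)    number of observed triples per (subject, object, rel)
    Returns a (K, alpha) index array; row t lists the alpha relationships
    whose features are closest to relationship y_t.
    """
    C, _, K, d = feats.shape
    # Equation (14): IPW-style normalization per (subject, object, relationship).
    norm = feats / np.maximum(counts[..., None], 1)
    # Equation (15): average over all C^2 object-pair combinations.
    xi = norm.reshape(C * C, K, d).mean(axis=0)                       # (K, 6)
    # Equation (16): the alpha nearest relationships by feature distance.
    dist = np.linalg.norm(xi[:, None, :] - xi[None, :, :], axis=-1)   # (K, K)
    np.fill_diagonal(dist, np.inf)  # a relationship is not in its own population
    return np.argsort(dist, axis=1)[:, :alpha]
```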
### _Causal calibration learning_
#### 3.4.1 Adaptive Logit Adjustment
Fig. 1 illustrates the severe long-tailed distribution problem in the SGG task. Current SGG models therefore easily predict informative fine-grained relationships as less informative coarse-grained relationships; for instance, _looking at_ is predicted as _near_. To address this, let us resort to imagination again: if one had collected balanced data, or, in particular, if _looking at_ and _near_ shared the same distribution in the observed data \(\mathcal{D}\), would the above error still occur? Similar to Equation (6), this question can also be answered with the counterfactual:
\[\begin{split} P(y|x,do(L:=l))\\ &=P(y|x,do(L:=l_{1}))-P(y|x,do(L:=l_{0})),\end{split} \tag{17}\]
where \(l_{1}\) and \(l_{0}\) represent the head and tail categories, respectively; as such, Equation (17) simulates the potential outcomes of the two interventions \(do(L:=l_{1})\) and \(do(L:=l_{0})\). Inspired by logit adjustment [26, 58, 59], in which class prior knowledge (also known as adjustment factors) extracted from the training data is used to adjust the model outputs, we
extract the statistical knowledge from the observed data \(\mathcal{D}\) via model \(f_{\theta^{*}}\) to estimate counterfactuals:
\[\begin{split}&\mathbb{E}[y|x,do(L:=l)]\\ &=\mathbb{E}_{X}[\mathbb{E}(Y|X,do(L:=l_{1}))-\mathbb{E}(Y|X,do(L:= l_{0}))].\end{split} \tag{18}\]
Specifically, we leverage the extracted statistical knowledge, the adjustment factors \(\mathbf{T}_{\beta}\), to maximize the recall rate of \(f_{\theta^{*}}\) on the observed data (the computation of \(\mathbf{T}_{\beta}\) is described in Section 3.4.2). Compared to existing logit adjustment methods, our adjustment factors \(\mathbf{T}_{\beta}\) not only extract knowledge from \(\mathcal{D}\) but, more importantly, adapt to the SGG model \(f_{\theta^{*}}\). Owing to this, our adjustment factors outperform traditional adjustment methods by a clear margin (see the experiments in Section 4.3). Nevertheless, the knowledge extracted directly from \(f_{\theta^{*}}\) and \(\mathcal{D}\) is still suboptimal. We attribute this to the background relationships perturbing model training, which results in 1) the logits of the foreground relationships being less discriminative, and 2) logits alternating between positive and negative values, which makes it impossible for the learned factors to correctly adjust some predictions (see the qualitative results in Fig. 7). Foreground relationships in the SGG task are those within annotated triples in the observed data, while background relationships are those absent between object pairs. To overcome these problems, we augment the logits of \(f_{\theta^{*}}\):
\[\tilde{f}_{\theta^{*},y}(x)=e^{f_{\theta^{*},y}(x)}\times f_{\theta^{*},y}^{ \text{bg}}(x), \tag{19}\]
where \(f_{\theta^{*},y}^{\text{bg}}(x)\) is the logit of the corresponding background relationship output by \(f_{\theta^{*}}\). It acts as a guidance term that makes the augmented logits \(\tilde{f}_{\theta^{*},y}(x)\) more discriminative, inspired by the impressive performance of traditional adjustment methods in simple classification tasks. The augmented logits allow us to obtain better adjustment factors, and the final prediction \(y_{x}\) for input \(x\) can then be calculated as:
\[y_{x}=\operatorname*{arg\,max}_{y\in\{y_{1},\cdots,y_{k}\}}\{(\tilde{f}_{ \theta^{*},y}(x)\times\mathbf{T}_{\beta})_{y\in\mathbf{T}_{\beta}}\cap( \tilde{f}_{\theta^{*},y}(x))_{y\notin\mathbf{T}_{\beta}}\}. \tag{20}\]
Consider a typical false prediction: for an input \(x\) belonging to the tail category \(y_{i}\), the largest and second-largest output logits correspond to \(y_{j}\) and \(y_{i}\), where \(y_{j}\) is a head category. Our proposed adjustment factors can correct this false prediction by penalizing the logits corresponding to the head categories and encouraging those of the tail categories, thus eliminating the negative effect caused by the long-tailed distribution problem. Our proposed AL-Adjustment acts as \(do(L:=l)\) in Equation (18) to remove the confounder \(L\) in the induced submodel \(\mathcal{M}_{\bar{v}}^{S}\). Therefore, the counterfactual estimated from the statistical knowledge \(\mathbf{T}_{\beta}\) is:
\[\begin{split} P(y|x,do(L:=l))&=P(y|x,\mathbf{T}_{ \beta},\tilde{f}_{\theta^{*}})\\ &=\tilde{f}_{\theta^{*},y}(x)\times\mathbf{T}_{\beta}\end{split}. \tag{21}\]
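At inference time, Equations (19)-(21) amount to the short PyTorch sketch below, assuming the factors \(\mathbf{T}_{\beta}\) have already been learned as in Section 3.4.2. The rank-based indexing is our interpretation of \(T_{\beta}^{y_{i},l_{j}}\) adjusting the \(j\)-th largest logit of category \(y_{i}\); it is illustrative, not official code.

```python
import torch

def al_adjust(logits, bg_logits, T, beta=3):
    """Augment and adjust logits (Equations (19)-(20), sketch).

    logits:    (B, K) foreground relationship logits f_{theta*}
    bg_logits: (B, K) corresponding background-relationship logits f^bg
    T:         (K, beta) adjustment factors; T[k, j] applies to category k
               when its (augmented) logit ranks j-th
    """
    aug = logits.exp() * bg_logits                 # Equation (19)
    adjusted = aug.clone()
    ranks = aug.argsort(dim=1, descending=True)    # per-sample logit ranking
    for j in range(beta):                          # adjust the top-beta only
        cats = ranks[:, j]                         # category at rank j
        vals = aug.gather(1, cats.unsqueeze(1)).squeeze(1) * T[cats, j]
        adjusted.scatter_(1, cats.unsqueeze(1), vals.unsqueeze(1))
    return adjusted.argmax(dim=1)                  # prediction of Equation (20)
```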
In Section 3.3.1, we showed that \(\mathcal{M}_{\bar{v}}^{S}\) can be decomposed into a disentangled factorization. As a result, manipulating a factor in \(\mathcal{M}_{\bar{v}}^{S}\) should, in most cases, not affect all factors simultaneously (SMS hypothesis, see Assumption 2). We therefore argue that \(do(L:=l)\) in Equation (18) does not affect the variables \(X\) and \(Y\), and the induced submodel \(\mathcal{M}_{\bar{v}}^{S,L}\) obtained in this stage can be further roughly formulated as a disentangled factorization:
\[P(X,Y)\doteq P(X)\times P(Y). \tag{22}\]
We will design experiments in the ablation study (Section 4.3) to demonstrate our disentangled claim in Equation (22).
#### 3.4.2 Calculate the adjustment factors
This subsection shows how to extract the statistical knowledge \(\mathbf{T}_{\beta}\) from the observed data \(\mathcal{D}\) and the SGG model \(\tilde{f}_{\theta^{*}}\), which is used to adjust the logits to remove the confounder \(L\) in the submodel \(\mathcal{M}_{\bar{v}}^{S}\). For the adjustment factors \(\mathbf{T}_{\beta}\), we make two assumptions (Assumptions 7-8).
**Assumption 7** (Adjustment effect should be sparse).: _Let \(\mathbf{T}_{\beta}^{y_{i}}\) and \(\mathbf{T}_{\beta}^{y_{j}}\) be the adjustment factors of the \(i\)-th and \(j\)-th prediction logits, respectively. Then, \(P(y_{i}\mid x)=P(y_{i}\mid x,\mathbf{T}_{\beta}^{y_{j}})\) and \(P(y_{j}\mid x)=P(y_{j}\mid x,\mathbf{T}_{\beta}^{y_{i}})\)._
**Assumption 8** (Adjustment factors should be independent of each other).: _Let \(\mathbf{T}_{\beta}^{y_{i}}\) and \(\mathbf{T}_{\beta}^{y_{j}}\) be the adjustment factors of the \(i\)-th and \(j\)-th prediction logits, respectively. Then, \(P(y\mid x,Max(\mathbf{T}_{\beta}^{y_{i}},\mathbf{T}_{\beta}^{y_{j}}))=Max(P(y\mid x,\mathbf{T}_{\beta}^{y_{i}}),P(y\mid x,\mathbf{T}_{\beta}^{y_{j}}))\), where \(Max(\cdot,\cdot)\) is a computation kernel that takes the maximum value at the corresponding positions of the two sets._
Assumption 7 is inspired by the SMS hypothesis, and it holds due to the disentangled factorization (Equation (11)) obtained in stage 1. This assumption also stems from our insight that false predictions in most cases lie among the few largest logits (see Table VIII). As such, Assumption 7 allows us to correct false predictions with sparse adjustment factors, whereas existing methods adjust all logits. Assumption 8 views the SMS hypothesis at the relationship level to highlight the causality between relationships. An intuition for this assumption: to correct a false prediction in a binary classification task, we only have to adjust one of the two logits. Assumption 8 allows us to learn the adjustment factors of each relationship independently.
Our proposed adaptive adjustment factors \(\mathbf{T}_{\beta}\) is a two-dimensional \((K\times\beta)\) matrix:
\[\mathbf{T}_{\beta}=\left[\begin{array}{cccc}T_{\beta}^{y_{1},l_{1}}&T_{\beta}^{y_{1},l_{2}}&\cdots&T_{\beta}^{y_{1},l_{\beta}}\\ \cdots&\cdots&\cdots&\cdots\\ T_{\beta}^{y_{K},l_{1}}&T_{\beta}^{y_{K},l_{2}}&\cdots&T_{\beta}^{y_{K},l_{\beta}}\end{array}\right], \tag{23}\]
where \(T_{\beta}^{y_{i},l_{j}}\) adjusts the \(j\)-th largest prediction logit when it corresponds to the \(i\)-th relationship, and it can be calculated as:
\[T_{\beta}^{y_{i},l_{j}}=\underset{T_{\beta}^{y_{i},l_{j}}\in\mathcal{T}}{\operatorname{arg\,max}}\,(\underbrace{\operatorname{TP}_{(x,y)\sim\mathcal{D}^{y_{i},l_{j}}}(\tilde{f}_{\theta^{*},y}(x)\times T_{\beta}^{y_{i},l_{j}})}_{\text{true predictions with adjustment}}-\underbrace{\operatorname{TP}_{(x,y)\sim\mathcal{D}^{y_{i},l_{j}}}(\tilde{f}_{\theta^{*},y}(x))}_{\text{true predictions without adjustment}}), \tag{24}\]
where \(\mathcal{T}\subset\mathbb{R}\) is the candidate set and \(\operatorname{TP}_{(X,Y)}(f)\) is a computation kernel that calculates the number of true predictions (_e.g._, the recall rate R@K in the SGG task) of model \(f\) on dataset \((X,Y)\). \(\mathcal{D}^{y_{i},l_{j}}\) contains all samples whose \(j\)-th largest prediction logit corresponds to the \(i\)-th category according to \(f_{\theta^{*}}\), and it can be further divided into true predictions \(\mathcal{TD}_{\text{obs}}^{y_{i},l_{j}}\) and false predictions \(\mathcal{FD}_{\text{obs}}^{y_{i},l_{j}}\). Thus, our method maximizes the recall rate of model \(f_{\theta^{*}}\) on the observed data \(\mathcal{D}\) through the adjustment factors learned in Equation (24).
We then propose an upper-lower-bound-based method to compute Equation (24) quickly. As shown in Fig. 4, for each relationship in \(\mathcal{TD}_{\text{obs}}^{y_{i},l_{j}}\), we can compute a lower bound that keeps the prediction correct. Similarly, for each relationship in \(\mathcal{FD}_{\text{obs}}^{y_{i},l_{j}}\), we can obtain an upper bound below which the prediction is adjusted to the correct one. We denote the lower and upper bounds of \(\mathcal{D}^{y_{i},l_{j}}\) as \(\underline{\vee}_{y_{i},l_{j}}\) and \(\bar{\wedge}_{y_{i},l_{j}}\), respectively. Clearly, we only need \(T_{\beta}^{y_{i},l_{j}}\) to satisfy as many bounds as possible to maximize the number of correct predictions. Therefore, maximizing the recall rate of model \(f_{\theta^{*}}\) on the observed data \(\mathcal{D}\) via the adjustment factor is equivalent to finding the factor that satisfies the most bounds in \(\underline{\vee}_{y_{i},l_{j}}\) and \(\bar{\wedge}_{y_{i},l_{j}}\). As a result, \(T_{\beta}^{y_{i},l_{j}}\) in Equation (24) can also be calculated by:
\[T_{\beta}^{y_{i},l_{j}}=\underset{t\in\mathcal{T}}{\operatorname{arg\,max}}\,(\sum_{m=1}^{|\underline{\vee}_{y_{i},l_{j}}|}\mathds{1}(t\geq\underline{\vee}_{y_{i},l_{j}}^{m})+\sum_{n=1}^{|\bar{\wedge}_{y_{i},l_{j}}|}\mathds{1}(t<\bar{\wedge}_{y_{i},l_{j}}^{n})), \tag{25}\]
where \(\mathds{1}(\cdot)\) is an indicator function (equal to 1 if the expression is _true_ and 0 if _false_) and \(|\cdot|\) is the size of the given set. However, we further find that the long-tailed distribution problem may perturb the adjustment effect of \(T_{\beta}^{y_{i},l_{j}}\): if \(y_{i}\) is a head category, \(|\underline{\vee}_{y_{i},l_{j}}|\gg|\bar{\wedge}_{y_{i},l_{j}}|\), and if it is a tail category, then \(|\bar{\wedge}_{y_{i},l_{j}}|\gg|\underline{\vee}_{y_{i},l_{j}}|\). This is due to the biased training caused by the skewed distribution. To address this issue, for relationship \(y_{i}\) we randomly sample the same number (_i.e._, \(min(|\underline{\vee}_{y_{i},l_{j}}|,|\bar{\wedge}_{y_{i},l_{j}}|)\)) of lower and upper bounds separately from the original bounds to ensure unbiased adjustment factors. Therefore, Equation (25) becomes:
\[T_{\beta}^{y_{i},l_{j}}=\underset{t\in\mathcal{T}}{\operatorname{arg\,max}}\,(\sum_{m=1}^{min(|\underline{\vee}_{y_{i},l_{j}}|,|\bar{\wedge}_{y_{i},l_{j}}|)}\mathds{1}(t\geq\underline{\vee}_{y_{i},l_{j}}^{m})+\sum_{n=1}^{min(|\underline{\vee}_{y_{i},l_{j}}|,|\bar{\wedge}_{y_{i},l_{j}}|)}\mathds{1}(t<\bar{\wedge}_{y_{i},l_{j}}^{n})). \tag{26}\]
Note that in Equation (26), \(T_{\beta}^{y_{i},l_{j}}\) is an interval with extremely close upper and lower bounds, so selecting any value within this interval as the adjustment factor has a negligible impact on the results. Consequently, in this paper we randomly sample a value from \(T_{\beta}^{y_{i},l_{j}}\) as the learned adjustment factor. Finally, for each relationship, we learn only \(\beta\) adjustment factors corresponding to the top-\(\beta\) positions of the prediction logits. This sparse adjustment mechanism enables our method to satisfy Assumption 7. Meanwhile, the adjustment factors for each relationship are learned independently by Equation (26), so our method also satisfies Assumption 8.
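For a single cell \(T_{\beta}^{y_{i},l_{j}}\), the bound-counting search of Equation (26) can be sketched as follows. The balanced subsampling mirrors the \(min(\cdot,\cdot)\) summation limits above, and the discrete candidate grid is our simplification of the search space \(\mathcal{T}\); names and constants are illustrative.

```python
import numpy as np

def solve_factor(lower_bounds, upper_bounds, seed=0):
    """Sketch of Equation (26) for one (y_i, l_j) cell.

    lower_bounds: thresholds from true predictions, require t >= bound
    upper_bounds: thresholds from false predictions, require t <  bound
    Both sets are subsampled to the same size to counter head/tail imbalance.
    """
    rng = np.random.default_rng(seed)
    lo = np.asarray(lower_bounds, dtype=float)
    up = np.asarray(upper_bounds, dtype=float)
    n = min(len(lo), len(up))
    lo = rng.choice(lo, size=n, replace=False)
    up = rng.choice(up, size=n, replace=False)
    # The satisfied-bound count only changes at the bound values themselves,
    # so scanning candidates at (or just below) those values suffices.
    candidates = np.concatenate([lo, up - 1e-9])
    scores = [(t >= lo).sum() + (t < up).sum() for t in candidates]
    return float(candidates[int(np.argmax(scores))])

# Toy example: three kept-correct thresholds, three correcting thresholds.
print(solve_factor([0.8, 0.9, 1.0], [1.2, 1.3, 0.7]))
```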
### _Discussion_
This subsection first shows that our method is Fisher consistent, _i.e._, models based on popular learning strategies (_e.g._, empirical risk minimization (ERM)) lead to the Bayes-optimal classification rule that minimizes the balanced error [60, 61]. This is very important for the SGG task, as it prevents the model from heading down a misleading path, _i.e._, biasing towards predicting head categories for a high recall rate. We then highlight the differences and advantages of the proposed causal framework relative to existing methods.
#### 3.5.1 Fisher consistency
To demonstrate that our method is Fisher consistent, we start from the Bayes perspective. [44] thoroughly explored the relationship between the balanced class-probability function \(P^{\mathrm{bal}}(y\mid x)\) and the unbalanced one \(P(y\mid x)\), defining \(P^{\mathrm{bal}}(x\mid y)\propto P(x\mid y)/P(x)\). In the SGG task, however, we find that models suffer from confounders other than the long-tailed distribution, such as the semantic confusion confounder, as well as unobserved ones like missing and mislabeled relationships. As such, here we define:
\[P^{\mathrm{bal}}(x\mid y)\propto P(x\mid y)/P(x)P(S)P(U_{o}), \tag{27}\]
where \(U_{o}\) denotes the unobserved confounders. Also, consider:
\[P^{\mathrm{bal}}(y\mid x)=(P^{\mathrm{bal}}(x\mid y)P^{\mathrm{bal}}(y))/P^{ \mathrm{bal}}(x), \tag{28}\]
we therefore have:
\[\begin{split} P^{\mathrm{bal}}(y\mid x)\propto(P(x\mid y)P(X)P^{ \mathrm{bal}}(y))/(P(X)\\ P(Y)P(S)P(U_{o})P^{\mathrm{bal}}(x)).\end{split} \tag{29}\]
For fixed class-conditionals \(P(x|y)\), the optimal predictions will not be affected by \(P(Y)\)[44], hence:
\[P^{\mathrm{bal}}(y\mid x)\propto P(y\mid x)/(P(S)P(U_{o})P^{\mathrm{bal}}(x)). \tag{30}\]
Then, according to the SMS hypothesis (Assumption 2) and the small distribution changes hypothesis in [39], there exists an intervention \(\mathcal{I}\) such that:
\[\operatorname*{arg\,max}_{y\in\{y_{1},\cdots,y_{K}\}}f_{\theta^{\mathcal{I}}}(x )=\operatorname*{arg\,max}_{y\in\{y_{1},\cdots,y_{K}\}}(\tilde{f}_{\theta^{*},y}(x)\times\mathbf{T}_{\beta}). \tag{31}\]
Note that we cannot model the intervention \(\mathcal{I}\) directly since the induced submodel \(\mathcal{M}_{\bar{v}}\) can only be formulated as an entangled factorization. We define the adjustment factors corresponding to intervention \(\mathcal{I}\) as \(\mathbf{T}_{\beta}^{\mathcal{I}}\), that is:
\[\operatorname*{arg\,max}_{y\in\{y_{1},\cdots,y_{K}\}}f_{\theta^{\mathcal{I}}}( x)=\operatorname*{arg\,max}_{y\in\{y_{1},\cdots,y_{K}\}}(\tilde{f}_{\theta^{*},y}(x) \times\mathbf{T}_{\beta}^{\mathcal{I}}). \tag{32}\]
Based on Theorem 1 in [62], we have
\[\operatorname*{argmax}_{y\in\{y_{1},\cdots,y_{K}\}}\tilde{f}_{\theta^{*},y}(x )=\operatorname*{argmax}_{y\in\{y_{1},\cdots,y_{K}\}}P(x\mid y),\]
thus:
\[\operatorname*{arg\,max}_{y\in\{y_{1},\cdots,y_{K}\}}f_{\theta^{\mathcal{I}}}(x)=\operatorname*{arg\,max}_{y\in\{y_{1},\cdots,y_{K}\}}((P(y\mid x)P(x)/P(y))\times\mathbf{T}_{\beta}^{\mathcal{I}}). \tag{33}\]
Considering both Equation (30) and Equation (33), when
\[\mathbf{T}_{\beta}^{\mathcal{I}}\propto P(Y)/(P(S)P(X)P(U_{o})P^{\mathrm{bal}}(x )), \tag{34}\]
then
\[\operatorname*{arg\,max}_{y\in\{y_{1},\cdots,y_{K}\}}f_{\theta^{\mathcal{I}}}( x)=\operatorname*{arg\,max}_{y\in\{y_{1},\cdots,y_{K}\}}P^{\mathrm{bal}}(y\mid x). \tag{35}\]
This means that our manipulations of the confounders \(S\) and \(L\) lead to a minimal balanced error (_i.e._, a maximal mR@K in the SGG task), and thus our method is Fisher consistent.
Fig. 4: The proposed upper-lower-bound-based method for calculating Equation (24). Each false/true prediction logit corresponds to an upper/lower bound. The optimal adjustment factor is the one that satisfies the most bounds.
#### 3.5.2 Sparsity and independency
Causal representation learning (stage 1) in our proposed causal framework is inspired by loss-weighting methods [63, 64, 65, 66], and causal calibration learning (stage 2) is inspired by post-hoc adjustment approaches [26, 58, 59]. In all of these heuristic works, statistical priors (_e.g._, category frequencies) are extracted from the observed data to calibrate the decision boundaries. In contrast, we leverage the extracted statistical knowledge to estimate counterfactuals that eliminate the confounders \(S\) and \(L\): the statistical knowledge in stage 1 is extracted via the proposed model-agnostic method, and that in stage 2 is adaptively extracted from the learned model \(f_{\theta^{*}}\) and the observed data \(\mathcal{D}\). Moreover, our method differs fundamentally from these works in that the interventions using this knowledge are sparse and independent, which is the key to preserving head category performance while pursuing the prediction of highly informative tail relationships.
Causal inference models the observed data with modular knowledge, and interventions on partial knowledge can achieve rapid distribution changes [38]. These sparse perturbations simulate human learning, _i.e._, the reuse of most knowledge, and thus have great potential for practical applications, especially open-world learning. Both stage 1 and stage 2 of our causal framework are sparse, controlled by \(\alpha\) in \(\mathcal{P}_{\alpha}\) and \(\beta\) in \(\mathbf{T}_{\beta}\), respectively. The former means that each relationship takes only its \(\alpha\) most similar relationships as its population, so Equation (8) sparsely adjusts the loss for very few relationships. The latter means that only the top-\(\beta\) prediction logits are adjusted, so Equation (20) is a sparse adjustment technique.
Independent Causal Mechanisms (ICM) [50] tells us that changing one causal mechanism does not change the others. Note that ICM requires causal sufficiency, which the SGG task does not satisfy. However, as analyzed in Section 3.3.1, our proposed P-Loss can intervene in the confounder \(S\) without losing the independence property, thanks to the sparse nature of similar relationships; the result of stage 1 can thus be roughly formulated as a disentangled factorization. Furthermore, the different logit positions of \(\mathbf{T}_{\beta}\) are learned independently, which enables independent intervention in stage 2.
In addition, [44] shows that loss-reweighting and logit-reweighting are identical and that merging them brings no further gain; the latter even cancels out the improvement from the former in some cases. However, the post-hoc adjustment factors in our causal framework are adaptively learned from the model obtained in the previous stage and thus always yield positive adjustment effects. More importantly, our method can make the decision boundaries between similar relationships clearer, which traditional methods cannot achieve. We compare the two merge routes and their boundary adjustment processes in Fig. 5.
## 4 Experiments
### _Implementation_
_Datasets._ We evaluate our method on VG150 [13], a subset of the VG dataset [51] that includes the most frequent 150 object categories and 50 relationship classes. VG150 has about 94k images, and we follow the split in [14], _i.e.,_ 62k training images, 5k validation images, and 26k test images.
_Evaluation modes._ Following MotifsNet [9], we use three evaluation modes: 1) Predicate classification (PredCls). This mode requires the SGG model to predict relationships given the ground truth boxes and object classes. 2) Scene Graph Classification (SGCls). This mode requires the SGG model to predict object classes and relationships given the ground truth boxes. 3) Scene Graph Detection (SGDet). This mode requires the SGG model to predict object classes, boxes, and relationships.
_Evaluation metrics._ Following [11, 12, 13], we adopt three evaluation metrics: 1) Recall rate (R@K). R@K is one of the most commonly used evaluation metrics and calculates the fraction of times the correct relationship is predicted among the top K most confident relationship predictions. Typically, K is set to 20, 50, and 100, _i.e._, R@20, R@50, and R@100. 2) Mean recall rate (mR@K). mR@K calculates the mean of the R@K over all relationship categories. Compared with R@K, mR@K more comprehensively evaluates the model performance on all relationship categories, especially the tail relationships. 3) Mean of R@K and mR@K (MR@K). Due to the severely long-tailed distribution, an SGG model only needs to perform well on a few head categories to achieve a high R@K. Conversely, although some current works achieve a high mR@K, they greatly sacrifice the R@K of the head categories, which is undesirable since the head categories account for significant proportions of realistic scenes. We therefore aim for a favorable tradeoff between R@K and mR@K, allowing the model to accommodate both head and tail relationships and thereby enhancing the practical value of the generated scene graph. For this purpose, we compute the mean of R@K and mR@K, denoted MR@K, to evaluate the model comprehensively.
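For reference, the toy sketch below shows how the three metrics relate. It matches relationships by label only, omitting the box/triplet matching of the full SGG evaluation protocol, and is meant purely to illustrate the definitions.

```python
import numpy as np

def recall_metrics(gt_labels, ranked_preds, k=50):
    """Toy illustration of R@K, mR@K, and MR@K.

    gt_labels:    (N,) ground-truth relationship indices
    ranked_preds: per-sample prediction indices sorted by confidence;
                  a hit is counted when the ground truth is in the top-k
    """
    gt = np.asarray(gt_labels)
    hits = np.array([g in preds[:k] for g, preds in zip(gt, ranked_preds)])
    r_at_k = hits.mean()                                   # R@K
    per_class = [hits[gt == c].mean() for c in np.unique(gt)]
    mr_at_k = float(np.mean(per_class))                    # mR@K
    return r_at_k, mr_at_k, (r_at_k + mr_at_k) / 2         # MR@K

# Example: 4 relationships, top-2 predictions each.
print(recall_metrics([0, 0, 1, 2], [[0, 3], [2, 0], [1, 4], [5, 6]], k=2))
```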
_Training and testing._ We evaluate our model-agnostic method on popular SGG backbones, including MotifsNet [9], VCTree [11], and Transformer [67], in the repository provided by [14]. We follow most of the settings of this repository: 1) The object detector in the pipeline is Faster R-CNN [68] with a ResNeXt-101-FPN backbone [69]. The detector was trained on the VG training set and achieves 28.14 mAP on the VG test set. 2) The detector is then frozen and outputs the bounding boxes, categories, and features of the detected objects for the relationship classifier in the pipeline. The classifier is supervised by our proposed P-Loss and optimized by SGD. For MotifsNet [9] and VCTree [11], the batch size and initial learning rate are set to 12 and 0.01, while for Transformer [67] they are 16 and 0.001. We set \(\alpha\) in Equation (8) to 5 unless otherwise mentioned. 3) In the testing phase, the logits are first augmented by Equation (19) and then adjusted via Equation (20) with the adjustment factors learned by Equation (26) to obtain the final predictions. The \(\beta\) in Equation (23) is set to 3.
### _Comparison with state-of-the-art_
#### 4.2.1 Backbones and baselines
_Backbones._ We evaluate our proposed method with three popular SGG backbones, _i.e.,_ MotifsNet [9], VCTree [11], and Transformer
Fig. 5: The merging effect of statistics-based methods (top) and our proposed causal components (bottom).
\begin{table}
\begin{tabular}{l|c c c c|c c c c|c c c c} \hline \hline & \multicolumn{4}{c}{PredCls} & \multicolumn{4}{c}{SCCls} & \multicolumn{4}{c}{SCDet} \\ & mR@20 & mR@50 & mR@100 & AVG\({}_{mk}\) & mR@20 & mR@50 & mR@100 & AVG\({}_{mR}\) & mR@20 & mR@50 & mR@100 & AVG\({}_{mR}\) \\ \hline \hline MotifsNet (backbone) [9] & \(12.2\) & \(15.5\) & \(16.8\) & \(14.8\) & \(7.2\) & \(9.0\) & \(9.5\) & \(8.6\) & \(5.2\) & \(7.2\) & \(8.5\) & \(7.0\) \\ TDE [14]\({}^{\circ\dagger}\) CVF20 & \(18.5\) & \(25.5\) & \(29.1\) & \(24.4\) & \(9.8\) & \(13.1\) & \(14.9\) & \(12.6\) & \(5.8\) & \(8.2\) & \(9.8\) & \(7.9\) \\ SegC [19]\({}^{\dagger}\) (\({}^{\dagger}\) ACV20) & \(14.5\) & \(18.5\) & \(20.2\) & \(17.7\) & \(8.9\) & \(11.2\) & \(12.1\) & \(10.7\) & \(6.4\) & \(8.3\) & \(9.2\) & \(8.0\) \\ BPL+SA [27]\({}^{\dagger}\) (\({}^{\dagger}\) ACV20) & \(24.8\) & \(29.7\) & \(31.7\) & \(28.7\) & \(14.0\) & \(16.5\) & \(17.5\) & \(16.0\) & \(10.7\) & \(13.5\) & \(15.6\) & \(13.3\) \\ CogTee [21]\({}^{\dagger}\) (\({}^{\dagger}\) ACV20) & \(20.9\) & \(26.4\) & \(29.0\) & \(25.4\) & \(12.1\) & \(14.9\) & \(16.1\) & \(14.4\) & \(7.9\) & \(10.4\) & \(11.8\) & \(10.0\) \\ DLFE [24]\({}^{\dagger}\) (\({}^{\dagger}\) ACV20) & \(22.1\) & \(26.9\) & \(28.8\) & \(25.9\) & \(12.8\) & \(15.2\) & \(15.9\) & \(14.6\) & \(8.6\) & \(11.7\) & \(13.8\) & \(11.4\) \\ EBM-loss [15]\({}^{\dagger}\) (\({}^{\dagger}\) ACV20) & \(14.2\) & \(18.0\) & \(19.5\) & \(17.2\) & \(8.2\) & \(10.2\) & \(11.0\) & \(9.8\) & \(5.7\) & \(7.7\) & \(9.3\) & \(7.6\) \\ Loss-reweight [44]\({}^{\dagger}\) (\({}^{\dagger}\) ACV20) & \(26.5\) & \(32.9\) & \(35.3\) & \(31.6\) & \(13.8\) & \(17.4\) & \(19.3\) & \(16.8\) & \(9.2\) & \(12.8\) & \(16.5\) & \(12.8\) \\ Logit-reweight [44]\({}^{\dagger}\) (\({}^{\dagger}\) ACV20) & \(22.2\) & \(15.4\) & \(16.7\) & \(14.8\) & \(6.4\) & \(7.6\) & \(8.3\) & \(7.4\) & \(4.5\) & \(5.9\) & \(7.7\) & \(6.0\) \\ HML [28]\({}^{\ddagger}\) (\({}^{\ddagger}\) ACV20) & \(30.1\) & \(36.3\) & \(38.7\) & \(35.0\) & \(17.1\) & \(20.8\) & \(22.1\) & \(20.0\) & \(10.8\) & \(14.6\) & \(17.3\) & \(14.2\) \\ FGPL [30]\({}^{\dagger}\) (\({}^{\dagger}\) ACV20) & \(24.3\) & \(33.0\) & \(37.5\) & \(31.6\) & \(17.1\) & \(21.3\) & \(22.5\) & \(20.3\) & \(11.1\) & \(15.4\) & \(18.2\) & \(14.9\) \\ TransRow [20]\({}^{\ddagger}\) (\({}^{\ddagger}\) ACV20) & \(-\) & \(35.8\) & \(39.1\) & \(-\) & \(-\) & \(21.5\) & \(22.8\) & \(-\) & \(-\) & \(15.8\) & \(18.0\) & \(-\) \\ GCL [16]\({}^{\dagger}\) (\({}^{\ddagger}\) (\({}^{\ddagger}\)) & \(30.5\) & \(36.1\) & \(38.2\) & \(34.9\) & \(18.0\) & \(20.8\) & \(21.8\) & \(20.2\) & \(12.9\) & \(16.8\) & \(19.3\) & \(16.3\) \\ PPDL [27]\({}^{\dagger}\) (\({}^{\ddagger}\) ACV20) & \(-\) & \(32.2\) & \(33.3\) & \(-\) & \(-\) & \(17.5\) & \(18.2\) & \(-\) & \(-\) & \(11.4\) & \(13.5\) & \(-\) \\ RTPB [29]\({}^{\ddagger}\) (\({}^{\ddagger}\) ACV20) & \(28.8\) & \(35.3\) & \(37.7\) & \(33.9\) & \(16.3\) & \(19.4\) & \(22.6\) & \(19.4\) & \(9.7\) & \(13.1\) & \(15.5\) & \(12.8\) \\ NICE [21]\({}^{\ddagger}\) (\({}^{\ddagger}\) (\({}^{\ddagger}\)) & \(-\) & \(30.0\) & \(32.1\) & \(-\) & \(16.4\) & \(17.5\) & \(-\) & \(-\) & \(10.4\) & \(12.7\) & \(-\) \\ PKO [20]\({}^{\dagger}\) (\({}^{\ddagger}\) (\({}^{\ddagger}\)) & \(25.0\) & \(31.4\) & \(34.0\) & \(30.1\) & \(14.1\) & \(17.6\) & \(19.1\) & \(16.9\) & \(9.6\) & \(13.4\) & \(16.1\) & \(13.0\) \\ LS-KD(lnet) [32]\({}^{\ddagger}\) (\({}^{\ddagger}\) (\({}^{\ddagger}\)) & \(-\) & \(24.1\) & \(27.4\) & \(-\) & \(-\) & \(13.8\) & \(15.2\) & \(-\) & \(-\) & 
\(9.7\) & \(11.5\) & \(-\) \\ CAME [33]\({}^{\ddagger}\) & \(18.1\) & \(26.2\) & \(32.0\) & \(25.4\) & \(10.5\) & \(15.1\) & \(18.0\) & \(14.5\) & \(6.7\) & \(9.3\) & \(12.1\) & \(9.4\) \\ \hline \hline TsCM\({}^{\circ\dagger}\) & \(31.8\) & \(37.8\) & \(40.9\) & \(36.8\) & \(18.7\) & \(22.4\) & \(23.8\) & \(21.6\) & \(13.7\) & \(17.4\) & \(19.7\) & \(16.9\) \\ \hline \hline \end{tabular}
\end{table} TABLE I:
[67]. Specifically, we first replace the loss function of the above backbones with the P-Loss to supervise model training. We then leverage AL-Adjustment to optimize the logits output by the trained model during inference.
_Baselines._ We classify existing baselines from two perspectives to comprehensively evaluate our proposed framework. 1) Debiasing perspective. We divide the baselines into four groups: resampling methods, reweighting methods, adjustment methods, and hybrid methods. Resampling methods include SegG [19] and TransRwt [20]. Reweighting methods include CogTree [21], EBM-loss [15], Loss-reweight [44], FGPL [30], GCL [16], PPDL [17], and LS-KD(Iter) [32]. Adjustment methods include TDE [14], DLFE [24], Logit-reweight [44], and PKO [25]. Hybrid methods include BPL+SA [27], HML [28], RTPB [29], NICE [31], and CAME [33]. We group from this perspective because stage 1 of our framework is a reweighting method and stage 2 is an adjustment method, making TsCM a hybrid method. 2) Model perspective. We divide the baselines into model-agnostic and model-dependent methods. The former group includes TDE [14], Loss-reweight [44], Logit-reweight [44], BPL+SA [27], CogTree [21], DLFE [24], HML [28], FGPL [30], TransRwt [20], SegG [19], EBM-loss [15], PPDL [17], NICE [31], PKO [25], and LS-KD(Iter) [32]; the latter group includes GCL [16], RTPB [29], and CAME [33]. Model-agnostic methods can generally be transferred easily to different SGG backbones, thereby generalizing well in real-world applications.
#### 4.2.2 Performance analysis
_Quantitative results analysis._ We report the quantitative results in Tables I, II, III, IV, and V. Our proposed method achieves state-of-the-art performance on mR@K, the most popular metric for evaluating unbiased SGG. Moreover, the proposed method shows further gains on the R@K and MR@K metrics, indicating that TsCM obtains a better tradeoff between head and tail categories.
From the quantitative results, we have the following observations: 1) Adjustment methods [14, 24, 25, 44] are the most relevant to our proposed AL-Adjustment approach since they share the same insight in encouraging predicting more informative tail relationships by adjusting the output logits. However, the adjustment factors in our method are adaptively learned from the observed data and thus can support causal calibration since they are sparse and independent. Benefiting from this, for instance, in PredCls mode, TsCM achieves 6.7%/6.5% performance gains on MotifsNet (Table I)/VCTree (Table II) compared with adjustment methods. 2) Reweighting methods [15, 16, 17, 30, 32, 44] suppress partial relationships by modifying the loss function and are thus highly related to our proposed P-Loss as well. However, the difference is that our method focuses on relationships with semantic confusion, which this group of baseline methods has not explored yet. Thanks to P-Loss for providing the causal representation that can distinguish similar relationships, for example, in SGCls mode, TsCM surpasses the reweighting methods on MotifsNet (Table I)/Transformer (Table III) by 1.3%/2.5%. 3) Compared with hybrid methods [27, 28, 29, 31, 33], for example, in SGDet mode, our method observes 2.8%/3.6% improvements on VCTree (Table II)/Transformer (Table III). We believe this is mainly due to the fact that the two stages in our causal framework target different biases. While baseline methods mix different techniques, they only target the same bias. 4) We model the SCM with the data-level confounders so that our method is model-agnostic. TsCM can therefore be used for any SGG backbone that wants to pursue unbiased predictions. Compared with model-agnostic methods [14, 15, 17, 19, 20, 21, 24, 25, 27, 28, 30, 31, 32, 44], for instance, in SGDet mode, we observe 2.7%/2.8% improvements on MotifsNet (Table I)/Transformer (Table III). 5) Table IV shows that our method is slightly weaker than logit-reweight [44] in terms of R@K. However, [44] provides biased prediction, resulting in a poor performance in mR@K, _e.g._, 14.8% mR@K in the PredCls mode of the MotifsNet backbone [9], while our method achieves 36.8% in the same set-up. This indicates that our approach significantly outperforms
\begin{table}
\begin{tabular}{l|c c c c|c c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{PredCls} & \multicolumn{4}{c|}{SGCls} & \multicolumn{4}{c}{SGDet} \\ & R@20 & R@50 & R@100 & AVG\({}_{R}\) & R@20 & R@50 & R@100 & AVG\({}_{R}\) & R@20 & R@50 & R@100 & AVG\({}_{R}\) \\ \hline \hline
MotifsNet (backbone) [9] & 59.5 & 66.0 & 67.9 & 64.5 & 35.8 & 39.1 & 39.9 & 38.3 & 25.1 & 32.1 & 36.9 & 31.4 \\
TDE [14] & 31.1 & 35.6 & 36.8 & 34.5 & 19.4 & 21.6 & 22.2 & 21.1 & 15.7 & 20.0 & 22.1 & 19.3 \\
Loss-reweight [44] & 54.8 & 61.3 & 61.9 & 59.8 & 32.3 & 35.4 & 36.6 & 34.8 & 22.3 & 28.9 & 34.1 & 28.4 \\
TransRwt [20] & \(-\) & \(-\) & 27.3 & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & 21.2 & 23.9 & \(-\) \\
TsCM (ours) & 49.7 & 57.1 & 59.5 & 55.4 & 29.8 & 33.6 & 34.2 & 32.5 & 23.7 & 25.6 & 29.6 & 27.3 \\ \hline \hline
VCTree (backbone) [11] & 59.8 & 66.2 & 68.1 & 64.7 & 37.0 & 40.5 & 41.4 & 39.6 & 24.7 & 31.5 & 36.2 & 30.8 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Results on the R@K metric. AVG\({}_{R}\) is the average of R@20, R@50, and R@100.
traditional logit-adjusted methods [44] in terms of unbiased prediction. We attribute this to the inability of conventional logit-adjusted methods, particularly those employing non-heuristic prior knowledge, to effectively adjust for severely biased models in the presence of extremely long-tailed data within the SGG task. Compared with other methods, for instance, in PredCls mode, TsCM achieves 11.7%/13.8% gains on MotifsNet/Transformer. We believe that these exciting improvements come from the sparse perturbations in our method, which do not perturb the SGG model largely, thus preserving the performance of head categories while pursuing unbiased predictions. 6) Table V shows that our method can achieve a better tradeoff between R@K and mR@K. Aside from methods that mainly benefit recall rate (_e.g._, Logit-reweight [44]), our method achieves 6.4%/7.4%/5.4% improvements on the backbones of MotifsNet/VCTree/Transformer. This illustrates that our method also preserves head category performance while pursuing informative tail category predictions.
_Qualitative results analysis._ Fig. 6 shows the qualitative results generated by the original MotifsNet [9] and MotifsNet equipped with our TsCM. From these qualitative results, we have the following observations: 1) Our proposed method tends to predict more informative relationships, for instance, { _<girl_, _standing on_, _ski>_ vs _<girl_, _on_, _ski>_} and { _<dog_, _laying on_, _bed>_ vs _<dog_, _on_, _bed>_}. We believe these improvements are due, in part, to the fact that our proposed AL-Adjustment can refine less informative predictions into high-informative outputs, and we will discuss this in the ablation study. 2) Our method performs well in distinguishing similar relationships, for example, { _<wire_, _attached to_, _surfboard 1>_ vs _<surfboard 1, has_, _wire>_}. Besides, for objects whose two bounding boxes do not intersect, our method can still generate meaningful relationships, _e.g._, {_<person_, _behind_, _girl>_ vs _<person_, _near_, _girl>_}. It is evident from the above improvements that our method can optimize the features of the model and classify relationships based on more than just
\begin{table}
\begin{tabular}{l|c c|c c c} \hline \hline & P-Loss & AL-Adj & PredCls & SGCls & SGDet \\ & & & mR@20/50/100 & mR@20/50/100 & mR@20/50/100 \\ \hline
MotifsNet [9] & ✗ & ✗ & 12.2/15.5/16.8 & 7.2/9.0/9.5 & 5.2/7.2/8.5 \\
 & ✓ & ✗ & 12.9/16.9/12.0 & 1.7/4.6/11.2 & 5.3/7.6/8.8 \\
 & ✗ & ✓ & 24.0/30.7/33.3 & 14.2/17.1/18.4 & 8.1/10.8/13.3 \\
 & ✓ & ✓ & 31.8/37.8/40.9 & 18.7/22.4/23.8 & 13.7/17.4/19.7 \\ \hline
VCTree [11] & ✗ & ✗ & 12.4/15.4/16.6 & 6.3/7.5/8.0 & 4.9/6.6/7.7 \\
 & ✓ & ✗ & 12.7/16.4/19.8 & 8.4/10.6/11.4 & 5.8/7.4/9.8 \\
 & ✗ & ✓ & 23.6/30.3/33.1 & 17.6/20.5/22.7 & 14.0/13.1/5.9 \\
 & ✓ & ✓ & 32.3/38.7/41.5 & 23.4/26.9/28.9 & 12.5/16.9/19.3 \\ \hline
Transformer & ✗ & ✗ & 12.4/16.6/17.5 & 7.7/9.6/10.2 & 5.3/7.3/8.8 \\
 & ✓ & ✗ & 13.1/17.2/20.3 & 9.4/11.3/12.4 & 6.6/8.1/9.4 \\
 & ✗ & ✓ & 24.8/31.2/33.9 & 14.3/17.8/19.4 & 9.8/13.5/16.5 \\
 & ✓ & ✓ & 32.8/40.1/42.3 & 19.6/23.7/25.1 & 13.8/18.3/21.2 \\ \hline \hline \end{tabular}
\end{table} TABLE VI: Results under different combinations of P-Loss and AL-Adjustment.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{PredCls} & \multicolumn{3}{c|}{SGCls} & \multicolumn{3}{c}{SGDet} \\ & \(AVG_{mR}\) & \(AVG_{R}\) & MR@K & \(AVG_{mR}\) & \(AVG_{R}\) & MR@K & \(AVG_{mR}\) & \(AVG_{R}\) & MR@K \\ \hline
MotifsNet (backbone) [9] & 14.8 & 64.5 & 39.7 & 8.6 & 38.3 & 23.5 & 7.0 & 31.4 & 19.2 \\ \hline \hline \end{tabular}
\end{table} TABLE V: Tradeoff between mean recall and recall; MR@K denotes the mean of \(AVG_{mR}\) and \(AVG_{R}\).
simple information (_e.g._, the distance between objects). We think the optimized features are achieved by our proposed P-Loss, which can learn representations that build causality between relationships.
### _Ablation Study_
_Exploring the contributions of the two stages._ TsCM consists of P-Loss and AL-Adjustment to eliminate the confounders \(S\) and \(L\), respectively. We first ablate our proposed causal framework using different combinations of P-Loss and AL-Adjustment, and the results are shown in Table VI. The results show that both components of TsCM contribute a lot to the performance. Specifically, for AL-Adjustment, it can significantly improve the mean recall rate of the model. For example, VCTree [11] equipped with AL-Adjustment has 11.2%/14.9%/16.5% gains on the metrics of mR@20/50/100. Although we can only observe trivial boosts for P-Loss alone, its purpose is to obtain causal representations that can well distinguish similar relationships. Therefore, these trivial boosts can be seen as by-products of the pursuit of causal representations. Table VI shows that the causal representation greatly enhances AL-Adjustment. For instance, AL-Adjustment equipped with P-Loss achieves 8.7%/8.4%/8.4% improvements on the backbone of VCTree [11].
We also present the output logits, the augmented logits, and the adjusted logits in Fig. 7 to show the process of P-Loss and AL-Adjustment adjusting the SGG model. These results show that AL-Adjustment can adjust less informative predictions into highly informative ones. For example, \(<\)_bear, on, chair_\(>\) is adjusted to \(<\)_bear, standing on, chair_\(>\) (see Fig. 7 (a)). Then, thanks to the proposed P-Loss, compared with MotifsNet [9], TsCM performs better at distinguishing similar relationships, _i.e._, similar relationships have more significant logit gaps. As an example shown in Fig. 7 (b): in the output logits of MotifsNet [9], _carrying_ has a \(1.31\times\) logit gap over _holding_, but in TsCM, it is \(2.18\times\). Large logit gaps clarify the decision boundary between similar relationships, thereby overcoming semantic confusion. Finally, we can observe the issues discussed in Section 3.4.1, _i.e._, the logits of the foreground relationships being less discriminative and the logits alternating between positive and negative. It is possible, however, to unify the logits to positive values and improve their discrimination, especially for the top few large logits, by using our logit augmentation method (Equation (19)). We argue that the logit augmentation procedure is critical for learning adjustment factors. To prove this, we ablate our logit augmentation method and show the results in Table VII. These results demonstrate that our logit augmentation method can provide significant improvements. In addition, without the guidance term \(f^{\text{bg}}_{\theta^{*}}\), the model tends to learn overly
sharp adjustment factors (refer to the logits shown in Fig. 8), which, in effect, learn a set of factors that overfit the observed data. Fig. 8 also shows that the model performs best when \(\alpha=5\). We think that when \(\alpha\) is small, \(\mathcal{P}_{\alpha}\) will miss some similar relationships, and conversely, when \(\alpha\) is large, many dissimilar relationships will be included in \(\mathcal{P}_{\alpha}\). It is worth noting that \(\alpha\) should be set according to the observed data. In other words, for the 50 relationship categories in VG150, \(\alpha=5\) is the optimal choice, but \(\alpha\) may have other optimal values for other datasets.
_Disentangled claim in Equation (11)._ We set up four baseline loss functions to support our disentangled claims: 1) Cross-entropy loss \(\ell\). 2) Modified P-Loss \(\hat{\ell}^{\circ}\). This baseline loss function replaces \(\mathcal{P}_{\alpha}\) in Equation (8) with \(\mathcal{P}_{\alpha}^{\circ}\), which takes the \(\alpha\) relationships with the largest feature distance (dissimilar relationships) as the relationship-population. 3) Modified P-Loss \(\hat{\ell}^{\diamond}\). This baseline loss function replaces \(\mathcal{P}_{\alpha}\) in Equation (8) with \(\mathcal{P}_{\alpha^{\prime}}^{\diamond}\), which takes the \(\alpha^{\prime}\) relationships with the largest feature distance as the relationship-population (\(\alpha^{\prime}>\alpha\), \(\alpha^{\prime}=8\)). 4) Modified P-Loss \(\hat{\ell}^{\star}\). This baseline loss function replaces \(\mathcal{P}_{\alpha}\) in Equation (8) with \(\mathcal{P}_{\alpha}^{\star}\), which takes the \(\alpha\) relationships with the largest feature distance belonging to the tail categories as the relationship-population. Table IX reports the model results supervised by the different loss functions. The results of \(\ell\), \(\hat{\ell}^{\circ}\), and \(\hat{\ell}^{\star}\) are very close, which proves that intervening in dissimilar tail relationships has very limited impact. Even with the possible inclusion of head categories, the supervised performance of \(\hat{\ell}^{\diamond}\) is still close to \(\ell\) and \(\hat{\ell}^{\circ}\). This further shows that intervening only in dissimilar relationships perturbs the model very little. However, P-Loss observes drastic changes due to intervening in similar relationships. Hence, we argue that P-Loss can intervene in similar relationships sparsely. In other words, it eliminates confounder \(S\) without affecting other confounders. As a result, the model trained with P-Loss can be roughly formulated as a disentangled factorization.
_Disentangled claim in Equation (22)._ This subsection designs a new metric, _i.e._, mean Correction Rate (mC@K), to support our disentangled claims. mC@K calculates, for each relationship category, the rate at which initially false predictions are corrected, averaged over categories (hence balanced), where K shares the same meaning as in mainstream metrics (_e.g._, R@K and mR@K). In keeping with the claim to be demonstrated, Fig. 9 shows mC@K on the tail categories, which allows us to evaluate the performance of AL-Adjustment in alleviating the long-tailed distribution problem. These results clearly show that AL-Adjustment can adjust a considerable number of false predictions of tail categories to correct ones, and, hence, the long-tailed distribution confounder can be removed by our proposed adjustment procedure. Taken together with the disentangled claim in Equation (11), we naturally arrive at the disentangled claim in Equation (22).
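One plausible implementation of mC@K consistent with the verbal definition above is sketched below (the function and variable names are ours; `before`/`after` hold each sample's top-K predicate prediction without and with AL-Adjustment, and the per-category averaging is what makes the metric balanced):

```python
from collections import defaultdict

def mean_correction_rate(before, after, gold, categories):
    """Sketch of mC@K: per-category rate at which initially false
    predictions become correct after adjustment, averaged over the
    given relationship categories."""
    corrected = defaultdict(int)
    initially_false = defaultdict(int)
    for pred_b, pred_a, y in zip(before, after, gold):
        if pred_b != y:                    # false before adjustment
            initially_false[y] += 1
            if pred_a == y:                # corrected by AL-Adjustment
                corrected[y] += 1
    rates = [corrected[c] / initially_false[c]
             for c in categories if initially_false[c] > 0]
    return sum(rates) / len(rates) if rates else 0.0
```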
## 5 Conclusion
In this paper, we have proposed a novel causal modeling framework, TsCM, for unbiased scene graph generation, which decouples the causal intervention into two stages to eliminate semantic confusion bias and long-tailed distribution bias, where the former bias is rarely explored in existing debiasing methods. In stage 1, we analyzed the fact that the SCM modeled for SGG is always causal-insufficient, as well as the sparsity of relationship categories. On this basis, a causal representation
Fig. 8: Model performance with different \(\alpha\) and \(\beta\). The evaluation mode here is PredCls. The shaded areas represent the upper and lower bounds of performance for different combinations of \(\alpha\) and \(\beta\).
Fig. 9: The performance of tail categories on the proposed metric mC@K. The evaluation mode here is PredCls.
learning method is proposed to achieve sparse interventions on semantic confusion bias in the case of insufficient causality. As a result, this stage also provides a disentangled factorization. Benefiting from this factorization, stage 2 then proposes causal calibration learning to intervene sparsely and independently in the long-tailed distribution bias to achieve unbiased predictions. Experiments were conducted on the popular SGG backbones and dataset, and our method achieved state-of-the-art debiasing performance. Furthermore, our method achieved a better tradeoff between recall rate and mean recall rate thanks to the sparse causal interventions.
Although our method can remove multiple biases in the SGG task, it remains challenging to overcome unobservable biases. In the future, we will focus on exploring unobservable biases and developing an automatic debiasing causal framework to pursue unbiased SGG predictions.
|
2303.17385 | Compact curve shortening flow solutions out of non compact curve | We construct a slingshot, that is a compact, embedded solution to curve
shortening flow that comes out of a non compact curve and exists for a finite
time. | Theodora Bourni, Martin Reiris | 2023-03-30T13:54:18Z | http://arxiv.org/abs/2303.17385v1 | # Compact curve shortening flow solutions out of non compact curve
###### Abstract.
We construct a slingshot, that is a compact, embedded solution to curve shortening flow that comes out of a non compact curve and exists for a finite time.
## 1. Introduction
A smooth one-parameter family \(\{\Gamma_{t}\}_{t\in I}\) of immersed planar curves \(\Gamma_{t}\subset\mathbb{R}^{2}\) evolves by curve shortening flow if
\[\frac{\partial\gamma}{\partial t}(u,t)=\vec{\kappa}(u,t)\,,\,\,\forall(u,t)\in \Gamma\times I\,, \tag{1}\]
for some smooth family \(\gamma:\Gamma\times I\to\mathbb{R}^{2}\) of immersions \(\gamma(\cdot,t):\Gamma\to\mathbb{R}^{2}\) of \(\Gamma_{t}\), and where \(\vec{\kappa}(u,t)\) is the curvature vector of \(\Gamma_{t}\) at the point \(\gamma(u,t)\).
When \(\Gamma_{0}\) is a smooth embedded compact curve, then by a famous theorem of Grayson [6], the solution of the curve shortening flow starting from \(\Gamma_{0}\) exists on a maximal time interval \([0,T)\) and as \(t\to T\) the solution converges to a round point. In the case when \(\Gamma_{0}\) is additionally convex, this theorem was previously proved by Gage and Hamilton [5]. Contrary to the compact case, when \(\Gamma_{0}\) is not compact, solutions to curve shortening flow starting from \(\Gamma_{0}\) are in general not that well understood. The particular case of graphical solutions has been extensively studied in the work of Ecker and Huisken [3, 4], who, among other things, showed that the flow of entire graphs exists for all times. In [2], K-S Chou and X-P Zhu showed that if the initial curve divides the plane into two regions of infinite area, then a solution exists for all time. For the case that one of the regions of the plane defined by the curve has finite area, they showed that, if additionally the curve has finite total absolute curvature, then a solution exists for a finite time.
Throughout the paper, \(\Gamma_{0}\subset\mathbb{R}^{2}\) denotes an initial curve satisfying the following two hypotheses:
* \(\Gamma_{0}\) is a smooth embedded \(1\)-manifold diffeomorphic to \((0,1)\) and it separates \(\mathbb{R}^{2}\) into two regions, one of which has finite area, which we denote by \(A_{0}\in(0,\infty)\).
* \(a+1<b\) and \(c>0\) are real numbers such that \(\Gamma_{0}\subset(a,\infty)\times(-c,c)\) and \(\Gamma_{0}\cap([b,\infty)\times(-c,c))\) is the union of two smooth graphs, \(u^{\pm}\in[b,\infty)\to\mathbb{R}\) with \(u^{+}\) positive and decreasing to zero at infinity and \(u^{-}\) negative and increasing to zero at infinity, and with the derivatives of \(u^{\pm}\) converging to zero at infinity, as in Figure 1.
Moreover, we will denote by \(B(\Gamma_{0},\varepsilon)\) the \(\varepsilon\) neighborhood of \(\Gamma_{0}\), that is
\[B(\Gamma_{0},\varepsilon):=\left\{p\in\mathbb{R}^{2}:\operatorname{dist}(p, \Gamma_{0})<\varepsilon\right\}.\]
Our main theorem is the following
**Theorem 1**.: _Let \(\Gamma_{0}\) be a curve satisfying the above hypotheses (i)-(ii). There exists a smooth solution \(\gamma:\mathrm{S}^{1}\times(0,\frac{A_{0}}{2\pi})\to\mathbb{R}^{2}\) to the curve shortening flow (1) such that for any \(\varepsilon>0\) there exists \(t_{\varepsilon}>0\) such that \(\Gamma_{t}\subset B(\Gamma_{0},\varepsilon)\) for \(0<t<t_{\varepsilon}\)._
The construction of the solution described in Theorem 1 is roughly as follows. We start with a sequence of compact curves \(\Gamma_{0}^{i}\) that approximate \(\Gamma_{0}\). Then, we define a sequence of curve shortening flows, using the curves \(\Gamma_{0}^{i}\) as initial conditions, which we refer to as slingshots. The idea, then, is to show that one can extract a limit of these slingshots. To do this, we establish uniform curvature bounds for the slingshots away from the initial time \(0\). This argument, the most novel part of the construction, is direct: it is based on repeated applications of the avoidance principle, in particular the fact that the number of intersections between two solutions of curve shortening flow (at least one of which is compact) cannot increase [1], together with the curvature estimates of Ecker and Huisken [4].
### Acknowledgements
We would like to thank Facultad de Ciencias, Universidad de la Republica in Montevideo, Uruguay, for hosting a visit of the first named author, during which this collaboration began. We also like to thank Sigurd Angenent and Mat Langford for conversations on the state of the art concerning non compact solutions to curve shortening flow.
TB was supported through grant 707699 of the Simons Foundation and grant DMS-2105026 of the National Science Foundation.
## 2. Construction
Figure 1. Schematic figure of the evolution.

We first show that if a curve is locally, in some rectangle, a graph, then under curve shortening flow, and in a smaller rectangle, it remains a graph. Moreover, we obtain estimates on the gradient. We remark that such estimates are known in more general contexts, but as the proof of the version we need here is relatively simple, we include it for the convenience of the reader.
**Proposition 2**.: _Let \(\gamma_{0}:\mathrm{S}^{1}\to\mathbb{R}^{2}\) be a smooth embedding and suppose that for \(D>0\), \(R>0\) and \(r<\frac{D}{2}\), the following holds:_
1. _for any_ \(|x_{1}|\leq R\) _and_ \(|x_{2}|\leq R\)_, the segment joining_ \((x_{1},0)\) _to_ \((x_{2},D)\) _intersects_ \(\Gamma_{0}=\gamma_{0}(S^{1})\) _transversely and at just one point._
2. _for any_ \(|x|\leq R\)_, the balls_ \(B_{r}((x,0))\) _and_ \(B_{r}((x,D))\) _are disjoint from_ \(\Gamma_{0}\)_._
_Then, the curve shortening flow solution \(\gamma:S^{1}\times[0,T)\to\mathbb{R}^{2}\) starting at \(\gamma(\cdot,0)=\gamma_{0}(\cdot)\) satisfies \(T\geq\frac{r^{2}}{2}\), and for all \(t\in[0,\frac{r^{2}}{2}]\) the timeslices \(\Gamma_{t}\) satisfy the following: \(\Gamma_{t}\cap([-R,R]\times[0,D])\) can be represented as the graph of a smooth function \(g_{t}:[-R,R]\to\mathbb{R}\), with_
\[\sup_{x\in[-\frac{R}{2},\frac{R}{2}]}|g_{t}^{\prime}(x)|\leq\frac{2D}{R}\,,\; and\]
\[\sqrt{r^{2}-2t}<g_{t}(x)<D-\sqrt{r^{2}-2t}\,,\;\forall x\in[-R,R]\,.\]
Proof.: Note first that by hypothesis (2) of the proposition and the avoidance principle we obtain that
\[([-R,R]\times\{0,D\})\cap\Gamma_{t}=\emptyset\,,\;\forall t\in[0,\tfrac{r^{2} }{2}]\,, \tag{2}\]
and note that a simple linking argument shows that the curve shortening flow solution starting at \(\Gamma_{0}\) does indeed have a lifespan of at least \(\frac{r^{2}}{2}\). Recall that the number of intersections between two compact solutions of curve shortening flow cannot increase [1]. Therefore, hypothesis (1) of the proposition applied to segments with endpoints \((x,0)\) and \((x,D)\), \(x\in[-R,R]\), along with (2), implies that \(\Gamma_{t}\cap([-R,R]\times[0,D])\) can be represented as the graph of a smooth function \(g_{t}:[-R,R]\to\mathbb{R}\). To prove the gradient bound, consider a point on the graph \(p=(x,g_{t}(x))\) with \(x\in[-\frac{R}{2},\frac{R}{2}]\) and suppose that \(g_{t}(x)\geq\frac{D}{2}\). Consider the two line segments joining \((x\pm\frac{R}{2},0)\) to \(p\); extending them past \(p\), we note that they intersect the segment \([-R,R]\times\{D\}\). Thus, by hypothesis (1), these segments lie below the graph of \(g_{t}\), and we obtain that \(|g_{t}^{\prime}(x)|\leq\frac{g_{t}(x)}{R/2}\leq\frac{2D}{R}\). If the point
satisfies \(g_{t}(x)\leq\frac{D}{2}\), we obtain the same estimate by considering the segments joining \((x\pm\frac{R}{2},D)\) to \(p\) and extending them past \(p\). Finally, the height bounds are a consequence of the avoidance principle and hypothesis (2).
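For clarity, we record the barrier argument giving the height bounds: by hypothesis (2), the balls \(B_{r}((x,0))\) and \(B_{r}((x,D))\) are initially disjoint from \(\Gamma_{0}\), and their boundary circles evolve under curve shortening flow into the circles

\[\partial B_{\sqrt{r^{2}-2t}}((x,0))\quad\text{and}\quad\partial B_{\sqrt{r^{2}-2t}}((x,D))\,,\]

which, by the avoidance principle, remain disjoint from \(\Gamma_{t}\) for all \(t\in[0,\tfrac{r^{2}}{2}]\). Since at \(t=0\) the graph of \(g_{t}\) lies above the first circle and below the second, it does so for all later times, which gives \(\sqrt{r^{2}-2t}<g_{t}(x)<D-\sqrt{r^{2}-2t}\).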
Proposition 2 and the curvature estimates of Ecker-Huisken [4] yield the following
**Corollary 3**.: _Under the hypothesis of Proposition 2, for every integer \(m\geq 0\), there is a constant \(c_{m}=c(m,R,D,\Gamma_{0})\) such that_
\[\sup_{p\in\Gamma_{t}\cap([-\frac{R}{4},\frac{R}{4}]\times[0,D])}|\partial_{s}^ {m}\kappa(p,t)|\leq c_{m}\,,\ \forall t\in[0,\tfrac{r^{2}}{2}]\,, \tag{3}\]
_where \(\kappa(p,t)\) denotes the curvature of \(\Gamma_{t}\) at the point \(p\)._
Proof.: The proof is evident from the estimates in [4] by removing the time dependence from the bounds. Nonetheless, we include a sketch here for the convenience of the reader.
We first prove the case \(m=0\). Consider a point \(p_{0}=(x,y)\), with \(|x|<\frac{R}{4}\) and \(y\in(0,D)\), and let \(v=v(p,t)=\langle\nu,e_{2}\rangle^{-2}\), where \(\nu=\nu(p,t)\) is a choice of the unit normal to \(\Gamma_{t}\) at \(p\). Consider now \(G_{t}\) to be the connected component of \(\Gamma_{t}\cap B_{\frac{R}{4}}(p_{0})\) that is the graph of \(g_{t}\) as in Proposition 2. Then, by Proposition 2, we have that
\[v(p,t)\leq 1+\frac{4D^{2}}{R^{2}}\,,\ \forall p\in G_{t}\,,\ \forall t\in[0, \tfrac{r^{2}}{2}]\,.\]
Define the function \(g(p,t)=\kappa(p,t)^{2}\frac{v^{2}}{1-k^{2}v^{2}}((\frac{R}{4})^{2}-|p-p_{0}|^{2 })^{2}\), where \(k=\frac{1}{2}+\frac{2D^{2}}{R^{2}}\). Note that \(g(p,0)\leq CR^{2}\), where \(C=\sup_{G_{0}}\kappa^{2}\), a constant that depends only on \(\gamma_{0}\). If \(g\) has a maximum at a point \((p,t)\in G_{t}\times(0,\frac{r^{2}}{2}]\), then, by computing the heat operator of \(g\) (see [4, proof of Theorem 3.1]), we obtain
\[g(p,t)\leq c(n,k)R^{2}\,.\]
We therefore conclude the estimate for \(m=0\). The higher derivative bounds can be computed similarly by considering \(\psi=1\) in [4, proof of Theorem 3.4].
**Definition 4**.: _A basic rectangle \(\mathcal{F}(R,D,r)\) for an embedded curve \(\Gamma\) consists of a number \(r>0\) and a rectangle isometric to \([-R,R]\times[0,D]\) by an isometry \(T\), such that:_
1. _for any_ \(|x_{1}|\leq R\) _and_ \(|x_{2}|\leq R\)_, the segment joining_ \(T((x_{1},0))\) _to_ \(T((x_{2},D))\) _intersects_ \(\Gamma\) _transversely and at just one point._
2. _for any_ \(|x|\leq R\) _the balls_ \(B_{r}(T(x,0))\) _and_ \(B_{r}(T(x,D))\) _are disjoint from_ \(\Gamma\)_._
\(T\) _as above will be referred to as the isometry associated to \(\mathcal{F}(R,D,r)\)._
_If \(\mathcal{F}(R,D,r)\) is a basic rectangle for \(\Gamma\) and \(T\) is its associated isometry, then \(T([-\frac{R}{4},\frac{R}{4}]\times[0,D])\), together with \(r\), also forms a basic rectangle for \(\Gamma\), which will be denoted by \(\mathcal{F}_{*}(R,D,r)\)._
It is clear that the estimates in the statement of Corollary 3 work exactly the same when we replace the basic rectangle \([-R,R]\times[0,D]\) by basic rectangles \(\mathcal{F}(R,D,r)\) for the curve \(\Gamma_{0}\). More precisely, Proposition 2 and Corollary 3 yield the following:
**Proposition 5**.: _Assume that \(\mathcal{F}(R,D,r)\) is a basic rectangle for an embedded smooth curve \(\Gamma_{0}\). Then the curve shortening flow solution starting from \(\Gamma_{0}\) exists for time at least \(\frac{r^{2}}{2}\) and the timeslices \(\Gamma_{t}\) satisfy the following curvature estimate. For every integer \(m\geq 0\), there is a constant \(c_{m}=c(m,R,D,\Gamma_{0})\), such that_
\[\sup_{p\in\Gamma_{t}\cap\mathcal{F}_{*}(R,D,r)}|\partial_{s}^{m}\kappa(p,t)| \leq c_{m}\,,\ \forall t\in[0,\tfrac{r^{2}}{2}]\,,\]
_where \(\kappa(p,t)\) denotes the curvature of \(\Gamma_{t}\) at the point \(p\)._
**Definition 6**.: _For every integer \(i\geq b+3\), consider the connected part of \(\Gamma_{0}\) between \((i,u^{+}(i))\) and \((i,u^{-}(i))\) and cap it off with an embedded piece joining these two end points and lying inside the rectangle \([i,i+1]\times[u^{-}(i),u^{+}(i)]\), so that we obtain a smooth, embedded and compact curve, which we denote by \(\Gamma_{0}^{i}\). Let \(\gamma_{0}^{i}:\mathrm{S}^{1}\to\mathbb{R}^{2}\) be a parametrization of \(\Gamma_{0}^{i}\). The solutions to the curve shortening flow starting from \(\Gamma_{0}^{i}\) are denoted by \(\Gamma_{t}^{i}\) and are called slingshots. Moreover, for each \(i\), we will use \(\gamma^{i}(\cdot,t)\) to denote any parametrization of the flow, which, as such, satisfies (1)._
The following lemma says, essentially, that the slingshots enter compact regions in arbitrarily small times, uniformly in \(i\).
**Lemma 7**.: _For any decreasing sequence of times \(t_{j}\downarrow 0\), there exists a sequence of numbers \(x_{j}\), such that the slingshots, after passing to a subsequence \(\Gamma_{t}^{j}\), satisfy_
\[\Gamma_{t}^{k}\subset[a,x_{j}]\times[-c,c]\,,\ \forall k\geq j,\text{ and }\,t\geq t_{j}\,.\]
Proof.: Consider a sequence \(t_{j}\downarrow 0\). Then, by the assumptions on the initial curve \(\Gamma_{0}\) and by construction of the approximating sequence \(\Gamma_{0}^{i}\), the slingshots, after passing to a subsequence \(\Gamma_{t}^{j}\), satisfy the following. For any \(j\), we can pick \(x_{j}\) such that the following hold.
1. Let \(\mathcal{F}(R,2c,\sqrt{2t_{j}}):=[-R+x_{j},R+x_{j}]\times[-c,c]\), with \(R=\frac{16c}{\pi}\). Then, for all \(k\geq j\), \(\Gamma_{0}^{k}\cap\mathcal{F}(R,2c,\sqrt{2t_{j}})\) has two connected components, and for each of them \(\mathcal{F}(R,2c,\sqrt{2t_{j}})\) is a basic rectangle in the sense that on both components (i) and (ii) of Definition 4 are satisfied.
2. For all \(k\geq j\), the area of the compact region bounded by \(\Gamma_{0}^{k}\) in the halfplane \(\{x\geq x_{j}-R\}\) is at most \(\frac{\pi t_{j}}{2}\).
To prove the lemma, we will show that for all \(j\) and \(t\geq t_{j}\) we have \(\Gamma_{t}^{k}\subset[a,R+x_{j}]\times[-c,c]\), for all \(k\geq j\), for which it suffices to prove that \(\Gamma_{t_{j}}^{k}\subset[a,R+x_{j}]\times[-c,c]\), for all \(k\geq j\). Assume on the contrary that for some \(j\) and \(k\geq j\) we have \(\Gamma_{t_{j}}^{k}\cap((R+x_{j},\infty)\times[-c,c])\neq\emptyset\). First note that, by considering a small ball inside \(\Gamma_{0}\) and by (i), the avoidance principle implies that
\[\Gamma_{t}^{k}\cap\mathcal{F}(R,2c,\sqrt{2t_{j}})\text{ has two connected components, }\forall t\in[0,t_{j}]\,.\]
Let now \(A_{+}^{k}(t)\) be the area of the compact region bounded by \(\Gamma_{t}^{k}\) in the halfplane \(\{x\geq x_{j}\}\). Since \(\Gamma_{t}^{k}\cap\mathcal{F}(R,2c,\sqrt{2t_{j}})\) has two connected components, for all \(t\in[0,t_{j}]\), Proposition 2 implies that
\[-\frac{d}{dt}A_{+}^{k}(t)\geq\pi-\frac{8c}{R}\]
and integration yields
\[A_{+}^{k}(t_{j})\leq A_{+}^{k}(0)-t_{j}\left(\pi-\frac{8c}{R}\right)\leq- \frac{\pi t_{j}}{2}+\frac{8c}{R}t_{j}<0\]
which contradicts the positivity of \(A_{+}^{k}(t_{j})\), which follows from our assumption that \(\Gamma_{t_{j}}^{k}\cap((x_{j}+R,\infty)\times[-c,c])\neq\emptyset\).
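For the convenience of the reader, we sketch the computation behind the differential inequality used above. By the first variation of area under (1), the region bounded by \(\Gamma_{t}^{k}\) in \(\{x\geq x_{j}\}\) satisfies

\[-\frac{d}{dt}A_{+}^{k}(t)=\int_{\Gamma_{t}^{k}\cap\{x\geq x_{j}\}}\kappa\,ds\,,\]

and the integral equals the total turning of the unit tangent along the arc of \(\Gamma_{t}^{k}\) lying in \(\{x\geq x_{j}\}\). By Proposition 2 (applied with \(D=2c\)), the two strands of the curve crossing the line \(\{x=x_{j}\}\), the center of the rectangle, are graphs with slope at most \(\frac{2D}{R}=\frac{4c}{R}\) there, so along this arc the tangent turns by at least

\[\pi-2\arctan\Big(\frac{4c}{R}\Big)\geq\pi-\frac{8c}{R}\,,\]

using \(\arctan(u)\leq u\).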
The following lemma, which is the central lemma for our constructions, says that there is a decreasing sequence \(t_{j}\downarrow 0\) such that the slingshots, after passing to a subsequence \(\Gamma_{t}^{j}\), for all \(j\) and \(t_{j}\leq t\leq t_{0}\) (where \(t_{0}\) is some fixed positive time), are covered by a fixed and finite set of basic rectangles and are therefore globally subject to the estimates of Corollary 3.
**Lemma 8**.: _There exists a decreasing sequence of times \(t_{j}\downarrow 0\), \(j\geq 0\), such that the slingshots, after passing to a subsequence \(\Gamma_{t}^{j}\) satisfy the following. For every \(j\geq 0\) there is a finite set of rectangles,_
\[\mathcal{F}(R_{j,1},D_{j,1},r_{j,1}),\ldots,\mathcal{F}(R_{j,n_{j}},D_{j,n_{j} },r_{j,n_{j}}), \tag{4}\]
_with \(r_{j,k}\geq\sqrt{2t_{0}}\), \(k=1,\ldots,n_{j}\), that are basic for \(\Gamma_{t}^{j}\) for any \(t\in[0,t_{0}]\), and moreover,_
\[\Gamma_{t}^{j}\subset\bigcup_{k=1}^{k=n_{j}}\mathcal{F}_{*}(R_{j,k},D_{j,k}, r_{j,k})\,,\forall t\in[t_{j},t_{0}]. \tag{5}\]
Proof.: We first construct basic rectangles that will cover the slingshots in a compact set, where all the initial curves \(\Gamma_{0}^{i}\) coincide.
Let \(r_{0}>0\) be such that \([b,b+2]\times[0,c]\) and \([b,b+2]\times[-c,0]\) together with \(r_{0}\) form basic rectangles for \(\Gamma_{0}\), and we denote these by \(\mathcal{F}^{\pm}\), respectively. Then, let
\[\mathcal{F}^{1}=\mathcal{F}(R_{1},D_{1},r_{1}),\ldots,\mathcal{F}^{l}= \mathcal{F}(R_{l},D_{l},r_{l}), \tag{6}\]
be a collection of basic rectangles for \(\Gamma_{0}\) with associated isometries \(T_{m}\) and such that:
* \(\mathcal{F}^{m}\subset\{x<b+2\}\), for \(m=1,\ldots,l\),
* \(\mathcal{F}^{1}\subset\mathrm{Int}(\mathcal{F}_{*}^{+})\) and \(\mathcal{F}^{l}\subset\mathrm{Int}(\mathcal{F}_{*}^{-})\),
* \(T_{m}(\{\frac{R_{m}}{4}\}\times[0,D_{m}])\subset\mathrm{Int}(\mathcal{F}_{*} ^{m-1})\), for \(m=2,\ldots,l\).
Note that the rectangles \(\mathcal{F}^{\pm}\) and \(\mathcal{F}^{m}\), for \(m=1,\ldots,l\), are also basic rectangles for \(\Gamma_{0}^{i}\), for all \(i\in\mathbb{N}\). This is because they are contained in the half plane \(\{x\leq b+3\}\), where \(\Gamma_{0}^{i}\) and \(\Gamma_{0}\) coincide. Define,
\[\bar{t}:=\tfrac{1}{2}\mathrm{min}\{r_{0}^{2},r_{1}^{2},\ldots,r_{l}^{2}\} \tag{7}\]
and also
\[\mathscr{F}(0):=\{\mathcal{F}^{+},\mathcal{F}^{-},\mathcal{F}^{1},\ldots, \mathcal{F}^{l}\}\,.\]
We claim that for any \(i\) and \(0\leq t\leq\bar{t}\) we have,
\[\Gamma^{i}_{t}\cap\{x\leq b+5/4\}\ \subset\bigcup_{\mathcal{F}\in\mathscr{F}(0)} \mathcal{F}_{*}\,. \tag{8}\]
To see this, let \(\mathcal{F}\in\mathscr{F}(0)\). Then, by Proposition 2, we have that, for any \(i\) and any \(0\leq t\leq\bar{t}\), \(\Gamma^{i}_{t}\cap\mathcal{F}_{*}\) is a connected \(1\)-manifold with two boundary points lying in two opposite sides of the corresponding rectangle: \(T_{m}(\{\pm\frac{R_{m}}{4}\}\times[0,D_{m}])\) if \(\mathcal{F}=\mathcal{F}^{m}\), \(m=1,\ldots,l\), and accordingly if \(\mathcal{F}=\mathcal{F}^{\pm}\). By conditions (ii) and (iii) above the claim follows.
The next step is to construct basic rectangles that cover the entirety of the slingshots for times \(t>t_{j}\). An essential tool to do that is Lemma 7, which allows us to deduce that after time \(t_{j}\) all slingshots have entered a compact set.
For any integer \(k>b+1\), we let \(y_{k}:=\min\{u^{+}(2k),-u^{-}(2k)\}\) and set \(q_{k}:=(b,-y_{k})\). We then define \(s^{1}_{k}\) and \(s^{2}_{k}\) to be the two rays starting from \(q_{k}\) and passing through \((2k,0)\) and \((b+1,c)\), respectively. Note that both rays intersect \(\Gamma_{0}\) transversely and only once, at a point with positive \(y\)-coordinate. Define also the rectangle \(\mathcal{R}^{+}_{k}:=[b+1,k]\times[\frac{-ky_{k}}{2k-b},c]\) and note that it lies in the region between the two rays and has one vertex on each of them. Hence, any infinite ray from \(q_{k}\) and passing through
any point in \(\mathcal{R}_{k}^{+}\) intersects \(\Gamma_{0}\) transversely and only once. We will use this fact to cover the slingshots by basic rectangles in \(\mathcal{R}_{k}^{+}\).
Let \(\hat{r}\in(0,1)\) be such that \(\overline{B}_{\hat{r}}((b,0))\) is contained in the open region of finite area enclosed by \(\Gamma_{0}\). Since \(y_{k}\downarrow 0\) as \(k\to\infty\), we can choose \(\hat{k}\) such that \(y_{k}\leq\frac{\hat{r}}{4}\) for all \(k\geq\hat{k}\), and from now on we consider such a \(k\geq\hat{k}\). Consider \(s\) to be a ray starting from \(q_{k}\) and passing through a point in \(\mathcal{R}_{k}^{+}\). Since every such ray has positive slope and intersects \(\Gamma_{0}\) transversely and only once, for each such \(s\), we can find a rectangle \(T_{s}([-R_{s},R_{s}]\times[0,D_{s}])\), for some isometry \(T_{s}\), with the following properties:
1. \(T_{s}(\{0\}\times[0,D_{s}])\subset s\) and \(T_{s}((0,0))=q_{k}\),
2. \(R_{s}\leq\frac{\hat{r}}{4}\) and \(D_{s}\) is large enough so that \(\langle T_{s}((0,D_{s})),e_{2}\rangle\geq c+\hat{r}\),
3. \(\Gamma_{0}\cap T_{s}([-R_{s},R_{s}]\times[0,D_{s}])\) is a graph over \(T_{s}([-R_{s},R_{s}]\times\{0\})\).
Since \(T_{s}([-R_{s},R_{s}]\times\{0\})\subset B_{\frac{\hat{r}}{2}}((b,0))\) and by properties (2) and (3) above we conclude that \(T_{s}([-R_{s},R_{s}]\times[0,D_{s}])\) together with \(r=\frac{\hat{r}}{4}\) is a basic rectangle for \(\Gamma_{0}\), which we denote as \(\mathcal{F}^{+,s}\). By compactness, we can find a collection of rays \(s_{k,1},\ldots,s_{k,l_{k}}\) such that \(\mathcal{F}_{*}^{+,s_{k,1}},\ldots,\mathcal{F}_{*}^{+,s_{k,l_{k}}}\) cover \(\mathcal{R}_{k}^{+}\). From now on and to simplify notation we write \(\mathcal{F}^{+,k,j}\) instead of \(\mathcal{F}^{+,s_{k,j}}\). An identical reasoning shows that we can find a collection of basic rectangles,
\[\mathcal{F}^{-,k,1},\ldots,\mathcal{F}^{-,k,h_{k}}, \tag{9}\]
for \(\Gamma_{0}\), all with \(r=\frac{\hat{r}}{4}\) and such that \(\mathcal{F}_{*}^{-,k,1},\ldots,\mathcal{F}_{*}^{-,k,h_{k}}\) cover the rectangle \(\mathcal{R}_{k}^{-}:=[b+1,k]\times[-c,\frac{ky_{k}}{2k-b}]\). We will denote by \(\mathscr{F}(k)\) all these rectangles
\[\mathscr{F}(k)=\left\{\mathcal{F}^{+,k,1},\ldots,\mathcal{F}^{+,k,l_{k}}, \mathcal{F}^{-,k,1},\ldots,\mathcal{F}^{-,k,h_{k}}\right\}. \tag{10}\]
Note that \(\mathcal{R}_{k}^{+}\cup\mathcal{R}_{k}^{-}=[b+1,k]\times[-c,c]\) and therefore
\[[b+1,k]\times[-c,c]\subset\bigcup_{\mathcal{F}\in\mathscr{F}(k)}\mathcal{F}_ {*}\,. \tag{11}\]
Given \(k\geq\hat{k}\), let \(\hat{i}_{k}>0\) be large enough so that none of the basic rectangles \(\mathcal{F}\in\mathscr{F}(k)\) intersects the region \([\hat{i}_{k},\infty)\times[-c,c]\). Note that this is possible, since all these rectangles have non zero slope and width bounded by \(\frac{\hat{r}}{4}\). Recalling the definition of \(\Gamma_{0}^{i}\), we deduce that these basic rectangles for \(\Gamma_{0}\) are also basic rectangles for \(\Gamma_{0}^{i}\) when \(i\geq\hat{i}_{k}\). Let \(t_{j}\downarrow 0\) and \(x_{j}\) be the sequences of Lemma 7, for which, after dropping some
initial terms if necessary, we will assume that \(t_{1}<t_{0}:=\min\{\bar{t},\frac{\hat{r}^{2}}{32}\}\). Let \(k_{1}\) be any integer such that \(k_{1}\geq\max\{\hat{k},x_{1}\}\). By Lemma 7, we have that the slingshots, after passing to a subsequence \(\Gamma_{t}^{j}\), satisfy, for any \(j\) and \(t\geq t_{1}\),
\[\begin{split}\Gamma_{t}^{j}\subset[a,x_{1}]\times[-c,c]& \subset([a,b+1]\times[-c,c])\cup\mathcal{R}_{k_{1}}^{+}\cup \mathcal{R}_{k_{1}}^{-}\\ &\subset\bigcup_{\mathcal{F}\in\mathscr{F}(0)\cup\mathscr{F}(k_{ 1})}\mathcal{F}_{*}\,,\end{split} \tag{12}\]
with the second inclusion following by (8) and (11), and where \(\mathscr{F}(k)\) is as constructed in (10). Finally note that for any \(t\leq t_{0}\) and for \(i\geq\hat{i}_{k}\), \(\mathcal{F}\) is a basic rectangle for \(\Gamma_{t}^{i}\) for all \(\mathcal{F}\in\mathscr{F}(0)\cup\mathscr{F}(k_{1})\) and \(t\in[0,t_{0}]\). Hence, the slingshots, after passing to a further subsequence, still denoted by \(\Gamma_{t}^{j}\), satisfy, for any \(j\),
\[\Gamma_{t}^{j}\subset\bigcup_{\mathcal{F}\in\mathscr{F}(0)\cup\mathscr{F}(k_{ 1})}\mathcal{F}_{*}\,,\ \forall t\in[t_{1},t_{0}],\]
where \(\mathscr{F}(0)\cup\mathscr{F}(k_{1})\) is a finite family of rectangles that are basic for \(\Gamma_{t}^{j}\), for all \(j\) and \(t\leq t_{0}\), and moreover these rectangles are of the form \(\mathcal{F}(R,D,r)\) with \(r\geq\sqrt{2t_{0}}\). We can now finish the proof of the lemma by constructing the rest of the sequence as follows. For each \(t_{j}\) as above (from Lemma 7), with \(j\geq 2\), we choose \(k_{j}\geq\max\{k_{j-1},x_{j}\}\). Then we construct the family of basic rectangles \(\mathscr{F}(k_{j})\) as in (10). We then note that there exists \(\hat{i}_{k_{j}}\) large enough so that none of the basic rectangles \(\mathcal{F}\in\mathscr{F}(k_{j})\) intersects the region \([\hat{i}_{k_{j}},\infty)\times[-c,c]\); therefore, for all \(i\geq\hat{i}_{k_{j}}\) and \(t\in[0,t_{0}]\), \(\mathcal{F}\) is a basic rectangle for \(\Gamma_{t}^{i}\) for all \(\mathcal{F}\in\mathscr{F}(0)\cup\mathscr{F}(k_{j})\). Hence, the slingshots, after passing to a further subsequence, still denoted by \(\Gamma_{t}^{j}\), satisfy, for any \(j\),
\[\Gamma_{t}^{j}\subset\bigcup_{\mathcal{F}\in\mathscr{F}(0)\cup\mathscr{F}(k_{ j})}\mathcal{F}_{*}\,,\ \forall t\in[t_{j},t_{0}],\]
where \(\mathscr{F}(0)\cup\mathscr{F}(k_{j})\) is a finite family of rectangles that are basic for \(\Gamma_{t}^{j}\), for \(j\geq 1\) and \(t\leq t_{0}\) and moreover these rectangles are of the form \(\mathcal{F}(R,D,r)\) with \(r\geq\sqrt{2t_{0}}\).
Proof of Theorem 1.: Consider \(t_{0}>0\) as in Lemma 8. Lemma 8 and Proposition 5 imply that we can apply a compactness argument (which amounts to the Arzela-Ascoli theorem) to the sequence of embeddings
\(\gamma_{t_{0}}^{j}:S^{1}\to\mathbb{R}^{2}\). This yields that there exists a smooth embedding \(\gamma_{t_{0}}^{\infty}:S^{1}\to\mathbb{R}^{2}\) and a sequence of diffeomorphisms of \(S^{1}\), \(\phi_{j}\), such that after passing to a subsequence, \(\gamma_{t_{0}}^{j}\circ\phi_{j}\) converges smoothly to \(\gamma_{t_{0}}^{\infty}\). Let \(t_{j}\downarrow 0\) be as in Lemma 8 and define the diffeomorphisms
\[\psi_{j}:S^{1}\times[t_{j},t_{0}] \to S^{1}\times[t_{j},t_{0}]\] \[(x,t) \mapsto\psi_{j}(x,t)=(\phi_{j}(x),t)\,.\]
Note that Lemma 8 and Proposition 5, along with the evolution equation of the curvature and its derivatives (which yield time derivative bounds on the curvature and its derivatives), imply uniform bounds on the curvature and its derivatives for the sequence \(\gamma^{j}\circ\psi_{j}\) (locally in \(S^{1}\times(0,t_{0}]\)). Therefore, the Arzela-Ascoli theorem and a diagonal argument yield that there exists a smooth map \(\gamma^{\infty}:S^{1}\times(0,t_{0}]\to\mathbb{R}^{2}\), with \(\gamma^{\infty}(\cdot,t):S^{1}\to\mathbb{R}^{2}\) a smooth embedding for each \(t\in(0,t_{0}]\) and \(\gamma^{\infty}(\cdot,t_{0})=\gamma_{t_{0}}^{\infty}(\cdot)\), and such that, after passing to a further subsequence, \(\gamma^{j}\circ\psi_{j}\) converges to \(\gamma^{\infty}\) smoothly on compact sets of \(S^{1}\times(0,t_{0}]\). The smooth convergence implies that \(\gamma^{\infty}\) satisfies curve shortening flow (1). Also, since \(\gamma^{\infty}(\cdot,t):S^{1}\to\mathbb{R}^{2}\) is a smooth embedding for each \(t\in(0,t_{0}]\), by Grayson's theorem [6], we can extend the flow until it disappears to a round point. We have thus created a smooth flow \(\gamma^{\infty}:S^{1}\times(0,T)\to\mathbb{R}^{2}\), which agrees with the above defined \(\gamma^{\infty}\) on \((0,t_{0})\) and converges to a round point as \(t\to T\).
Finally, to finish the proof we need to show that
1. \(T=\frac{A_{0}}{2\pi}\) and
2. \(\forall\varepsilon>0\), \(\exists\,t_{\varepsilon}>0\): \(\Gamma_{t}:=\gamma^{\infty}(S^{1},t)\subset B(\Gamma_{0},\varepsilon)\), \(\forall\,0<t<t_{\varepsilon}\).
To see (i), let \(A^{\infty}(t)\) denote the (finite) area enclosed by \(\Gamma_{t}\) and \(A^{j}(t)\) that of the approximating curves \(\Gamma_{t}^{j}=\gamma^{j}(S^{1},t)\). By the convergence for \(t\in(0,t_{0}]\), we have
\[A^{\infty}(t_{0})=\lim_{j}A^{j}(t_{0})=\lim_{j}A^{j}(0)-2\pi t_{0}=A_{0}-2\pi t _{0}\,.\]
Since \(0=\lim_{t\to T}A^{\infty}(t)=A^{\infty}(t_{0})-2\pi(T-t_{0})\), we obtain (i).
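In the computation above we used that, as long as \(\Gamma_{t}^{j}\) is a smooth embedded closed curve, the enclosed area decreases at the constant rate

\[\frac{d}{dt}A^{j}(t)=-\oint_{\Gamma_{t}^{j}}\kappa\,ds=-2\pi\,,\]

where the second equality holds since the total turning of an embedded closed planar curve equals \(2\pi\).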
In order to see (ii), we let \(\varepsilon>0\). It suffices to show that there exists \(t_{\varepsilon}\) such that for all \(j\) large enough \(\Gamma_{t}^{j}\subset B(\Gamma_{0},\varepsilon)\), for all \(t\in(0,t_{\varepsilon})\). Assume that this is not the case, but instead, there exists a sequence of times \(t_{k}\downarrow 0\) and a sequence of points of the slingshots \(x_{k}\in\Gamma_{t_{k}}^{j_{k}}\), with
\(j_{k}\to\infty\), such that \(\operatorname{dist}(x_{k},\Gamma_{0})>\varepsilon\). Note first, that by the assumption on \(\Gamma_{0}\) and the approximating sequence \(\Gamma_{0}^{i}\), a simple argument using grim reapers, parallel to the x-axis, as barriers implies that eventually the points \(x_{k}\) must be in a compact set, that is, there exists \(k_{0}\) and a compact set \(K\), such that for all \(k\geq k_{0}\), \(x_{k}\in K\).
Finally, the proof of Lemma 8 yields a uniform curvature bound for the slingshots in compact sets, which amounts to a uniform bound on the velocity. This implies that the distance traveled goes uniformly to zero, that is, \(\operatorname{dist}(\Gamma_{t_{k}}^{j_{k}}\cap K,\Gamma_{0})\to 0\) as \(k\to\infty\), and thus we obtain a contradiction.
|
2305.14502 | RetICL: Sequential Retrieval of In-Context Examples with Reinforcement
Learning | Recent developments in large pre-trained language models have enabled
unprecedented performance on a variety of downstream tasks. Achieving best
performance with these models often leverages in-context learning, where a
model performs a (possibly new) task given one or more examples. However,
recent work has shown that the choice of examples can have a large impact on
task performance and that finding an optimal set of examples is non-trivial.
While there are many existing methods for selecting in-context examples, they
generally score examples independently, ignoring the dependency between them
and the order in which they are provided to the model. In this work, we propose
Retrieval for In-Context Learning (RetICL), a learnable method for modeling and
optimally selecting examples sequentially for in-context learning. We frame the
problem of sequential example selection as a Markov decision process and train
an example retriever using reinforcement learning. We evaluate RetICL on math
word problem solving and scientific question answering tasks and show that it
consistently outperforms or matches heuristic and learnable baselines. We also
use case studies to show that RetICL implicitly learns representations of
problem solving strategies. | Alexander Scarlatos, Andrew Lan | 2023-05-23T20:15:56Z | http://arxiv.org/abs/2305.14502v2 | # RetICL: Sequential Retrieval of In-Context Examples with Reinforcement Learning
###### Abstract
Many recent developments in large language models focus on prompting them to perform specific tasks. One effective prompting method is in-context learning, where the model performs a (possibly new) generation/prediction task given one (or more) examples. Past work has shown that the choice of examples can make a large impact on task performance. However, finding good examples is not straightforward since the definition of a representative group of examples can vary greatly depending on the task. While there are many existing methods for selecting in-context examples, they generally score examples independently, ignoring the dependency between them and the order in which they are provided to the large language model. In this work, we propose Retrieval for In-Context Learning (RetICL), a learnable method for modeling and optimally selecting examples sequentially for in-context learning. We frame the problem of sequential example selection as a Markov decision process, design an example retriever model using an LSTM, and train it using proximal policy optimization (PPO). We validate RetICL on math problem solving datasets and show that it outperforms both heuristic and learnable baselines, and achieves state-of-the-art accuracy on the TabMWP dataset. We also use case studies to show that RetICL implicitly learns representations of math problem solving strategies.1
Footnote 1: Our code will be publicly released soon at [https://github.com/umass-ml4ed/RetICL](https://github.com/umass-ml4ed/RetICL).
## 1 Introduction
With the rising prominence of pre-trained large language models (LLMs), prior work has focused on how to best utilize them for various natural language tasks. One of the most popular methods for doing so is prompt tuning, which deals with carefully selecting the natural language prompt that maximizes model performance Liu et al. (2021). While there are many approaches to prompt tuning, a very successful one is in-context learning (ICL) Brown et al. (2020). In ICL, examples of a new task the LLM may not have been trained on before are included in the prompt to the LLM, enabling it to leverage patterns in these examples in a few-shot way. However, the choice of which examples the LLM sees for a particular task can significantly affect the model's performance Zhao et al. (2021).
The primary goal of ICL example selection is to define a function that ranks a list of examples, where the rank measures how well those examples will elicit a desired response from an LLM when they are given in the prompt. We formalize this function as \(\phi(x,e_{1},\dots,e_{T})\), where \(x\) is the input for the current task (or problem statement in a math setting) and \(e_{1},\dots,e_{T}\) is a list of \(T\) examples drawn from a corpus \(\mathcal{C}\). Most prior works simplify the definition of \(\phi\) by assuming that examples are conditionally independent given \(x\). This assumption allows the factorization \(\phi(x,e_{1},\dots,e_{T})=\prod_{t=1}^{T}\phi_{t}(x,e_{t})\), thus only having to rank one example at a time and allowing us to select an optimal set of examples by selecting the \(T\) examples in the corpus with the highest values of \(\phi_{t}(x,e_{t})\). However, this conditional independence assumption does not always hold: there is likely significant interplay between the roles of different examples in deciding the output of LLMs. Some tasks benefit from example diversity Su et al. (2022), while others may benefit from combining specific information across examples. In these cases, simply selecting the top-\(T\) ranked examples may neglect ones that are ranked lower on their own but are useful in conjunction with other examples. Additionally, top-\(T\) selection ignores the _order_ in which examples are provided, which can have an impact on performance Lu et al. (2021).
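For concreteness, the following sketch shows what this independent top-\(T\) selection looks like when each \(\phi_{t}(x,e)\) is instantiated as embedding similarity, one common choice in prior work (the cosine-similarity scoring is illustrative, not the only option):

```python
import numpy as np

def top_t_examples(x_emb: np.ndarray, corpus_embs: np.ndarray, T: int):
    # Score every example independently of the others via cosine similarity...
    sims = corpus_embs @ x_emb / (
        np.linalg.norm(corpus_embs, axis=1) * np.linalg.norm(x_emb))
    # ...and keep the T best; interplay and ordering are ignored.
    return np.argsort(-sims)[:T]
```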
### Contributions
In this work, we propose RetICL (Retrieval for In-Context Learning), a fully learnable method that
sequentially retrieves ICL examples by conditioning on both the current problem and examples that have already been selected. We frame the problem of sequential example selection as a Markov decision process (MDP) and train an example retriever model using reinforcement learning. We construct the model using an LSTM, where hidden states act as latent representations of MDP states, and model the example ranking function using a bilinear transformation between the latent and corpus spaces, enabling efficient inference-time maximization of \(\phi(x,e_{1},\dots,e_{T})\). We additionally propose a novel _confidence_ reward function, which uses the perplexity of the _generated_ solution to help guide training. We validate RetICL on the math word problem solving datasets TabMWP and GSM8K where it outperforms both heuristic and learnable baselines. Additionally, RetICL achieves state-of-the-art accuracy on TabMWP, improving over the current best method by \(8.20\%\)2. Finally, we qualitatively analyze RetICL's learned policies and find that RetICL is able to implicitly infer problem solving strategies from problem statements and examples.
Footnote 2: TabMWP Leaderboard: [https://github.com/lupantech/PromptPG](https://github.com/lupantech/PromptPG) We note that publicly reported results on TabMWP use flawed evaluation code, and discuss this in detail in the Experiments section.
## 2 Related Work
In-Context Example SelectionWhen solving tasks in an ICL setting, it is common to either randomly select in-context examples Brown et al. (2020); Lewkowycz et al. (2022) or use a handcrafted set of examples Hendrycks et al. (2021); Wei et al. (2023). However, it is now well known that example selection and ordering for each input instance can have a large impact on downstream text generation performance Gao et al. (2020); Liu et al. (2021); Zhao et al. (2021); Lu et al. (2021). Several prior works have found success in using heuristics to select examples Fu et al. (2022); Liu et al. (2021); Su et al. (2022), while others have developed learnable example selection methods that use heuristics as training signals Rubin et al. (2021); Pitis et al. (2023). Our method differs from these in that our model's training signal comes from per-problem performance in the target task. While we expect this distinction to help RetICL generalize to various tasks across domains, we focus only on math word problem solving in this work and leave exploration of other domains for future work. There are several other works that use reinforcement learning for ICL example selection. Lu et al. (2022) developed a policy gradient method for example selection, although their method does not include previously selected examples in the state, which is the key to our method. Zhang et al. (2022) used deep Q-Learning for policy learning, although their method only considers high-level summary information of previously selected examples, while our method uses their exact textual content.
Reinforcement LearningReinforcement learning (RL) is a machine learning paradigm where the goal is to learn a policy that maximizes the expected sum of rewards in an MDP. An MDP is a system defined by a temporal state, a set of potential actions, and a reward function Sutton and Barto (2018). While there are many RL algorithms, PPO Schulman et al. (2017), a policy gradient algorithm, has found significant success in natural language tasks, such as in its use for training ChatGPT OpenAI (2022) via reinforcement learning from human feedback Ziegler et al. (2019). However, RL algorithms can be notoriously challenging to train and often require a series of optimizations on top of the core algorithm. In this work, we implement PPO along with several modifications that are inspired by the observations listed in Andrychowicz et al. (2020), which we find to be necessary to improve training stability.
MWP Solving via In-Context LearningMath word problem (MWP) solving is a difficult task due to limited mathematical reasoning abilities in LLMs Lewkowycz et al. (2022). As a result, various methods have been developed for MWP solving in ICL settings. A common technique known as chain-of-thought (CoT) prompting Wei et al. (2023) includes detailed reasoning steps for examples in the prompt, eliciting the LLM to also generate detailed reasoning steps and thus improve performance. Prior work also shows that providing in-context examples with more complex solutions can lead to improved performance Fu et al. (2022). Additionally, several inference-time methods have been developed to help LLMs with MWP solving. Using a calculator in the decoding pipeline Cobbe et al. (2021) can effectively reduce arithmetic errors in LLM output. Randomly sampling multiple solution outputs from the LLM and selecting the most common or consensus final answer, referred to as self-consistency Wang et al. (2023), also significantly improves accuracy. We note that in this
work, we combine CoT prompting with our method but do not use inference-time techniques such as calculators or self-consistency, since they focus on a different aspect of the MWP solving pipeline and can be combined with our method.
## 3 Methodology
In this section, we detail how we frame ICL example selection as an MDP, how our example retriever model works, and how we train it using reinforcement learning. We show an overview of our methodology in Figure 1.
### Example Selection as an MDP
We can view ICL example selection as a sequential decision making problem, where we select examples one at a time in such a way that we maximize our chances of achieving some goal when the examples are used as context. This goal is to maximize \(r(\mathcal{M}(x,e_{1},\dots,e_{T}),y)\), where \(\mathcal{M}(\cdot)\) returns the generated output of an LLM given a prompt, \(y\) is the label corresponding to \(x\), and \(r\) is a task-specific function that returns how good the generated output is. We note that in this setup, the order in which examples are selected matters since the order in which they are provided to \(\mathcal{M}\) must be defined. We also note that while in this work we set \(T\) to a constant, it can also be dynamically set during the decision-making process, which we leave for future work. With this framing, we can naturally define an MDP where the state at time step \(t\) corresponds to both \(x\) and the first \(t\) examples that have been selected, and the action space is the set of potential candidates to be the next example. Formally,
\[S_{0} =x\] \[S_{t} =x,e_{1},\dots,e_{t}\] \[A_{t} =e_{t+1}\in\mathcal{C}.\]
We now define the reward function for the MDP, which we break into two parts: a task-specific **goal** reward, and a supplementary **confidence** reward. We define the goal reward, \(R^{G}\), simply as the output of \(r\), as long as it can be formulated to return a scalar value. In the context of MWP solving, it is natural for \(r\) to be binary, where it returns 1 when the generated solution results in a correct answer and returns -1 when the generated solution results in an incorrect answer, as is done in Lu et al. (2022). However, this reward function treats all correct solutions equally and all incorrect solutions equally, which can lead to suboptimal behavior since the model may have trouble distinguishing which correct solutions are better and which incorrect solutions are worse. To address this issue, we introduce the confidence reward, \(R^{C}\), which we define as the inverse perplexity of the generated solution assigned by the LLM, normalized to the range \([-1,1]\). We hypothesize that when an LLM generates a correct solution with high probability (low perplexity), it is likely that the model "knew" how to solve the problem, rather than getting it correct by guessing or using unsound reasoning to arrive at a final answer. Additionally, we hypothesize that when an LLM generates an incorrect solution with high probability, it may have sound reasoning overall but contain a small error, such as an incorrect calculation, that leads to an incorrect final answer. While we find that the confidence reward is helpful in improving accuracy, we do not perform further analyses to validate our hypotheses, and leave such investigations for future work. We define the final reward function to be the average of \(R^{G}\) and \(R^{C}\) at the final time step and 0 at all prior time steps. Formally,
Figure 1: RetICL model architecture. Each latent state is constructed from the previous one and the current example. Examples are selected based on the bilinear transformation between the latent state and the corpus. After all examples are selected, the LLM is queried, the reward is calculated, and the loss is backpropagated into the model.
\[\hat{y}=\mathcal{M}(x,e_{1},\ldots,e_{T})\]
\[R^{G}=r(\hat{y},y)\stackrel{\text{MWP}}{=}2\cdot\mathbb{I}[g(\hat{y},y)]-1\]
\[R^{C}=2\cdot p_{\mathcal{M}}(\hat{y})^{\frac{1}{|\hat{y}|}}-1\]
\[R_{t}=\begin{cases}0.5R^{G}+0.5R^{C}&\text{if }t=T\\ 0&\text{otherwise},\end{cases}\]
where \(\hat{y}\) is the generated solution, \(g\) is a function that checks if two solutions have the same final answer, \(\mathbb{I}\) is the indicator function, and \(p_{\mathcal{M}}\) returns the probability assigned by the LLM. Next, we describe the retriever model which will define our policy, and how we train it using reinforcement learning.
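The reward computation can be sketched in a few lines, assuming the LLM API exposes per-token log-probabilities of the generated solution; the function names below are ours.

```python
import math

def goal_reward(correct):
    # R^G for MWPs: +1 for a correct final answer, -1 otherwise
    return 1.0 if correct else -1.0

def confidence_reward(token_logprobs):
    # Inverse perplexity p(y_hat)^(1/|y_hat|) = exp(mean token log-prob),
    # rescaled from [0, 1] to [-1, 1]
    inv_ppl = math.exp(sum(token_logprobs) / len(token_logprobs))
    return 2.0 * inv_ppl - 1.0

def final_reward(correct, token_logprobs):
    # R_T = 0.5 * R^G + 0.5 * R^C; rewards at all earlier steps are 0
    return 0.5 * goal_reward(correct) + 0.5 * confidence_reward(token_logprobs)
```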
### Retriever Model
We now detail our model for example retrieval. At a high level, the model constructs a latent representation for each state \(S_{t}\) in the MDP. The latent representation at \(S_{t}\) is used to construct a probability distribution over all examples in the corpus, which we treat as the policy \(\pi(S_{t},\cdot)\). We then use this policy to decide which example to select for state \(S_{t+1}\) and the process continues sequentially.
We use an LSTM Hochreiter and Schmidhuber (1997) as the base model, where the hidden state \(\mathbf{h}_{t}\) acts as the latent representation for \(S_{t}\). We set the initial hidden state of the LSTM, \(\mathbf{h}_{0}\), to be a vectorized embedding of the problem statement \(x\), and set the input of the LSTM at time step \(t\) to be a vectorized embedding of the example \(e_{t}\). In this work, we construct these vectorized embeddings using a pre-trained S-BERT model Reimers and Gurevych (2019) and additionally provide learnable soft prompts Lester et al. (2021) to S-BERT to help align the embeddings with the current task. We found that fine-tuning the S-BERT parameters directly did not improve performance.
We use each hidden state \(\mathbf{h}_{t}\) to produce two key outputs: the value function estimate at \(S_{t}\) and the policy at \(S_{t}\). The value function estimate, \(\hat{v}(S_{t})\), is a learnable approximation of the expected sum of discounted rewards from \(S_{t}\) till the final time step, and is required for variance reduction in policy gradient training. We produce this estimate using a simple linear transformation on top of \(\mathbf{h}_{t}\). The policy, \(\pi(S_{t},e)\), represents the probability of choosing \(e\) to be the next example when we are in state \(S_{t}\). We construct the policy by first producing an unnormalized _activation_ value for each example in the corpus, \(\phi(S_{t},e)\), and then use the softmax function to convert these activations into a probability distribution. We construct each \(\phi(S_{t},e)\) by performing a learnable bilinear transformation between \(\mathbf{h}_{t}\) and the vectorized embedding of \(e\). We choose to model \(\phi\) using a bilinear transformation for two reasons. First, the bilinear transformation learns a mapping between the model's latent space and the example embedding space, allowing generalization to examples not seen during training and also enabling some interpretability, as we will show later in this paper. Second, the bilinear transformation enables efficient computation of the policy over a large corpus at inference time, as we will show later in this section. We formalize our model architecture as follows:
\[\mathbf{x}=\text{S-BERT}(\mathbf{P}_{x},x)\,,\qquad\mathbf{e}=\text{S-BERT}(\mathbf{P}_{e},e)\]
\[\mathbf{h}_{0}=\tanh(\mathbf{W}_{c}\mathbf{x}+\mathbf{b}_{c})\,,\qquad\mathbf{h}_{t>0}=\text{LSTM}(\mathbf{e}_{1},\ldots,\mathbf{e}_{t};\mathbf{h}_{0})\]
\[\hat{v}(S_{t})=\mathbf{h}_{t}^{T}\mathbf{w}_{v}+b_{v}\]
\[\phi(S_{t},e)=\begin{cases}\mathbf{h}_{t}^{T}\mathbf{W}_{a}\mathbf{e}&\text{if }e\notin\{e_{1},\ldots,e_{t}\}\\ -\infty&\text{otherwise}\end{cases}\]
\[\pi(S_{t},e)=\frac{e^{\phi(S_{t},e)}}{\sum_{e^{\prime}\in\mathcal{C}}e^{\phi(S_{t},e^{\prime})}}\]
where \(\mathbf{P}_{x}\in\mathbb{R}^{d_{p}\times d_{i}}\) and \(\mathbf{P}_{e}\in\mathbb{R}^{d_{p}\times d_{i}}\) are learnable soft prompts, \(\mathbf{W}_{c}\in\mathbb{R}^{d_{h}\times d_{e}}\) and \(\mathbf{b}_{c}\in\mathbb{R}^{d_{h}}\) transform the problem statement embedding space into the latent space, \(\mathbf{w}_{v}\in\mathbb{R}^{d_{h}}\) and \(b_{v}\in\mathbb{R}\) produce the value function estimate from the latent space, \(\mathbf{W}_{a}\in\mathbb{R}^{d_{h}\times d_{e}}\) performs the bilinear transformation between the latent space and example embedding space, \(d_{p}\) is the soft prompt length, \(d_{i}\) is the S-BERT input embedding size, \(d_{e}\) is the S-BERT hidden size, and \(d_{h}\) is the size of the latent space. We note that we set \(\phi(S_{t},e)\) to \(-\infty\) when \(e\) has already been selected to avoid selecting the same example multiple times, which is in line with existing methods.
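A condensed PyTorch sketch of this architecture follows; it assumes precomputed S-BERT embeddings and omits the soft prompts for brevity, so it is an illustration of the equations above rather than a full implementation.

```python
import torch
import torch.nn as nn

class RetICLRetriever(nn.Module):
    # Sketch of the retriever; S-BERT embeddings are assumed precomputed,
    # and the learnable soft prompts are omitted for brevity.
    def __init__(self, d_e=768, d_h=800):
        super().__init__()
        self.init_proj = nn.Linear(d_e, d_h)            # W_c, b_c
        self.lstm = nn.LSTM(d_e, d_h, batch_first=True)
        self.value_head = nn.Linear(d_h, 1)             # w_v, b_v
        self.W_a = nn.Parameter(torch.empty(d_h, d_e))  # bilinear weights
        nn.init.orthogonal_(self.W_a)

    def forward(self, x_emb, ex_embs, corpus_embs, selected_mask):
        # x_emb: (B, d_e), ex_embs: (B, t, d_e), corpus_embs: (C, d_e),
        # selected_mask: (B, C) boolean mask of already-selected examples
        h0 = torch.tanh(self.init_proj(x_emb))          # h_0
        if ex_embs.size(1) > 0:
            h0 = h0.unsqueeze(0)                        # (1, B, d_h)
            out, _ = self.lstm(ex_embs, (h0, torch.zeros_like(h0)))
            h_t = out[:, -1]                            # h_t
        else:
            h_t = h0
        value = self.value_head(h_t).squeeze(-1)        # v_hat(S_t)
        act = h_t @ self.W_a @ corpus_embs.T            # phi(S_t, e) for all e
        act = act.masked_fill(selected_mask, float('-inf'))
        policy = torch.softmax(act, dim=-1)             # pi(S_t, .)
        return policy, value
```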
We now note that our formulation for \(\phi\) allows efficient retrieval of the top-ranking example at each time step via maximum inner-product search (MIPS). This is because \(\phi(S_{t},e)\) is maximized by finding the example \(e\) that maximizes the inner product \(\langle\mathbf{h}_{t},\mathbf{W}_{a}\mathbf{e}\rangle\). We leverage this information by first pre-computing \(\mathbf{W}_{a}\mathbf{e}\) for each example in the corpus and constructing a MIPS index over these vectors, using a library such as _faiss_ (Johnson et al., 2019). At inference time, we can now leverage algorithms that perform approximate MIPS in sublinear time, i.e., maximize \(\phi(S_{t},e)\) without evaluating \(\mathbf{h}_{t}^{T}\mathbf{W}_{a}\mathbf{e}\) for each example in the corpus. We note that we do not use MIPS in this work since the corpora we experiment on are sufficiently small such that evaluating \(\mathbf{h}_{t}^{T}\mathbf{W}_{a}\mathbf{e}\) for each example in a corpus is relatively inexpensive. However, we expect that significant computational time can be saved with MIPS when evaluating on corpora at much larger scales.
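As an illustrative sketch (array contents are placeholders), an exact inner-product index over the pre-computed \(\mathbf{W}_{a}\mathbf{e}\) vectors can be built as follows; replacing `IndexFlatIP` with an approximate index such as an IVF or HNSW variant would yield the sublinear search described above.

```python
import numpy as np
import faiss

d_h, d_e, C = 800, 768, 100_000                     # latent size, emb. size, corpus size
W_a = np.random.randn(d_h, d_e).astype('float32')   # stand-in for trained weights
corpus_embs = np.random.randn(C, d_e).astype('float32')

keys = corpus_embs @ W_a.T                          # pre-compute W_a e for each example
index = faiss.IndexFlatIP(d_h)                      # exact inner-product index
index.add(keys)

h_t = np.random.randn(1, d_h).astype('float32')     # current latent state
scores, ids = index.search(h_t, 5)                  # top-5 examples by phi(S_t, e)
```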
### Training and Inference
We train the retriever model using proximal policy optimization (PPO) Schulman et al. (2017) with generalized advantage estimation (GAE) Schulman et al. (2015) as our advantage function. We additionally use a reward discount of \(\gamma=1\) since all episodes have fixed length and the reward is assigned only at the final time step. We train the value function estimator using mean squared error (MSE) with \(R_{T}\) as the target at each time step and weigh the value function loss with a hyperparameter \(c_{\text{VF}}\). We also encourage exploration by adding the negative entropy of the policy at each time step to the loss Ahmed et al. (2019), where we additionally weigh the entropy by a hyperparameter \(c_{\text{E}}\) and normalize by \(\frac{1}{\log(|\mathcal{C}|)}\) to account for training with different corpus sizes.
At train time, we select a batch of problems from the dataset, and then construct a sequence of examples for each problem by sequentially sampling from the policy, i.e., \(e_{t+1}\sim\pi(S_{t},\cdot)\). When \(T\) examples have been selected for each problem, we prompt the LLM with the examples and the current problem statement, calculate the reward from the LLM's generations, average the PPO loss, value function loss, and entropy loss over the batch, and backpropagate through our model. At inference time, we greedily select examples from the policy, i.e., \(e_{t+1}=\operatorname*{argmax}_{e\in\mathcal{C}}\pi(S_{t},e)\), since we find that greedy selection yields higher accuracy than sampling.
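A sketch of this selection loop, reusing the retriever sketch from the previous subsection, might look as follows:

```python
import torch

def select_examples(retriever, x_emb, corpus_embs, T=2, greedy=False):
    # Sample e_{t+1} ~ pi(S_t, .) at train time; argmax at inference time.
    B, C = x_emb.size(0), corpus_embs.size(0)
    selected_mask = torch.zeros(B, C, dtype=torch.bool)
    ex_embs = x_emb.new_empty(B, 0, corpus_embs.size(1))
    chosen = []
    for _ in range(T):
        policy, _ = retriever(x_emb, ex_embs, corpus_embs, selected_mask)
        if greedy:
            idx = policy.argmax(dim=-1)
        else:
            idx = torch.multinomial(policy, num_samples=1).squeeze(-1)
        chosen.append(idx)
        selected_mask[torch.arange(B), idx] = True   # forbid re-selection
        ex_embs = torch.cat([ex_embs, corpus_embs[idx].unsqueeze(1)], dim=1)
    return torch.stack(chosen, dim=1)                # (B, T) corpus indices
```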
## 4 Experiments
In this section, we validate RetICL on math word problem (MWP) solving tasks and quantitatively compare its performance to several baselines. We also perform an ablation study and examine the effects of adjusting several parameters in order to determine which aspects of the methodology are working well and which may need improvement in future work.
### Datasets
We validate RetICL on two MWP datasets that contain detailed solution steps: TabMWP Lu et al. (2022), where solving each problem requires extracting and reasoning with information from tables, and GSM8K Cobbe et al. (2021), where solving each problem requires multi-step mathematical reasoning and applying various arithmetic operations. We choose to use these datasets since the detailed solution steps both allow CoT prompting and allow our model to interpret the solution steps when making example selections. To the best of our knowledge, these are the only two existing MWP datasets that contain detailed solution steps, other than MATH Hendrycks et al. (2021), which we found in preliminary investigations to be too difficult to achieve high accuracy on via ICL with the LLM we used. TabMWP has a pre-defined train/validation/test split of 23,059/7,686/7,686 problems, and GSM8K has a pre-defined train/test split of 7,473/1,319. We reserve 1,000 random problems from GSM8K's train set for validation. For both datasets, we include the full step-by-step example solutions in the prompts and embeddings but evaluate the correctness of the solution based on only the final answer. We note that the official TabMWP evaluation code is flawed, since issues with regular expressions cause both false positives and false negatives when evaluating correctness on multiple choice problems. We instead use our own code to evaluate correctness on TabMWP, which we find fixes the issues with the original code.
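As an illustration of final-answer evaluation, a simplified comparison function \(g(\hat{y},y)\) is sketched below; it assumes GSM8K-style solutions terminated by a `#### <answer>` line and is not the exact evaluation code used in our experiments.

```python
import re

def extract_final_answer(solution):
    # GSM8K reference solutions end with a "#### <answer>" line
    m = re.search(r'####\s*(.+)', solution)
    text = m.group(1) if m else solution.strip().splitlines()[-1]
    nums = re.findall(r'-?\d[\d,]*\.?\d*', text)
    return nums[-1].replace(',', '') if nums else text.strip().lower()

def answers_match(y_hat, y):
    a, b = extract_final_answer(y_hat), extract_final_answer(y)
    try:
        return abs(float(a) - float(b)) < 1e-6   # numeric final answers
    except ValueError:
        return a == b                            # e.g., multiple-choice labels
```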
### Experimental Settings
We implement the PPO algorithm and the retriever model in PyTorch. We encode problem statements and examples using the _all-distilroberta-v1_ pre-trained S-BERT model Reimers and Gurevych (2019), take the normalized mean-pooled final layer outputs as the embeddings, and use a soft prompt length of 20. We use OpenAI's _code-davinci-002_ Codex model Chen et al. (2021) as the LLM with greedy decoding and set the maximum number of generated tokens to 400. We use Codex since we found it to be more accurate than open-source models and _text-davinci-003_ on our tasks, to have better ICL prompting behavior than _gpt-3.5-turbo_ on our tasks, and the API is free. We set the LSTM's hidden size to 800, PPO's \(\epsilon\) to 0.1, GAE's \(\lambda\) to 0.9,
and \(c_{\text{VF}}\) to 0.5. We set \(c_{\text{E}}\) to 0.05 for TabMWP and \(c_{\text{E}}\) to 0.1 for GSM8K, where different values are necessary since we find that our method performs differently across datasets and that \(c_{\text{E}}\) has a large impact on training stability. We use orthogonal initialization Hu et al. (2020) for all weight parameters, initialize all bias parameters to 0, and initialize soft prompts using a standard normal distribution. We train using the AdamW optimizer for 50 epochs with a learning rate of 0.001, a weight decay of 0.01, and a batch size of 20. We additionally apply gradient norm clipping on all parameters using a value of 2, which we find is critical to avoid spikes in training losses.
For each dataset, we randomly select 5,000 problems from the training set as our problems to train on, since we find that this number achieves a good balance between high accuracy and minimizing training time. We randomly select an additional 200 problems from the training set to use as the corpus of examples. While it is possible to use a larger corpus, e.g., all remaining problems in the training set, we find that training on a smaller corpus results in higher accuracy and that 200 works well in practice. We randomly select 500 problems from the validation set to evaluate performance on after each epoch. For validation and testing, we use the entire training set as the corpus, since we find that having access to as many examples as possible at inference time generally increases accuracy. We save the model at the epoch with the highest accuracy on the validation set for evaluation on the test set. We set the number of in-context examples to \(T=2\) for all experiments, since this is the minimal number of examples required to evaluate the impact of selecting examples sequentially. We note that while modifying \(T\) can have an impact on performance for both RetICL and baselines, we find that using a constant across methods provides a fair comparison of performance, and we leave exploration of this parameter for future work.
### Baselines
We compare RetICL to three baseline in-context example selection methods: **random** selection, similarity-based **kNN** selection Liu et al. (2021), and **PromptPG**Lu et al. (2022). We also perform **exhaustive** evaluation to serve as an approximate upper bound to the example selection methods.
**Random** With random selection, for each problem, we randomly sample \(T\) unique examples from the corpus for the ICL prompt. We evaluate random selection on 3 different random seeds and report the average accuracy across all 3 runs.
**kNN** With kNN selection Liu et al. (2021), for each problem, we select the \(T\) examples with the most similar problem statements from the corpus and use those for the ICL prompt. We evaluate similarity by minimizing the Euclidean distance between the S-BERT embeddings of the problem statements using the same pre-trained S-BERT model as RetICL.
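A minimal sketch of this baseline using the `sentence-transformers` package:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer('all-distilroberta-v1')

def knn_select(problem, corpus_problems, T=2):
    # Select the T corpus problems closest to the test problem in embedding space
    q = encoder.encode([problem], normalize_embeddings=True)        # (1, d)
    E = encoder.encode(corpus_problems, normalize_embeddings=True)  # (C, d)
    dists = np.linalg.norm(E - q, axis=1)                           # Euclidean
    return np.argsort(dists)[:T]                                    # nearest T
```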
**PromptPG** With PromptPG Lu et al. (2022), for each problem, a learned scoring function is evaluated on each individual example in the corpus, and the top \(T\) scoring examples are selected for the ICL prompt. PromptPG is a similar method to RetICL in that it uses a policy gradient method to learn an ICL example scoring function. However, there are many key differences between their method and ours: they do not include previously selected examples in the state, their reward function only considers correctness of the final answer, they encode text using BERT instead of S-BERT and do not use fine-tuning or soft prompting, they use REINFORCE instead of PPO, they do not use entropy to boost exploration, they use a much smaller training size of 160, and they use a corpus size of 20 for both training and inference. We evaluate PromptPG's performance by running their code with modifications to match our prompting style and use our fixed evaluation code for fair comparison.
**Exhaustive** With exhaustive evaluation, for each problem, we construct a one-shot ICL prompt for each example in the corpus, and consider the current problem to be solved if a correct solution is generated from any of the prompts. We use one-shot prompts instead of few-shot prompts to reduce the search space. Additionally, we restrict the corpus size to 100 and only evaluate on the pre-defined 1,000-sample subset of the test set for TabMWP to reduce computation time.
### Results
Table 1 shows the performance of all methods on both datasets. We see that RetICL performs the best among non-exhaustive methods on both datasets, on par with kNN on TabMWP and significantly better on GSM8K. We also see that kNN is much better than Random on TabMWP but is only slightly
better than Random on GSM8K. However, after investigating the dataset, we conclude that the high performance of kNN on TabMWP is likely due to the presence of many problems in the dataset with very high similarity. For example, many problems will have exactly the same question text other than a few numbers or names changed, making it easy for the LLM to generate a correct solution given a highly similar example. On the contrary, GSM8K does not tend to contain problems that are almost identical, which makes kNN ineffective since problems with high textual similarity may not have similar solution strategies.
Perhaps surprisingly, we see that PromptPG is only slightly better than Random on TabMWP and performs on par with Random on GSM8K. While these results contradict the trends reported in Lu et al. (2022), we believe the discrepancy is mostly due to using the fixed evaluation code. We believe that PromptPG's relatively low performance also highlights the challenges of solving the ICL example selection problem using RL; many of the optimizations we implemented are required to achieve high performance in practice.
Additionally, we see that the Exhaustive method achieves almost perfect accuracy on both datasets. We find this surprising, especially due to the fact that Exhaustive only uses a single ICL example and has access to a smaller corpus. This result implies that there is significant room for growth in ICL example selection methods and also implies that one-shot ICL has the potential to be an extremely powerful inference method as long as the example corpus is informative, even for challenging text generation tasks.
In order to determine how well RetICL can generalize to low-resource settings, we examine the effect of reducing the number of available examples at test time. We evaluate RetICL and kNN on both datasets where 0.1%, 1%, 10%, and 100% of all examples are available as candidates, and show our results in Figure 2. Additionally, in order to more clearly demonstrate the generalizability of the methods, we show the relative accuracy compared to RetICL's accuracy when the full corpus is available. For TabMWP, we evaluate on the pre-defined 1000-sample subset of the test set. We first observe that even with 0.1% of examples available, both methods still retain a large portion of their performance, which implies that both methods are still viable in low resource settings. We also note that RetICL still outperforms or ties kNN on all corpus sizes, implying that RetICL is still the preferred method across corpus sizes. We note that the relative drop in performance is greater for TabMWP than GSM8K, likely because the TabMWP corpus loses many examples that have high similarity to the test problems, whereas GSM8K doesn't have such high similarity between problems. Finally, we note that in some cases a smaller corpus leads to higher relative performance. We believe that the peak in performance for kNN on GSM8K at 1% implies that kNN is a poor heuristic for this dataset, since the early peak means that examples with higher similarity can result in less accuracy. We note that RetICL is slightly higher at 10% than 100% on TabMWP, implying that the policy is not perfectly tuned, since the peak means that some examples that are preferred by the policy can lead to lower accuracy.
### Ablation study
We now examine the impact of various modeling and algorithmic choices via an ablation study. We train on 1,000 problems instead of 5,000 for all ablations for faster experimentation, and denote this reduced training size with \(\mathbf{T_{1k}}\). We experiment with the following adjustments:
| **Method** | **TabMWP** | **GSM8K** |
| --- | --- | --- |
| Exhaustive | 98.20 | 97.95 |
| Random | 72.20 | 57.19 |
| PromptPG (Lu et al., 2022) | 73.43 | 56.94 |
| kNN (Liu et al., 2021) | 88.49 | 59.21 |
| RetICL | **88.51** | **66.11** |

Table 1: Test set accuracy for RetICL and baselines.
Figure 2: Change in relative accuracy as number of available example candidates increases.
* **Conf. Rew.**: We no longer use the confidence reward, \(R^{C}\), and instead only use the goal reward, \(R^{G}\).
* **LSTM**: We no longer condition on previously selected examples by removing the LSTM architecture and instead set the latent state for all time steps to be \(\mathbf{h}_{0}\).
* **Ent.**: We no longer include an entropy term in the loss function.
* **SP**: We no longer provide learnable soft prompts to the S-BERT encoder. We note that this change significantly reduces the memory footprint since soft prompt tuning requires storing copies of partial gradients over all S-BERT parameters for each candidate example.
* \(\mathbf{TC_{20}}\) and \(\mathbf{TC_{all}}\): We vary the size of the corpus at train time, using a corpus with 20 problems from the training set and all remaining problems from the training set, respectively. We also apply the SP ablation for the TC ablations since otherwise the partial gradients over the S-BERT parameters will cause the system to run out of memory for \(\mathbf{TC_{all}}\).
Table 2 shows the results of the ablation study. We list both accuracy as well as the number of unique examples selected at inference time to examine the impact of each ablation on the diversity in the selected examples. For TabMWP, we evaluate on the pre-defined 1,000-sample subset of the test set.
We see that using 1,000 training problems only slightly reduces accuracy, although it significantly reduces example diversity. We also see that removing the confidence reward and the entropy loss significantly reduces accuracy and example diversity, implying that these modifications are key optimizations for training. Next, we see that removing the LSTM slightly improves accuracy for TabMWP but significantly drops accuracy for GSM8K, and for both datasets does not significantly impact example diversity. To explain this discrepancy, we find that for TabMWP, there are several cases where the non-LSTM model selects examples that are more relevant to the current problem compared to the LSTM model, implying that the LSTM model may have more trouble converging on a good policy and may require additional optimizations to fix this issue. Next, we see that removing soft prompting slightly drops accuracy for TabMWP but slightly improves accuracy and significantly reduces example diversity for GSM8K. We note that the training run for SP for GSM8K peaked in validation accuracy at an early epoch, so it is possible that the selected model for this run was in a local optimum that happened to have slightly better performance. Next, we see that only using 20 candidate examples at train time significantly hurts accuracy across datasets, although perhaps surprisingly, increases example diversity for GSM8K. Finally, we see that using all available examples as candidates during training hurts accuracy on both datasets. This drop in performance may be due to slower convergence on an optimal policy since the policy needs to explore a huge search space. We note that Lu et al. (2022) also observe that increasing the corpus size at train time does not necessarily increase accuracy.
## 5 Qualitative Analysis
We now present several qualitative analyses in order to interpret RetICL's example selection policy. Our goal in these analyses is to determine what features RetICL focuses on in individual examples, as well as what strategy RetICL uses to select an entire sequence of examples. We investigate these strategies by first visualizing learned latent example embeddings and then analyzing trends in per-problem example selections.
### Latent Space Analysis
In order to identify features in the selected examples that are being emphasized by RetICL, we perform a visual analysis of the example embeddings in the model's latent space. Specifically, we transform each example embedding \(\mathbf{e}\) into the model's latent space using the right half of the bilinear from the \(\phi\) function, i.e., \(\mathbf{W_{a}}\mathbf{e}\). We note that because
| **Ablation** | **TabMWP Acc.** | **TabMWP Ex.** | **GSM8K Acc.** | **GSM8K Ex.** |
| --- | --- | --- | --- | --- |
| None | 88.20 | 197 | 66.11 | 97 |
| T\({}_{1k}\) | 87.20 | 115 | 65.96 | 34 |
| T\({}_{1k}\), Conf. Rew. | 84.40 | 64 | 64.67 | 20 |
| T\({}_{1k}\), LSTM | 88.40 | 113 | 63.91 | 38 |
| T\({}_{1k}\), Ent. | 83.00 | 26 | 62.77 | 6 |
| T\({}_{1k}\), SP | 86.50 | 131 | 66.26 | 3 |
| T\({}_{1k}\), SP, TC\({}_{20}\) | 84.20 | 81 | 61.87 | 58 |
| T\({}_{1k}\), SP, TC\({}_{all}\) | 85.40 | 99 | 65.88 | 58 |

Table 2: Ablation study, measuring accuracy and number of examples used on both datasets.
maximizing \(\phi(S_{t},e)\) is equivalent to maximizing \(\langle\mathbf{h}_{t},\mathbf{W}_{a}\mathbf{e}\rangle\), the most likely example to select at \(S_{t}\) is the example where \(\mathbf{W}_{a}\mathbf{e}\) is closest to \(\mathbf{h}_{t}\) in the latent space\({}^{3}\). We can thus infer that examples that are close in the latent space have similar likelihood of being selected by the policy, which enables us to manually examine the policy's rankings by examining patterns in local regions of latent example embeddings.
Footnote 3: The equivalence between maximum inner product and minimum distance is not guaranteed in the general case, but is true in our case because the \(\mathbf{e}\) vectors are normalized.
For both TabMWP and GSM8K, we randomly select 1,000 examples from the example corpus and then apply t-SNE (Van der Maaten and Hinton, 2008) to reduce their embeddings to 2 dimensions for visualization. Additionally, for the same sets of examples, we also visualize their pre-trained S-BERT embeddings in the same way, in order to demonstrate how inter-example similarities change after RetICL training. We color points based on the number of steps in an example's solution, with red being the least, green being the most, and yellow being in the middle.
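A sketch of this projection and visualization, assuming the trained \(\mathbf{W}_{a}\) and the example embeddings are available as arrays:

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_latent(corpus_embs, W_a, n_steps):
    # Map each example into the latent space via W_a e, then project to 2-D
    latent = corpus_embs @ W_a.T                     # (N, d_h)
    xy = TSNE(n_components=2).fit_transform(latent)  # (N, 2)
    plt.scatter(xy[:, 0], xy[:, 1], c=n_steps, cmap='RdYlGn', s=8)
    plt.colorbar(label='number of solution steps')
    plt.show()
```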
Figure 3 shows these visualizations. For GSM8K, we see that RetICL groups examples based on the number of solution steps, whereas the pre-trained S-BERT embeddings do not. This observation implies that the number of solution steps is an important factor in example selection, and confirms findings from prior work that solution complexity impacts generation performance (Fu et al., 2022). We also see that clusters in the RetICL embeddings have been somewhat merged together from the pre-trained embeddings. This result can be interpreted by observing the pre-trained embeddings to be primarily clustered based on topic, e.g., problems about money and problems about time belong to separate clusters, since S-BERT embeddings reflect the semantic content of the examples. While local neighbors in the RetICL embedding space also tend to have similar topics, the clusters are less well-separated, which implies that both topic and the solution strategy, which is partly reflected in the length of solution steps, are used for example selection by RetICL.
For TabMWP, we see that the space looks very different from GSM8K, with many separate clusters being present in both the RetICL and pre-trained spaces, primarily based on the problem's
Figure 3: Example embedding visualizations. From left to right, top to bottom: GSM8K pre-trained embeddings, GSM8K RetICL embeddings, TabMWP pre-trained embeddings, and TabMWP RetICL embeddings.
template. For example, there is one cluster for asking yes/no questions about schedules, one for asking if someone has enough money to buy something, and one for asking what the mean of a set of numbers is. Since RetICL retains these clusters, we can infer that an example's template is key to example selection. This observation is also validated by kNN's high performance on this dataset, since the most semantically similar problems are always from the same cluster. While there are not many differences between the RetICL and pre-trained S-BERT embedding spaces, we observe that RetICL has pulled several clusters closer together. For example, it partially merges together problems that require finding the largest value and problems that require finding the smallest value from a set. This observation suggests that problems across the merged template clusters can be used interchangeably as examples, since their problems tend to have similar reasoning strategies.
### Per-Problem Example Selection
We now examine example selections at the per-problem level in order to gain further insight into RetICL's learned example selection policy.
Table 3 shows the in-context examples selected to help solve a representative example problem from the GSM8K dataset. We see that RetICL tends to select examples that share some unique high-level features with the input problem. Examples of such features are subtracting from a total value, adding up monetary values over some period of time, and defining some variables to be proportional to other variables. We note that each problem in the dataset exhibits several such features, so RetICL has to implicitly decide which particular features are most important for the current problem and which examples most represent those features. We also see that RetICL tends to select examples with solutions that are relatively long and have substantial verbal reasoning. While RetICL's selection strategy works well in many cases, there are several common scenarios where it fails. First, the LLM can exhibit misconceptions when it lacks an example to provide context, such as misinterpreting the meaning of a "discount" when not explicitly instructed. Second, the LLM can try to follow the examples too closely and use reasoning that does not necessarily apply to the current problem. These errors indicate that RetICL's policy could be improved by selecting based on a broader and more targeted set of features. However, we note that many of the incorrect solutions do not appear to be due to poor example selection, since they contain simple errors such as incorrect arithmetic or switching the roles of variables in the problem. We believe these errors are due to limitations of the LLM and are a likely source of noise in the training signal, making it harder to find an optimal policy. We note that such errors can be fixed by using a calculator, self-consistency, or external computation engines (Wolfram, 2023), and we leave integration of such methods into RetICL for future work.
Table 4 shows the in-context examples selected to help solve a representative example problem from the TabMWP dataset. We see that RetICL's selections tend to follow a surprising pattern: the first selected example is seemingly unrelated to the current problem, while the second selected example has similar reasoning steps to the current problem. This strategy has several implications. First, it suggests that RetICL can infer reasoning steps from the current problem and select examples based on this information. Second, it shows that there may be a benefit to a diverse set of examples in the prompt, possibly because an unrelated first example prevents overfitting to example solutions, or because there is some subtle benefit to seeing different kinds of calculations earlier in the context. We note that the incorrect solutions tend to be caused by either cases where RetICL diverges from the previously described strategy and selects an unrelated second example, or selects a second example that is similar to the current problem but requires a slightly different reasoning strategy. These failure cases imply that RetICL could benefit from training improvements to make its policy more consistent and stable, as well as architectural improvements of the model to be more accurate in inferring reasoning strategy from problem statement.
## 6 Conclusion
In this work, we proposed RetICL, a method for sequential in-context example selection that, unlike existing methods that select all examples independently, takes previously selected examples into account. We framed the problem of example selection as a Markov decision process and developed a novel reward function, an example retriever model, and ways to train the model. We demonstrated that RetICL learns an effective strategy for example selection and outperforms baselines on
the task of math word problem solving. There are many avenues for future work. First, since we only validated RetICL for math problem solving, we can explore its usage in other natural language generation tasks to see if it can be used as a generic method for selecting ICL examples. Second, we can explore other architectural modifications that could further improve the retriever model, such as using a Transformer instead of an LSTM. Third, since we used a fixed number of examples, we can extend RetICL to let it learn how many examples are needed. Fourth, we can explore whether RetICL can be applied to real-world educational settings, e.g., selecting worked examples to help students solve practice problems.
## Limitations
We note that there are several practical limitations to our method. First, we note that RetICL can be expensive and time-consuming to train, with each of our main training runs requiring up to 250,000 LLM inferences. This high number of inferences makes training on paid models prohibitively expensive; for example, it could cost up to approximately $2,500 to train on OpenAI's _text-davinci_ models. Additionally, newer OpenAI models, such as _gpt-3.5-turbo_ and _gpt-4_, do not return likelihood information on generated text, making the confidence reward impossible to use for these models.
## Ethical Considerations
We first note that the high number of inferences required to train RetICL gives the method an outsized cost in terms of energy usage; however, we note that the method has a relatively low cost at inference time given its low number of parameters and potential for optimization with MIPS. Additionally, we note that because RetICL uses a black-box LLM reward signal, its example selections are not guaranteed to be interpretable by humans. Finally, because we only examine the math problem solving domain, we did not perform any analysis of bias in RetICL's selections. However, it is possible that RetICL could reflect biases in the LLM it is being trained on. As such, we recommend an analysis of bias in future works that use RetICL in sensitive settings such as student-facing educational tools.
|
2308.13291 | Gaussian boson sampling at finite temperature | Gaussian boson sampling (GBS) is a promising candidate for an experimental demonstration of quantum advantage using photons. However, sufficiently large noise might hinder a GBS implementation from entering the regime where quantum speedup is achievable. Here, we investigate how thermal noise affects the classical intractability of generic quantum optical sampling experiments, GBS being a particular instance of the latter. We do so by establishing sufficient conditions for an efficient simulation to be feasible, expressed in the form of inequalities between the relevant parameters that characterize the system and its imperfections. We demonstrate that the addition of thermal noise has the effect of tightening the constraints on the remaining noise parameters required to show quantum advantage. Furthermore, we show that there exists a threshold temperature at which any quantum sampling experiment becomes classically simulable, and provide an intuitive physical interpretation by relating this occurrence with the disappearance of the quantum state's non-classical properties. | Gabriele Bressanini, Hyukjoon Kwon, M. S. Kim | 2023-08-25T10:33:06Z | http://arxiv.org/abs/2308.13291v2 | # Gaussian boson sampling at finite temperature
###### Abstract
Gaussian boson sampling (GBS) is a promising candidate for an experimental demonstration of quantum advantage using photons. However, sufficiently large noise might hinder a GBS implementation from entering the regime where quantum speedup is achievable. Here, we investigate how thermal noise affects the classical intractability of generic quantum optical sampling experiments, GBS being a particular instance of the latter. We do so by establishing sufficient conditions for an efficient simulation to be feasible, expressed in the form of inequalities between the relevant parameters that characterize the system and its imperfections. We demonstrate that the addition of thermal noise has the effect of tightening the constraints on the remaining noise parameters, required to show quantum advantage. Furthermore, we show that there exist a threshold temperature at which any quantum sampling experiment becomes classically simulable, and provide an intuitive physical interpretation by relating this occurrence with the disappearance of the quantum state's non-classical properties.
## I Introduction
Gaussian boson sampling (GBS) is a computational task that, under widely accepted complexity-theoretic conjectures, is believed to be intractable using classical machines [1, 2, 3]. The original formulation of the problem consists of sampling from a non-classical \(m-\)mode Gaussian state obtained by sending identical squeezed vacuum states through a passive linear optical network (LON), with photon-number-resolving (PNR) detectors. More experimentally-feasible variants of the task employing threshold detectors [4] and click-counting detectors [5] have been proposed since. The advancements of photonic platforms in recent years made an experimental GBS demonstration of quantum advantage feasible with current technological capabilities, with state-of-the-art experiments consisting of 216 modes and up to 125 measured photons [6]. Besides the quest for quantum advantage, Gaussian boson samplers can also be used to tackle problems of practical interest such as simulating molecular vibronic spectra [7], measuring graph similarity [8], perfect matching counting [9], identifying dense subgraphs [10, 11], and predicting stable molecular docking configurations for drug design [12, 13].
As near term quantum devices do not benefit from error correction, it is well known that sufficient noise can prevent GBS experiments from entering the regime where quantum advantage is, in principle, attainable. Extensive research has been conducted to investigate the impact of various sources of noise and imperfections on the classical simulability of the sampling task. These include losses [14], partial distinguishability [15, 16], and detectors' inefficiencies and random counts [17, 18]. In particular, Ref. [17] provides sufficient conditions for efficient classical simulability of generic quantum optical experiments \(-\) GBS being a specific instance \(-\) expressed in the form of inequalities that involve the noise parameters. The method is based on expressing the output probability distribution in terms of phase-space quasi-probability distributions (PQDs) of the input state, the measurement operators and the transition function associated with the specific quantum process, and further identifies their negativity as a necessary condition to achieve quantum advantage [19, 20].
It is well known that a sampling task with thermal state inputs can be efficiently simulated (i.e., in polynomial time) on a classical computer [21]. This fact suggests that there should exist a transition in the computational complexity of GBS as temperature grows. Nevertheless, finite-temperature effects have received limited attention in this setting thus far. This paper investigates in this direction by assessing how the addition of thermal noise affects the classical intractability of simulating quantum optical sampling experiments. Significant attention is then dedicated to predicting how much thermal noise can a GBS experiment tolerate before it becomes classically efficiently simulable.
In addition to fundamental interest, thermal noise effects are particularly relevant for experiments conducted in the MHz domain, e.g. those involving the phononic modes of motion of trapped ions [22, 23], or those in the GHz domain that employ superconducting architectures. These platforms offer highly efficient photon-number-resolving detection [24], enabled by quantum-non-demolition measurements [25] that allow for repeated detection, ultimately leading to higher measurement fidelities. Moreover, superconducting circuits provide an excellent degree of control over the required interactions \(-\) namely beam splitter and phase shifter operations \(-\) to build an arbitrary passive LON [26]. Squeezing and displacement operations may also be achieved in circuit QED, thus allowing for the implementation of GBS experiments [27]. The non-linearities provided by Josephson junctions enable efficient preparation of non-classical states of light [28], including multi-photon Fock states [29, 30], and make it possible to engineer non-linear operations that would otherwise be challenging to implement in optical systems. A notable example is given by Kerr-type unitaries, which have recently been introduced into the boson sampling framework as a mean to enhance the system's robustness against noise [31].
The rest of this paper is structured as follows. In Sec. II we revise key aspects of the phase-space formulation of quantum optics and generalize the formalism introduced in Ref. [17] by including finite-temperature effects. This allows us to obtain a general sufficient condition for the classical simulability of a generic quantum optical experiment, that we then apply to a noisy GBS instance employing threshold detectors. We find that, as one might expect, the additional thermal noise has the effect of reducing the detection imperfections sufficient to efficiently simulate the sampling task. In Sec. III we show that there exists a threshold temperature, which depends on the system's losses, at which any sampling experiment becomes classically simulable, even in the presence of ideal detection. We provide a physical interpretation of this phenomenon and show that it is linked to the disappearance of the state's genuine quantum properties. In Sec. IV we build on the approach of Ref. [18] and present a sufficient condition for the classical simulability of noisy GBS experiments that takes into account both thermal noise and approximate sampling, overcoming the main limitation of Ref. [17]. Lastly, in Sec. V we summarize our findings and provide concluding remarks.
## II Sufficient conditions for efficient classical simulation of quantum optics at finite temperature
In this section we investigate how thermal noise affects the noise thresholds in Ref. [17] that are sufficient for an efficient classical simulation of a generic bosonic experiment. Our main result resides in Eq. (20), the only assumption to derive such classical simulability condition being the model of noisy evolution we employ, outlined later in this section. We then consider the special case of a noisy GBS device and show that the latter may be efficiently simulated by classical means if the following inequality is satisfied
\[\frac{p_{D}}{\eta_{D}}\geq\frac{\eta_{L}}{2}(1-e^{-2r})-\overline{n}(1-\eta_{L})\,. \tag{1}\]
Here, \(r\) is the squeezing parameter of the input states, \(\overline{n}\) is the mean number of environmental thermal photons, \(\eta_{L}\) denotes the transmission of the LON, and \(\eta_{D}\) and \(p_{D}\) are the threshold detectors' efficiency and their dark count rate, respectively.
A generic quantum optical experiment is described in terms of an \(m\)-mode initial state \(\rho\), an \(m\)-mode quantum process represented by a completely positive (CP) map \(\mathcal{E}\) and a measurement performed on the final state \(\mathcal{E}(\rho)\). A quantum measurement is characterized by a positive operator-valued measure (POVM), i.e., a collection of operators \(\{\Pi_{\mathbf{n}}\}\) satisfying the conditions \(\Pi_{\mathbf{n}}\geq 0\) and \(\sum_{\mathbf{n}}\Pi_{\mathbf{n}}=\mathcal{I}\), where \(\mathcal{I}\) denotes the identity operator on the Hilbert space. The probability of obtaining a specific measurement outcome \(\mathbf{n}\) is given by the Born rule \(p(\mathbf{n})=\mathrm{Tr}\{\mathcal{E}(\rho)\Pi_{\mathbf{n}}\}\). The outcome probability can alternatively be expressed in terms of ordered phase-space quasi-probability distributions as
\[p(\mathbf{n})=\pi^{m}\!\int\!d^{2m}\mathbf{\beta}\!\!\int\!d^{2m}\mathbf{\alpha}\,W^{(-\bm {s})}_{\Pi_{\mathbf{n}}}(\mathbf{\beta})T^{(\mathbf{s},\mathbf{t})}_{\mathcal{E}}(\mathbf{\alpha},\mathbf{\beta})W^{(\mathbf{t})}_{\rho}(\mathbf{\alpha})\,. \tag{2}\]
Here, \(W^{(-\mathbf{s})}_{\Pi_{\mathbf{n}}}\) and \(W^{(\mathbf{t})}_{\rho}\) denote the PQD of the POVM element \(\Pi_{\mathbf{n}}\) and that of the input state \(\rho\), respectively. In what follows, the dagger transposes a vector of complex numbers to a column vector and takes a complex conjugate. The \(\mathbf{s}-\)ordered PQD (\(\mathbf{s}-\)PQD) of a generic \(m\)-mode Hermitian operator \(O\) is defined as
\[W^{(\mathbf{s})}_{O}(\mathbf{\beta})=\int\frac{d^{2m}\mathbf{\xi}}{\pi^{2m}}\,\,\mathrm{Tr}\{OD(\mathbf{\xi})\}e^{\frac{\mathbf{\xi}\mathbf{s}\mathbf{\xi}^{\dagger}}{2}}e^{\mathbf{\beta}\mathbf{\xi}^{\dagger}-\mathbf{\xi}\mathbf{\beta}^{\dagger}}\,. \tag{3}\]
Here, \(\mathbf{s}=\text{diag}(s_{1},\dots,s_{m})\) is the diagonal matrix of the ordering parameters \(s_{j}\in\mathbb{R}\) and \(D(\mathbf{\xi})\) is the \(m\)-mode displacement operator
\[D(\mathbf{\xi})=e^{\mathbf{\xi}\mathbf{a}^{\dagger}-\mathbf{a}\mathbf{\xi}^{\dagger}}\,, \tag{4}\]
\(\mathbf{a}=(a_{1},\dots,a_{m})\) being the vector of bosonic operators. The well known Husimi \(Q\)-function, Wigner function and Glauber-Sudarshan \(P\)-function are retrieved for \(\mathbf{s}=-\mathbb{I}_{m}\), \(\mathbf{s}=0\) and \(\mathbf{s}=\mathbb{I}_{m}\) respectively, where \(\mathbb{I}_{m}\) denotes the \(m-\)dimensional identity matrix. It is worth noting that the \(\mathbf{s}-\)PQD of a quantum state is normalized to one, however it can in general attain negative values and diverge more severely than a delta function. The remaining function appearing in Eq. (2) is the transition function associated with the quantum process \(\mathcal{E}\), defined as
\[T^{(\mathbf{s},\mathbf{t})}_{\mathcal{E}}(\mathbf{\alpha},\mathbf{\beta})=\int\frac{d^{2m}\mathbf{\zeta}}{\pi^{2m}}\,e^{\frac{\mathbf{\zeta}\mathbf{s}\mathbf{\zeta}^{\dagger}}{2}}e^{\mathbf{\beta}\mathbf{\zeta}^{\dagger}-\mathbf{\zeta}\mathbf{\beta}^{\dagger}}\int\frac{d^{2m}\mathbf{\xi}}{\pi^{2m}}\,e^{-\frac{\mathbf{\xi}\mathbf{t}\mathbf{\xi}^{\dagger}}{2}}e^{\mathbf{\xi}\mathbf{\alpha}^{\dagger}-\mathbf{\alpha}\mathbf{\xi}^{\dagger}}\,\mathrm{Tr}\big\{\mathcal{E}(D^{\dagger}(\mathbf{\xi}))D(\mathbf{\zeta})\big\}\,. \tag{5}\]
One can also prove that
\[\mathcal{E}(D^{\dagger}(\mathbf{\xi}))=e^{\frac{\mathbf{\xi}\mathbf{\xi}^{\dagger}}{2}}\int\frac{d^{2m}\mathbf{\gamma}}{\pi^{m}}e^{\mathbf{\gamma}\mathbf{\xi}^{\dagger}-\mathbf{\xi}\mathbf{\gamma}^{\dagger}}\mathcal{E}(|\mathbf{\gamma}\rangle\!\langle\mathbf{\gamma}|)\,, \tag{6}\]
meaning that the action of \(\mathcal{E}\) on a multimode coherent state \(|\mathbf{\gamma}\rangle\) is enough to completely characterize the transition function in Eq. (II). If there exist values of the ordering parameters \(\mathbf{t}\) and \(\mathbf{s}\) such that \(W^{(-\mathbf{s})}_{\Pi_{\mathbf{n}}}(\mathbf{\beta})\), \(T^{(\mathbf{s},\mathbf{t})}_{\mathcal{E}}(\mathbf{\alpha},\mathbf{\beta})\) and \(W^{(\mathbf{t})}_{\rho}(\mathbf{\alpha})\) are all non-negative and well-behaved, then it is possible to sample from \(p(\mathbf{n})\) efficiently. We emphasize that this condition is only sufficient, and there might exist other efficient simulation strategies that succeed even in regimes where the PQDs exhibit negativities. It should also be noted that this framework lets us address the feasibility of efficient _exact_ simulations only, i.e., sampling from \(p(\mathbf{n})\), rather than sampling from an approximation of the latter. Despite this shortcoming, the strength of this formalism resides in the wide range of applicability enabled by its modular nature, which allows us to investigate the classical simulability of generic quantum optical experiments.
Let us now assume that \(\mathcal{E}\) is a CP map describing a LON subject to both photon loss and thermal noise. We adopt a simple model where, alongside the system's \(m\) modes, we consider \(m\) additional environmental modes, each initialized in the thermal state
\[\nu_{th}=\frac{1}{1+\overline{n}}\left(\frac{\overline{n}}{1+\overline{n}} \right)^{a^{\dagger}a}\,. \tag{7}\]
Here \(\overline{n}\) is the mean photon number for the given temperature and \(a\) is the annihilation operator of the corresponding environmental mode. These \(2m\) modes interact by means of a fictitious ideal interferometer described by the block unitary matrix
\[\mathbf{U}=\begin{pmatrix}\mathbf{L}&\mathbf{N}\\ \mathbf{P}&\mathbf{M}\end{pmatrix}\,. \tag{8}\]
The unitarity of \(\mathbf{U}\) implies that
\[\mathbf{L}^{\dagger}\mathbf{L}+\mathbf{P}^{\dagger}\mathbf{P}=\mathbb{I}_{m}\,, \tag{9}\]
i.e., \(\mathbf{L}\) is a subunitary matrix when losses are present in the system. Hence, the action of the noisy LON on an \(m\)-mode coherent state \(|\mathbf{\gamma}\rangle\) reads
\[\mathcal{E}(|\mathbf{\gamma}\rangle\!\langle\mathbf{\gamma}|)=\mathrm{Tr}_{env}\{ \mathcal{U}(|\mathbf{\gamma}\rangle\!\langle\mathbf{\gamma}|\otimes\nu_{th}^{\otimes m })\mathcal{U}^{\dagger}\}\,, \tag{10}\]
where \(\mathcal{U}\) is the unitary operator associated with the larger \(2m\)-mode interferometer and the trace is taken over the environmental degrees of freedom. In Fig. (1) we display a schematic representation of the noise model employed to describe the LON. We can expand the \(m\)-mode thermal state over the coherent state basis, making use of its \(P\)-function representation
\[\nu_{th}^{\otimes m}=\int d^{2m}\mathbf{\beta}P_{th}(\mathbf{\beta})\,|\mathbf{\beta} \rangle\!\langle\mathbf{\beta}|\, \tag{11}\]
with
\[P_{th}(\mathbf{\beta})=\frac{2^{m}}{\pi^{m}(k-1)^{m}}e^{-\frac{2}{k-1}\mathbf{\beta} \mathbf{\beta}^{\dagger}}\,, \tag{12}\]
where \(k=2\overline{n}+1\). We can thus write Eq. (10) as
\[\mathcal{E}(|\mathbf{\gamma}\rangle\!\langle\mathbf{\gamma}|)=\int d^{2m}\mathbf{\beta}P_ {th}(\mathbf{\beta})\,\mathrm{Tr}_{env}\{\mathcal{U}\,|\mathbf{\gamma},\mathbf{\beta} \rangle\!\langle\mathbf{\gamma},\mathbf{\beta}|\,\mathcal{U}^{\dagger}\}\,. \tag{13}\]
The action of the larger LON on a multi-mode coherent state is easily computed
\[\mathcal{U}\,|\mathbf{\gamma},\mathbf{\beta}\rangle=|(\mathbf{\gamma},\mathbf{\beta})\mathbf{U}\rangle=|\mathbf{\gamma}\mathbf{L}+\mathbf{\beta}\mathbf{P},\mathbf{\gamma}\mathbf{N}+\mathbf{\beta}\mathbf{M}\rangle\,, \tag{14}\]
thus leading to
\[\mathcal{E}(|\mathbf{\gamma}\rangle\!\langle\mathbf{\gamma}|)=\int d^{2m}\mathbf{\beta}P_ {th}(\mathbf{\beta})\,|\mathbf{\gamma}\mathbf{L}+\mathbf{\beta}\mathbf{P}\rangle\!\langle\mathbf{ \gamma}\mathbf{L}+\mathbf{\beta}\mathbf{P}|. \tag{15}\]
By substituting this expression into Eq. (6), we can compute the trace appearing in Eq. (5)
\[\mathrm{Tr}\big{\{}\mathcal{E}(D^{\dagger}(\mathbf{\xi}))D(\mathbf{\zeta})\big{\}}= \pi^{m}\delta^{2m}(\mathbf{\xi}-\mathbf{\zeta}\mathbf{L}^{\dagger})e^{\mathbf{\zeta}k(\mathbf{L}^ {\dagger}\mathbf{L}-\mathbb{I}_{m})\mathbf{\zeta}^{\dagger}/2}\,, \tag{16}\]
where we have used the identity
\[\int\frac{d^{2m}\mathbf{\beta}}{\pi^{2m}}e^{\mathbf{\zeta}\mathbf{\beta}^{\dagger}-\mathbf{ \beta}\mathbf{\zeta}^{\dagger}}=\delta^{2m}(\mathbf{\zeta})\,, \tag{17}\]
as well as the unitarity constraint Eq. (9) and standard multi-dimensional Gaussian integration. We now plug Eq. (16) into Eq. (5) and finally obtain the transition function
\[T_{\mathcal{E}}^{(\mathbf{s},\mathbf{t})}(\mathbf{\alpha},\mathbf{\beta}) =\int\frac{d^{2m}\mathbf{\zeta}}{\pi^{2m}}e^{-\mathbf{\zeta}\mathbf{\Sigma} \mathbf{\zeta}^{\dagger}/2}e^{(\mathbf{\beta}-\mathbf{\alpha}\mathbf{L})\mathbf{\zeta}^{\dagger}- \mathbf{\zeta}(\mathbf{\beta}^{\dagger}-\mathbf{L}^{\dagger}\mathbf{\alpha}^{\dagger})}\] \[=\frac{2^{m}}{\pi^{m}\sqrt{\det\{\mathbf{\Sigma}\}}}e^{-2(\mathbf{\beta} -\mathbf{\alpha}\mathbf{L})\mathbf{\Sigma}^{-1}(\mathbf{\beta}^{\dagger}-\mathbf{L}^{\dagger}\mathbf{ \alpha}^{\dagger})}\,. \tag{18}\]
The latter is non-negative, well-behaved and has multi-variate Gaussian form _iff_
\[\mathbf{\Sigma}=k(\mathbb{I}_{m}-\mathbf{L}^{\dagger}\mathbf{L})-\mathbf{s}+\mathbf{L}^{\dagger} \mathbf{t}\mathbf{L}\geq 0\,. \tag{19}\]
Furthermore, we can always find \(\mathbf{\tilde{\mathbf{t}}},\mathbf{\tilde{\mathbf{s}}}\in\mathbb{R}^{m}\) such that the input state \(\mathbf{t}-\)PQD is non-negative for \(\mathbf{t}\leq\mathbf{\tilde{t}}\) and the \((-\mathbf{s})-\)PQD associated with the quantum measurement is non-negative for \(\mathbf{s}\geq\mathbf{\tilde{\mathbf{s}}}\). Hence, the following inequality
\[k(\mathbb{I}_{m}-\mathbf{L}^{\dagger}\mathbf{L})-\mathbf{\tilde{\mathbf{s}}}+\mathbf{L}^{\dagger} \mathbf{\tilde{t}}\mathbf{L}\geq 0 \tag{20}\]
constitutes our sufficient classicality condition for an efficient simulation of the sampling task described above to be feasible. We emphasize once more that this condition is only sufficient. On the other hand, the modular nature of the formalism enables its wide applicability, our sole assumption being the noise model of the linear optical evolution given by Eq. (10). As expected, the results of Ref. [17] are retrieved in the zero-temperature limit \(k=1\).
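Numerically, this condition amounts to a positive-semidefiniteness test; a minimal sketch (with our own function names, not part of Ref. [17]) is given below.

```python
import numpy as np

def simulable(L, n_bar, s_bar, t_bar, tol=1e-12):
    # Sufficient condition of Eq. (20):
    # k(I - L^dag L) - s_bar + L^dag t_bar L >= 0, with k = 2*n_bar + 1.
    # L is the (sub)unitary network matrix; s_bar, t_bar are the diagonal
    # ordering-threshold matrices.
    m = L.shape[1]
    k = 2 * n_bar + 1
    Sigma = k * (np.eye(m) - L.conj().T @ L) - s_bar + L.conj().T @ t_bar @ L
    return np.all(np.linalg.eigvalsh(Sigma) >= -tol)
```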
We can now apply Eq. (20) to a noisy GBS experiment. In particular, let us consider an initial state comprising of \(m\) identical squeezed vacuum states \(\bigotimes_{j=1}^{m}S(r)\,|0\rangle\), where
\[S(r)=e^{\frac{r}{2}(a^{\dagger 2}-a^{2})} \tag{21}\]
is the single-mode squeezing operator and \(r>0\) is the squeezing parameter. One can show that the \(\mathbf{t}-\)PQD of a generic \(m\)-mode Gaussian state \(\rho\) reads
\[W_{\rho}^{(\mathbf{t})}(\mathbf{\beta})=\frac{2^{m}}{\pi^{m}\sqrt{\det\bigl{\{}\mathbf{ \sigma}-\tilde{\mathbf{t}}\bigr{\}}}}e^{-2(\mathbf{\beta}-\mathbf{\alpha})^{\intercal}( \mathbf{\sigma}-\tilde{\mathbf{t}})^{-1}(\mathbf{\beta}-\mathbf{\alpha})}\,. \tag{22}\]
Here, \(\mathbf{\sigma}\) is the covariance matrix and \(\mathbf{\alpha}\) is the displacement vector that fully characterizes \(\rho\). The conventions used are such that the covariance matrix of the single-mode thermal state Eq. (7) is proportional to the identity matrix and reads \(\mathbf{\sigma}=k\mathbb{I}_{2}=(2\overline{n}+1)\mathbb{I}_{2}\). Furthermore, \(\tilde{\mathbf{t}}\) is a diagonal matrix defined as
\[\tilde{\mathbf{t}}=\bigoplus_{j=1}^{m}t_{j}\mathbb{I}_{2}\,. \tag{23}\]
Figure 1: Schematics of the noise model used throughout this work, described by the CP map \(\mathcal{E}\). The system’s initial \(m\)-mode state \(\rho\) interacts with the environment \(-\) initialized in a thermal state \(-\) through a loss-less \(2m\)-mode interferometer represented by a unitary operation \(\mathcal{U}\). At the output ports, the system’s modes are measured, while the crosses represent tracing over the environmental degrees of freedom.
Consequently, the \(\mathbf{t}-\)PQD of a Gaussian state \(\rho\) is non-negative _iff_
\[\mathbf{\sigma}-\tilde{\mathbf{t}}\geq 0\,. \tag{24}\]
If this condition is satisfied we will say that \(\rho\) belongs to the set of \(\mathbf{t}-\)classical Gaussian states, which we denote with \(\mathcal{C}_{G}^{(\mathbf{t})}\). For the input state \(\bigotimes_{j=1}^{m}S(r)\,|0\rangle\), the above condition simplifies to
\[\mathbf{t}\leq e^{-2r}\mathbb{I}_{m}\equiv\mathbf{\overline{t}}\,. \tag{25}\]
Let us also consider noisy threshold photo-detectors characterized by sub-unit efficiency \(0\leq\eta_{D}\leq 1\) and by a dark count rate \(0\leq p_{D}\leq 1\). The "off" and "on" elements of the POVM associated with this measurement respectively read
\[\Pi_{0}=(1-p_{D})\sum_{n=0}^{\infty}(1-\eta_{D})^{n}\,|n\rangle\!\langle n|\, \tag{26}\]
\[\Pi_{1}=\mathcal{I}-\Pi_{0}\,. \tag{27}\]
A close inspection of Eq. (26) reveals that \(\Pi_{0}\) is an unnormalized thermal state, hence one can analytically compute the \((-s)-\)PQDs of both POVM elements using Eq. (22) and prove that they are non-negative for \(s\geq 1-2p_{D}/\eta_{D}\). If we consider \(m\) identical noisy threshold detectors as described above at the output ports of our LON, then the \((-\mathbf{s})-\)PQD of the \(m\)-mode measurement is simply the product of the \((-s_{j})-\)PQD of the single-mode measurements, and it is clearly non-negative for
\[\mathbf{s}\geq\left(1-\frac{2p_{D}}{\eta_{D}}\right)\mathbb{I}_{m}\equiv\mathbf{ \overline{s}}\,. \tag{28}\]
If one further assumes that losses are uniform across all possible paths across the network \(-\) usually a good approximation for integrated setups \(-\) then the linear transformation of the input modes is described by \(\mathbf{L}=\sqrt{\eta_{L}}\mathbf{W}\), where \(\mathbf{W}\) is a unitary matrix and \(0\leq\eta_{L}\leq 1\) denotes the transmission of the interferometer. By substituting the threshold values \(\mathbf{\overline{t}}\) and \(\mathbf{\overline{s}}\) into Eq. (20) we obtain the classical simulability condition for the noisy GBS experiment described above, i.e.,
\[\frac{p_{D}}{\eta_{D}}\geq\frac{\eta_{L}}{2}(1-e^{-2r})-\overline{n}(1-\eta_{ L})\,. \tag{29}\]
The term \(-\overline{n}(1-\eta_{L})\) on the right-hand side (r.h.s.) of the inequality above represents a finite-temperature correction that accounts for thermal effects. Notice how the latter is always negative, meaning that thermal noise has the effect of reducing the detection noise needed for a classical simulation of the sampling task to be feasible, with respect to the zero-temperature scenario. Evidently, Eq. (29) is automatically satisfied whenever the r.h.s. becomes negative, i.e., if
\[\overline{n}\geq\frac{\eta_{L}(1-e^{-2r})}{2(1-\eta_{L})}\,. \tag{30}\]
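As a concrete illustration (not part of the original analysis), the following Python sketch evaluates the simulability condition Eq. (29) and the thermal threshold Eq. (30); the function names and all parameter values are our own illustrative assumptions.

```python
import numpy as np

def classically_simulable(p_D, eta_D, eta_L, r, n_bar):
    """True iff the detector-noise condition Eq. (29) holds."""
    rhs = 0.5 * eta_L * (1.0 - np.exp(-2.0 * r)) - n_bar * (1.0 - eta_L)
    return p_D / eta_D >= rhs

def thermal_threshold(eta_L, r):
    """Mean thermal photon number of Eq. (30), above which Eq. (29)
    holds regardless of how good the detectors are."""
    return eta_L * (1.0 - np.exp(-2.0 * r)) / (2.0 * (1.0 - eta_L))

# Illustrative values: squeezing r = 1, interferometer transmission 0.5.
print(thermal_threshold(eta_L=0.5, r=1.0))                                        # ~0.43
print(classically_simulable(p_D=1e-3, eta_D=0.9, eta_L=0.5, r=1.0, n_bar=0.5))    # True
```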
We also remind the reader that the mean photon number of a thermal state is related to the environment's temperature \(T\) via
\[\overline{n}=\frac{1}{e^{\frac{\hbar\omega}{k_{B}T}}-1}\,, \tag{31}\]
where \(\omega\) is the mode's frequency and \(k_{B}\) is the Boltzmann constant. Therefore, Eq. (30) predicts the existence of a threshold temperature above which the sampling task becomes classically efficiently simulable even when ideal detectors are employed. This is somewhat expected, as it is well established that a noiseless boson sampling task with thermal input states leads to a classically simulable problem. Consequently, we envision a transition in the computational complexity of the task as the environment's temperature increases. In the remaining part of this section we formalize these ideas and provide a physical interpretation of this phenomenon. In Appendix A we show that the system's losses and thermal noise effects can be fully absorbed into the initial state, while retaining a unitary evolution via an effective ideal LON. In particular, the noisy evolution given by \(\mathcal{E}\) with \(\mathbf{L}=\sqrt{\eta_{L}}\mathbf{W}\) is equivalent to a loss-less LON described by the unitary matrix \(\mathbf{W}\), preceded by \(m\) identical single-mode maps \(\mathcal{F}\) that mix each input mode with a thermal state by means of a beam splitter with transmissivity equal to \(\eta_{L}\), and finally taking the trace over the environmental degrees of freedom (see Fig. 2 for a schematic representation of the channel decomposition). Hence, the action of the map \(\mathcal{F}\) on a generic single-mode state \(\rho\) reads
\[\mathcal{F}(\rho)=\mathrm{Tr}_{env}\{\mathcal{U}_{BS}(\rho\otimes\nu_{th}(k)) \mathcal{U}_{BS}^{\dagger}\}\,. \tag{32}\]
Here, \(\mathcal{U}_{BS}\) represents the beam splitter unitary operator acting on the system's mode and the corresponding ancillary environmental mode, and we have explicitly displayed the parameter \(k\) that completely identifies the thermal state. Focusing on GBS, each input mode of this loss-less LON is fed with \(\mathcal{F}(S(r)\,|0\rangle\!\langle 0|\,S^{\dagger}(r))\), a Gaussian state whose covariance matrix reads \(\text{diag}\{a_{+},a_{-}\}\), with \(a_{\pm}=\eta_{L}e^{\pm 2r}+k(1-\eta_{L})\).
Figure 2: Decomposition of the CP map \(\mathcal{E}\) describing the noisy linear optical evolution. Under the assumption of uniform losses we can absorb the latter and thermal noise effects into the initial state by means of the quantum channel \(\mathcal{F}\), while retaining an ideal evolution described by the unitary matrix \(\mathbf{W}\).
As the covariance matrix of the vacuum state is simply the identity matrix, it is clear that quadrature squeezing vanishes if \(a_{-}\geq 1\). One then easily proves that this happens when condition Eq. (30) is satisfied.
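This claim is easy to verify numerically; below is a minimal sketch, reusing the illustrative parameter values from the earlier snippet.

```python
import numpy as np

def a_minus(eta_L, r, n_bar):
    # Smaller eigenvalue of the covariance of F(S(r)|0><0|S(r)^dag).
    k = 2.0 * n_bar + 1.0
    return eta_L * np.exp(-2.0 * r) + k * (1.0 - eta_L)

eta_L, r = 0.5, 1.0
n_thr = eta_L * (1.0 - np.exp(-2.0 * r)) / (2.0 * (1.0 - eta_L))   # Eq. (30)
print(np.isclose(a_minus(eta_L, r, n_thr), 1.0))   # True: squeezing just vanishes
print(a_minus(eta_L, r, 0.5 * n_thr) < 1.0)        # True: residual squeezing below it
```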
As a result, the threshold temperature described in Eq. (30) can be physically interpreted as the temperature at which genuine quantum features of the input state completely disappear, so that efficient sampling on a classical machine becomes feasible regardless of the presence of noise in the detectors. In the following section we extend this argument to a generic quantum optical experiment.
## III Thermal classicalization
Let us consider a generic \(m\)-mode input state \(\rho\) undergoing noisy linear optical evolution via the quantum map \(\mathcal{E}\) defined in the previous section, followed by a generic quantum measurement. If we further assume that losses are uniform within the interferometer, then the classicality condition Eq. (20) may be recast as
\[k\mathbb{I}_{m}\geq\frac{\overline{\boldsymbol{s}}-\eta_{L}\boldsymbol{W}^{ \dagger}\overline{\boldsymbol{t}}\boldsymbol{W}}{1-\eta_{L}}\,. \tag{33}\]
The r.h.s. of this inequality (i.e., the threshold temperature) diverges at \(\eta_{L}=1\); this is to be expected, as the loss parameter also measures the ability of the LON to couple the system with the environment. Eq. (33) allows us to compute the temperature required for an efficient classical simulation once the input state and the POVM have been specified. Since \(\overline{\boldsymbol{s}}\leq\mathbb{I}_{m}\) and \(\overline{\boldsymbol{t}}\geq-\mathbb{I}_{m}\), it is clear that the most restrictive scenario is obtained by substituting \(\overline{\boldsymbol{s}}=\mathbb{I}_{m}\) and \(\overline{\boldsymbol{t}}=-\mathbb{I}_{m}\) into Eq. (33), resulting in
\[k\geq\frac{1+\eta_{L}}{1-\eta_{L}}\,, \tag{34}\]
or equivalently
\[\overline{n}\geq\frac{\eta_{L}}{1-\eta_{L}}\,. \tag{35}\]
To summarize, when condition Eq. (35) is satisfied, a generic noisy quantum optical experiment can be simulated efficiently on a classical machine, the only assumption being the model employed to describe the linear optical evolution.
We can also give a physical meaning to the threshold temperature of Eq. (35). We have already pointed out that the system's losses and thermal noise effects can be absorbed into the initial state by applying the map \(\mathcal{F}\) to each input mode. Furthermore, we introduce the notation \(\mathcal{F}_{m}\equiv\mathcal{F}^{\otimes m}\) for the corresponding \(m-\)mode map:
\[\mathcal{F}_{m}(\rho)=\operatorname{Tr}_{env}\{\mathcal{U}_{BS}^ {\otimes m}(\rho\otimes\nu_{th}^{\otimes m}(k))\mathcal{U}_{BS}^{\dagger \otimes m}\} \tag{36}\] \[=\int d^{2m}\boldsymbol{\alpha}\,P_{\rho}(\boldsymbol{\alpha}) \operatorname{Tr}_{env}\{\mathcal{U}_{BS}^{\otimes m}(|\boldsymbol{\alpha} \rangle\!\langle\boldsymbol{\alpha}|\otimes\nu_{th}^{\otimes m}(k))\mathcal{ U}_{BS}^{\dagger\otimes m}\}\] \[=\int d^{2m}\boldsymbol{\alpha}\,P_{\rho}(\boldsymbol{\alpha}) \mathcal{F}_{m}(|\boldsymbol{\alpha}\rangle\!\langle\boldsymbol{\alpha}|)\,,\]
where we have exploited the Glauber \(P-\)function representation of \(\rho\)
\[\rho=\int d^{2m}\boldsymbol{\alpha}\,P_{\rho}(\boldsymbol{\alpha})\,| \boldsymbol{\alpha}\rangle\!\langle\boldsymbol{\alpha}|. \tag{37}\]
Our objective is to compute the \(P\)-function of \(\mathcal{F}_{m}(\rho)\), as it captures the non-classical properties of the noisy input state under study [32; 33]. In particular, it is well known that having a well-behaved and non-negative \(P-\)function is a necessary and sufficient condition for a state to admit a classical description, i.e., it can be expressed as a statistical mixture of coherent states. This property is also referred to as \(P-\)classicality. As previously mentioned, the \(P\)-function is obtained by substituting \(\boldsymbol{s}=\mathbb{I}_{m}\) into Eq. (3), namely
\[P_{\mathcal{F}_{m}(\rho)}(\boldsymbol{\beta})=\int\frac{d^{2m}\boldsymbol{ \xi}}{\pi^{2m}}\operatorname{Tr}\{\mathcal{F}_{m}(\rho)D(\boldsymbol{\xi})\}e ^{\frac{\boldsymbol{\xi}\boldsymbol{\xi}^{\dagger}}{2}}e^{\boldsymbol{\beta} \boldsymbol{\xi}^{\dagger}-\boldsymbol{\xi}\boldsymbol{\beta}^{\dagger}}\,. \tag{38}\]
We then substitute Eq. (36) into the previous expression to obtain
\[\begin{split} P_{\mathcal{F}_{m}(\rho)}(\boldsymbol{\beta})& =\int\frac{d^{2m}\boldsymbol{\xi}}{\pi^{2m}}\int d^{2m}\boldsymbol{ \alpha}P_{\rho}(\boldsymbol{\alpha})\\ &\operatorname{Tr}\{\mathcal{F}_{m}(|\boldsymbol{\alpha}\rangle\! \langle\boldsymbol{\alpha}|)D(\boldsymbol{\xi})\}e^{\frac{\boldsymbol{\xi} \boldsymbol{\xi}^{\dagger}}{2}+\boldsymbol{\beta}\boldsymbol{\xi}^{\dagger}- \boldsymbol{\xi}\boldsymbol{\beta}^{\dagger}}\,.\end{split} \tag{39}\]
The trace above may be computed using standard Gaussian calculation techniques, yielding
\[\begin{split}\operatorname{Tr}\{\mathcal{F}_{m}(|\boldsymbol{ \alpha}\rangle\!\langle\boldsymbol{\alpha}|)D(\boldsymbol{\xi})\}& =e^{-\frac{\lambda}{2}\boldsymbol{\xi}\boldsymbol{\xi}^{\dagger}+\sqrt{ \eta_{L}}\boldsymbol{\xi}\boldsymbol{\alpha}^{\dagger}-\sqrt{\eta_{L}} \boldsymbol{\alpha}\boldsymbol{\xi}^{\dagger}}\,,\end{split} \tag{40}\]
where \(\lambda=k(1-\eta_{L})+\eta_{L}\). Putting everything together, we obtain
\[\begin{split} P_{\mathcal{F}_{m}(\rho)}(\boldsymbol{\beta})& =\int\frac{d^{2m}\boldsymbol{\xi}}{\pi^{2m}}\int d^{2m}\boldsymbol{ \alpha}\,P_{\rho}(\boldsymbol{\alpha})\\ e^{\frac{1-\lambda}{2}\boldsymbol{\xi}\boldsymbol{\xi}^{ \dagger}+\boldsymbol{\xi}(\sqrt{\eta_{L}}\boldsymbol{\alpha}^{\dagger}- \boldsymbol{\beta}^{\dagger})-(\sqrt{\eta_{L}}\boldsymbol{\alpha}-\boldsymbol {\beta})\boldsymbol{\xi}^{\dagger}}\,.\end{split} \tag{41}\]
We then perform straightforward Gaussian integration and obtain
\[P_{\mathcal{F}_{m}(\rho)}(\boldsymbol{\beta})=\frac{2^{m}}{\pi^{m}(\lambda-1)^ {m}}\int d^{2m}\boldsymbol{\alpha}P_{\rho}(\boldsymbol{\alpha})e^{-\frac{ 2}{\lambda-1}|\boldsymbol{\beta}-\sqrt{\eta_{L}}\boldsymbol{\alpha}|^{2}}\,, \tag{42}\]
namely the convolution of \(\rho\)'s \(P-\)function and a Gaussian distribution. At the threshold temperature \(-\) i.e., \(k=\frac{1+\eta_{L}}{1-\eta_{L}}\) or, equivalently, \(\lambda=1+2\eta_{L}\) \(-\) we have
\[P_{\mathcal{F}_{m}(\rho)}(\boldsymbol{\beta})=\int d^{2m}\boldsymbol{\alpha}P_ {\rho}(\boldsymbol{\alpha})\frac{e^{-|\frac{\boldsymbol{\beta}}{\sqrt{\eta_{L}}} -\boldsymbol{\alpha}|^{2}}}{(\pi\eta_{L})^{m}} \tag{43}\]
meaning that the \(P\)-function of the noiseless state \(\rho\) and that of \(\mathcal{F}_{m}(\rho)\) are related by a Weierstrass transform (Gaussian filter). The latter has a smoothing effect on \(P_{\rho}\) that removes its negativities and divergences, resulting in a full suppression of the state's genuine quantum features, as we prove in the remainder of this section. To this end, we recall that the \(P-\)function and the \(Q-\)function are related via the identity
\[Q_{\rho}(\boldsymbol{\beta})=\frac{\langle\boldsymbol{\beta}|\,\rho\,| \boldsymbol{\beta}\rangle}{\pi^{m}}=\int d^{2m}\boldsymbol{\alpha}\,P_{\rho}( \boldsymbol{\alpha})\frac{e^{-|\boldsymbol{\beta}-\boldsymbol{\alpha}|^{2}}}{ \pi^{m}}\,. \tag{44}\]
Comparing Eq. (43) and Eq. (44) allows us to establish a connection between the \(P-\)function of \(\rho\) and that of its noisy counterpart \(\mathcal{F}_{m}(\rho)\)
\[P_{\mathcal{F}_{m}(\rho)}(\mathbf{\beta})=\frac{1}{\eta_{L}^{m}}Q_{\rho}(\tfrac{\bm {\beta}}{\sqrt{\eta_{L}}})\,. \tag{45}\]
It is well known that any \(Q-\)function is non-negative and well behaved, thus proving our claim: any input state \(\rho\) becomes \(P-\)classical after interacting with a thermal state via a beam splitter with transmissivity \(\eta_{L}\), at the threshold temperature given by Eq. (35). In particular, we find that the \(P-\)function of the noisy state \(\mathcal{F}_{m}(\rho)\) is proportional to the \(Q-\)function of the input state \(\rho\), properly rescaled.
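As a sanity check of Eq. (45), the following sketch compares both sides numerically for a single-mode coherent state, whose delta-like \(P\)-function turns the convolution Eq. (42) into an explicit Gaussian; the amplitude and transmissivity below are illustrative assumptions.

```python
import numpy as np

eta_L = 0.4
alpha0 = 0.7 + 0.3j
lam = 1.0 + 2.0 * eta_L                      # threshold value of lambda

# Complex grid of phase-space points beta.
beta = np.linspace(-2, 2, 101)[:, None] + 1j * np.linspace(-2, 2, 101)[None, :]

# P-function of F(|alpha0><alpha0|): Eq. (42) with P_rho a delta function.
P_noisy = (2.0 / (np.pi * (lam - 1.0))) * np.exp(
    -2.0 * np.abs(beta - np.sqrt(eta_L) * alpha0) ** 2 / (lam - 1.0))

# Rescaled Q-function of the input coherent state, r.h.s. of Eq. (45).
Q_rescaled = np.exp(-np.abs(beta / np.sqrt(eta_L) - alpha0) ** 2) / (np.pi * eta_L)

print(np.allclose(P_noisy, Q_rescaled))      # True: the two sides coincide
```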
It is worth noting the close connection between the temperature bounds derived here and the definition of non-classicality depth as introduced in Ref. [34].
## IV Approximate classical simulation of Gaussian boson sampling under thermal noise
In Ref. [18] the authors investigate the classical simulability of a noisy GBS experiment with lossy linear optical evolution given by Eq. (42) and imperfect threshold detection described by the POVM elements Eq. (26) and Eq. (27). As will become clear in the following, their approach can account for _approximate_ sampling, thus overcoming the principal limitation of the formalism outlined in Section II. However, this advancement is achieved at the expense of generality, limiting the applicability of this methodology to a noisy GBS experiment as described above. In particular, it is shown that the latter may be efficiently simulated up to error \(\varepsilon\) if the following (sufficient) condition is satisfied
\[\mathrm{sech}\left(\frac{1}{2}\Theta\left[\ln\left(\frac{1-2q_{D}}{\eta_{L}e^{ -2r}+1-\eta_{L}}\right)\right]\right)>e^{-\frac{\varepsilon^{2}}{4\pi}}\,, \tag{46}\]
where \(q_{D}=\frac{p_{D}}{\eta_{D}}\) and \(\Theta(x)=\max\left(x,0\right)\) is the ramp function. If the previous inequality does not admit a solution for any \(0\leq\varepsilon\leq 1\), then the classical simulation algorithm fails and we say that the GBS setup has passed the non-classicality test. Building on the approach of Ref. [18], we establish the following classical simulability condition that accounts for the influence of thermal noise
\[\mathrm{sech}\left(\frac{1}{2}\Theta\left[\ln\left(\frac{1-2q_{D}}{\eta_{L}e^{ -2r}+k(1-\eta_{L})}\right)\right]\right)\geq(1-\varepsilon^{2})^{\frac{1}{m}}\,, \tag{47}\]
where we remind the reader that \(k=2\overline{n}+1\) and \(\overline{n}\) is the mean number of environmental thermal photons. Furthermore, we show that at zero temperature Eq. (47) constitutes a tighter bound than Eq. (46).
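Before turning to the derivation, a minimal sketch of how Eq. (47) can be used in practice: given assumed noise parameters, it returns the smallest error \(\varepsilon\) for which the approximate classical simulation is guaranteed. The helper name and all values are our own assumptions.

```python
import numpy as np

def min_simulable_error(q_D, eta_L, r, n_bar, m):
    """Smallest epsilon satisfying Eq. (47), obtained by rearranging
    sech(Theta[...]/2) >= (1 - eps^2)^(1/m) into eps >= sqrt(1 - lhs^m)."""
    k = 2.0 * n_bar + 1.0
    arg = np.log((1.0 - 2.0 * q_D) / (eta_L * np.exp(-2.0 * r) + k * (1.0 - eta_L)))
    lhs = 1.0 / np.cosh(0.5 * max(arg, 0.0))   # sech of the ramp function
    return np.sqrt(max(1.0 - lhs ** m, 0.0))

# Illustrative parameters: 10 modes, 1% effective dark-count ratio q_D = p_D/eta_D.
print(min_simulable_error(q_D=0.01, eta_L=0.9, r=0.8, n_bar=0.0, m=10))
```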
As stated earlier, the assumption of uniform losses enables us to absorb all imperfections and thermal noise effects into the initial state, while retaining an ideal linear optical evolution described by the unitary operator \(\mathcal{W}\). Each input port of this loss-less interferometer is fed with the single-mode Gaussian state
\[\tau=\mathcal{F}(S(r)\,|0\rangle\!\langle 0|\,S^{\dagger}(r))\,, \tag{48}\]
which has a diagonal covariance matrix that reads \(\text{diag}\{a_{+},a_{-}\}\) with \(a_{\pm}=\eta_{L}e^{\pm 2r}+k(1-\eta_{L})\). The probability \(p(\mathbf{n})\) of observing a specific measurement outcome \(\mathbf{n}=(n_{1},\dots,n_{m})\) with \(n_{i}\in\{0,1\}\) is thus given by
\[p(\mathbf{n})=\mathrm{Tr}\{\mathcal{W}\tau^{\otimes m}\mathcal{W}^{\dagger}\Pi_{ \mathbf{n}}\}\,, \tag{49}\]
where \(\Pi_{\mathbf{n}}=\bigotimes_{i=1}^{m}\Pi_{n_{i}}\) is the POVM associated with \(m-\)mode noisy threshold photo-detection. Let us now consider a related sampling problem, where the same loss-less LON is fed with \(m\) identical \(t-\)classical Gaussian states \(\tilde{\tau}\). The resulting outcome probability distribution reads
\[\tilde{p}(\mathbf{n})=\mathrm{Tr}\{\mathcal{W}\tilde{\tau}^{\otimes m}\mathcal{W} ^{\dagger}\Pi_{\mathbf{n}}\}\,. \tag{50}\]
We can then use the classical simulability condition Eq. (20) - with \(\mathbf{L}=\mathbf{W}\) a unitary matrix and \(\overline{\mathbf{s}}=(1-2q_{D})\mathbb{I}_{m}\) - to deduce that sampling (exactly) from the output state \(\mathcal{W}\tilde{\tau}^{\otimes m}\mathcal{W}^{\dagger}\) can be efficiently performed if \(\tilde{\tau}\) is \(t-\)classical for \(t\in[1-2q_{D},1]\). The idea is that when the noisy input state \(\tau\) is similar enough to \(\tilde{\tau}\in\mathcal{C}_{G}^{(t)}\) for some \(t\in[1-2q_{D},1]\), the corresponding sampling problem can be efficiently simulated up to a small error. The total variation distance (TVD) between the two output probability distributions \(p\) and \(\tilde{p}\)
\[||p-\tilde{p}||_{1}=\frac{1}{2}\sum_{\mathbf{n}}|p(\mathbf{n})-\tilde{p}(\mathbf{n})|\,, \tag{51}\]
is upper bounded as follows
\[\frac{1}{2}||p-\tilde{p}||_{1} \leq\frac{1}{2}||\mathcal{W}\tau^{\otimes m}\mathcal{W}^{\dagger }-\mathcal{W}\tilde{\tau}^{\otimes m}\mathcal{W}^{\dagger}||_{tr}\] \[=\frac{1}{2}||\tau^{\otimes m}-\tilde{\tau}^{\otimes m}||_{tr}\] \[\leq\sqrt{1-F(\tau^{\otimes m},\tilde{\tau}^{\otimes m})}=\sqrt{1 -(F(\tau,\tilde{\tau}))^{m}}\,. \tag{52}\]
Here \(F(\rho,\tau)=(\mathrm{Tr}\{\sqrt{\sqrt{\rho}\tau\sqrt{\rho}}\})^{2}\) denotes the quantum fidelity and
\[||\rho-\tau||_{tr}=\mathrm{Tr}\bigg{\{}\sqrt{(\rho-\tau)^{\dagger}(\rho-\tau)} \bigg{\}} \tag{53}\]
is the trace norm. We note that both measures are invariant under unitary transformations acting on \(\rho\) and \(\tau\). We emphasize that the bound on the TVD in Eq. (52) is more stringent than the one presented in Ref. [18], where the authors exploited a generalization of Pinsker's inequality instead. As any \(t-\)classical state with \(t\in[1-2q_{D},1]\) leads to an efficiently simulable instance of GBS, we further minimize the TVD (i.e., maximize the fidelity) over all possible choices of \(\tilde{\tau}\)
\[\frac{1}{2}||p-\tilde{p}||_{1}\leq\sqrt{1-(F_{\text{max}}(\tau,\tilde{\tau}))^{m }}\leq\varepsilon\,, \tag{54}\]
where
\[F_{\text{max}}(\tau,\tilde{\tau})=\max_{t\in[1-2q_{D},1]}\max_{\tilde{\tau}\in \mathcal{C}_{G}^{(t)}}F(\tau,\tilde{\tau})\,. \tag{55}\]
The fidelity \(F(\tau,\tilde{\tau})\) between two single-mode Gaussian states \(\tau\) and \(\tilde{\tau}\) has a known analytical expression given by [35]
\[F(\tau,\tilde{\tau})=\frac{1}{\sqrt{\Delta+\Lambda}-\sqrt{\Lambda}}\,. \tag{56}\]
Here
\[\Delta=\frac{1}{4}\det\{\mathbf{\sigma}_{\tau}+\mathbf{\sigma}_{\tilde{\tau}}\}\,, \tag{57}\]
\[\Lambda=\frac{1}{4}(\det\{\mathbf{\sigma}_{\tau}\}-1)(\det\{\mathbf{\sigma}_{\tilde{ \tau}}\}-1)\,, \tag{58}\]
where \(\mathbf{\sigma}_{\tau}\) and \(\mathbf{\sigma}_{\tilde{\tau}}\) denote the covariance matrices of \(\tau\) and \(\tilde{\tau}\), respectively. The optimization of Eq. (56) over \(t\in[1-2q_{D},1]\) and \(\tilde{\tau}\in\mathcal{C}_{G}^{(t)}\) can be carried out analytically following the technique outlined in Ref. [18]. Notice that, if \(\tau\) is itself \(t-\)classical, then the fidelity is clearly maximized (and equal to one) when \(\tilde{\tau}=\tau\), and it is possible to efficiently simulate the sampling task exactly. Cumbersome algebra then yields our final sufficient condition for classical simulability of noisy GBS at finite temperature, Eq. (47). We observe that in the zero-temperature limit \(k=1\) we obtain a bound that is more restrictive than that presented in Eq. (46). Lastly, notice that by fixing \(\varepsilon=0\) (sampling from the exact probability distribution of the noisy experiment), we retrieve the results we obtained in Eq. (29).
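For completeness, a minimal sketch of the fidelity formula Eqs. (56)-(58) and the resulting TVD bound Eq. (54). The comparison state \(\tilde{\tau}\) chosen below (clipping \(a_{-}\) to the vacuum level) is an illustrative assumption, not the analytic maximizer of Eq. (55).

```python
import numpy as np

def gaussian_fidelity(sigma_a, sigma_b):
    """Single-mode Gaussian fidelity of Eqs. (56)-(58), zero-mean states,
    covariances normalized so that the vacuum is the identity."""
    Delta = 0.25 * np.linalg.det(sigma_a + sigma_b)
    Lambda = 0.25 * (np.linalg.det(sigma_a) - 1.0) * (np.linalg.det(sigma_b) - 1.0)
    return 1.0 / (np.sqrt(Delta + Lambda) - np.sqrt(Lambda))

def tvd_bound(F, m):
    # Eq. (54): (1/2)||p - p_tilde||_1 <= sqrt(1 - F^m)
    return np.sqrt(1.0 - F ** m)

# Noisy squeezed input tau with covariance diag(a+, a-); illustrative values.
eta_L, r, k = 0.9, 0.8, 1.0
tau = np.diag([eta_L * np.exp(2 * r) + k * (1 - eta_L),
               eta_L * np.exp(-2 * r) + k * (1 - eta_L)])
tau_tilde = np.diag([tau[0, 0], 1.0])        # a- clipped to the vacuum level
F = gaussian_fidelity(tau, tau_tilde)
print(F, tvd_bound(F, m=10))
```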
## V Conclusions
Using a phase space method based on the negativity of the relevant quasi-probability distributions, we have established a sufficient condition for the efficient classical simulation of generic quantum (linear) optical experiments affected by loss and thermal noise. Our results show how finite temperature effects reduce the threshold of the system's imperfections that are sufficient for an efficient simulation of the experiment to be feasible. We then turned our attention to a GBS task employing threshold detectors, and provided a non-classicality condition in the form of an inequality involving the squeezing and noise parameters (photon loss, mean thermal photon number, detector inefficiencies, and dark count rates), that any potential candidate for an experimental demonstration of quantum advantage must satisfy. Furthermore, we showed that there exists a threshold temperature at which any sampling experiment becomes efficiently simulable, even in the presence of ideal detectors. We presented a physical interpretation of this phenomenon by establishing a connection with the vanishing of the genuine quantum features of the state. We hope that this work inspires rigorous studies on the transition occurring in the computational complexity of GBS subject to increasing levels of thermal noise.
## VI Acknowledgments
G.B. is part of the AppQInfo MSCA ITN which received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 956071. H.K. is supported by the KIAS Individual Grant No. CG085301 at Korea Institute for Advanced Study. MSK acknowledges the KIST Open Research Programme, Samsung GRC programme and the KIAS visiting professorship. The project was supported by the UK EPSRC through EP/Y004752/1 and EP/W032643/1.
|
2308.15838 | Adaptive Lasso, Transfer Lasso, and Beyond: An Asymptotic Perspective | This paper presents a comprehensive exploration of the theoretical properties
inherent in the Adaptive Lasso and the Transfer Lasso. The Adaptive Lasso, a
well-established method, employs regularization divided by initial estimators
and is characterized by asymptotic normality and variable selection
consistency. In contrast, the recently proposed Transfer Lasso employs
regularization subtracted by initial estimators with the demonstrated capacity
to curtail non-asymptotic estimation errors. A pivotal question thus emerges:
Given the distinct ways the Adaptive Lasso and the Transfer Lasso employ
initial estimators, what benefits or drawbacks does this disparity confer upon
each method? This paper conducts a theoretical examination of the asymptotic
properties of the Transfer Lasso, thereby elucidating its differentiation from
the Adaptive Lasso. Informed by the findings of this analysis, we introduce a
novel method, one that amalgamates the strengths and compensates for the
weaknesses of both methods. The paper concludes with validations of our theory
and comparisons of the methods via simulation experiments. | Masaaki Takada, Hironori Fujisawa | 2023-08-30T08:21:46Z | http://arxiv.org/abs/2308.15838v2 | # Adaptive Lasso, Transfer Lasso, and Beyond: An Asymptotic Perspective
###### Abstract
This paper presents a comprehensive exploration of the theoretical properties inherent in the Adaptive Lasso and the Transfer Lasso. The Adaptive Lasso, a well-established method, employs regularization divided by initial estimators and is characterized by asymptotic normality and variable selection consistency. In contrast, the recently proposed Transfer Lasso employs regularization subtracted by initial estimators with the demonstrated capacity to curtail non-asymptotic estimation errors. A pivotal question thus emerges: Given the distinct ways the Adaptive Lasso and the Transfer Lasso employ initial estimators, what benefits or drawbacks does this disparity confer upon each method? This paper conducts a theoretical examination of the asymptotic properties of the Transfer Lasso, thereby elucidating its differentiation from the Adaptive Lasso. Informed by the findings of this analysis, we introduce a novel method, one that amalgamates the strengths and compensates for the weaknesses of both methods. The paper concludes with validations of our theory and comparisons of the methods via simulation experiments.
## 1 Introduction
We consider an ordinary high-dimensional regression problem. Let \(X=(\mathbf{x}_{1},\ldots,\mathbf{x}_{p})=(x_{1}^{\top},\ldots,x_{n}^{\top})^{ \top}\in\mathbb{R}^{n\times p}\) and \(y\in\mathbb{R}^{n}\) be a feature matrix and response vector, respectively. We suppose a true model is linear with independent and identically distributed (i.i.d.) Gaussian noise, that is,
\[y=X\beta^{*}+\varepsilon,\ \varepsilon_{i}\stackrel{{ i.i.d.}}{{\sim}} \mathcal{N}(0,\sigma^{2}), \tag{1}\]
where \(\beta^{*}\in\mathbb{R}^{p}\) is a true regression parameter and \(\varepsilon\in\mathbb{R}^{n}\) is a Gaussian noise vector. We presume that \(\beta^{*}\) is sparse, and designate the active and inactive parameters as \(S\) and \(S^{c}\), namely \(S:=\{j:\beta^{*}_{j}\neq 0\}\) and \(S^{c}:=\{j:\beta^{*}_{j}=0\}\), respectively.
The _Lasso_[18] is a classical regression method for high-dimensional data, defined by
\[\hat{\beta}^{\mathcal{L}}_{n}=\operatorname*{argmin}_{\beta}\left\{\frac{1}{n }\|y-X\beta\|_{2}^{2}+\frac{\lambda_{n}}{n}\sum_{j}|\beta_{j}|\right\}. \tag{2}\]
Owing to \(\ell_{1}\) regularization, the solution exhibits sparsity. We denote \(\hat{S}^{\mathcal{L}}_{n}:=\{j:\hat{\beta}^{\mathcal{L}}_{j}\neq 0\}\).
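As a concrete illustration (not part of the original paper), the following Python sketch fits the Lasso of Eq. (2) with scikit-learn; the data-generating values are arbitrary assumptions, and the only subtlety is the mapping between the paper's \(\lambda_{n}\) and sklearn's `alpha`.

```python
# sklearn's Lasso minimizes ||y - X b||_2^2 / (2n) + alpha * ||b||_1, so the
# objective of Eq. (2) with penalty lambda_n corresponds to alpha = lambda_n / (2n).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 10
beta_star = np.array([2.0, -1.5, 1.0] + [0.0] * (p - 3))   # sparse truth
X = rng.standard_normal((n, p))
y = X @ beta_star + rng.standard_normal(n)

lambda_n = 2.0 * np.sqrt(n)                 # lambda_n = O(sqrt(n)) regime
fit = Lasso(alpha=lambda_n / (2 * n), fit_intercept=False).fit(X, y)
print(fit.coef_)                            # hat{beta}^L_n
print(np.flatnonzero(fit.coef_))            # hat{S}^L_n
```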
Numerous theoretical studies have elucidated the strengths and limitations of the Lasso. According to asymptotic theory, the Lasso estimator is consistent if \(\lambda_{n}=o(n)\) and is \(\sqrt{n}\)-consistent if \(\lambda_{n}=O(\sqrt{n})\)[5]. However, [21] demonstrates that the Lasso has _inconsistent_ variable selection if \(\lambda_{n}=O(\sqrt{n})\), while it does not have \(\sqrt{n}\)-consistency if \(\lambda_{n}=o(n)\) and \(\lambda_{n}/\sqrt{n}\to\infty\). Hence, the Lasso cannot achieve both \(\sqrt{n}\)-consistency and consistent variable selection simultaneously (see Figure 1 left).
To improve the asymptotic properties of the Lasso, one of the most well-known methods is the _Adaptive Lasso_[21, 10], which is given by
\[\hat{\beta}^{\mathcal{A}}_{n}=\operatorname*{argmin}_{\beta}\left\{\frac{1}{n }\|y-X\beta\|_{2}^{2}+\frac{\lambda_{n}}{n}\sum_{j}w_{j}|\beta_{j}|\right\}, \ w_{j}:=\frac{1}{|\tilde{\beta}_{j}|^{\gamma}}, \tag{3}\]
where \(\tilde{\beta}\) is an initial estimator of the true parameter \(\beta^{*}\) and \(\gamma>0\) is a hyperparameter. We denote \(\hat{S}^{\mathcal{A}}_{n}:=\{j:\hat{\beta}^{\mathcal{A}}_{j}\neq 0\}\). If \(\tilde{\beta}\) is a \(\sqrt{n}\)-consistent estimator, \(\lambda_{n}=o(\sqrt{n})\), and \(\lambda_{n}n^{(\gamma-1)/2}\to\infty\), then the Adaptive Lasso satisfies both \(\sqrt{n}\)-consistency and consistent variable selection, as well as asymptotic normality (Figure 1 right). This is known as the _oracle property_ because it behaves as if the true active variables were given in advance. The Adaptive Lasso assumes the existence of a \(\sqrt{n}\)-consistent initial estimator and uses it as the weight of the \(\ell_{1}\) regularization.
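A minimal sketch of Eq. (3), using the standard reduction to a plain Lasso via the reparametrization \(\theta_{j}=w_{j}\beta_{j}\) (dividing column \(j\) of \(X\) by \(w_{j}\)); the OLS initial estimator and the hyperparameter choices are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 10
beta_star = np.array([2.0, -1.5, 1.0] + [0.0] * (p - 3))
X = rng.standard_normal((n, p))
y = X @ beta_star + rng.standard_normal(n)

gamma, lambda_n = 1.0, n ** 0.25            # lambda_n = o(sqrt(n)), lambda_n * n^{(g-1)/2} -> inf
beta_init = LinearRegression(fit_intercept=False).fit(X, y).coef_   # sqrt(n)-consistent OLS
w = 1.0 / np.abs(beta_init) ** gamma
theta = Lasso(alpha=lambda_n / (2 * n), fit_intercept=False).fit(X / w, y).coef_
beta_adaptive = theta / w                   # back-transform: beta_j = theta_j / w_j
print(beta_adaptive)
```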
Recently, a different use of an initial estimator has been proposed [16, 1], which is given by
\[\hat{\beta}^{\mathcal{T}}_{n}=\operatorname*{argmin}_{\beta}\left\{\frac{1}{ n}\|y-X\beta\|_{2}^{2}+\frac{\lambda_{n}}{n}\sum_{j}|\beta_{j}|+\frac{\eta_{n}}{n} \sum_{j}\left|\beta_{j}-\tilde{\beta}_{j}\right|\right\}, \tag{4}\]
where \(\tilde{\beta}\) is an initial estimator ("source parameter" in the field of transfer learning). We denote \(\hat{S}_{n}^{\mathcal{T}}:=\{j:\hat{\beta}_{j}^{\mathcal{T}}\neq 0\}\). This method is called _Transfer Lasso_. The first regularization term in (4) shrinks the estimator to zero and induces sparsity. The second regularization term in (4), on the other hand, shrinks the estimator to the initial estimator and induces the sparsity of changes from the initial estimator. The \(\ell_{1}\) regularization of the difference between the initial estimator and the target estimator plays a key role in sparse updating, in which only a small number of parameters are changed from the initial estimator. Non-asymptotic analysis reveals that a small \(\Delta:=\tilde{\beta}-\beta^{*}\) brings advantageous on its estimation error bounds for the Transfer Lasso over the Lasso [16].
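Because the objective (4) is convex but no longer a plain Lasso, a generic convex solver is the simplest route. Below is a sketch using cvxpy; for illustration, the initial estimator is mocked as the truth plus \(O(1/\sqrt{m})\) noise rather than being fitted to actual source data, and the hyperparameter values are assumptions.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
beta_star = np.array([2.0, -1.5, 1.0] + [0.0] * (p - 3))
X = rng.standard_normal((n, p))
y = X @ beta_star + rng.standard_normal(n)

m = 10 * n                                    # large source sample, m >> n
beta_tilde = beta_star + rng.standard_normal(p) / np.sqrt(m)  # mocked initial estimator

lam, eta = 0.5 * np.sqrt(n), 2.0 * np.sqrt(n) # transfer term weighted heavily
beta = cp.Variable(p)
objective = (cp.sum_squares(y - X @ beta) / n
             + (lam / n) * cp.norm1(beta)
             + (eta / n) * cp.norm1(beta - beta_tilde))
cp.Problem(cp.Minimize(objective)).solve()
print(beta.value)                             # hat{beta}^T_n
```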
The Adaptive Lasso and the Transfer Lasso have similarities and differences. They are similar in that they both use an initial estimator in \(\ell_{1}\) regularization. However, the way the initial estimator is used is different: the Adaptive Lasso uses the parameter "divided" by the initial estimator in the regularization, whereas the Transfer Lasso uses the parameter "subtracted" by the initial estimator in the regularization. In addition, the original motivations are different: The Adaptive Lasso aims to reduce estimation bias as well as satisfy consistency in variable selection, whereas the Transfer Lasso aims to sparsify both the estimator itself and the change from the initial estimator, leveraging the knowledge of the initial estimator.
Figure 1: Phase diagrams with the order of \(\lambda_{n}\) for the Lasso (left) and the Adaptive Lasso (right). The Lasso does not achieve \(\sqrt{n}\)-consistency and consistent variable selection simultaneously, while the Adaptive Lasso satisfies both.

These raise major questions: How do these similarities and differences between the Adaptive Lasso and the Transfer Lasso affect the theoretical properties and empirical results of each method? In this paper, we highlight the asymptotic properties of each method and seek to answer the following research questions.
1. Does the Transfer Lasso have the same properties as the Adaptive Lasso? Specifically, does the Transfer Lasso have the oracle property that the Adaptive Lasso has?
2. Does the Transfer Lasso have different properties from the Adaptive Lasso? If so, under what conditions of initial estimators, does the Transfer Lasso have an advantage over the Adaptive Lasso, or vice versa?
3. If these two methods have their specific advantages and disadvantages, are there any ways to compensate for the disadvantages of both and to reconcile their advantages?
4. How does the asymptotic property of the estimator change as the order of the hyperparameters changes for each method?
Our theoretical analysis led us to the following findings.
1. The Transfer Lasso does not have the oracle property in general. This is an unfavorable property compared to the Adaptive Lasso.
2. The Transfer Lasso has an advantage in convergence rate if the initial estimator is estimated from sufficiently large data. The Adaptive Lasso, in contrast, does not benefit from such an initial estimator.
3. We found that a non-trivial integration of the Adaptive Lasso and the Transfer Lasso provides a combination of the benefits of both. The superiority of this integration was shown by asymptotic analysis and empirical simulations.
4. We comprehensively analyzed the relation between hyperparameters and asymptotic properties and drew phase diagrams representing them. Figure 2 illustrates the phase diagram of the Adaptive Lasso and the Transfer Lasso, and Figure 3 illustrates the phase diagram of the proposed method. These theoretical results were reproduced empirically by numerical simulations in Figure 5.
This paper discusses the above research questions in the following organization. First, we review the asymptotic properties of the Lasso and Adaptive Lasso (Section 2). Then, we define a setup for our analysis and theoretically
analyze the asymptotic properties of the Adaptive Lasso and the Transfer Lasso (Section 3). This elucidates the advantages and disadvantages of each method. Furthermore, to compensate for their disadvantages and to reconcile their advantages, we propose a novel method, which effectively integrates both of them (Section 4). We demonstrate its superiority through theoretical analysis. We then compare the Adaptive Lasso, the Transfer Lasso, and their integrated method through numerical experiments (Section 5). Finally, we provide additional discussion and conclusions (Sections 6 and 7).
### Notations
Consider a vector \(v\in\mathbb{R}^{p}\). We denote the element-wise absolute vector by \(|v|\), with the \(j\)-th element given by \(|v_{j}|\). The sign vector is represented as \(\mathrm{sgn}(v)\), with its elements being \(1\) for \(v_{j}>0\), \(-1\) for \(v_{j}<0\), and \(0\) for \(v_{j}=0\). The support set of \(v\) is denoted as \(\mathrm{supp}(v)\) and defined as \(\mathrm{supp}(v):=\{j\in\{1,\ldots,p\}|v_{j}\neq 0\}\). The \(\ell_{q}\)-norm of \(v\) is expressed as \(\|v\|_{q}=(\sum_{j=1}^{p}|v_{j}|^{q})^{1/q}\).
For a matrix \(M\in\mathbb{R}^{p\times p}\), we use \(M\succeq O\) for a positive semi-definite matrix and \(M\succ O\) for a positive definite matrix, implying \(v^{\top}Mv\geq 0\) for all \(v\in\mathbb{R}^{p}\) and \(v^{\top}Mv>0\) for all non-zero \(v\in\mathbb{R}^{p}\), respectively.
Given a subset \(S\) of \(\{1,\ldots,p\}\), we denote its cardinality as \(|S|\), and the complement set as \(S^{c}=\{1,\ldots,p\}\backslash S\). The vector \(v_{S}\) represents \(v\) restricted to the index set \(S\). The matrix \(M_{S_{1}S_{2}}\) denotes the submatrix with row indices in \(S_{1}\) and column indices in \(S_{2}\).
For sequences \(a_{n}\) and \(b_{n}\), we use \(a_{n}=O(b_{n})\) to indicate that \(|a_{n}/b_{n}|\) converges to a finite value, and \(a_{n}=o(b_{n})\) to signify \(|a_{n}/b_{n}|\) converging to zero as \(n\to\infty\).
## 2 Literature Review
We review some asymptotic properties for the Lasso and the Adaptive Lasso based on [5] and [21], and then present other related studies. All of the proofs in this section are essentially the same as those in [5] and [21], but for the sake of readability, we provide them in Appendix B.1.
We make the following assumption throughout this paper as in [5, 21].
_Assumption 2.1_.: \[C_{n}:=\frac{1}{n}X^{\top}X\to C\succ O\ (n\to\infty), \tag{5}\]
\[\frac{1}{n}\max_{i}\|x_{i}\|_{2}^{2}\to 0\ (n\to\infty). \tag{6}\]
Let \(W\) be a random variable of a Gaussian distribution with mean \(0\) and covariance \(\sigma^{2}C\), that is, \(W\sim\mathcal{N}(0,\sigma^{2}C)\).
### Asymptotic Properties for the Lasso
The Lasso is given by (2). According to [5, 21], several asymptotic properties have been obtained for the Lasso: consistency (Lemma 2.2 and Corollary 2.3), convergence rate (Lemma 2.4, Corollary 2.5, Lemma 2.7, and Corollary 2.8), and variable selection consistency (Lemma 2.6).
**Lemma 2.2** (Theorem 1 in [5] and Lemma 1 in [21]).: _If \(\lambda_{n}/n\to\lambda_{0}\geq 0\), then_
\[\hat{\beta}_{n}^{\mathcal{L}}\overset{p}{\to}\operatorname*{argmin}_{\beta}\left\{( \beta-\beta^{*})^{\top}C(\beta-\beta^{*})+\lambda_{0}\sum_{j}|\beta_{j}|\right\}. \tag{7}\]
**Corollary 2.3** (Consistency for Lasso).: _If \(\lambda_{n}=o(n)\), then \(\hat{\beta}_{n}^{\mathcal{L}}\) is consistent._
**Lemma 2.4** (Theorem 2 in [5] and Lemma 2 in [21]).: _If \(\lambda_{n}/\sqrt{n}\to\lambda_{0}\geq 0\), then_
\[\sqrt{n}(\hat{\beta}_{n}^{\mathcal{L}}-\beta^{*})\] \[\xrightarrow{d}\operatorname*{argmin}_{u}\left\{u^{\top}Cu-2u^{ \top}W+\lambda_{0}\sum_{j}\left(u_{j}\operatorname*{sgn}(\beta_{j}^{*})I( \beta_{j}^{*}\neq 0)+|u_{j}|I(\beta_{j}^{*}=0)\right)\right\}. \tag{8}\]
**Corollary 2.5** (\(\sqrt{n}\)-consistency for Lasso).: _If \(\lambda_{n}=O(\sqrt{n})\), then \(\hat{\beta}_{n}^{\mathcal{L}}\) is \(\sqrt{n}\)-consistent._
**Lemma 2.6** (Inconsistent Variable Selection; Proposition 1 in [21]).: _Let \(\hat{S}_{n}^{\mathcal{L}}:=\{j:\hat{\beta}_{j}^{\mathcal{L}}\neq 0\}\). If \(\lambda_{n}/\sqrt{n}\to\lambda_{0}\geq 0\), then_
\[\limsup_{n\to\infty}P(\hat{S}_{n}^{\mathcal{L}}=S)\leq c<1 \tag{9}\]
_where \(c\) is a constant._
**Lemma 2.7** (Lemma 3 in [21]).: _If \(\lambda_{n}/n\to 0\) and \(\lambda_{n}/\sqrt{n}\to\infty\), then_
\[\frac{n}{\lambda_{n}}(\hat{\beta}_{n}^{\mathcal{L}}-\beta^{*})\overset{d}{ \to}\operatorname*{argmin}_{u}\left\{u^{\top}Cu+\sum_{j=1}^{p}\left(u_{j} \operatorname*{sgn}(\beta_{j}^{*})I(\beta_{j}^{*}\neq 0)+|u_{j}|I(\beta_{j}^{*}=0) \right)\right\}. \tag{10}\]
**Corollary 2.8** (Slower Rate Consistency for Lasso).: _If \(\lambda_{n}/n\to 0\) and \(\lambda_{n}/\sqrt{n}\to\infty\), then the convergence rate of \(\hat{\beta}_{n}^{\mathcal{L}}\) is slower than \(\sqrt{n}\)._
We first obtain a convergence result for \(\lambda_{n}=O(n)\) (Lemma 2.2). If \(\lambda_{n}=o(n)\), then we have consistency for the Lasso (Corollary 2.3). Although \(\lambda_{n}=o(n)\) is sufficient for consistency, it does not always yield \(\sqrt{n}\)-consistency. We obtain an asymptotic distribution for \(\lambda_{n}=O(\sqrt{n})\) (Lemma 2.4). This implies \(\sqrt{n}\)-consistency for the Lasso (Corollary 2.5). Unfortunately, \(\lambda_{n}=O(\sqrt{n})\) leads to inconsistent variable selection (Lemma 2.6). This implies that \(\lambda_{n}=O(\sqrt{n})\) achieves \(\sqrt{n}\)-consistency but inconsistent variable selection for the Lasso. In contrast, if \(\lambda_{n}\) is greater than \(O(\sqrt{n})\) and \(\lambda_{n}=o(n)\), we obtain an asymptotic distribution (Lemma 2.7). This implies that the convergence rate is slower than \(\sqrt{n}\) (Corollary 2.8), although variable selection can be consistent under the incoherence conditions [21, 20].
Figure 1 (left) summarizes the asymptotic properties for the Lasso. It cannot simultaneously achieve both \(\sqrt{n}\)-consistent estimation and consistent variable selection. This is a major limitation of the Lasso and is the motivation to develop the Adaptive Lasso.
### Asymptotic Properties for Adaptive Lasso
Adaptive Lasso is given by (3). It is known that the Adaptive Lasso has the so-called "oracle property" [21].
**Lemma 2.9** (Oracle Property for Adaptive Lasso; Theorem 2 in [21]).: _Suppose that \(\tilde{\beta}_{n}\) is a \(\sqrt{n}\)-consistent estimator. If \(\lambda_{n}/\sqrt{n}\to 0\) and \(\lambda_{n}n^{(\gamma-1)/2}\to\infty\), then the Adaptive Lasso estimator (3) satisfies the oracle property, that is, consistent variable selection and \(\sqrt{n}\)-consistency with asymptotic normality:_
\[\lim_{n\to\infty}P(\hat{S}_{n}^{\mathcal{A}}=S)=1, \tag{11}\]
\[\sqrt{n}(\hat{\beta}_{S}^{\mathcal{A}}-\beta_{S}^{*})\overset{d}{\to} \mathcal{N}(0,\sigma^{2}C_{SS}^{-1}). \tag{12}\]
Proof.: The proof is given in B.1.5.
The oracle property demonstrates a clear advantage of the Adaptive Lasso over the Lasso. With a \(\sqrt{n}\)-consistent initial estimator, the Adaptive Lasso can simultaneously achieve both \(\sqrt{n}\)-consistent estimation and consistent variable selection (Figure 1 right). Thus, the Adaptive Lasso performs as well as if the true active variables were given in advance.
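The oracle property lends itself to a quick Monte Carlo check. The following sketch (our own toy setup, not the paper's Section 5 experiments) contrasts support recovery of the Lasso at \(\lambda_{n}=O(\sqrt{n})\) with the Adaptive Lasso in the oracle regime of Lemma 2.9; all numerical choices are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(2)
n, p, reps = 500, 8, 200
beta_star = np.array([1.0, -1.0, 0.5] + [0.0] * (p - 3))
S = set(np.flatnonzero(beta_star))          # true active set

hit_lasso = hit_alasso = 0
for _ in range(reps):
    X = rng.standard_normal((n, p))
    y = X @ beta_star + rng.standard_normal(n)
    # Lasso with lambda_n = sqrt(n): sqrt(n)-consistent but poor selection.
    b_l = Lasso(alpha=np.sqrt(n) / (2 * n), fit_intercept=False).fit(X, y).coef_
    # Adaptive Lasso (gamma = 1, lambda_n = n^{1/4}) via column rescaling.
    w = 1.0 / np.abs(LinearRegression(fit_intercept=False).fit(X, y).coef_)
    b_a = Lasso(alpha=n ** 0.25 / (2 * n), fit_intercept=False).fit(X / w, y).coef_ / w
    hit_lasso += set(np.flatnonzero(np.abs(b_l) > 1e-10)) == S
    hit_alasso += set(np.flatnonzero(np.abs(b_a) > 1e-10)) == S
print(hit_lasso / reps, hit_alasso / reps)  # Lasso rate stays low; Adaptive near 1
```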
### Other Related Work
Besides the Adaptive Lasso and the Transfer Lasso, several related methods have been studied. In this subsection, we review related methods in three categories: (I) methods with the oracle property similar to the Adaptive Lasso, (II) methods with two-stage estimation to eliminate bias, similar to the Adaptive Lasso, and (III) methods using the \(\ell_{1}\) norm to transfer knowledge about the source data, similar to the Transfer Lasso.
(I) The oracle property is known to hold not only for the Adaptive Lasso but also for the SCAD [4] and MCP [19]. These methods use nonconvex regularization, instead of using an initial estimator. Because of the nonconvexity, the algorithm converges to a local minimum and the oracle property holds only for some local minima or under restricted conditions. The Adaptive Lasso, on the other hand, uses convex regularization and always converges to a global minimum, although it requires an appropriate initial estimator.
(II) The Lasso penalizes the \(\ell_{1}\) norm of the parameters and thus introduces a bias, leading to the failure of the oracle property. Several two-step estimation methods have been proposed to eliminate the bias [13, 9, 3]. In [13], after the Lasso estimation in the first stage, the second stage is another Lasso estimation using only the selected variables. In [9], after the Lasso estimation in the first stage, the second stage is estimated by a linear combination of the first stage estimator and the OLS estimator of the selected variables. These methods are called Relaxed Lasso. [3] generalized these refitting methods as "methods that minimize the loss function with regularization and then decrease the loss function without regularization". Based on this idea, they developed several refitting methods.
(III) Regularization of \(\ell_{1}\)-norm between target and initial estimators was proposed by [1, 11, 17] as well as the Transfer Lasso [16]. [1] corresponds to the case where \(\lambda_{n}=0\) in Transfer Lasso [16]. In the TransLasso [11] and its GLM extension [17], two-stage estimation methods were proposed for the case of multiple source data, where the initial estimator is estimated using
both the source and target data. The Transfer Lasso [16], in contrast, is performed on target data using the initial estimator without the need for source data.
## 3 Asymptotic Properties for Adaptive Lasso and Transfer Lasso
We will perform asymptotic analysis based on the following general settings throughout this paper.
_Assumption 3.1_.: Let \(m\geq 0\) be an integer satisfying \(n/m\to r_{0}\geq 0\). The initial estimator \(\tilde{\beta}\) is a \(\sqrt{m}\)-consistent estimator and \(z:=\sqrt{m}(\tilde{\beta}-\beta^{*})\) converges to some distribution.
Assumption 3.1 implies that the initial estimator is estimated on source data of size \(m\), and then the final estimator is estimated on target data of size \(n\) using the initial estimator. The case \(m=n\) (\(r_{0}=1\)) corresponds to the existing results for the Adaptive Lasso, whereas \(m\gg n\) (\(r_{0}=0\)) corresponds to the typical transfer learning setup. The source and target data are assumed to be independent of each other. We also make Assumption 2.1 in our analysis.
We note that the initial estimator \(\tilde{\beta}\) is _not_ a fixed (deterministic) source parameter, but an estimator (random variable). This is the same as the previous studies. The case where \(\tilde{\beta}\) is fixed is discussed in Appendix A.
### Asymptotic Properties for Adaptive Lasso
We provide the property of the Adaptive Lasso for an initial estimator with source data of size \(m\). It is straightforward to extend the oracle property for \(\sqrt{n}\)-consistent initial estimators (Lemma 2.9) to \(\sqrt{m}\)-consistent initial estimators (Lemma 3.2).
**Lemma 3.2** (Oracle Property for Adaptive Lasso with Different Sample Size).: _Suppose that \(\tilde{\beta}\) is a \(\sqrt{m}\)-consistent estimator. If \(\lambda_{n}/\sqrt{n}\to 0\) and \(\lambda_{n}\sqrt{m^{\gamma}/n}\to\infty\), then the Adaptive Lasso estimator (3) satisfies the oracle property, that is, consistent variable selection and \(\sqrt{n}\)-consistency with asymptotic normality:_
\[\lim_{n\to\infty}P(\hat{S}_{n}^{\mathcal{A}}=S)=1, \tag{13}\]
\[\sqrt{n}(\hat{\beta}_{S}^{\mathcal{A}}-\beta_{S}^{*})\stackrel{{ d}}{{\to}}\mathcal{N}(0,\sigma^{2}C_{SS}^{-1}). \tag{14}\]
Proof.: The proof is given in B.2.1.
Furthermore, we extensively analyze the convergence rate depending on the hyperparameter \(\lambda_{n}\). We obtain Theorem 3.3 and Corollary 3.4.
**Theorem 3.3** (Asymptotic Distribution for Adaptive Lasso).: _We have the following asymptotic distributions for the Adaptive Lasso estimator (3). (i) If \(\sqrt{m^{\gamma}/n}\ \lambda_{n}\to\lambda_{1}\geq 0\), then_
\[\sqrt{n}(\hat{\beta}_{n}^{\mathcal{A}}-\beta^{*})\overset{d}{\to}\operatorname {argmin}_{u}\left\{u^{\top}Cu-2u^{\top}W+\sum_{j\in S^{c}}\frac{\lambda_{1}}{|z _{j}|^{\gamma}}|u_{j}|\right\}. \tag{15}\]
_(ii) If \(\sqrt{m^{\gamma}/n}\ \lambda_{n}\to\infty\) and \(\lambda_{n}/\sqrt{n}\to\lambda_{0}\geq 0\), then_
\[\sqrt{n}(\hat{\beta}_{n}^{\mathcal{A}}-\beta^{*})\overset{d}{\to} \operatorname{argmin}_{u\in\mathcal{U}}\left\{u^{\top}Cu-2u^{\top}W+\sum_{j\in S }\lambda_{0}\frac{\operatorname{sgn}(\beta_{j}^{*})}{|\beta_{j}^{*}|^{\gamma }}u_{j}\right\},\ \mathcal{U}:=\left\{u\ |\ u_{S^{c}}=0\right\}. \tag{16}\]
_(iii) If \(\lambda_{n}/\sqrt{n}\to\infty\) and \(\lambda_{n}/n\to 0\), then_
\[\frac{n}{\lambda_{n}}(\hat{\beta}_{n}^{\mathcal{A}}-\beta^{*})\overset{d}{\to }\operatorname{argmin}_{u\in\mathcal{U}}\left\{u^{\top}Cu+\sum_{j\in S}\frac{ \operatorname{sgn}(\beta_{j}^{*})}{|\beta_{j}^{*}|^{\gamma}}u_{j}\right\}, \quad\mathcal{U}:=\left\{u\ |\ u_{S^{c}}=0\right\}. \tag{17}\]
Proof.: This is a special case of Theorem 4.1 and the proof is the same as B.3.1.
**Corollary 3.4** (Convergence Rate for Adaptive Lasso).: _We have the following convergence rates for the Adaptive Lasso estimator (3)._
* _If_ \(\sqrt{m^{\gamma}/n}\ \lambda_{n}\to\lambda_{1}\geq 0\)_, then the convergence rate is_ \(\sqrt{n}\)_._
* _If_ \(\sqrt{m^{\gamma}/n}\ \lambda_{n}\to\infty\) _and_ \(\lambda_{n}/\sqrt{n}\to\lambda_{0}\geq 0\)_, then the convergence rate is_ \(\sqrt{n}\)_._
* _If_ \(\lambda_{n}/\sqrt{n}\to\infty\) _and_ \(\lambda_{n}/n\to 0\)_, then the convergence rate is_ \(n/\lambda_{n}\)_, which is slower than_ \(\sqrt{n}\)_._
Lemma 3.2 shows that the oracle property still holds for \(\sqrt{m}\)-consistent estimators, \(\lambda_{n}/\sqrt{n}\to 0\), and \(\lambda_{n}\sqrt{m^{\gamma}/n}\to\infty\). In addition, Theorem 3.3 and Corollary 3.4 show that the convergence rate of the Adaptive Lasso estimator
is equal to \(\sqrt{n}\) in the case (i) and (ii) and is less than \(\sqrt{n}\) in the case (iii). The condition of (ii) in Theorem 3.3 includes the condition of the oracle property in Lemma 3.2. Figure 2 (left) illustrates each hyperparameter region in Theorem 3.3 and Corollary 3.4.
These results imply both an advantage and a disadvantage. The advantage is that the initial estimator does not need to be \(\sqrt{n}\)-consistent. The Adaptive Lasso has the oracle property even when the source data is small compared to the target data (\(m\lesssim n\)) and the initial estimator is less than \(\sqrt{n}\)-consistent. The disadvantage of the Adaptive Lasso, however, is that it does not take full advantage of the initial estimator even when the sample size of the source data is very large (\(m\gg n\)). This is because the convergence rate remains \(\sqrt{n}\) (\(\ll\sqrt{m}\)).
### Asymptotic Properties for Transfer Lasso
Now we consider the asymptotic properties of the Transfer Lasso. The Transfer Lasso has two hyperparameters, \(\lambda_{n}\) and \(\eta_{n}\), and various asymptotic properties appear depending on their values. We first obtain several asymptotic distributions in Theorem 3.5 and convergence rate in Corollary 3.6.
Figure 2: Phase diagrams with \(\lambda_{n}\) for the Adaptive Lasso in Lemma 3.2–Theorem 3.4 (left) and \(\lambda_{n}\) and \(\eta_{n}\) for the Transfer Lasso in Theorem 3.5–Theorem 3.11 (right). The Adaptive Lasso has \(\sqrt{n}\)-consistency in (i) and (ii) and active variable selection consistency in (ii), but the convergence rate in (iii) is slower than \(\sqrt{n}\). The Transfer Lasso has convergence rates of \(\sqrt{m}\), \(\sqrt{n}\), and \(n/\lambda_{n}(<\sqrt{n})\) for (i), (ii), and (iii) respectively. It has invariant variable selection consistency in (i) but does not have active variable selection consistency in (i) and (ii).
The illustration of the division of cases is shown in Figure 2.
**Theorem 3.5** (Asymptotic distribution for Transfer Lasso).: _We have the following asymptotic distributions for the Transfer Lasso estimator (4). (i) If \(\eta_{n}/\sqrt{n}\to\infty\) and \(\lambda_{n}/\eta_{n}\to\rho_{0}\) with \(0\leq\rho_{0}<1\), then_
\[\sqrt{m}(\hat{\beta}_{n}^{\mathcal{T}}-\beta^{*})\overset{d}{\to}z. \tag{18}\]
_(ii) If \(\lambda_{n}/\sqrt{n}\to\lambda_{0}\geq 0\) and \(\eta_{n}/\sqrt{n}\to\eta_{0}\geq 0\), then_
\[\sqrt{n}\left(\hat{\beta}_{n}^{\mathcal{T}}-\beta^{*}\right)\\ \overset{d}{\to}\operatorname*{argmin}_{u}\left\{u^{\top}Cu-2u^{ \top}W+\lambda_{0}\left(\sum_{j\in S}u_{j}\operatorname*{sgn}(\beta_{j}^{*}) +\sum_{j\in S^{c}}|u_{j}|\right)+\eta_{0}\sum_{j=1}^{p}|u_{j}-\sqrt{r_{0}}z_{ j}|\right\}. \tag{19}\]
_(iii) If \(\lambda_{n}/\sqrt{n}\to\infty\), \(\lambda_{n}/n\to 0\), and \(\eta_{n}/\lambda_{n}\to\rho_{0}^{\prime}\geq 0\), then_
\[\frac{n}{\lambda_{n}}(\hat{\beta}_{n}^{\mathcal{T}}-\beta^{*})\overset{d}{\to }\operatorname*{argmin}_{u}\left\{u^{\top}Cu+\sum_{j\in S}\left(u_{j} \operatorname*{sgn}(\beta_{j}^{*})+\rho_{0}^{\prime}\left|u_{j}\right|\right) +\sum_{j\in S^{c}}(1+\rho_{0}^{\prime})\left|u_{j}\right|\right\}. \tag{20}\]
Proof.: The proof is given in B.2.2.
**Corollary 3.6** (Convergence Rate for Transfer Lasso).: _We have the following convergence rates for the Transfer Lasso estimator (4)._
* _If_ \(\eta_{n}/\sqrt{n}\to\infty\) _and_ \(\lambda_{n}/\eta_{n}\to\rho_{0}\) _with_ \(0\leq\rho_{0}<1\)_, then the convergence rate is_ \(\sqrt{m}\)_._
* _If_ \(\lambda_{n}/\sqrt{n}\to\lambda_{0}\geq 0\) _and_ \(\eta_{n}/\sqrt{n}\to\eta_{0}\geq 0\)_, then the convergence rate is_ \(\sqrt{n}\)_._
* _If_ \(\lambda_{n}/\sqrt{n}\to\infty\)_,_ \(\lambda_{n}/n\to 0\)_, and_ \(\eta_{n}/\lambda_{n}\to\rho_{0}^{\prime}\) _with_ \(0\leq\rho_{0}^{\prime}<1\)_, then the convergence rate is_ \(n/\lambda_{n}\)_, which is slower than_ \(\sqrt{n}\)_. On the other hand, if_ \(\rho_{0}^{\prime}\geq 1\)_, then the convergence rate is faster than_ \(n/\lambda_{n}\)_._
Theorem 3.5 and Corollary 3.6 show that the Transfer Lasso estimators achieve a convergence rate of \(\sqrt{m}\) in the case (i). This is beneficial when source data is large (\(m\gg n\)) and is an advantage for the Transfer Lasso over the Adaptive Lasso.
Next, we provide the results of variable selection consistency. We first define two types of variable selection consistency.
_Definition 3.7_ (Active Variable Selection Consistency).: We say that an estimator exhibits _consistent active variable selection_ when it estimates the true active variable to be nonzero and the true inactive variable to be zero, that is,
\[P(\hat{S}_{n}=S)\to 1. \tag{21}\]
Conversely, we say that an estimator is an _inconsistent active variable selection_ when this is not the case, that is,
\[\limsup_{n\to\infty}P(\hat{S}_{n}=S)\leq c<1. \tag{22}\]
_Definition 3.8_ (Invariant Variable Selection Consistency).: We say that an estimator exhibits _consistent invariant variable selection_ when the true active variable remains invariant from the initial estimator, that is,
\[P(\hat{\beta}_{S}^{\mathcal{T}}=\tilde{\beta}_{S})\to 1. \tag{23}\]
"Active" and "invariant" variable selection consistency are different but related concepts. The property of "active" variable selection consistency is identical to ordinary variable selection consistency and is induced by the sparsity of the estimator. This property gives a certain justification to decision-making based on zero/non-zero estimators since it guarantees the correct selection of non-zero variables. In contrast, "invariant" variable selection consistency is a property unique to the Transfer Lasso and is induced by the sparsity of the difference from the initial estimator. It guarantees that the estimator of the true active variable remains unchanged, while the inactive variable is allowed to change. Therefore, it gives a justification to decision-making that focuses on change in the estimator. When both active/invariant variable selection holds, the estimator is zero for the true inactive variable and is the value of the initial estimator for the active variable.
Now we give some results of active/invariant variable selection consistency for the Transfer Lasso in Theorems 3.9, 3.10, and 3.11. We suppose that the initial estimator \(\tilde{\beta}\) does _not_ hold consistent active variable selection in the analyses of variable selection.
**Theorem 3.9** (Inconsistent Active Variable Selection for Transfer Lasso).: _Suppose that \(\tilde{\beta}\) is inconsistent with active variable selection. For the cases (i) and (ii) in Theorem 3.5, the Transfer Lasso estimator (4) yields inconsistent active variable selection, that is,_
\[\limsup_{n\to\infty}P(\hat{S}_{n}^{\mathcal{T}}=S)\leq c<1, \tag{24}\]
_where \(c\) is a constant._
Proof.: The proof is given in B.2.3.
**Theorem 3.10** (Consistent Invariant Variable Selection for Transfer Lasso).: _Suppose that \(\tilde{\beta}\) is inconsistent with active variable selection. For the case (i) in Theorem 3.5, the Transfer Lasso estimator (4) yields consistent invariant variable selection, that is,_
\[P(\hat{\beta}_{S}^{\mathcal{T}}=\tilde{\beta}_{S})\to 1. \tag{25}\]
Proof.: The proof is given in B.2.4.
**Theorem 3.11** (Inconsistent Invariant Variable Selection for Transfer Lasso).: _Suppose that \(\tilde{\beta}\) is inconsistent with active variable selection. For the case (ii) in Theorem 3.5, the Transfer Lasso estimators (4) yield inconsistent invariant variable selection, that is,_
\[\limsup_{n\to\infty}P(\hat{\beta}_{S}^{\mathcal{T}}=\tilde{\beta}_{S})\leq c <1. \tag{26}\]
_where \(c\) is a constant._
Proof.: The proof is given in B.2.5.
Theorems 3.9, 3.10, and 3.11 unveil the benefits and drawbacks of the Transfer Lasso. Theorem 3.9 implies that active variable selection consistency fails in the \(\sqrt{m}\)-consistent region (i), and likewise in the \(\sqrt{n}\)-consistent region (ii). This is a disadvantage for the Transfer Lasso. On the other hand, Theorem 3.10 indicates that the Transfer Lasso in the case (i) has a property of consistent invariant variable selection, which the Adaptive Lasso does not have. Theorem 3.11 implies that the estimators are inconsistent in terms of invariant variable selection in the case (ii).
As shown in Figure 2, the Transfer Lasso cannot simultaneously achieve \(\sqrt{m}\)-consistency and consistent active/invariant variable selection in the regions (i), (ii), and (iii). This is why we explore a new methodology in the next section. We note that in regions other than (i), (ii), and (iii) (e.g., boundary regions), the asymptotic property is unclear. Appendix A.3 contains additional results for boundary regions. At the very least, the above results imply that \(\sqrt{m}\)-consistency and consistent active/invariant variable selection are incompatible in most regions for the Transfer Lasso.
## 4 Beyond Adaptive Lasso and Transfer Lasso
The Adaptive Lasso and the Transfer Lasso have their advantages and disadvantages, as seen in the previous section. The Adaptive Lasso achieves both \(\sqrt{n}\)-consistency and consistent variable selection for \(m\leq n\), but its convergence rate is \(\sqrt{n}(\ll\sqrt{m})\) for \(m\gg n\). The Transfer Lasso, on the other hand, achieves a convergence rate of \(\sqrt{m}\) for \(m\gg n\), but it results in inconsistent variable selection. Are there any ways to combine their benefits and compensate for their drawbacks?
### Adaptive Transfer Lasso: A Non-Trivial Integration
To exploit their benefits and compensate for their drawbacks, we integrate the ideas of the Adaptive Lasso and the Transfer Lasso. We propose a novel method using the initial estimator \(\tilde{\beta}\) as
\[\hat{\beta}_{n}^{\#}=\operatorname*{argmin}_{\beta}\left\{ \frac{1}{n}\|y-X\beta\|_{2}^{2}+\frac{\lambda_{n}}{n}\sum_{j}v_{j}|\beta_{j}|+ \frac{\eta_{n}}{n}\sum_{j}w_{j}\left|\beta_{j}-\tilde{\beta}_{j}\right|\right\}, \tag{27}\] \[v_{j}:=\frac{1}{|\tilde{\beta}_{j}|^{\gamma_{1}}},\ w_{j}:=| \tilde{\beta}_{j}|^{\gamma_{2}}, \tag{28}\]
where \(\gamma_{1}\geq 0\) and \(\gamma_{2}\geq 0\) are new hyperparameters. We denote \(\hat{S}_{n}^{\#}:=\{j:\hat{\beta}_{j}^{\#}\neq 0\}\). The weight \(v_{j}=1/|\tilde{\beta}_{j}|^{\gamma_{1}}\) is the same as that of the Adaptive Lasso, whereas the term \(w_{j}=|\tilde{\beta}_{j}|^{\gamma_{2}}\) is a new non-trivial part. Because \(w_{j}\to 0\) as \(\tilde{\beta}_{j}\to 0\), the effect of transfer learning from the initial estimator disappears for inactive parameters. We call this method _Adaptive Transfer Lasso_ because it is a generalization of the Adaptive Lasso and the Transfer Lasso. Indeed, if \(\eta_{n}=0\), then it reduces to the Adaptive Lasso, and if \(\gamma_{1}=\gamma_{2}=0\), then it reduces to the Transfer Lasso.
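A sketch of Eq. (27) in the same style as the Transfer Lasso snippet of Section 1 (cvxpy, illustrative hyperparameters, and the same mocked initial estimator); the variable names are our own.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
beta_star = np.array([2.0, -1.5, 1.0] + [0.0] * (p - 3))
X = rng.standard_normal((n, p))
y = X @ beta_star + rng.standard_normal(n)
beta_tilde = beta_star + rng.standard_normal(p) / np.sqrt(10 * n)  # mocked initial estimator

gamma1, gamma2 = 1.0, 1.0
v = 1.0 / np.abs(beta_tilde) ** gamma1        # Adaptive-Lasso-type weights, Eq. (28)
w = np.abs(beta_tilde) ** gamma2              # weights taming the transfer term

lam, eta = 0.5 * np.sqrt(n), 2.0 * np.sqrt(n)
beta = cp.Variable(p)
objective = (cp.sum_squares(y - X @ beta) / n
             + (lam / n) * cp.norm1(cp.multiply(v, beta))
             + (eta / n) * cp.norm1(cp.multiply(w, beta - beta_tilde)))
cp.Problem(cp.Minimize(objective)).solve()
print(beta.value)                             # hat{beta}^{#}_n
```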
### Asymptotic Properties for Adaptive Transfer Lasso
We present the asymptotic properties of the Adaptive Transfer Lasso. The assumptions are the same as for the Adaptive Lasso and the Transfer Lasso. To derive the asymptotic distribution and convergence rate, we need a more detailed case analysis than before. The illustration of the division of cases is shown in Figure 3.
**Theorem 4.1** (Asymptotic Distribution for Adaptive Transfer Lasso).: _We have the following asymptotic distributions for the Adaptive Transfer Lasso
estimator (27). (i) If \(\eta_{n}/\sqrt{nm^{\gamma_{2}}}\to\infty\) and \(\eta_{n}/\sqrt{m^{\gamma_{1}+\gamma_{2}}}\,\lambda_{n}\to\infty\), then_
\[\sqrt{m}(\hat{\beta}_{n}^{\#}-\beta^{*})\overset{d}{\to}z. \tag{29}\]
_(ii) If \(\sqrt{m^{\gamma_{1}}/n}\ \lambda_{n}\to\infty\), \(\eta_{n}/\sqrt{n}\to\infty\), \(\eta_{n}/\sqrt{m^{\gamma_{1}+\gamma_{2}}}\lambda_{n}\to 0\), and \(\sqrt{m^{\gamma_{1}}}\lambda_{n}/\eta_{n}\to\rho_{0}\geq 0\), then_
\[\sqrt{m}(\hat{\beta}_{n,j}^{\#}-\beta_{j}^{*})\overset{d}{\to}\begin{cases}0&\text{for }j\in S^{c},\\ z_{j}&\text{for }j\in S.\end{cases} \tag{30}\]
_(iii) If \(\sqrt{m^{\gamma_{1}}/n}\ \lambda_{n}\to\lambda_{1}\geq 0\) and \(\eta_{n}/\sqrt{n}\to\eta_{0}\geq 0\), then_
\[\sqrt{n}(\hat{\beta}_{n}^{\#}-\beta^{*}) \tag{31}\] \[\overset{d}{\to}\operatorname*{argmin}_{u}\left\{u^{\top}Cu-2u^{ \top}W+\sum_{j\in S^{c}}\frac{\lambda_{1}}{|z_{j}|^{\gamma_{1}}}|u_{j}|+\sum_{ j\in S}\eta_{0}\left|\beta_{j}^{*}\right|^{\gamma_{2}}|u_{j}-\sqrt{r_{0}}z_{j}| \right\}. \tag{32}\]
_(iv) If \(\sqrt{m^{\gamma_{1}}/n}\ \lambda_{n}\to\lambda_{1}\geq 0\), \(\eta_{n}/\sqrt{n}\to\infty\), and \(\eta_{n}/\sqrt{nm^{\gamma_{2}}}\to\eta_{1}\geq 0\), then_
\[\sqrt{n}(\hat{\beta}_{n}^{\#}-\beta^{*}) \tag{33}\] \[\overset{d}{\to}\operatorname*{argmin}_{u\in\mathcal{U}}\left\{u^{ \top}Cu-2u^{\top}W+\sum_{j\in S^{c}}\left(\frac{\lambda_{1}}{|z_{j}|^{\gamma_ {1}}}\left|u_{j}\right|+\eta_{1}|z_{j}|^{\gamma_{2}}|u_{j}-\sqrt{r_{0}}z_{j}| \right)\right\}, \tag{34}\] \[\mathcal{U}:=\left\{u\ |\ u_{S}=\sqrt{r_{0}}z_{S}\right\}. \tag{35}\]
_(v) If \(\sqrt{m^{\gamma_{1}}/n}\ \lambda_{n}\to\infty\), \(\lambda_{n}/\sqrt{n}\to\lambda_{0}\geq 0\), and \(\eta_{n}/\sqrt{n}\to\eta_{0}\geq 0\), then_
\[\sqrt{n}(\hat{\beta}_{n}^{\#}-\beta^{*}) \tag{36}\] \[\xrightarrow{d}\operatorname*{argmin}_{u\in\mathcal{U}}\left\{u^ {\top}Cu-2u^{\top}W+\sum_{j\in S}\left(\lambda_{0}\frac{\operatorname*{sgn}( \beta_{j}^{*})}{|\beta_{j}^{*}|^{\gamma_{1}}}u_{j}+\eta_{0}\left|\beta_{j}^{*} \right|^{\gamma_{2}}|u_{j}-\sqrt{r_{0}}z_{j}|\right)\right\}, \tag{37}\] \[\mathcal{U}:=\left\{u\ |\ u_{S^{c}}=0\right\}. \tag{38}\]
_(vi) If \(\lambda_{n}/\sqrt{n}\to\infty\), \(\lambda_{n}/n\to 0\), and \(\lambda_{n}/\eta_{n}\to\infty\), then_
\[\frac{n}{\lambda_{n}}(\hat{\beta}_{n}^{\#}-\beta^{*})\xrightarrow{d} \operatorname*{argmin}_{u\in\mathcal{U}}\left\{u^{\top}Cu+\sum_{j\in S}\frac{ \operatorname*{sgn}(\beta_{j}^{*})}{|\beta_{j}^{*}|^{\gamma_{1}}}u_{j}\right\},\quad\mathcal{U}:=\left\{u\ |\ u_{S^{c}}=0\right\}. \tag{39}\]
Proof.: The proof is given in B.3.1.
**Corollary 4.2** (Convergence Rate for Adaptive Transfer Lasso).: _We have the following convergence rates for the Adaptive Transfer Lasso estimator (27)._
* _(i) If \(\eta_{n}/\sqrt{nm^{\gamma_{2}}}\to\infty\) and \(\eta_{n}/\sqrt{m^{\gamma_{1}+\gamma_{2}}}\lambda_{n}\to\infty\), then the convergence rate is \(\sqrt{m}\)._
* _(ii) If \(\sqrt{m^{\gamma_{1}}/n}\ \lambda_{n}\to\infty\), \(\eta_{n}/\sqrt{n}\to\infty\), and \(\eta_{n}/\sqrt{m^{\gamma_{1}+\gamma_{2}}}\lambda_{n}\to 0\), then the convergence rate is \(\sqrt{m}\)._
* _(iii) If \(\sqrt{m^{\gamma_{1}}/n}\ \lambda_{n}\to\lambda_{1}\geq 0\) and \(\eta_{n}/\sqrt{n}\to\eta_{0}\geq 0\), then the convergence rate is \(\sqrt{n}\)._
* _(iv) If \(\sqrt{m^{\gamma_{1}}/n}\ \lambda_{n}\to\lambda_{1}\geq 0\), \(\eta_{n}/\sqrt{n}\to\infty\), and \(\eta_{n}/\sqrt{nm^{\gamma_{2}}}\to\eta_{1}\geq 0\), then the convergence rate is \(\sqrt{n}\)._
* _(v) If \(\sqrt{m^{\gamma_{1}}/n}\ \lambda_{n}\to\infty\), \(\lambda_{n}/\sqrt{n}\to\lambda_{0}\geq 0\), and \(\eta_{n}/\sqrt{n}\to\eta_{0}\geq 0\), then the convergence rate is \(\sqrt{n}\)._
* _(vi) If \(\lambda_{n}/\sqrt{n}\to\infty\), \(\lambda_{n}/n\to 0\), and \(\lambda_{n}/\eta_{n}\to\infty\), then the convergence rate is \(n/\lambda_{n}\), which is slower than \(\sqrt{n}\)._
Theorem 4.1 and Corollary 4.2 show that the Adaptive Transfer Lasso achieves a convergence rate of \(\sqrt{m}\) in cases (i) and (ii). This property is inherited from the Transfer Lasso. The asymptotic distribution in case (i) is the same as that of the initial estimator. On the other hand, the asymptotic distribution in case (ii) is remarkable: it is the same as that of the initial estimator for the active variables, whereas it is zero for the inactive variables. This implies that inactive parameters shrink to zero quickly.
We also provide the results of active/invariant variable selection consistency for the Adaptive Transfer Lasso.
Figure 3: Phase diagrams of convergence rate (top) and active/invariant variable selection (bottom left/right) with \(\lambda_{n}\) and \(\eta_{n}\) for the Adaptive Transfer Lasso in Theorems 4.1, 4.3, 4.4, and Corollary 4.2. They are \(\sqrt{m}\)-consistent in (i)-(ii), \(\sqrt{n}\)-consistent in (iii)-(v), and sub-\(\sqrt{n}\)-consistent in (vi). They yield consistent active variable selection in (ii), (v), and (vi) (left), while consistent invariant variable selection in (i), (ii), and (iv) (right). Estimators in (ii) satisfy \(\sqrt{m}\)-consistency and active/invariant variable selection consistency.

**Theorem 4.3** (Consistent Active Variable Selection for Adaptive Transfer Lasso).: _For the cases (ii), (v), and (vi) in Theorem 4.1, the Adaptive Transfer Lasso yields consistent active variable selection, that is,_
\[P(\hat{S}_{n}^{\#}=S)\to 1. \tag{40}\]
Proof.: The proof is given in B.3.2.
**Theorem 4.4** (Consistent Invariant Variable Selection for Adaptive Transfer Lasso).: _For the cases (i), (ii), and (iv) in Theorem 4.1, the Adaptive Transfer Lasso yields consistent invariant variable selection, that is,_
\[P(\hat{\beta}_{S}=\tilde{\beta}_{S})\to 1. \tag{41}\]
Proof.: The proof is given in B.3.3.
Theorems 4.3 and 4.4 imply that both active and invariant variable selection consistency hold in case (ii). Hence, we have the following corollary.
**Corollary 4.5** (Oracle Region for Adaptive Transfer Lasso).: _For the case (ii) in Theorem 4.1, the Adaptive Transfer Lasso estimator satisfies_
* \(\sqrt{m}\)_-consistent:_ \(\sqrt{m}(\hat{\beta}_{n}^{\#}-\beta^{*})\) _converges to some distribution,_
* _consistent active variable selection:_ \(\hat{S}_{n}^{\#}=S\) _with probability tending to_ \(1\)_,_
* _consistent invariant variable selection:_ \(\hat{\beta}_{S}=\tilde{\beta}_{S}\) _with probability tending to_ \(1\)_._
Corollary 4.5 shows that the Adaptive Transfer Lasso incorporates the advantages of both the Adaptive Lasso and the Transfer Lasso. The hyperparameters \(\gamma_{1}\) and \(\gamma_{2}\) play a crucial role in this property. If \(\gamma_{1}=\gamma_{2}=0\), then the region (ii) disappears and the method reduces to the Transfer Lasso. If either \(\gamma_{1}\) or \(\gamma_{2}\) is positive, then the region (ii) appears and the estimator attains \(\sqrt{m}\)-consistency and active/invariant variable selection consistency. Both \(\gamma_{1}\) and \(\gamma_{2}\) contribute to expanding the region (ii). One possible advantage of using \(\gamma_{2}>0\) rather than \(\gamma_{1}>0\) is stability: there is no division by zero even when the initial estimator is sparse and some of its values are exactly zero.
Figure 3 shows the phase diagrams that demonstrate the relation between the hyperparameters \((\lambda_{n},\eta_{n})\) and the asymptotic properties of the Adaptive Transfer Lasso. We see that the region (ii) is the intersection of the part with \(\sqrt{m}\)-consistency and the part with active/invariant variable selection consistency. Such a region exists neither in the Adaptive Lasso nor in the Transfer Lasso.
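To make the case boundaries concrete, consider the setting used in our experiments below (\(m=n^{2}\), \(\gamma_{1}=\gamma_{2}=1\)) and write \(\lambda_{n}=n^{a}\), \(\eta_{n}=n^{b}\); each limit condition in Theorem 4.1 then reduces to a comparison of exponents. The following is a rough sketch (our own helper, with boundary cases where a limit equals a nonzero constant glossed over):

```python
# Rough region lookup for Figure 3 with lambda_n = n^a, eta_n = n^b,
# m = n^2, gamma_1 = gamma_2 = 1; exact boundaries are glossed over.
def region(a, b):
    if b > 1.5 and b > a + 2:        # eta dominates sqrt(n m^g2) and m^(g1+g2) lam
        return "(i)"
    if a > -0.5 and b > 0.5 and b < a + 2:
        return "(ii)"                # sqrt(m)-rate plus selection consistency
    if a <= -0.5 and b <= 0.5:
        return "(iii)"
    if a <= -0.5 and 0.5 < b <= 1.5:
        return "(iv)"
    if -0.5 < a <= 0.5 and b <= 0.5:
        return "(v)"
    if 0.5 < a < 1 and a > b:
        return "(vi)"
    return "boundary/other"

# The Section 5.1 hyperparameter choices land in the intended regions:
print(region(-0.5, 2), region(0.5, 1.5), region(-1, 0.25))   # (i) (ii) (iii)
print(region(-1, 1), region(0, 0.25), region(0.75, 0.5))     # (iv) (v) (vi)
```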
## 5 Empirical Results
We first empirically validate the theoretical properties. We then compare the performance of various methods through extensive simulations. Appendix C provides additional experimental results1.
Footnote 1: The codes will be available at a later date.
### Empirical Validation of Theory
In this subsection, we empirically validate the theoretical results for the Transfer Lasso and the Adaptive Transfer Lasso.
We first evaluated the \(\ell_{2}\) norm of the estimation error with respect to sample size. Theoretically, the convergence rate is \(\sqrt{m}\), \(\sqrt{n}\), and so on, depending on the hyperparameters. Assuming the convergence rate is \(l(n)\), we have \(E[\log\|\hat{\beta}-\beta^{*}\|_{2}]=\text{const.}-\log l(n)\), since \(l(n)\|\hat{\beta}-\beta^{*}\|_{2}\) converges to some distribution. Therefore, by drawing a graph with \(E[\log\|\hat{\beta}-\beta^{*}\|_{2}]\) on the vertical axis and \(\log n\) on the horizontal axis, the convergence rate can be empirically calculated from its slope. Assuming \(m=n^{2}\), the slope is \(-1/2\) when \(\sqrt{n}\)-consistent, and \(-1\) when \(\sqrt{m}\)-consistent.
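For illustration, the slope computation amounts to a one-line least-squares fit. The `errors` values below are synthetic placeholders standing in for the measured mean log-errors, not actual results:

```python
# A sketch of the slope-based rate check: regress mean log l2-error on log n.
# A slope near -1 indicates sqrt(m)-consistency (m = n^2); near -1/2, sqrt(n).
import numpy as np

ns = np.array([20, 50, 100, 200, 500, 1000, 2000, 5000], dtype=float)
errors = 0.3 - 0.5 * np.log(ns)      # synthetic data mimicking a sqrt(n) rate
slope, _ = np.polyfit(np.log(ns), errors, deg=1)
print(f"empirical rate exponent: {-slope:.2f}")   # ~0.5 here by construction
```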
We generated data by \(y_{i}=x_{i}^{\top}\beta^{*}+\varepsilon_{i}\) (\(i=1,\ldots,n\)) where \(x_{i}(\in\mathbb{R}^{10})\stackrel{{ i.i.d.}}{{\sim}}\mathcal{N} (0,\Sigma)\), \(\Sigma_{jk}=0.5^{|j-k|}\), \(\varepsilon_{i}\stackrel{{ i.i.d.}}{{\sim}}\mathcal{N}(0,\sigma^{2})\), \(\sigma=1\), and \(\beta^{*}=[3,1.5,0,0,2,0,0,\ldots,0]^{\top}(\in\mathbb{R}^{10})\) (as in [21]). We generated source data of size \(m\) and target data of size \(n\) with \(m=n^{2}\) and \(n=20,50,100,200,500,1000,2000,5000\). The initial estimators were obtained by the ordinary least squares using source data. The hyperparameters for each method were determined as follows according to Figures 1, 2, and 3.
* Lasso: \(\lambda_{n}=n^{1/4}\) (i) and \(\lambda_{n}=n^{3/4}\) (ii).
* Adaptive Lasso: \(\gamma=1\). \(\lambda_{n}=n^{-1}\) (i), \(n^{1/4}\) (ii), and \(n^{3/4}\) (iii).
* Transfer Lasso: \((\lambda_{n},\eta_{n})=(n^{1/2},n^{3/4})\) (i), \((n^{1/4},n^{1/4})\) (ii), and \((n^{3/4},n^{1/2})\) (iii).
* Adaptive Transfer Lasso: \(\gamma_{1}=\gamma_{2}=1\). \((\lambda_{n},\eta_{n})=(n^{-1/2},n^{2})\) (i), \((n^{1/2},n^{3/2})\) (ii), \((n^{-1},n^{1/4})\) (iii), \((n^{-1},n)\) (iv), \((1,n^{1/4})\) (v), and \((n^{3/4},n^{1/2})\) (vi).
We performed each experiment ten times and evaluated their averages and standard errors.
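A minimal sketch of the data-generating process described above (our own code, not the implementation referenced in the footnote) is:

```python
# Simulation setup of Section 5.1: AR(1)-style covariance, sparse beta*,
# source sample of size m = n^2, OLS initial estimator on the source data.
import numpy as np

rng = np.random.default_rng(0)
p, sigma = 10, 1.0
beta_star = np.array([3, 1.5, 0, 0, 2, 0, 0, 0, 0, 0], dtype=float)
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
L = np.linalg.cholesky(Sigma)

def generate(n):
    """Draw (X, y) with x_i ~ N(0, Sigma) and y_i = x_i' beta* + eps_i."""
    X = rng.standard_normal((n, p)) @ L.T
    y = X @ beta_star + sigma * rng.standard_normal(n)
    return X, y

n = 100
m = n ** 2                        # source sample size
Xs, ys = generate(m)              # source data -> initial OLS estimator
beta_init, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
Xt, yt = generate(n)              # target data for the penalized fit
```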
Figure 4 shows the \(\ell_{2}\) estimation errors for the Lasso, the Adaptive Lasso, the Transfer Lasso, and the Adaptive Transfer Lasso with respect to sample size. The slopes of the Transfer Lasso in region (i) and of the Adaptive Transfer Lasso in regions (i) and (ii) are \(-1\), indicating that the convergence rate was \(n=\sqrt{m}\). For the other methods or regions, the slopes are \(-0.5\) or greater, which confirms that the convergence rate is \(\sqrt{n}\) or less. These results were fully consistent with Theorems 3.5, 4.1, and 4.3.
We can observe two potential advantages of the Adaptive Transfer Lasso. First, although the convergence rate (for \(n\geq 500\)) is \(\sqrt{n}\) in regions (v) and (vi), the estimation error lies on the line of the convergence rate \(\sqrt{m}\) for \(n<500\). In other words, even in regions where the convergence rate is \(\sqrt{n}\), the estimation error can be reduced when the sample size is small. Second, the estimation error is consistently smaller in region (ii) than in region (i), although the convergence rates are comparable between the two regions. This might be because the estimator in (i) is more likely to be perfectly matched to the initial estimator, whereas the estimator in (ii) is more likely to be matched to the initial estimator for active variables, but not for the inactive variables, where it is more likely to be zero.

Figure 4: \(\ell_{2}\) estimation errors for the Lasso (top left), the Adaptive Lasso (top right), the Transfer Lasso (bottom left), and the Adaptive Transfer Lasso (bottom right) with respect to sample size. The convergence rates of the Transfer Lasso in region (i) and the Adaptive Transfer Lasso in regions (i) and (ii) are \(\sqrt{m}\) (the slopes are \(-1\)), whereas the others are \(\sqrt{n}\) or less (the slopes are \(-1/2\) or greater).
Having found that the convergence rate can be empirically evaluated accurately, we next empirically drew phase diagrams for the Transfer Lasso and the Adaptive Transfer Lasso as in Figure 3. The experimental setup was the same as in the previous subsection and \(m=n^{2}\). The hyperparameters \(\lambda_{n}\) and \(\eta_{n}\) were set to \(n^{\delta}\) with \(\delta=-2,-1.75,-1.5,\ldots,1.75,2\), respectively. The convergence rates were calculated from the slopes of the \(\ell_{2}\) errors for \(n=1000\) and \(n=5000\). We plotted the exponential parts of \(n\) in the convergence rates, taking the value \(1\) if \(\sqrt{m}\)-consistent and \(0.5\) if \(\sqrt{n}\)-consistent. Active variable selection consistency was evaluated as the ratio of correctly estimated zeros/non-zeros among all variables for \(n=5000\). Invariant variable selection consistency was evaluated by the ratio of variables that did not change from the initial estimator among the active variables for \(n=5000\).
Figure 5 illustrates the empirical phase diagrams on a log-log scale for the Transfer Lasso and the Adaptive Transfer Lasso. As Theorems 3.5-3.11 suggest, the Transfer Lasso achieves both \(\sqrt{m}\)-consistency and invariant variable selection consistency in the lower right region (i), but does not have active variable selection consistency. The other regions also do not satisfy these properties simultaneously. For the Adaptive Transfer Lasso, on the other hand, the upper right region (ii) satisfies the properties of \(\sqrt{m}\)-consistency and active/invariant variable selection consistency. The empirical convergence rates and active/invariant variable selection ratios reproduce Theorems 4.1, 4.3, and 4.4 well in the other regions as well. These empirical results confirm the theoretical results (Theorems 3.5-4.4, Figures 1-3).
### Empirical Comparison of Methods
In this subsection, we compare the methods in various experimental settings based on hyperparameter determination by cross-validation. The experimental settings include various source/target data sample sizes, number of dimensions, signal-to-noise ratios, and initial estimators. We mainly considered two cases: one with a large amount of source data and the other with the same amount of source data as the target data.
First, we supposed that we have a large amount of source data, with sample size \(m=10000\). The simulation setting follows the previous subsections. We used \(\sigma=1,3,6,10\); \(p=10,20,50,100\); and \(n=10,20,50,100,200,\ldots,5000,10000\).
Initial estimators were obtained by the Lasso because the number of dimensions \(p\) can be greater than the sample size \(n\) in this experiment. We compared other initial estimators, including Ridge, Ridgeless [2, 8], and Lassoless [14, 12], in Appendix C.2. The search spaces were \(\gamma=0.5,1,2\) for the Adaptive Lasso; \(\alpha:=\lambda_{n}/(\lambda_{n}+\eta_{n})=0.75,0.5,0.25\) for the Transfer Lasso; and \((\gamma_{1},\gamma_{2})=(0.5,0.5),(1,1),(2,2)\) and \(\alpha:=\lambda_{n}/(\lambda_{n}+\eta_{n})=0.75,0.5,0.25\) for the Adaptive Transfer Lasso. The hyperparameter \(\lambda_{n}\) was determined by 10-fold cross-validation with \(\lambda_{\min}/\lambda_{\max}=10^{-6}\), where \(\lambda_{\max}\) is automatically determined by Theorem 4 in [16]. If \(|\tilde{\beta}_{j}|\leq 10^{-3}\), then we set \(|\tilde{\beta}_{j}|=10^{-3}\) to avoid division by zero.
We evaluated the performance by two metrics: the \(\ell_{2}\) norm of the estimation error and the F1 score for variable selection. The F1 score is the harmonic mean of precision and recall, where precision = (the number of correctly selected variables) / (the number of selected variables) and recall = (the number of correctly selected variables) / (the number of true active variables). We used the F1 score because it allows us to evaluate the performance of variable selection even when there is an imbalance between the number of active and inactive variables. We also evaluated other metrics in Appendix C.3. They included RMSE for prediction evaluation and sensitivity, specificity, positive predictive value, and the number of active variables for feature selection evaluation.

Figure 5: log-log phase diagrams of convergence rate (top), active variable selection ratio (middle), and invariant variable selection ratio (bottom) for the Transfer Lasso (left) and the Adaptive Transfer Lasso (right). These empirical results confirm the theoretical results of Figures 2 and 3.
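For reference, the F1 computation amounts to a few lines; the helper below is our own notation and follows the definition above exactly:

```python
# Variable-selection F1: precision/recall over the supports of the estimated
# and true coefficient vectors, with a small tolerance defining "nonzero".
import numpy as np

def selection_f1(beta_hat, beta_true, tol=1e-10):
    selected = np.abs(beta_hat) > tol
    active = np.abs(beta_true) > tol
    tp = np.sum(selected & active)
    if selected.sum() == 0 or tp == 0:
        return 0.0
    precision = tp / selected.sum()
    recall = tp / active.sum()
    return 2 * precision * recall / (precision + recall)
```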
The results are shown in Figure 6. In terms of estimation error, the Transfer Lasso and the Adaptive Transfer Lasso outperformed the other methods. The Adaptive Lasso was superior to the Lasso, but it was inferior to the Transfer Lasso and the Adaptive Transfer Lasso. In terms of variable selection, the Adaptive Lasso and the Adaptive Transfer Lasso outperformed the others, and the Adaptive Transfer Lasso was slightly superior to the Adaptive Lasso. These results imply the superiority of the Adaptive Transfer Lasso with initial estimators obtained from large amounts of source data. The Adaptive Lasso, however, does not fully utilize the initial estimators in this setting.

Figure 6: \(\ell_{2}\) estimation errors (top) and variable selection F1 scores (bottom) for a large amount of source data.
Next, we supposed that we have a medium amount of source data, whose sample size is the same as that of the target data. We used the same data generation process, comparison methods, and performance measurements as above.
The results are shown in Figure 7. All methods were comparable in terms of estimation error, but in terms of variable selection, the Adaptive Lasso and the Adaptive Transfer Lasso were superior to the others. The Adaptive Lasso and the Adaptive Transfer Lasso had similar performance for both estimation error and variable selection. This is consistent with our theoretical analyses.

Figure 7: \(\ell_{2}\) estimation errors (top) and variable selection F1 scores (bottom) for a medium amount of source data.
## 6 Discussion
We discuss additional comparisons among methods from two perspectives: regularization contours and prior distributions. We also discuss future work.
### Regularization Contours
The regularization contours help to intuitively capture the strength and pattern of the regularization. Figure 8 shows the contours for an initial estimator with a small component \(\tilde{\beta}_{1}=0.5\) and a large component \(\tilde{\beta}_{2}=2\).
The contours of the Adaptive Lasso are pointed at the coordinate axes (where some elements are zero) and are especially sharp where the initial estimator is small. The contours of the Transfer Lasso are pointed at the points where some elements are zero or equal to the initial estimator, but they are not so sharp. The contours for the Adaptive Transfer Lasso are pointed where some elements are zero or equal to the initial estimator, and the sharpness varies depending on the hyperparameters. These observations indicate that the Adaptive Transfer Lasso flexibly changes the strength of regularization depending on the initial estimator.
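These contours are easy to reproduce numerically. A sketch (parameter values are illustrative, not the exact grid of Figure 8; shown here for the Adaptive Transfer Lasso penalty):

```python
# Evaluate the Adaptive Transfer Lasso penalty on a grid around
# beta_tilde = (0.5, 2) and draw its level sets.
import numpy as np
import matplotlib.pyplot as plt

beta_tilde = np.array([0.5, 2.0])
lam, eta, g1, g2 = 0.5, 0.5, 1.0, 1.0
v = 1.0 / np.abs(beta_tilde) ** g1           # adaptive weights
w = np.abs(beta_tilde) ** g2                 # transfer weights

b1, b2 = np.meshgrid(np.linspace(-1, 3, 300), np.linspace(-1, 3, 300))
B = np.stack([b1, b2], axis=-1)
pen = (lam * (v * np.abs(B)).sum(-1)
       + eta * (w * np.abs(B - beta_tilde)).sum(-1))

plt.contour(b1, b2, pen, levels=15)
plt.scatter(*beta_tilde, marker="x")         # the initial estimator
plt.xlabel(r"$\beta_1$"); plt.ylabel(r"$\beta_2$"); plt.show()
```

Setting \(\eta=0\) (or \(\lambda=0\)) in this sketch recovers the Adaptive Lasso (or pure transfer) contours.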
### Prior Distribution
From a Bayesian perspective, the Lasso regularization (2) can be seen as the negative log-density of a Laplace prior,
\[\lambda|\beta_{j}|=-\log P(\beta_{j};\lambda)+const.,\ P(z;\lambda):=\frac{ \lambda}{2}\exp\left(-\lambda|z|\right). \tag{42}\]
A similar view is possible for the Adaptive Lasso, the Transfer Lasso, and the Adaptive Transfer Lasso. Most generally, the prior distribution of the Adaptive Transfer Lasso is given by
\[\lambda v_{j}|\beta_{j}|+\eta w_{j}|\beta_{j}-\tilde{\beta}_{j}|=-\log P(\beta_{j};\lambda,\eta,v_{j},w_{j},\tilde{\beta}_{j})+const., \tag{43}\]
\[P(\beta_{j};\lambda,\eta,v_{j},w_{j},\tilde{\beta}_{j}):=\frac{1}{Z}\exp\left(-\lambda v_{j}|\beta_{j}|-\eta w_{j}|\beta_{j}-\tilde{\beta}_{j}|\right), \tag{44}\]
\[Z:=\frac{2\lambda v_{j}}{\lambda^{2}v_{j}^{2}-\eta^{2}w_{j}^{2}}\exp\left(-\eta w_{j}|\tilde{\beta}_{j}|\right)-\frac{2\eta w_{j}}{\lambda^{2}v_{j}^{2}-\eta^{2}w_{j}^{2}}\exp\left(-\lambda v_{j}|\tilde{\beta}_{j}|\right). \tag{45}\]
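Since normalizers like \(Z\) are easy to get wrong, a quick numerical sanity check of (45) against direct quadrature can be useful. The parameter values below are arbitrary and assume \(\lambda v_{j}\neq\eta w_{j}\):

```python
# Compare the closed-form normalizer Z of Eq. (45) with direct quadrature
# of the unnormalized Adaptive Transfer Lasso prior density.
import numpy as np
from scipy.integrate import quad

lam_v, eta_w, beta_tilde = 1.0, 0.4, 2.0   # illustrative values only

def unnorm(b):
    return np.exp(-lam_v * np.abs(b) - eta_w * np.abs(b - beta_tilde))

Z_num, _ = quad(unnorm, -50, 50)
denom = lam_v**2 - eta_w**2
Z_closed = (2*lam_v/denom) * np.exp(-eta_w*abs(beta_tilde)) \
         - (2*eta_w/denom) * np.exp(-lam_v*abs(beta_tilde))
print(Z_num, Z_closed)   # should agree to quadrature accuracy
```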
The prior distributions for the Adaptive Lasso, the Transfer Lasso, and the Adaptive Transfer Lasso are shown in Figure 9. The prior distributions for the Adaptive Lasso are all sharp at zero, and the distributions become steeper as the initial estimator decreases. This means that the Adaptive Lasso controls the variance of the prior distribution based on how close to zero the initial estimator is. The prior distribution for the Transfer Lasso is sharp at two points: zero and the initial estimator. When the initial estimator is small, it is nearly the same as that for the Lasso, but when the initial estimator is large, the sharpness changes depending on the magnitudes of \(\lambda\) and \(\eta\). The prior distribution for the Adaptive Transfer Lasso is somewhat different from that of the Transfer Lasso: it tends to peak at zero when the initial estimator is close to zero, whereas it tends to peak at the initial estimator when the initial estimator is far from zero. This suggests that the Adaptive Transfer Lasso can make full use of the information from the initial estimator and achieves accurate active and varying variable selection.

Figure 8: Regularization contours for the Adaptive Lasso (top left), the Transfer Lasso (top right), and the Adaptive Transfer Lasso (bottom) with initial estimator \(\tilde{\beta}=[0.5,2]^{\top}\). Hyperparameters are \(\lambda_{n}=0,0.5,1\) (from left to right); \(\eta_{n}=0,0.5,1\) (from top to bottom); and \(\gamma_{1}=\gamma_{2}=1\) for the Adaptive Transfer Lasso.
### Future Work
In our asymptotic analysis, we considered the case where \(p\) is fixed and \(n\) diverges to infinity. The oracle property of the Adaptive Lasso [21] can be extended to the case \(p\gg n\) by high-dimensional asymptotic theory [10], under different kinds of assumptions. As future research, it would be interesting to see whether this can be extended to the Transfer Lasso and the Adaptive Transfer Lasso.
In addition, we assumed that the initial estimator is consistent in our asymptotic analysis. When the initial estimator is incorrectly specified, performance deteriorates significantly for the Adaptive Lasso, but not so much for the Transfer Lasso. It would be interesting to theoretically verify this property.

Figure 9: Prior distributions for the Adaptive Lasso (top left), the Transfer Lasso (top right), and the Adaptive Transfer Lasso (bottom) with various initial estimators. Hyperparameters are \(\gamma_{1}=\gamma_{2}=1\) for the Adaptive Transfer Lasso.
## 7 Conclusion
The Adaptive Lasso and the Transfer Lasso are similar, but each has its advantages and disadvantages from an asymptotic perspective. We proposed the Adaptive Transfer Lasso, which has advantages over both the Adaptive Lasso and the Transfer Lasso, and we confirmed this in numerical simulations.
## Acknowledgment
This research is collaborative work of Toshiba Corporation and The Institute of Statistical Mathematics, based on funding from Toshiba Corporation.
|
2305.02360 | Fashionpedia-Ads: Do Your Favorite Advertisements Reveal Your Fashion
Taste? | Consumers are exposed to advertisements across many different domains on the
internet, such as fashion, beauty, car, food, and others. On the other hand,
fashion represents the second highest e-commerce shopping category. Does
consumers' digital record behavior on various fashion ad images reveal their
fashion taste? Do ads from other domains reveal their fashion taste as well? In this
paper, we study the correlation between advertisements and fashion taste.
Towards this goal, we introduce a new dataset, Fashionpedia-Ads, which asks
subjects to provide their preferences on both ad (fashion, beauty, car, and
dessert) and fashion product (social network and e-commerce style) images.
Furthermore, we exhaustively collect and annotate the emotional, visual and
textual information on the ad images from multi-perspectives (abstractive
level, physical level, captions, and brands). We open-source Fashionpedia-Ads
to enable future studies and encourage more approaches to interpretability
research between advertisements and fashion taste. | Mengyun Shi, Claire Cardie, Serge Belongie | 2023-05-03T18:00:42Z | http://arxiv.org/abs/2305.02360v1 | # Fashionpedia-Ads: Do Your Favorite Advertisements Reveal Your Fashion Taste?
###### Abstract
Consumers are exposed to advertisements across many different domains on the internet, such as fashion, beauty, car, food, and others. On the other hand, fashion represents the second highest e-commerce shopping category. Does consumers' digital record behavior on various fashion ad images reveal their fashion taste? Do ads from other domains reveal their fashion taste as well? In this paper, we study the correlation between advertisements and fashion taste. Towards this goal, we introduce a new dataset, Fashionpedia-Ads, which asks subjects to provide their preferences on both ad (fashion, beauty, car, and dessert) and fashion product (social network and e-commerce style) images. Furthermore, we exhaustively collect and annotate the emotional, visual, and textual information in the ad images from multiple perspectives (abstractive level, physical level, captions, and brands). We open-source Fashionpedia-Ads to enable future studies and encourage more approaches to interpretability research between advertisements and fashion taste. Fashionpedia-Ads can be found at: 1
Footnote 1: Fashionpedia project page: fashionpedia.github.io/home/
## 1 Introduction
It is understandable that there could be some correlation between ads and products within the same domain. For example, a user who likes the style of a neckline in a fashion ad might also like a fashion product with a similar style (Fig. 1). However, is there any correlation between ads and products from different domains? Specifically, can we interpret a consumer's product preference from her website browsing
logs of various advertising domains? In the context of fashion online shopping, however, to our knowledge, no study has investigated the correlation between various ad domains and fashion taste at the consumer level, as shown in Fig. 2.
In this paper, we introduce a new user taste understanding dataset, Fashionpedia-Ads, which asks subjects to provide their preference on ad images of various domains (fashion, beauty, car, food) as well as on fashion product images. Furthermore, unlike fashion product images, ad images usually contain complicated, multi-perspective information (emotional, visual, textual, ...) that causes a consumer to like them. For example, for the same ad image (Fig. 1), one consumer might like it because of the neckline of the dress, while another consumer might like it because of the emotional feeling created in the ad. To fully understand the multi-faceted correlation (both visual and textual) between ads and the fashion product images liked by subjects, we exhaustively annotated both ad and fashion images from different perspectives: 1) abstractive level; 2) physical attributes with associated segmentations (localized attributes); 3) captions; and 4) brands on the ads.
The aim of this work is to enable future studies and encourage more exploration of interpretability research between advertisements and fashion taste. The contributions of this work are: 1) we introduce Fashionpedia-Ads, consisting of three datasets (Ads, Social-network-style, and E-commerce-style fashion products), and bridge the connection among them through the subjects' preference (like or dislike) on these images and multi-perspective annotations (e.g., abstract & physical attributes); 2) we formalize a new task that not only requires models to predict whether a subject likes or dislikes a fashion product image based on given ad images of various domains, but also to provide a rationale explaining why they make this prediction from multiple perspectives.
## 2 Dataset Creation Details
**3 sub-datasets** The Fashionpedia-Ads dataset consists of 3 sub-datasets. 1) _Advertisement dataset:_ this dataset consists of ad images from the fashion, beauty, car, and dessert domains that consumers could see on the internet. We use the images from the ads dataset [11] for our study; 2) _Social network style fashion product dataset:_ this dataset consists of street- and runway-style fashion product images (with human bodies), which simulate images that consumers could see on social network websites. We use the images from the Fashionpedia dataset [13] for our study; 3) _E-commerce style fashion product dataset:_ this dataset consists of online-shopping-style fashion product images (without human bodies), which simulate images that consumers could see on e-commerce websites. We collect the images and associated product information from an online shopping website called nuji.com.
**Build the relationship among the 3 sub-datasets through subject preference** We build the connection among the 3 sub-datasets by asking the same 100 subjects to annotate their preference (like or dislike) for images from all 3 sub-datasets, as illustrated in Fig. 1.
## 3 Advertisement Annotation
**Visual & Abstractive perspective** We extract the sentiment and question-answer pair (Action & Reason) annotations directly from the ads dataset [11]. These annotations are collected from MTurk workers and can help us understand subjects' preferences at the sentiment and emotional level.
**Visual & Physical perspective** We follow the method proposed by the Fashionpedia dataset [13] and construct taxonomies for the 4 ad domains. Using these taxonomies, we annotate localized objects, sub-objects, and fine-grained attributes with associated masks for the ad images of these 4 domains. Why do we annotate this? In contrast to the sentimental and emotional perspectives of the ads, a subject's preference for ads can also be impacted by the visual effect of the products demonstrated in the ads. With this annotation, we can analyze whether subjects' preference is impacted by the physical perspective of the products shown in the ads.
**Textual perspective** We annotate the leading captions indicated in the ad images with masks, because the subjects' preference and emotional feeling can be aroused by the textual information displayed in the ads.
**Brand perspective** We also annotate the brand name indicated in each ad image. The brands and their associated brand culture could also be tied to subjects' ad preferences. For example, if a subject likes fashion products from Dior, she could also like beauty products from Dior, as illustrated in Fig. 1.
Figure 2: Previous fashion datasets focus on recognition [3, 12, 13, 15, 16, 21, 23, 27, 29], detection [2, 13, 19, 31], data mining [4, 5, 6, 9, 17, 20, 18, 25, 28], and retrieval [1, 7, 8, 10, 14, 16, 26, 30]. Fashionpedia-Taste studies fashion taste based on fashion products. Fashionpedia-Ads dataset further investigates fashion taste interpretability and reasoning between advertisements and fashion products.
## 4 Fashion Taste Annotation
To fully understand the contextual information in the fashion product images, we annotate the _social network style_ and _E-commerce style_ fashion product images respectively, as mentioned in Sec. 2. 1) _The social network style dataset:_ we directly use the annotation (task 1/2/3/4) created in Fashionpedia-Taste, which asked the 100 subjects to explain their fashion taste from 3 perspectives: a) localized attributes; b) human attention; c) caption, as shown in Fig. 1; 2) _The E-commerce style dataset:_ similar to the Fashionpedia dataset [13], we annotate localized objects, sub-objects, and fine-grained attributes with associated masks. Additionally, we annotate the dress length and an introversive/extroversive label for each image. Each image also contains detailed product information, as shown in Fig. 1.
## 5 Dataset Analysis
### Visual-Abstractive Attribute Level
**Sentiment** Fig. 3 shows the frequency of the sentiments of the ad images for each ad domain. The results show that the sentiments of fashion and beauty ads are more correlated with each other (such as 'fashionable') than with those of car and dessert ads. Furthermore, fashion and beauty ads are more correlated with 'feminine' sentiment, whereas car ads show a stronger tendency toward 'manly' sentiment. Dessert ads are neutral with respect to both 'feminine' and 'manly' sentiments. All 4 ad domains are correlated with 'creative' and 'active' sentiments.
**Q&A word count statistics** We use SGRank from Textacy [24] to calculate the frequency of words. Fig. 4 shows the most frequent 1-, 2-, and 3-grams for QA. Similar to the sentiment analysis, fashion and beauty ads share similar emotional feelings: 'attractive', 'sexy', 'stylish', and 'beautiful' are the most frequent adjectives annotated by MTurk workers. For car ads, the emotional feeling is more related to the functional aspects of cars; the most frequent words include 'high quality', 'reliable', 'great performance', and 'powerful car'. For dessert ads, the emotional words are more connected to the dessert flavors, such as 'delicious', 'new flavor', and 'tasty'.
**Q&A linguistic statistics** We use part-of-speech (POS) tagging from spaCy [22] to tag nouns and adjectives in QA. Table 1 shows the number of most frequent unique nouns by POS. We find that the most frequent common nouns are more associated with high-level descriptions of the product lines indicated in the ad images, such as clothes, jeans, perfume, lipstick, ice cream, and cookies. In contrast, the most frequent proper nouns are more related to the brands mentioned in the ad images, such as Gucci, Chanel, Audi, and Haagen-Dazs. This shows the linguistic diversity of our dataset.
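For reproducibility, this noun counting can be sketched with spaCy's POS tags (NOUN for common nouns, PROPN for proper nouns). The example sentence and variable names are ours, and the `en_core_web_sm` model must be installed separately:

```python
# Count common vs. proper nouns in annotation text with spaCy POS tags.
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Gucci makes a reliable perfume and Chanel sells stylish clothes.")
common = Counter(t.lemma_.lower() for t in doc if t.pos_ == "NOUN")
proper = Counter(t.text for t in doc if t.pos_ == "PROPN")
print(common.most_common(5), proper.most_common(5))
```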
### Visual-Physical Attribute Level
**Localized Attribute distribution** Fig. 5 shows the distribution of attributes annotated in the 4 ad domains. For fashion ads, 'symmetrical' has the highest frequency because most of the garments have balanced silhouettes. For beauty ads, 'glossy' has a high frequency because of the high frequency of lipstick objects in the beauty ads. A similar observation is found for 'cocoa chocolate' in dessert ads. For car ads, the 'elegant & luxury' style has a high frequency, which could correlate with fashion ads and products having an 'elegant & luxury' feeling.
### Textual information
**Ads caption - Word count statistics** For fashion ads, Fig. 4 shows that Kenneth Cole, Calvin Klein, and Banana Republic are the most mentioned brands from the emotional perspective. However, Chanel, Boss, and Lacoste are the most frequent brands according to the captions written on the ad images (Fig. 6). This indicates that Kenneth Cole, Calvin Klein, and Banana Republic ads might contain more provocative visual or textual information that arouses the MTurk workers' sentiment. A similar observation is also found in beauty, car, and dessert ads.
**Ads caption - Linguistic statistics** Table 2 shows the number of most frequent nouns from the ad captions. Compared to the QA annotations (Table 1) collected from the emotional perspective, the captions written on the ad images are more concentrated on brand-related keywords and less diverse. This is expected, since the signal from QA contains more expressions of the viewers' emotional feelings, while the captions written on the ad images are more focused on describing the brand itself or the products advertised in the images.
Chanel, Boss, and Lacoste are the most frequent brands according to the captions written on the ad images (Fig. 6). However, the brand distribution (Fig. 7) shows that Ralph Lauren has the highest number of images. This indicates that some brands might contain more provocative visual or textual information that arouses the viewers' feelings even though they have a relatively lower number of images in the dataset. This needs to be further explored. A similar observation is also found in beauty, car, and dessert ads.
## 6 Conclusion
In this work, we studied the problem of modeling human fashion taste from advertisements. We exhaustively collected and annotated the emotional, visual, and textual information in the ad images from multiple perspectives (abstractive level, physical level, captions, and brands). We therefore hope that Fashionpedia-Ads will facilitate future research on the interpretability between advertisements and fashion taste.
|
2307.13414 | Spin waves in bilayers of transition-metal dichalcogenides | Van der Waals magnetic materials are currently of great interest as materials
for applications in future ultrathin nanoelectronics and nanospintronics. Due
to weak coupling between individual monolayers, these materials can be easily
obtained in the monolayer and bilayer forms. The latter are of specific
interest as they may be considered as natural two-dimensional spin valves. In
this paper, we study theoretically spin waves in bilayers of transition metal
dichalcogenides. The considerations are carried out within the general spin wave
theory based on an effective spin Hamiltonian and the Holstein-Primakoff-Bogolubov
transformation. The spin Hamiltonian includes intra-layer as well as
inter-layer nearest-neighbour exchange interactions, easy-plane anisotropy, and
additionally a weak in-plane easy-axis anisotropy. The bilayer systems consist
of two ferromagnetic (in-plane magnetization) monolayers that are coupled
either ferromagnetically or antiferromagnetically. In the latter case, we
analyse the spin wave spectra in all magnetic phases, i.e. in the
antiferromagnetic, spin-flop, and ferromagnetic ones. | Wojciech Rudziński, Józef Barnaś, Anna Dyrdał | 2023-07-25T11:23:24Z | http://arxiv.org/abs/2307.13414v2 | # Spin waves in bilayers of transition-metal dichalcogenides
###### Abstract
Van der Waals magnetic materials are currently of great interest as materials for applications in future ultrathin nanoelectronics and nanospintronics. Due to weak coupling between individual monolayers, these materials can be easily obtained in the monolayer and bilayer forms. The latter are of specific interest as they may be considered as natural two-dimensional spin valves. In this paper, we study theoretically spin waves in bilayers of transition metal dichalcogenides. The considerations are carried out within the general spin wave theory based on an effective spin Hamiltonian and the Holstein-Primakoff-Bogolubov transformation. The spin Hamiltonian includes intra-layer as well as inter-layer nearest-neighbour exchange interactions, easy-plane anisotropy, and additionally a weak in-plane easy-axis anisotropy. The bilayer systems consist of two ferromagnetic (in-plane magnetization) monolayers that are coupled either ferromagnetically or antiferromagnetically. In the latter case, we analyse the spin wave spectra in all magnetic phases, i.e. in the antiferromagnetic, spin-flop, and ferromagnetic ones.
## I Introduction
Two-dimensional (2D) van der Waals magnetic materials are currently of great interest due to expected applications in atomically thin spintronic devices, such as spin valves, memory elements, and others [1; 2; 3; 4]. In the bulk form, these materials are built of monolayers that are weakly coupled by van der Waals forces. Therefore, it is relatively easy to obtain them in the form of thin films with an arbitrary number of monolayers, down to bilayers and single monolayers. Accordingly, magnetic ordering and magnetic properties of van der Waals materials depend on the number of coupled monolayers.
Expected applications of 2D magnetic materials have also stimulated intensive theoretical investigations of their physical properties, as well as the search for new materials with better characteristics, especially with higher Curie/Neel temperatures. Of particular interest are their electronic and magnetic properties, including spin dynamics and spin wave propagation [5; 6; 7; 8; 9; 10; 11].
The magnetic ground state of van der Waals materials can be relatively easily tuned by external strain or gating [12; 13; 14; 15]. These properties, together with strong magnetoresistance effects, magneto-optical properties, spin filtering, spin-to-charge interconversion, and topological (electronic and magnon) transport, make these materials extremely attractive not only for applications but also for theoretical studies. Moreover, magnetic van der Waals structures offer unique possibilities for tuning magnetic anisotropy [16], which is crucial for various applications where magnetic anisotropy plays an active role. Magnetic anisotropy also allows one to overcome large spin fluctuations in 2D systems and therefore facilitates stabilization of the spin structure. Interestingly, such a tuning of magnetic anisotropy in van der Waals materials can be easily achieved through chemical doping, externally induced strains, or proximity effects [1].
Various groups of magnetic van der Waals materials are currently known. These materials also have various physical, and especially transport, properties, including ferromagnetic semiconductors, e.g. VS\({}_{2}\), VSe\({}_{2}\) [17; 18; 19; 20; 21], itinerant ferromagnets such as Fe\({}_{3}\)GeTe\({}_{2}\) [2; 22], and insulating ferromagnets like CrI\({}_{3}\) [23] or Cr\({}_{2}\)Ge\({}_{2}\)Te\({}_{6}\) [1; 24]. In this paper we focus on magnetic properties, and especially on spin wave excitations, in a specific group of ferromagnetic 2D van der Waals materials, i.e. in transition-metal dichalcogenides (TMDs) [25; 26; 27; 28], MX\({}_{2}\), where M stands for a transition metal atom and X for a chalcogen one. These materials have Curie temperatures in the vicinity of room temperature, so they have certain potential for practical applications. We also note that spin wave excitations in van der Waals structures are currently of great interest for magnonics applications, and have been studied theoretically as well as experimentally in various materials, including TMDs, chromium trihalides (CrI\({}_{3}\), CrCl\({}_{3}\)), and others. It has been shown, for instance, that spin waves in chromium trihalides have features that follow from topological properties of these materials [8; 9]. Such topologically-induced features also occur in TMDs [29; 30; 31].
A specific subgroup of TMDs are the Vanadium-based dichalcogenides, VX\({}_{2}\) with X=S, Se and Te [17; 18; 19; 20; 21]. Monolayers of these compounds appear in two polymorphic forms, the octahedral (T) phase and the trigonal prismatic (H) phase. In both phases, a monolayer consists of a triangular lattice of Vanadium (V) atoms, sandwiched between two chalcogen (X) atomic planes. In-plane positions of the chalcogen atoms in these two planes are the same (one on the other) in the T phase, but these planes are rotated by some angle in the H phase. Experimental and theoretical studies reveal a metal-insulator transition [18] when reducing the thickness of VSe\({}_{2}\) layers down to a 2D monolayer. In turn, intrinsic ferromagnetism in VSe\({}_{2}\) monolayers has been reported in a number of experimental as well as theoretical works (see e.g. [18; 19; 20]).
Individual monolayers of TMDs are usually ferromagnetic, with the magnetization oriented in the layer plane, though systems with magnetization normal to the plane can also occur. Moreover, antiferromagnetically arranged monolayers are possible as well. Of particular interest seem to be the bilayer structures, especially those with ferromagnetic monolayers coupled antiferromagnetically by the interlayer exchange coupling. This is because such bilayers in an external magnetic field may be considered as natural atomically thin spin valves [4], and may be fundamental building blocks of two-dimensional spintronics.
In this paper we analyse spin wave excitations in H-stacked as well as T-stacked bilayers of TMDs, with particular attention to the interplay of the magnetocrystalline anisotropy, the intralayer ferromagnetic exchange coupling, and the interlayer antiferromagnetic or ferromagnetic coupling. In order to evaluate the magnon spectra we use the spin-wave theory for the Heisenberg model Hamiltonian, and apply the Holstein-Primakoff transformation followed by the diagonalization procedure based on the Bogolubov transformation [32; 33; 34; 22]. The role of easy-plane and in-plane easy-axis anisotropies in the spin wave spectrum is analyzed in detail. These anisotropies stabilize the intralayer ferromagnetic order and also lead to a gap in the magnon spectrum at the Brillouin zone center and to a splitting of the magnon modes. The model and theoretical method used to study the spin wave spectra are described in more detail in Sec. II, where the spin Hamiltonian used to describe TMDs is defined. Numerical results are presented and discussed in Sec. III. A summary and final conclusions are given in Sec. IV.
## II Antiferromagnetically coupled bilayers
We consider the case when the perpendicular anisotropy is of easy-plane type, and assume a coordinate system with the \(y\) axis normal to the layers and the \(z\) axis along the in-plane easy axis. Though the in-plane easy-axis anisotropy is rather small in Vanadium dichalcogenides, we consider a general model in which this anisotropy may be appreciable and may play some role. We also assume an external magnetic field along the easy axis.
To describe spin waves in a bilayer consisting of two ferromagnetic monolayers coupled antiferromagnetically we use the following model spin Hamiltonian:
\[H=\sum_{\alpha}H^{\alpha}+H_{\rm int}, \tag{1}\]
where \(\alpha\)=T (\(\alpha\)=B) stands for the top (bottom) monolayer, and the Hamiltonian of the \(\alpha\)th monolayer includes three terms, \(H^{\alpha}=H^{\alpha}_{\rm ex}+H^{\alpha}_{A}+H^{\alpha}_{h}\). Here, the first term stands for the ferromagnetic intralayer exchange coupling, the second term includes the magnetic anisotropies in the system, and the last term is the Zeeman energy in an external magnetic field \(h\). The Hamiltonian \(H^{\alpha}\) can be written explicitly in the form,
\[H^{\alpha}=J_{1}\sum_{\mathbf{r},\mathbf{\delta}}\mathbf{S}_{\mathbf{r}, \alpha}\cdot\mathbf{S}_{\mathbf{r}+\mathbf{\delta},\alpha}+\frac{D_{y}}{2}\sum_{\mathbf{r} }\left(S^{y}_{\mathbf{r},\alpha}\right)^{2}\] \[-\frac{D_{z}}{2}\sum_{\mathbf{r}}\left(S^{z}_{\mathbf{r},\alpha}\right)^{2 }-h\sum_{\mathbf{r}}S^{z}_{\mathbf{r},\alpha}. \tag{2}\]
Here, the exchange coupling between Vanadium atoms within the monolayer is ferromagnetic, \(J_{1}<0\), the easy-plane anisotropy constant \(D_{y}\) and the in-plane easy-axis anisotropy constant \(D_{z}\) are both defined as positive, while the magnetic field \(h\) is taken in the units of \(g\mu_{B}\), where \(g\) denotes the gyromagnetic factor and \(\mu_{B}\) is the Bohr magneton. The summation over \(\mathbf{r}\) denotes here the summation over lattice sites, while that over \(\mathbf{\delta}\) is the summation over nearest neighbours, with \(\mathbf{\delta}\) standing for the vectors connecting a particular site to its in-plane nearest neighbours (NNs). [We neglect here exchange coupling between next-nearest neighbours.] In turn, the last term in Eq.(1) describes the antiferromagnetic exchange coupling between the two monolayers,
\[H_{int}=J_{2}\sum_{\mathbf{r},\mathbf{\delta}}\mathbf{S}_{\mathbf{r},T}\cdot\mathbf{S}_{ \mathbf{r}+\mathbf{\delta},B}, \tag{3}\]
with \(J_{2}>0\). Note, \(\mathbf{\delta}\) corresponds here to inter-layer NNs (NNs in the adjacent layer).
When the system defined above is in an external magnetic field applied along the in-plane easy axis (\(z\) axis), one may expect in general three different stable spin configurations in the bilayer: (i) the antiferromagnetic state, with the spins of the bottom layer oriented, say, along the \(+z\) axis and those of the top layer along the \(-z\) axis; (ii) the spin-flop (canted) phase, with the spins of the two monolayers oriented in the atomic planes at an angle \(\chi\) to the \(z\) axis; and (iii) the ferromagnetic phase, with the spins of both layers oriented along the \(z\) axis (this corresponds to \(\chi=0\)). The transition from the antiferromagnetic to the spin-flop phase occurs at \(h=h_{\rm sf}\) (see Appendix A), with
\[h_{\rm sf}=S\sqrt{D_{z}(2\xi J_{2}-D_{z})} \tag{4}\]
for \(2\xi J_{2}-D_{z}>0\), where \(\xi\) denotes the structure factor, \(\xi=3\) and \(\xi=1\) for the H and T phases, respectively. The spin-flop phase appears in the range of magnetic fields \(h_{\rm sf}<h<h_{s}\), where \(h_{s}\) is the threshold magnetic field (the saturation field), at which the transition from
the spin-flop phase to the ferromagnetic one occurs (see Appendix A),
\[h_{s}=S(2\xi J_{2}-D_{z}). \tag{5}\]
If \(D_{z}=0\), then \(h_{\rm sf}=0\) and \(h_{s}=2S\xi J_{2}\). Note that the spin-flop phase appears when \(2\xi J_{2}>D_{z}\), while in the opposite case, \(2\xi J_{2}<D_{z}\), there is a direct transition from the antiferromagnetic to the ferromagnetic phase (there is then no spin-flop phase).
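For orientation, plugging representative numbers into Eqs. (4) and (5) gives the spin-flop window directly. A short sketch (the parameter values are illustrative only, in the same units as \(J_{2}\) and \(D_{z}\)):

```python
# Critical fields of Eqs. (4)-(5); requires 2*xi*J2 > Dz for a spin-flop phase.
import numpy as np

S, J2, Dz, xi = 1.0, 1.0, 0.1, 3   # xi = 3 for H stacking, 1 for T stacking
h_sf = S * np.sqrt(Dz * (2 * xi * J2 - Dz))   # Eq. (4)
h_s  = S * (2 * xi * J2 - Dz)                 # Eq. (5)
print(h_sf, h_s)   # spin-flop phase exists for h_sf < h < h_s
```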
### Spin waves in the antiferromagnetic phase
Figure 1: Top view of the VX\({}_{2}\) (X = Se, Te, S) monolayer in the H phase (a) and T phase (c). The monolayer is in the \((x,z)\) plane while the axis \(y\) is normal to the plane. The side view of the corresponding bilayer systems along the \(z\) axis is shown in (b) and (d) for the H and T phases, respectively. The ground-state spin orientation of the bottom layer is assumed along the \(z\) axis, while that of the top layer is along the \(-z\) axis for the antiferromagnetically coupled bilayers (black arrows) and along the \(z\) axis for the ferromagnetic bilayers (yellow arrows). For the trigonal prismatic structure (H phase), one monolayer sits on the other with a \(\pi\) rotation, which implies that each vanadium atom (see the position of the red dots) has three NNs in the adjacent monolayer. For the octahedral structure (T phase) of VX\({}_{2}\), each vanadium atom has one NN in the adjacent monolayer.

We consider first the antiferromagnetic (AF) phase, which appears below the transition field to the spin-flop phase. The spin moments of the bottom layer are along the \(z\) axis, while those of the top layer are along the \(-z\) axis, see Fig. 1. In the first step we perform the Holstein-Primakoff transformation:
\[S^{x}_{\mathbf{r},T}=\sqrt{\frac{S}{2}}(a^{+}_{\mathbf{r},T}+a_{ \mathbf{r},T}),\] \[S^{y}_{\mathbf{r},T}=i\sqrt{\frac{S}{2}}(a_{\mathbf{r},T}-a^{+}_{ \mathbf{r},T}),\] \[S^{z}_{\mathbf{r},T}=a^{+}_{\mathbf{r},T}a_{\mathbf{r},T}-S, \tag{6}\]
for spins oriented antiparallel to the z-axis (top layer), and
\[S^{x}_{\mathbf{r},B}=\sqrt{\frac{S}{2}}(a^{+}_{\mathbf{r},B}+a_{ \mathbf{r},B}),\] \[S^{y}_{\mathbf{r},B}=i\sqrt{\frac{S}{2}}(a^{+}_{\mathbf{r},B}-a_ {\mathbf{r},B}),\] \[S^{z}_{\mathbf{r},B}=S-a^{+}_{\mathbf{r},B}a_{\mathbf{r},B}, \tag{7}\]
for spins oriented along the z-axis (bottom layer). Here, \(a^{+}_{\mathbf{r},\alpha}\) (\(a_{\mathbf{r},\alpha}\)) is the bosonic creation (annihilation) operator.
Upon inserting the Holstein-Primakoff transformation into Eqs. (1) to (3), keeping terms up to the second order in the magnon operators and disregarding any constant terms, one finds,
\[H=J_{1}S\sum_{\mathbf{r},\mathbf{\delta},\alpha}\Big(a^{+}_{\mathbf{r},\alpha}a_{\mathbf{r}+\mathbf{\delta},\alpha}+a^{+}_{\mathbf{r}+\mathbf{\delta},\alpha}a_{\mathbf{r},\alpha}-a^{+}_{\mathbf{r},\alpha}a_{\mathbf{r},\alpha}-a^{+}_{\mathbf{r}+\mathbf{\delta},\alpha}a_{\mathbf{r}+\mathbf{\delta},\alpha}\Big)+J_{2}S\sum_{\mathbf{r}}\Big[\xi\sum_{\alpha}a^{+}_{\mathbf{r},\alpha}a_{\mathbf{r},\alpha}+\sum_{\mathbf{\delta}}\Big(a^{+}_{\mathbf{r},T}a^{+}_{\mathbf{r}+\mathbf{\delta},B}+a_{\mathbf{r},T}a_{\mathbf{r}+\mathbf{\delta},B}\Big)\Big]-\frac{D_{y}S}{4}\sum_{\mathbf{r},\alpha}\Big(a^{+}_{\mathbf{r},\alpha}a^{+}_{\mathbf{r},\alpha}+a_{\mathbf{r},\alpha}a_{\mathbf{r},\alpha}-2a^{+}_{\mathbf{r},\alpha}a_{\mathbf{r},\alpha}\Big)+D_{z}S\sum_{\mathbf{r},\alpha}a^{+}_{\mathbf{r},\alpha}a_{\mathbf{r},\alpha}+h\sum_{\mathbf{r}}\Big(a^{+}_{\mathbf{r},B}a_{\mathbf{r},B}-a^{+}_{\mathbf{r},T}a_{\mathbf{r},T}\Big). \tag{8}\]
Then we perform the Fourier transformation to the momentum space,
\[a_{\mathbf{r},\alpha}=\frac{1}{\sqrt{N}}\sum_{\mathbf{k}}a_{ \mathbf{k},\alpha}e^{-i\mathbf{k}\cdot\mathbf{r}},\] \[a^{+}_{\mathbf{r},\alpha}=\frac{1}{\sqrt{N}}\sum_{\mathbf{k}}a^{+ }_{\mathbf{k},\alpha}e^{i\mathbf{k}\cdot\mathbf{r}}, \tag{9}\]
where \(N\) is the number of unit cells, and \(\mathbf{k}\) is the wavevector from the first Brillouin zone, see Appendix B. Upon this transformation one may rewrite the Hamiltonian, Eq. (8), in the following form:
\[H = \sum_{\mathbf{k}}\bigg{\{}2J_{1}S\sum_{\alpha}(\gamma_{\mathbf{k} }-6)a^{+}_{\mathbf{k},\alpha}a_{\mathbf{k},\alpha} \tag{10}\] \[+J_{2}S\Big{(}\sum_{\alpha}\xi a^{+}_{\mathbf{k},\alpha}a_{ \mathbf{k},\alpha}+\eta_{\mathbf{k}}a^{+}_{-\mathbf{k},T}a^{+}_{\mathbf{k},B} +\eta^{*}_{\mathbf{k}}a_{-\mathbf{k},T}a_{\mathbf{k},B}\Big{)}\] \[+S\sum_{\alpha}\Big{[}-\frac{D_{y}}{4}(a^{+}_{\mathbf{k},\alpha}a^ {+}_{-\mathbf{k},\alpha}+a_{\mathbf{k},\alpha}a_{-\mathbf{k},\alpha}-2a^{+}_{ \mathbf{k},\alpha}a_{\mathbf{k},\alpha})\] \[+D_{z}a^{+}_{\mathbf{k},\alpha}a_{\mathbf{k},\alpha}\Big{]}+h(a^{ +}_{\mathbf{k},B}a_{\mathbf{k},B}-a^{+}_{\mathbf{k},T}a_{\mathbf{k},T})\bigg{\}},\]
where the quantity \(\xi\) is defined below Eq. (4), while the structure factors \(\gamma_{\mathbf{k}}\) and \(\eta_{\mathbf{k}}\) read
\[\gamma_{\mathbf{k}}=2\bigg{(}\cos(k_{z}a)+2\cos(\frac{\sqrt{3}}{2}k_{x}a)\cos( \frac{1}{2}k_{z}a)\bigg{)}, \tag{11}\]
\[\eta_{\mathbf{k}}=\left\{\begin{array}{ll}1&\text{(for T phase)}\\ e^{i\frac{k_{x}a}{\sqrt{3}}}+2e^{-i\frac{k_{x}a}{2\sqrt{3}}}\cos(\frac{1}{2}k_{z}a)&\text{(for H phase)}\end{array}\right. \tag{12}\]
As one can easily note, Eq.(10) may be written as:
\[H=H_{\mathbf{k}}+H_{-\mathbf{k}}, \tag{13}\]
where
\[H_{\mathbf{k}}=\sum_{\mathbf{k}}\bigg[\bigg(\frac{A^{+}_{\mathbf{k}}}{2}\bigg)a^{+}_{\mathbf{k},B}a_{\mathbf{k},B}+\bigg(\frac{A^{-}_{\mathbf{k}}}{2}\bigg)a^{+}_{\mathbf{k},T}a_{\mathbf{k},T}+B_{\mathbf{k}}a_{-\mathbf{k},T}a_{\mathbf{k},B}+C\sum_{\alpha}a_{\mathbf{k},\alpha}a_{-\mathbf{k},\alpha}\bigg]+H.c., \tag{14}\]
with the coefficients \(A^{\pm}_{\mathbf{k}}\), \(B_{\mathbf{k}}\) and C given by the formulae
\[A^{\pm}_{\mathbf{k}}=S\bigg{[}2J_{1}\big{(}\gamma_{\mathbf{k}}-6 \big{)}+\xi J_{2}+\frac{D_{y}}{2}+D_{z}\bigg{]}\pm h,\] \[B_{\mathbf{k}}=\eta^{*}_{\mathbf{k}}J_{2}S,\] \[C=-\frac{D_{y}S}{4}. \tag{15}\]
Following Refs [32; 33; 34; 22], we define now a new wave vector \(\mathbf{\kappa}\) that runs over half of the vector space of \(\mathbf{k}\), and define the four-dimensional Bogolubov transformation to new bosonic operators \(\Theta_{\pm\mathbf{\kappa},\mu}\) and \(\Theta^{+}_{\pm\mathbf{\kappa},\mu}\), with \(\mu=+,-\) indexing the two magnon modes (to be specified below). This transformation can be written as
\[\mathbf{\Theta}_{\mathbf{\kappa}}=\mathbf{\hat{T}_{4}}\mathbf{a}_{\mathbf{k}}, \tag{16}\]
where
\[\mathbf{\Theta}_{\mathbf{\kappa}}=\left(\begin{array}{c}\Theta_{\mathbf{\kappa},+}\\ \Theta_{\mathbf{\kappa},-}\\ \Theta^{+}_{-\mathbf{\kappa},+}\\ \Theta^{+}_{-\mathbf{\kappa},-}\end{array}\right),\qquad\mathbf{a}_{\mathbf{k}}=\left(\begin{array}{c}a_{\mathbf{k},T}\\ a_{\mathbf{k},B}\\ a^{+}_{-\mathbf{k},T}\\ a^{+}_{-\mathbf{k},B}\end{array}\right), \tag{17}\]
and \(\mathbf{\hat{T_{4}}}\) denotes a \(2\mathcal{N}\times 2\mathcal{N}\) paraunitary matrix (\(\mathcal{N}\) is the number of internal degrees of freedom within the unit cell), which obeys
\[[\mathbf{\Theta_{\kappa}},\mathbf{\Theta}_{\kappa}^{+}]=\mathbf{\hat{T_{4}}}[\mathbf{a_{k}},\bm {a_{k}^{+}}]\mathbf{\hat{T_{4}}}^{+}=\mathbf{\hat{T_{4}}}\mathbf{\sigma_{z}}\mathbf{\hat{T_{4}}} ^{+}=\mathbf{\sigma_{z}}, \tag{18}\]
with the diagonal matrix \((\mathbf{\hat{\sigma_{z}}})_{l,l^{\prime}}=\delta_{l,l^{\prime}}\sigma_{l}\), where \(\sigma_{l}=1\) for \(l\leq\mathcal{N}\) and \(\sigma_{l}=-1\) otherwise. The requirement given by Eq. (18) follows from the bosonic commutation relations \([\Theta_{\kappa,\mu},\Theta_{\kappa^{\prime},\mu^{\prime}}^{+}]=\delta_{ \kappa,\mathbf{\kappa^{\prime}}}\delta_{\mu,\mu^{\prime}}\). As a consequence, Eq. (16) can be written in the form
\[\mathbf{\Theta_{\kappa}}=\sum_{\alpha}\left(\begin{array}{cc}u_{I,\alpha}&v_{I, \alpha}\\ u_{II,\alpha}&v_{II,\alpha}\\ \tilde{u}_{I,\alpha}&\tilde{v}_{I,\alpha}\\ \tilde{u}_{II,\alpha}&\tilde{v}_{II,\alpha}\end{array}\right)\left(\begin{array} []{c}a_{\mathbf{k},\alpha}\\ a_{-\mathbf{k},\alpha}^{+}\\ \end{array}\right), \tag{19}\]
where the Bogolubov coefficients \(u_{\mu,\alpha}\) and \(v_{\mu,\alpha}\) are evaluated at \(\mathbf{\kappa}\) while the coefficients \(\tilde{u}_{\mu,\alpha}\) and \(\tilde{v}_{\mu,\alpha}\) are evaluated at \(-\mathbf{\kappa}\). Moreover, the relation (19) requires the normalization
\[\sum_{\alpha}\left(|u_{\mu,\alpha}|^{2}-|v_{\mu,\alpha}|^{2}\right)=1,\;\sum_{\alpha}\big(|\tilde{u}_{\mu,\alpha}|^{2}-|\tilde{v}_{\mu,\alpha}|^{2}\big)=1, \tag{20}\]
This procedure finally diagonalizes the Hamiltonian,
\[H=\sum_{\mathbf{\kappa},\mathbf{\mu}}\Big{(}\omega_{\mathbf{\kappa},\mu}\Theta_{\mathbf{\kappa },\mu}^{+}\Theta_{\mathbf{\kappa},\mu}+\omega_{-\mathbf{\kappa},\mu}\Theta_{-\mathbf{ \kappa},\mu}^{+}\Theta_{-\mathbf{\kappa},\mu}\Big{)}. \tag{21}\]
Employing Eq. (21), one obtains the relation \([\Theta_{\mathbf{\kappa},\mu},H]=\omega_{\mathbf{\kappa},\mu}\Theta_{\mathbf{\kappa},\mu}\), from which Eqs. (14) and (19) lead to the eigenvalue problem
\[\Lambda_{\mathbf{\kappa}}\mathbf{e}_{\mu}=\omega_{\mathbf{\kappa},\mu}\mathbf{e}_{\mu}, \tag{22}\]
with
\[\Lambda_{\mathbf{\kappa}}=\left(\begin{array}{ccc}A_{\mathbf{\kappa}}^{-}&0&-2C&-B_{ \mathbf{\kappa}}^{*}\\ 0&A_{\mathbf{\kappa}}^{+}&-B_{\mathbf{\kappa}}&-2C\\ 2C&B_{\mathbf{\kappa}}^{*}&-A_{\mathbf{\kappa}}^{-}&0\\ B_{\mathbf{\kappa}}&2C&0&-A_{\mathbf{\kappa}}^{+}\end{array}\right),\mathbf{e}_{\mu}= \left(\begin{array}{c}u_{\mu,T}\\ u_{\mu,B}\\ v_{\mu,T}\\ v_{\mu,B}\end{array}\right). \tag{23}\]
Note that \(A_{\mathbf{\kappa}}^{\pm}=A_{-\mathbf{\kappa}}^{\pm}\), \(B_{\mathbf{\kappa}}^{*}=B_{-\mathbf{\kappa}}\) and therefore we have \(u_{\mu,\alpha}=\tilde{u}_{\mu,\alpha}\), \(v_{\mu,\alpha}=\tilde{v}_{\mu,\alpha}\). Moreover, as \(\omega_{\mathbf{\kappa},\mu}\) is a real quantity, we also have \(\omega_{-\mathbf{\kappa},\mu}=\omega_{\mathbf{\kappa},\mu}\). Finally, from Eqs. (22) and (23) one finds the appropriate dispersion relation.
For convenience, we now return to the usual notation for the wave vector and replace \(\mathbf{\kappa}\) by \(\mathbf{k}\) in this relation, which introduces no confusion or ambiguity. Thus, we write the dispersion relation as
\[\omega_{\mathbf{k},\mu} = \bigg{\{}A_{\mathbf{k}}^{2}-|B_{\mathbf{k}}|^{2}-4C^{2}+h^{2} \tag{24}\] \[\pm 2\bigg{[}4C^{2}|B_{\mathbf{k}}|^{2}+h^{2}\Big{(}A_{\mathbf{k}}^{2}-|B_{ \mathbf{k}}|^{2}\Big{)}\bigg{]}^{\frac{1}{2}}\bigg{\}}^{\frac{1}{2}},\]
where \(A_{\mathbf{k}}\equiv A_{\mathbf{k}}^{\pm}\mp h\), and the signs \(+\) and \(-\) in the fifth term on the right hand side of Eq. (24) correspond to the spin-wave mode indices \(\mu=+\) and \(\mu=-\), respectively. From this equation it follows that a nonzero easy-plane anisotropy, \(D_{y}>0\), leads to splitting of the magnon spectrum in the absence of magnetic field, \(h=0\), into two branches, \(\omega_{\mathbf{k},\pm}\).
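To make the structure of Eq. (24) concrete, the two closed-form branches can be evaluated numerically and cross-checked against a direct diagonalization of \(\Lambda_{\mathbf{k}}\) from Eq. (23). The sketch below is our own illustration, not part of the original work: it assumes the structure factors \(\gamma_{\mathbf{k}}=\sum_{\mathbf{\delta}}\cos(\mathbf{k}\cdot\mathbf{\delta})\) over the six intralayer neighbors and \(\eta_{\mathbf{k}}=\sum_{\mathbf{\delta}}e^{i\mathbf{k}\cdot\mathbf{\delta}}\) over the three interlayer neighbors of the H stacking (Appendix B), and it uses the 2H-VTe\({}_{2}\) parameters of Table 1, taking \(|D_{y}|\) as the easy-plane constant since the DFT values are quoted with the opposite sign convention.

```python
import numpy as np

# Sketch (ours): AFM-phase magnon dispersion of Eq. (24), cross-checked
# against direct diagonalization of the matrix Lambda_k of Eq. (23).
# 2H-VTe2 parameters from Table 1 (meV); xi = 3 for the H phase.
S, J1, J2 = 1.5, -6.33, 0.08
Dy, Dz, xi, h = 3.8121, 0.016, 3.0, 0.0

# Neighbor vectors of Appendix B (in-plane x-z coordinates, a = 1).
d_intra = np.array([[0, 1], [0, -1],
                    [np.sqrt(3)/2, 0.5], [-np.sqrt(3)/2, 0.5],
                    [np.sqrt(3)/2, -0.5], [-np.sqrt(3)/2, -0.5]])
d_inter = np.array([[-1/(2*np.sqrt(3)), 0.5],
                    [-1/(2*np.sqrt(3)), -0.5],
                    [1/np.sqrt(3), 0.0]])

def branches(k):
    gamma = np.sum(np.cos(d_intra @ k))
    eta = np.sum(np.exp(1j * (d_inter @ k)))
    A = S*(2*J1*(gamma - 6) + xi*J2 + Dy/2 + Dz)   # field-free part of Eq. (15)
    B, C = np.conj(eta)*J2*S, -Dy*S/4
    root = np.sqrt(4*C**2*abs(B)**2 + h**2*(A**2 - abs(B)**2))
    closed = np.sort([np.sqrt(A**2 - abs(B)**2 - 4*C**2 + h**2 + s*2*root)
                      for s in (1, -1)])           # Eq. (24)
    Ap, Am = A + h, A - h                          # A_k^{+-} of Eq. (15)
    Lam = np.array([[Am, 0, -2*C, -np.conj(B)],
                    [0, Ap, -B, -2*C],
                    [2*C, np.conj(B), -Am, 0],
                    [B, 2*C, 0, -Ap]])             # Eq. (23)
    numeric = np.sort(np.linalg.eigvals(Lam).real)[2:]  # the two +omega branches
    return closed, numeric

print(branches(np.array([0.0, 0.0])))
```

For these parameters the sketch yields \(\omega_{+}\approx 2.07\) meV and \(\omega_{-}\approx 0.39\) meV at the \(\Gamma\) point, consistent with Eq. (25).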
In the center of the first Brillouin zone, \(\mathbf{k}=\mathbf{0}\) (point \(\Gamma\)), one then finds
\[\omega_{\mathbf{k}=0,\pm}=S\big{[}\xi J_{2}\big{(}D_{y}\pm D_{y}+2D_{z}\big{)}+D_{y }D_{z}+D_{z}^{2}\big{]}^{\frac{1}{2}}. \tag{25}\]
This formula clearly shows that splitting of the magnon modes at the \(\Gamma\) point appears for a nonzero \(D_{y}\). In turn, a nonzero in-plane magnetic anisotropy, \(D_{z}>0\), leads to an energy gap. If \(D_{y}=0\) and \(D_{z}>0\), then the energy gap at \(\Gamma\) is given by
\[\omega_{\mathbf{k}=0,+}=\omega_{\mathbf{k}=0,-}=S\big{(}2\xi J_{2}D_{z}+D_{z}^{2}\big{)} ^{\frac{1}{2}}. \tag{26}\]
This gap vanishes if \(D_{z}=0\), i.e., a two-fold degenerate Goldstone mode, \(\omega_{\mathbf{k}=0,\pm}=0\), then appears. If \(D_{z}=0\) and \(D_{y}>0\), then
\[\omega_{\mathbf{k}=0,+}=S\big{(}2\xi J_{2}D_{y}\big{)}^{\frac{1}{2}}, \tag{27}\]
\[\omega_{\mathbf{k}=0,-}=0\;\;\text{(Goldstone mode)}. \tag{28}\]
### Spin waves in the spin-flop phase
In the spin-flop phase, \(h_{\text{sf}}<h<h_{s}\), one needs first to rotate the spin operators to the local quantization axes appropriate for the top (T) and bottom (B) layers (see Appendix A). This transformation does not affect the ferromagnetic intralayer exchange term of the model Hamiltonian (1,2), while the antiferromagnetic interlayer coupling term \(H_{\text{int}}\) as well as the anisotropy \(H_{A}^{\alpha}\) and Zeeman \(H_{h}^{\alpha}\) terms now become
\[H_{\text{int}} = J_{2}\sum_{\mathbf{r},\mathbf{\delta}}[\cos 2\chi(S_{\mathbf{r},T}^{z}S_{\mathbf{r}+ \delta,B}^{z}-S_{\mathbf{r},T}^{x}S_{\mathbf{r}+\delta,B}^{x}) \tag{29}\] \[-S_{\mathbf{r},T}^{y}S_{\mathbf{r}+\delta,B}^{y}],\]
\[H_{A}^{\alpha} = \frac{1}{2}\sum_{\mathbf{r}}\Big{\{}D_{y}(S_{\mathbf{r},\alpha}^{y})^{2}-D_ {z}\big{[}(S_{\mathbf{r},\alpha}^{x})^{2}\sin^{2}\chi \tag{30}\] \[+(S_{\mathbf{r},\alpha}^{z})^{2}\cos^{2}\chi\big{]}\Big{\}},\]
\[H_{h}^{\alpha}=-h\cos\chi\sum_{\mathbf{r}}S_{\mathbf{r},\alpha}^{z}, \tag{31}\]
where the spin operators are in the corresponding local systems.
Upon Holstein-Primakoff and Fourier transformations (see Appendix C), one arrives at the Hamiltonian which, bearing in mind the Bogolubov transformation, can be written as \(H=H_{\bf k}+H_{-\bf k}\), where
\[H_{\bf k} = \sum_{\bf k}\bigg{[}\sum_{\alpha}\bigg{(}\frac{A_{\bf k}}{2}\bigg{)}a_{{\bf k},\alpha}^{+}a_{{\bf k},\alpha}+B_{\bf k}a_{{\bf k},T}a_{{\bf k},B} \tag{32}\] \[+\tilde{B}_{\bf k}a_{{\bf k},T}^{+}a_{{\bf k},B}+C\sum_{\alpha}a_{{\bf k},\alpha}a_{-{\bf k},\alpha}\bigg{]}+H.c.,\]
with
\[A_{\bf k}=S\bigg{[}2J_{1}\big{(}\gamma_{\bf k}-6\big{)}-\xi J_{2 }\cos 2\chi+\frac{D_{y}}{2}\] \[\qquad+\frac{D_{z}}{2}\big{(}3\cos^{2}\chi-1\big{)}\bigg{]}+h\cos\chi,\] \[B_{\bf k}=\eta_{\bf k}^{*}J_{2}S\sin^{2}\chi,\] \[\tilde{B}_{\bf k}=-\eta_{\bf k}^{*}J_{2}S\cos^{2}\chi,\] \[C=-\frac{S}{4}\big{(}D_{y}+D_{z}\sin^{2}\chi\big{)}. \tag{33}\]
Then, the diagonalization of Eq. (32), performed similarly as described in the preceding subsection, leads to the eigenvalue problem, \(\Lambda_{\mathbf{\kappa}}\mathbf{e}_{\mu}=\omega_{\mathbf{\kappa},\mu}\mathbf{e}_{\mu}\), where
\[\Lambda_{\mathbf{\kappa}}=\left(\begin{array}{cccc}A_{\mathbf{\kappa}}&\tilde{B}_{ \mathbf{\kappa}}^{*}&-2C&-B_{\mathbf{\kappa}}^{*}\\ \tilde{B}_{\mathbf{\kappa}}&A_{\mathbf{\kappa}}&-B_{\mathbf{\kappa}}&-2C\\ 2C&B_{\mathbf{\kappa}}^{*}&-A_{\mathbf{\kappa}}&-\tilde{B}_{\mathbf{\kappa}}^{*}\\ B_{\mathbf{\kappa}}&2C&-\tilde{B}_{\mathbf{\kappa}}&-A_{\mathbf{\kappa}}\end{array}\right),\mathbf{e}_{\mu}=\left(\begin{array}{c}u_{\mu,T}\\ u_{\mu,B}\\ v_{\mu,T}\\ v_{\mu,B}\end{array}\right). \tag{34}\]
Upon replacing the notation \(\mathbf{\kappa}\) by \(\mathbf{k}\), we write the dispersion relation in the form
\[\omega_{\mathbf{k},\mu} = \Big{\{}A_{\mathbf{k}}^{2}-|B_{\mathbf{k}}|^{2}+|\tilde{B}_{\mathbf{k}}|^{2}- 4C^{2} \tag{35}\] \[\pm\Big{[}-8A_{\mathbf{k}}C(B_{\mathbf{k}}^{*}\tilde{B}_{\mathbf{k}}+B_{\mathbf{ k}}\tilde{B}_{\mathbf{k}}^{*})+B_{\mathbf{k}}^{2}\tilde{B}_{\mathbf{k}}^{*2}\] \[+B_{\mathbf{k}}^{*2}\tilde{B}_{\mathbf{k}}^{2}+16|B_{\mathbf{k}}|^{2}C^{2}+4A_ {\mathbf{k}}^{2}|\tilde{B}_{\mathbf{k}}|^{2}\] \[-2|B_{\mathbf{k}}|^{2}|\tilde{B}_{\mathbf{k}}|^{2}\Big{]}^{\frac{1}{2}} \Big{\}}^{\frac{1}{2}},\]
with the \(+\) and \(-\) signs in the fifth term on the right hand side of Eq. (35) corresponding to the mode indices \(\mu=+\) and \(\mu=-\), respectively. It is worth noting here that in the case of the T-stacked geometry, both \(B_{\mathbf{k}}\) and \(\tilde{B}_{\mathbf{k}}\) are independent of \(\mathbf{k}\), \(B_{\mathbf{k}}=B\), \(\tilde{B}_{\mathbf{k}}=\tilde{B}\), so that Eq. (35) simplifies to the following one:
\[\omega_{\mathbf{k},\mu}=\Big{\{}(A_{\mathbf{k}}-\tilde{B})^{2}-(B\pm 2C)^{2}\Big{\}}^{ \frac{1}{2}}. \tag{36}\]
At the saturation field \(h_{s}\) and for \(\mathbf{k}=\mathbf{0}\), the magnon energies are equal to
\[\omega_{\mathbf{k}=0,\mu}=S\Bigg{[}\bigg{(}\xi J_{2}\pm\xi J_{2}+\frac{D_{y}}{2} \bigg{)}^{2}-\bigg{(}\frac{D_{y}}{2}\bigg{)}^{2}\Bigg{]}^{\frac{1}{2}}. \tag{37}\]
Thus, one finds
\[\omega_{\mathbf{k}=0,+}=2S\bigg{[}\big{(}\xi J_{2}\big{)}^{2}+\xi J_{2}\frac{D_{y} }{2}\bigg{]}^{\frac{1}{2}}, \tag{38}\]
\[\omega_{\mathbf{k}=0,-}=0\ \ \text{(Goldstone mode)}. \tag{39}\]
Thus, at the saturation field \(h_{s}\), one of the modes is a Goldstone mode even in the case of nonzero in-plane easy-axis anisotropy, while the splitting of the modes (the gap between the two modes) appears also in the case of vanishing \(D_{y}\). This behaviour is different from that in the antiferromagnetic state.
### Ferromagnetic state above the saturation field
For \(h\geq h_{s}\), the system is in the saturated (ferromagnetic) state, which can be considered as a special case of the spin-flop phase corresponding to \(\chi=0\). Equations (29) to (31) then take the forms
\[H_{\rm int} = J_{2}\sum_{\bf r,\delta}(S_{\bf r,T}^{z}S_{\bf r+\delta,B}^{z}-S_ {\bf r,T}^{x}S_{\bf r+\delta,B}^{x} \tag{40}\] \[-S_{\bf r,T}^{y}S_{\bf r+\delta,B}^{y})\]
\[H_{A}^{\alpha} = \frac{1}{2}\sum_{\bf r}\Big{[}D_{y}(S_{\bf r,\alpha}^{y})^{2}-D_{ z}(S_{\bf r,\alpha}^{z})^{2}\Big{]} \tag{41}\]
\[H_{h}^{\alpha}=-h\sum_{\bf r}S_{\bf r,\alpha}^{z}. \tag{42}\]
Accordingly, the system Hamiltonian upon Holstein-Primakoff and Fourier transformations acquires the form (32) with \(A_{\bf k}\), \(B_{\bf k}\), \(\tilde{B}_{\bf k}\) and \(C\) given by the formulas
\[A_{\bf k}=S\bigg{[}2J_{1}\big{(}\gamma_{\bf k}-6\big{)}-\xi J_{2} +\frac{D_{y}}{2}+D_{z}\bigg{]}+h\] \[B_{\bf k}=0,\] \[\tilde{B}_{\bf k}=-\eta_{\bf k}^{*}J_{2}S,\] \[C=-\frac{SD_{y}}{4}. \tag{43}\]
Thus, the eigenvalue problem, \(\Lambda_{\mathbf{\kappa}}\mathbf{e}_{\mu}=\omega_{\mathbf{\kappa},\mu}\mathbf{e}_{\mu}\), with
\[\Lambda_{\mathbf{\kappa}}=\left(\begin{array}{cccc}A_{\mathbf{\kappa}}&\tilde{B}_{ \mathbf{\kappa}}^{*}&-2C&0\\ \tilde{B}_{\mathbf{\kappa}}&A_{\mathbf{\kappa}}&0&-2C\\ 2C&0&-A_{\mathbf{\kappa}}&-\tilde{B}_{\mathbf{\kappa}}^{*}\\ 0&2C&-\tilde{B}_{\mathbf{\kappa}}&-A_{\mathbf{\kappa}}\end{array}\right),\mathbf{e}_{\mu}= \left(\begin{array}{c}u_{\mu,T}\\ u_{\mu,B}\\ v_{\mu,T}\\ v_{\mu,B}\end{array}\right), \tag{44}\]
leads to the final dispersion relation for the ferromagnetic phase in the form,
\[\omega_{\mathbf{k},\mu}=\Big{[}\big{(}A_{\mathbf{k}}\pm|\tilde{B}_{\mathbf{k}}|\big{)}^{2}-4C ^{2}\Big{]}^{\frac{1}{2}}. \tag{45}\]
## III Numerical results and discussion
To discuss properties of spin-wave spectra in vanadium-based dichalcogenides, we consider numerical results for 2H-VX\({}_{2}\) bilayer systems. For the numerical analysis one needs to know the relevant parameters, including exchange and magnetic anisotropy constants, and appropriate structural parameters. Most of them were taken from DFT calculations by Jaffari _et al._ [2010], and are listed in Table 1. Only the in-plane easy-axis anisotropy constant, \(D_{z}\), was taken from Ref. [19]. In addition, according to our analysis in Sect. II and Appendix A, we also show in Table 1 the threshold magnetic fields \(h_{\rm sf}\) and \(h_{s}\), evaluated from Eqs. (4) and (5) and defining the regions of the antiferromagnetic, spin-flop and ferromagnetic phases of the antiferromagnetically coupled bilayers.
The three-dimensional (3D) presentation of the magnon spectrum, evaluated for the 2H-VSe\({}_{2}\) bilayer in the antiferromagnetic phase and in the absence of external magnetic field, is shown in Fig. 2a. As there are two magnetic atoms in the unit cell of the considered TMD bilayer, one can expect two magnon bands. In the absence of magnetic field, \(h=0\), the two magnon modes have similar energies, so they are not resolved in Fig. 2a. However, even for \(h=0\), these two modes can differ slightly in energy due to other interactions. As will be discussed later in this section, the easy-plane and in-plane easy-axis magnetic anisotropies introduce subtle effects (not resolved within the energy scale of Fig. 2a), which may lead to a splitting of the spectrum from Fig. 2a into two spin-wave modes at zero external magnetic field, and can also generate a gap in the spectrum. Low-energy spin waves exist in the Brillouin zone center (see the \(\Gamma\) point in Fig. 2a) around \(\mathbf{k}=\mathbf{0}\), while the maxima of the magnon energy emerge at the Dirac points K of the Brillouin zone (see Fig. 2a). Let us look more carefully at the spin-wave properties for selected cross sections of the 3D magnon bands, displayed along the high-symmetry path \(K\)\(\rightarrow\)\(\Gamma\)\(\rightarrow\)\(M\)\(\rightarrow\)\(K\) in momentum space (see Fig. 2b and Fig. 3).
Figure 2: The 3D view of the magnon band in the first Brillouin zone, in the absence of magnetic field (\(h=\)0), calculated for the 2H-VTe\({}_{2}\) bilayer (a) and its projection onto the (\(k_{z},k_{x}\))-plane (b) with the indicated Brillouin zone center (\(\Gamma\)), high-symmetry points (K, M), and paths. See text for more details.
Figure 3: Dispersion curves of the spin-wave spectrum along the path K-\(\Gamma\)-M-K (a). As in Fig. 2, the two modes are not resolved here. To observe the splitting of the modes, we show in (b) the spin-wave spectrum in a close vicinity of the \(\Gamma\) point. Now, the splitting and also the gap in the spectrum are clearly visible.
The dispersion relations in Fig. 2 are for zero magnetic field, \(h=0\), so they correspond to antiparallel alignment of the magnetic moments of the individual layers. Explicit dispersion relations along the \(\Gamma\to K\) and \(\Gamma\to M\to K\) paths in the Brillouin zone are shown in Fig. 3a. Generally, there are two modes in the bilayers under consideration. However, the separation of these two modes is not resolved in Fig. 3a. Therefore, in Fig. 3b we zoom in on a small area near the \(\Gamma\) point, where the splitting of the modes is clearly seen. Moreover, this figure also shows that there is a gap in the spectrum at the \(\Gamma\) point, i.e., the spin-wave energy does not vanish at \(k=0\). This gap is a consequence of the in-plane easy-axis anisotropy, \(D_{z}\), as shown in Fig. 4a, where the two modes at the \(\Gamma\) point are plotted as a function of \(D_{z}\). When \(D_{z}=0\), the energy of the lowest mode vanishes, while the gap between the two modes survives. This gap, in turn, is determined mainly by the easy-plane anisotropy constant, as shown in Fig. 4b.
As we have already discussed above, an external magnetic field leads to phase transitions associated with reorientation of the magnetic moments of the two monolayers. The transition to the spin-flop phase appears at \(h_{\rm sf}\), and then at the saturation field \(h_{s}\) the transition to the fully collinear (ferromagnetic) phase takes place. The critical fields \(h_{s}\) and \(h_{\rm sf}\) are determined by the anisotropy constants and the interlayer exchange parameter, as described in Sect. II. Variation of the fields \(h_{s}\) and \(h_{\rm sf}\) with the easy-axis anisotropy constant \(D_{z}\) is shown in Fig. 5 for other parameters typical of 2H-VTe\({}_{2}\). This figure shows that when \(D_{z}=0\), the transition to the spin-flop phase appears already at an infinitesimally small magnetic field. This figure also shows that the saturation field decreases with increasing \(D_{z}\), and for a specific value of \(D_{z}\) (denoted as \(D_{z}^{M}\)) the saturation field and the transition field to the spin-flop phase become equal, so that the system changes from antiferromagnetic to ferromagnetic without the intermediate spin-flop phase.
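The closing of the spin-flop window can be checked directly from the threshold-field expressions of Appendix A, Eqs. (A7) and (A9). The short sketch below is our own check, not part of the original text; it confirms numerically that \(h_{\rm sf}=h_{s}\) at \(D_{z}^{M}=\xi J_{2}\).

```python
import numpy as np
from scipy.optimize import brentq

# Sketch (our check): the anisotropy D_z^M at which h_sf = h_s, i.e. where
# the spin-flop window closes, using Eqs. (A7) and (A9) of Appendix A.
S, J2, xi = 1.5, 0.08, 3.0                      # 2H-VTe2 values, Table 1 (meV)

h_s = lambda Dz: S*(2*xi*J2 - Dz)               # saturation field
h_sf = lambda Dz: S*np.sqrt(Dz*(2*xi*J2 - Dz))  # spin-flop field

DzM = brentq(lambda Dz: h_s(Dz) - h_sf(Dz), 1e-9, 2*xi*J2 - 1e-9)
print(DzM, xi*J2)                               # both 0.24 meV: D_z^M = xi*J2
```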
Variation of the spin-wave energy at the \(\Gamma\) point with increasing external magnetic field is shown in Fig. 6a, while the same for the M and K points is shown in Fig. 6b and Fig. 6c, respectively. Note that the two modes (black and red curves) behave differently in the three regions of magnetic field. In the antiferromagnetic phase, the energy of one of the modes (that of higher energy) slightly increases with the field, whereas the second mode (of lower energy) becomes softened and reaches a minimum at the transition point to the spin-flop phase. This minimum is nonzero due to a finite in-plane easy-axis anisotropy. Slightly above the spin-flop field \(h_{\rm sf}\), the energy of the lower mode (the soft one) increases with the field \(h\), while the energy of the upper mode decreases with increasing \(h\). This tendency persists up to the saturation field \(h_{s}\), where the latter mode becomes soft and its energy vanishes.
Figure 4: Energy of the two modes at the \(\Gamma\) point as a function of (a) the in-plane easy-axis anisotropy constant \(D_{z}\) (the blue dashed line indicates \(D_{z}\) taken for VSe\({}_{2}\) from Table 1), and (b) the easy-plane magnetic anisotropy constant \(D_{y}\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
**TMD** & **GS** & **a [Å]** & \(\mathbf{D}_{z}\) **[meV]** & \(\mathbf{D}_{y}\) **[meV]** & \(\mathbf{J}_{1}\) **[meV]** & \(\mathbf{J}_{2}\) **[meV]** & \(\mathbf{h_{s}/g\mu_{B}}\) **[T]** & \(\mathbf{h_{\rm sf}/g\mu_{B}}\) **[T]** \\ \hline \hline
2H-VS\({}_{2}\) & FM & 3.17147 & 0.006 & -0.4422 & -10.02 & -0.19 & & \\ \hline
2H-VSe\({}_{2}\) & AFM & 3.31971 & 0.014 & -1.3849 & -9.58 & 0.02 & 1.37 & 0.17 \\ \hline
2H-VTe\({}_{2}\) & AFM & 3.59069 & 0.016 & -3.8121 & -6.33 & 0.08 & 6 & 1.11 \\ \hline \end{tabular}
\end{table}
Table 1: The parameters used in the present paper for 2H-VX\({}_{2}\) TMD bilayers with the ground state (GS) being ferromagnetic (FM) or antiferromagnetic (AFM), with \(S=3/2\), as estimated from the DFT method. Other data taken from DFT calculations: lattice constant (a), easy-plane MA (\(D_{y}\)), intralayer exchange interaction (\(J_{1}\)), and interlayer exchange interaction (\(J_{2}\)). The in-plane anisotropy parameters (\(D_{z}\)) are taken from [19]. The threshold magnetic fields (\(h_{s}\) and \(h_{\rm sf}\)) are calculated from Eqs. (4) and (5).
The energy of the former mode still increases with \(h\) for \(h>h_{s}\); however, a clear kink appears at the transition between the spin-flop and ferromagnetic phases. Similar behavior of the spin waves at the K and M points of the Brillouin zone is shown in Fig. 6b and Fig. 6c. In the antiferromagnetic phase, one mode increases linearly with magnetic field, while the other decreases linearly with \(h\). This holds for both the K and M points. Interestingly, the two modes at the K point are degenerate in the spin-flop phase as well as in the ferromagnetic phase.
## IV Summary
In this paper we have analyzed spin-wave modes in a class of van der Waals magnetic materials which includes transition-metal dichalcogenides. The description is limited to bilayers with easy-plane anisotropy and ferromagnetic intralayer exchange coupling, i.e., individual monolayers are ferromagnetically ordered in the layer plane. In turn, the two layers are coupled either ferromagnetically or antiferromagnetically. To find the spin-wave energies we used the Holstein-Primakoff-Bogolubov diagonalization scheme.
The bilayers support two magnon modes, which are in general split, though the splitting is rather small due to the small interlayer exchange coupling. In the absence of external magnetic field and in-plane easy-axis anisotropy, the energy of one of the modes vanishes at the \(\Gamma\) point of the Brillouin zone. This mode is the well-known Goldstone mode. An external field or in-plane anisotropy creates a gap at the \(\Gamma\) point.
Van der Waals materials are of current interest from the point of view of possible applications. Of particular interest are bilayers of van der Waals magnetic materials, which can be considered as natural atomically thin spin valves.
###### Acknowledgements.
This work has been supported by the Norwegian Financial Mechanism 2014-2021 under the Polish-Norwegian Research Project NCN GRIEG 2Dtronics no. 2019/34/H/ST3/00515.
Figure 5: Critical spin-flop field \(h_{\mathrm{sf}}\) and saturation field \(h_{s}\) as a function of the in-plane easy-axis anisotropy constant \(D_{z}\).
Figure 6: Magnetic field (\(h\)) dependence of the spin-wave spectra at the \(\Gamma\) (a), M (b), and K (c) points of the Brillouin zone. With increasing \(h\), the system goes from the antiferromagnetic (AF) state to the spin-flop (SF) state at \(h=h_{\mathrm{sf}}\), and then from the spin-flop phase to the ferromagnetic (FM) state at \(h=h_{s}\).
## Appendix A Spin phases
As already mentioned in the main text, for the model Hamiltonian (1) and (2) one may in general expect three stable spin configurations of the bilayer system in an external magnetic field applied along the in-plane easy axis: (i) the antiferromagnetic state at low fields, with the spins of the two monolayers oriented along the \(+z\) and \(-z\) axis for the bottom and top layers, respectively; (ii) the spin-flop phase in a specific range of magnetic field, with the spins of the two monolayers lying in the atomic planes at an angle \(\chi\) to the \(z\) axis; and (iii) the ferromagnetic phase with the spins of both layers along the \(z\) axis. In order to determine these phases in a specific magnetic field (and also bearing in mind the magnon description), it is convenient to use coordinate systems with the local \(z^{\prime}\) axes along the corresponding spin orientations. To do this, one has to combine a rotation of the spins from the global frame around the \(y\)-axis by the canting angle \(\chi\) and around the \(z\)-axis by the angle \(\theta_{\alpha}\), where \(\alpha=T\) and \(\alpha=B\) label the top and bottom layers, respectively,
\[\mathbf{S_{r,\alpha}}=\mathbf{\hat{R}}_{z}(\theta_{\alpha})\mathbf{\hat{R}}_{y}(\chi)\mathbf{S^{{}^{\prime}}_{r,\alpha}}, \tag{A1}\]
where the rotation matrix reads
\[\mathbf{\hat{R}}_{z}(\theta_{\alpha})\mathbf{\hat{R}}_{y}(\chi)=\left(\begin{array}{ccc}\cos\theta_{\alpha}\cos\chi&-\sin\theta_{\alpha}&\cos\theta_{\alpha}\sin\chi\\ \sin\theta_{\alpha}\cos\chi&\cos\theta_{\alpha}&\sin\theta_{\alpha}\sin\chi\\ -\sin\chi&0&\cos\chi\end{array}\right), \tag{A2}\]
with \(\theta_{T}=\pi\) for the top layer, \(\theta_{B}=0\) for the bottom layer, and \(\chi\) being the polar angle between the spin (aligned along the \(z^{\prime}\)-axis of the local coordinate system) and the \(z\)-axis of the global frame. Thus,
\[S^{x}_{\mathbf{r},\alpha}=\mp S^{{}^{\prime}x}_{\mathbf{r},\alpha}\sin\chi\mp S^{{}^{\prime}z}_{\mathbf{r},\alpha}\cos\chi, \tag{A3}\] \[S^{y}_{\mathbf{r},\alpha}=\mp S^{{}^{\prime}y}_{\mathbf{r},\alpha}, \tag{A4}\] \[S^{z}_{\mathbf{r},\alpha}=-S^{{}^{\prime}x}_{\mathbf{r},\alpha}\cos\chi+S^{{}^{\prime}z}_{\mathbf{r},\alpha}\sin\chi, \tag{A5}\]
where the sign \(-(+)\) corresponds to the layers \(\alpha=T\) (\(\alpha=B\)), respectively.
The \(h\)-dependent regimes of the spin configurations of the bilayer can be found from the classical energy. In the spin-flop phase, \(h_{\rm sf}\leq h\leq h_{s}\),
\[E_{\rm sf}/NS = -6|J_{1}|S+\frac{1}{2}\xi J_{2}S\cos 2\chi-\frac{1}{2}D_{z}S\cos^{2}\chi \tag{A6}\] \[-h\cos\chi,\]
where \(N\) is the total number of sites, and \(\xi\) denotes the structure factor: \(\xi=3\) and \(\xi=1\) for the H and T phase, respectively. Hence, minimizing the classical energy, \(\partial E_{\rm sf}/\partial\chi=0\), yields the condition for the canting angle \(\chi\), \(\cos\chi=h/h_{s}\), where \(h_{s}\) is the threshold magnetic field (the saturation field) at which the transition between spin-flop and ferromagnetic (\(\chi=0\)) phases occurs,
\[h_{s}=S(2\xi J_{2}-D_{z}). \tag{A7}\]
The threshold \(h_{\rm sf}\) for the transition from the antiferromagnetic phase to the spin-flop one can be derived from the condition \(E_{\rm sf}=E_{\rm af}\), where \(E_{\rm af}\) denotes the classical energy for the collinear antiferromagnetic phase,
\[E_{\rm af}/NS=-6|J_{1}|S-\frac{1}{2}\xi J_{2}S-\frac{1}{2}D_{z}S. \tag{A8}\]
Thus, from Eqs. (A6) to (A8) one finds
\[h_{\rm sf}=\sqrt{SD_{z}h_{s}}=S\sqrt{D_{z}(2\xi J_{2}-D_{z})}. \tag{A9}\]
One can see that \(h_{\rm sf}=0\) if \(D_{z}=0\). Thus, even for nonvanishing magnetic fields, \(0<h<h_{\rm sf}\), the collinear antiferromagnetic configuration may exist, as it is stabilized by the in-plane magnetic anisotropy.
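As a quick consistency check (ours, not part of the original derivation), one can verify numerically that the classical spin-flop energy of Eq. (A6), evaluated at the minimizing angle \(\cos\chi=h/h_{s}\), crosses the collinear energy of Eq. (A8) exactly at \(h=h_{\rm sf}\) of Eq. (A9):

```python
import numpy as np

# Sketch (our consistency check of Appendix A): the classical spin-flop
# energy E_sf/(N S), Eq. (A6), evaluated at the minimizing angle
# cos(chi) = h/h_s, equals the collinear AFM energy E_af/(N S), Eq. (A8),
# exactly at h = h_sf of Eq. (A9). 2H-VTe2 parameters from Table 1 (meV).
S, J1, J2, Dz, xi = 1.5, -6.33, 0.08, 0.016, 3.0

h_s = S*(2*xi*J2 - Dz)
h_sf = S*np.sqrt(Dz*(2*xi*J2 - Dz))

def E_sf(h):
    c = np.clip(h/h_s, -1.0, 1.0)               # cos(chi) minimizing Eq. (A6)
    return (-6*abs(J1)*S + 0.5*xi*J2*S*(2*c**2 - 1)
            - 0.5*Dz*S*c**2 - h*c)

E_af = -6*abs(J1)*S - 0.5*xi*J2*S - 0.5*Dz*S

print(E_sf(h_sf) - E_af)                        # ~0 up to rounding
```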
## Appendix B Structural properties
For the considered VSe\({}_{2}\) system, each layer has a hexagonal lattice with the primitive lattice vectors
\[\mathbf{a}_{1,2}=a\biggl{(}\pm\frac{\sqrt{3}}{2}\mathbf{\hat{x}}+\frac{1}{2}\mathbf{\hat{z}}\biggr{)},\,\mathbf{a}_{3}=0, \tag{B1}\]
where \(a\) is the in-plane lattice constant (distance between vanadium atoms). Here, each V atom has six intralayer nearest neighbours determined by the \(\mathbf{\delta}\) vectors:
\[\mathbf{\delta}_{1,2} =\pm a\mathbf{\hat{z}},\] \[\mathbf{\delta}_{3,4} =a\biggl{(}\pm\frac{\sqrt{3}}{2}\mathbf{\hat{x}}+\frac{1}{2}\mathbf{\hat{z}}\biggr{)},\] \[\mathbf{\delta}_{5,6} =a\biggl{(}\pm\frac{\sqrt{3}}{2}\mathbf{\hat{x}}-\frac{1}{2}\mathbf{\hat{z}}\biggr{)}. \tag{B2}\]
Moreover, for the T-stacked system, each V atom has one NN in the adjacent layer, while for the H-stacked system there are three NNs in the adjacent layer with \(\mathbf{\delta}\) given by
\[\mathbf{\delta}_{1,3}=a\biggl{(}-\frac{1}{2\sqrt{3}}\mathbf{\hat{x}}\pm\frac{1}{2}\mathbf{\hat{z}}\biggr{)},\,\mathbf{\delta}_{2}=\frac{a}{\sqrt{3}}\mathbf{\hat{x}}. \tag{B3}\]
## Appendix C Holstein-Primakoff transformations in the SF phase
Using the Holstein-Primakoff transformation, which for the SF configuration reads
\[S^{x}_{\mathbf{r},\alpha}=\sqrt{\frac{S}{2}}(a^{+}_{\mathbf{r},\alpha}+a_{\mathbf{r},\alpha}), \tag{C1}\]
\[S^{y}_{{\bf r},\alpha}=i\sqrt{\frac{S}{2}}(a^{+}_{{\bf r},\alpha}-a_{{\bf r},\alpha}), \tag{C2}\]
\[S^{z}_{{\bf r},\alpha}=S-a^{+}_{{\bf r},\alpha}a_{{\bf r},\alpha}, \tag{C3}\]
we arrive at the following form of the Hamiltonian written for the bosonic operators:
\[H = J_{1}S\sum_{{\bf r},{\mathbf{\delta}},\alpha}\left(a^{+}_{{\bf r},\alpha}a_{{\bf r}+\delta,\alpha}+a^{+}_{{\bf r}+\delta,\alpha}a_{{\bf r},\alpha}\right. \tag{C4}\] \[-a^{+}_{{\bf r},\alpha}a_{{\bf r},\alpha}-a^{+}_{{\bf r}+\delta,\alpha}a_{{\bf r}+\delta,\alpha}\Big{)}\] \[+J_{2}S\sum_{{\bf r},{\mathbf{\delta}}}\Big{[}-\cos 2\chi\big{(}a^{+}_{{\bf r},T}a_{{\bf r},T}+a^{+}_{{\bf r}+\delta,B}a_{{\bf r}+\delta,B}\big{)}\] \[+\sin^{2}\chi\big{(}a^{+}_{{\bf r},T}a^{+}_{{\bf r}+\delta,B}+a_{{\bf r},T}a_{{\bf r}+\delta,B}\big{)}\] \[-\cos^{2}\chi\big{(}a^{+}_{{\bf r},T}a_{{\bf r}+\delta,B}+a_{{\bf r},T}a^{+}_{{\bf r}+\delta,B}\big{)}\Big{]}\] \[+\frac{D_{y}S}{4}\sum_{{\bf r},\alpha}\Big{(}2a^{+}_{{\bf r},\alpha}a_{{\bf r},\alpha}-a^{+}_{{\bf r},\alpha}a^{+}_{{\bf r},\alpha}-a_{{\bf r},\alpha}a_{{\bf r},\alpha}\Big{)}\] \[+\frac{D_{z}S}{2}\sum_{{\bf r},\alpha}\Big{[}\big{(}3\cos^{2}\chi-1\big{)}a^{+}_{{\bf r},\alpha}a_{{\bf r},\alpha}\] \[-\frac{1}{2}\sin^{2}\chi\big{(}a^{+}_{{\bf r},\alpha}a^{+}_{{\bf r},\alpha}+a_{{\bf r},\alpha}a_{{\bf r},\alpha}\big{)}\Big{]}\] \[+h\cos\chi\sum_{{\bf r},\alpha}a^{+}_{{\bf r},\alpha}a_{{\bf r},\alpha}.\]
The Fourier transformation described by Eq. (9), together with Eqs. (C1)-(C3), yields
\[H = \sum_{\bf k}\bigg{\{}2J_{1}S\sum_{\alpha}(\gamma_{\bf k}-6)a^{+}_{{\bf k},\alpha}a_{{\bf k},\alpha} \tag{C5}\] \[+J_{2}S\Big{[}-\xi\cos 2\chi\sum_{\alpha}a^{+}_{{\bf k},\alpha}a_{{\bf k},\alpha}\] \[+\sin^{2}\chi\big{(}\eta_{\bf k}a^{+}_{{\bf k},T}a^{+}_{{\bf k},B}+\eta^{*}_{{\bf k}}a_{{\bf k},T}a_{{\bf k},B}\big{)}\] \[-\cos^{2}\chi\big{(}\eta^{*}_{\bf k}a^{+}_{{\bf k},T}a_{{\bf k},B}+\eta_{\bf k}a_{{\bf k},T}a^{+}_{{\bf k},B}\big{)}\Big{]}\] \[+\frac{S}{2}\sum_{\alpha}\bigg{\{}\Big{[}D_{y}+D_{z}(3\cos^{2}\chi-1)\Big{]}a^{+}_{{\bf k},\alpha}a_{{\bf k},\alpha}\] \[-\frac{1}{2}\big{(}D_{y}+D_{z}\sin^{2}\chi\big{)}\big{(}a^{+}_{{\bf k},\alpha}a^{+}_{{\bf k},\alpha}+a_{{\bf k},\alpha}a_{{\bf k},\alpha}\big{)}\bigg{\}}\] \[+h\cos\chi\sum_{\alpha}a^{+}_{{\bf k},\alpha}a_{{\bf k},\alpha}\bigg{\}},\]
which within the Bogolubov transformation approach leads to the final form of the Hamiltonian \(H=H_{\bf k}+H_{-\bf k}\), where \(H_{\bf k}\) is given by Eq. (32), with the coefficients of Eq. (33), in the main text.
## Appendix D Ferromagnetic interlayer coupling
If the TMD 2H-VX\({}_{2}\) bilayer has a FM ground state (as, e.g., 2H-VS\({}_{2}\)), then Eq. (3) describes ferromagnetic interlayer coupling with the exchange coupling parameter \(J_{2}<0\). In such a case, the Holstein-Primakoff and Fourier transformations for the top and bottom layers lead to the full Hamiltonian in the form
\[H = \sum_{\bf k}\bigg{\{}2J_{1}S\sum_{\alpha}(\gamma_{\bf k}-6)a^{+}_{{\bf k},\alpha}a_{{\bf k},\alpha} \tag{D1}\] \[+J_{2}S\Big{(}-\xi\sum_{\alpha}a^{+}_{{\bf k},\alpha}a_{{\bf k},\alpha}+\eta_{\bf k}a_{{\bf k},T}a^{+}_{{\bf k},B}+\eta^{*}_{{\bf k}}a^{+}_{{\bf k},T}a_{{\bf k},B}\Big{)}\] \[+S\sum_{\alpha}\Big{[}-\frac{D_{y}}{4}\big{(}a^{+}_{{\bf k},\alpha}a^{+}_{-{\bf k},\alpha}+a_{{\bf k},\alpha}a_{-{\bf k},\alpha}-2a^{+}_{{\bf k},\alpha}a_{{\bf k},\alpha}\big{)}\] \[+D_{z}a^{+}_{{\bf k},\alpha}a_{{\bf k},\alpha}\Big{]}+h\sum_{\alpha}a^{+}_{{\bf k},\alpha}a_{{\bf k},\alpha}\bigg{\}}.\]
This Hamiltonian can be written as
\[H=H_{\bf k}+H_{-\bf k}, \tag{D2}\]
where
\[H_{\bf k} = \sum_{\bf k}\bigg{[}\sum_{\alpha}\bigg{(}\frac{A_{\bf k}}{2}\bigg{)}a^{+}_{{\bf k},\alpha}a_{{\bf k},\alpha}+B_{\bf k}a^{+}_{{\bf k},T}a_{{\bf k},B} \tag{D3}\] \[+C\sum_{\alpha}a_{{\bf k},\alpha}a_{-{\bf k},\alpha}\bigg{]}+H.c.,\]
with
\[A_{\bf k}=S\Big{[}2J_{1}\big{(}\gamma_{\bf k}-6\big{)}-\xi J_{2}+\frac{D_{y}}{2}+D_{z}\Big{]}+h,\] \[B_{\bf k}=\eta^{*}_{\bf k}J_{2}S,\] \[C=-\frac{D_{y}S}{4}. \tag{D4}\]
The eigenvalue problem evaluated by means of the Bogolubov diagonalization scheme leads finally to the dispersion relation given by the formula
\[\omega_{{\mathbf{k}},\mu}=\Big{[}\big{(}A_{\mathbf{k}}\pm|B_{\mathbf{k}}|\big{)}^{2}-4C^{2}\Big{]}^{\frac{1}{2}}, \tag{D5}\]
where \(A_{\mathbf{k}}\) and \(|B_{\mathbf{k}}|\) are given by Eq. (D4) (note that here \(|B_{\mathbf{k}}|\equiv|\tilde{B}_{\mathbf{k}}|\)), and where \(\pm\) corresponds to the mode \(\mu=+,-\). At the zone center, \({\mathbf{k}}=0\), one gets
\[\omega_{{\mathbf{k}}=0,+}=S\bigg{[}\Big{(}6|J_{2}|+\frac{D_{y}}{2}+D_{z}\Big{)}^{2}-\Big{(}\frac{D_{y}}{2}\Big{)}^{2}\bigg{]}^{\frac{1}{2}}, \tag{D6}\]
\[\omega_{{\mathbf{k}}=0,-}=S\bigg{[}\Big{(}\frac{D_{y}}{2}+D_{z}\Big{)}^{2}-\Big{(}\frac{D_{y}}{2}\Big{)}^{2}\bigg{]}^{\frac{1}{2}}, \tag{D7}\]
so that in the absence of the Zeeman field (\(h=0\)) as well as in the absence of the in-plane anisotropy field (\(D_{z}=0\)), one finds \(\omega_{{\mathbf{k}}=0,-}=0\). In such a case, for \(D_{y}>0\) a gapless, linearly vanishing \(\sim|{\mathbf{k}}|\) Goldstone mode occurs at the zone center, while for \(D_{y}=0\) the mode \(\omega_{{\mathbf{k}},-}\) vanishes non-linearly in the vicinity of \({\mathbf{k}}=0\). |
2310.19290 | Analyzing eyebrow region for morphed image detection | Facial images in passports are designated as primary identifiers for the
verification of travelers according to the International Civil Aviation
Organization (ICAO). Hence, it is important to ascertain the sanctity of the
facial images stored in the electronic Machine-Readable Travel Document
(eMRTD). With the introduction of automated border control (ABC) systems that
rely on face recognition for the verification of travelers, it is even more
crucial to have a system to ensure that the image stored in the eMRTD is free
from any alteration that can hinder or abuse the normal working of a facial
recognition system. One such attack against these systems is the face-morphing
attack. Even though many techniques exist to detect morphed images, morphing
algorithms are also improving to evade these detections. In this work, we
analyze the eyebrow region for morphed image detection. The proposed method is
based on analyzing the frequency content of the eyebrow region. The method was
evaluated on two datasets that each consisted of morphed images created using
two algorithms. The findings suggest that the proposed method can serve as a
valuable tool in morphed image detection, and can be used in various
applications where image authenticity is critical. | Abdullah Zafar, Christoph Busch | 2023-10-30T06:11:27Z | http://arxiv.org/abs/2310.19290v1 | # Analyzing eyebrow region for morphed image detection
###### Abstract
Facial images in passports are designated as primary identifiers for the verification of travelers according to the International Civil Aviation Organization (ICAO) [9]. Hence, it is important to ascertain the sanctity of the facial images stored in the electronic Machine-Readable Travel Document (eMRTD). With the introduction of automated border control (ABC) systems that rely on face recognition for the verification of travelers [2], it is even more crucial to have a system to ensure that the image stored in the eMRTD is free from any alteration that can hinder or abuse the normal working of a facial recognition system. One such attack against these systems is the face-morphing attack. Even though many techniques exist to detect morphed images, morphing algorithms are also improving to evade these detections. In this work, we analyze the eyebrow region for morphed image detection. The proposed method is based on analyzing the frequency content of the eyebrow region. The method was evaluated on two datasets that each consisted of morphed images created using two algorithms. The findings suggest that the proposed method can serve as a valuable tool in morphed image detection, and can be used in various applications where image authenticity is critical.
Keywords:Morphed image detection eyebrow region analysis automatic border control (ABC) security.
## 1 Introduction
Face morphing is a real and live threat against ABC systems, which verify a person's identity by comparing the live image with the facial reference stored in the eMRTD [20]. Despite the simplicity of the solution of having the passport holder come to a center to take photographs, it is still not universally adopted for financial reasons. Furthermore, many countries have adopted or are in the process of adopting web-based passport/visa applications for the ease of applicants, where the user can upload a digital copy of the image to the web portal [13]. With technological advances, it is counter-intuitive for the general user to be asked to come to an office just to take a photograph. These factors make the detection of morphed images even more relevant today.
For simplicity, face morphing can be explained as an attack against face recognition systems where the images of two individuals are combined to create a morphed image that is used as a reference image. This reference image produces a positive match against the images that were used in creating the morphed image. One serious application of such an attack is explained in [6], where face morphing enables an individual to travel on someone else's passport.
This paper will present a morphing attack detection (MAD) technique to detect morphed images during the enrollment phase. MAD methods can be divided into two categories i.e. single image MAD (S-MAD) and Differential MAD (D-MAD) [10]. S-MAD involves analyzing an image to determine whether the image is a morph or not. D-MAD involves analyzing the image and another trusted live source for detection. The approach presented in this paper is an S-MAD technique. However, the same technique can also be applied in a D-MAD scenario. S-MAD is more relevant for the passport application process because no trusted live source exists during the enrollment phase.
The approach is based on the analysis of the eyebrow region. The assumption is that the eyebrow region has a high-frequency content due to the presence of hairs. The idea is to analyze the possible reduction in this high-frequency information due to the smoothening effect that results from the averaging of two images in the creation of a morphed image. Furthermore, the eyebrow region is interesting because of its universality and importance in the performance of face recognition systems [15]. The rest of the paper is organized as: Section 2 will highlight some of the related work, Section 3 explains the methodology, Section 4 presents the results obtained, and Section 5 concludes with final remarks and discussion.
## 2 Prior Work
Morphing-based attacks against ABC systems were first identified by Ferrara et al. [6], where they demonstrated the hypothetical scenario of a malicious actor who travels on his friend's passport by means of a face morphing attack. After that, the topic of morphed image detection piqued the interest of researchers, resulting in a number of studies presenting different kinds of morph detection techniques. In this section, past works are presented where texture descriptors are used for morphing attack detection.
Ramachandra et al. [14] in 2016 proposed the first single image based morphed attack detection system. It relied on texture descriptor differences in bonafide and morphed images. The algorithm worked by obtaining a micro-texture variation using Binarized Statistical Image Features (BSIF) and then making the decision using a linear support vector machine. The same detection technique was tested in [17] against two databases of printed-scanned images. The reason for using print-scan images was to mimic the image quality in a visa application process as the visa application process in many countries requires submitting printed images that later get scanned to be saved in the system [16]. The results showed that the detection performance of this technique dropped compared to the digital images.
Spreeuwers et al. in [19] presented another MAD technique that was based on local binary patterns (LBP). Experiments were conducted on multiple databases and with different morphing algorithms to test the robustness of the proposed method. The results obtained were comparable to the BSIF-based method on one dataset, but the same performance could not be observed while testing on multiple datasets.
The application of Fourier spectrum analysis on different facial characteristics was first suggested by Ndeh de Mbah in [10]. This approach is based on analyzing the power density of six identified facial features. The decision of whether an image is morphed or bonafide is based on the total score obtained from the six classifiers. Experiments were conducted on two databases where the results varied greatly depending on the dataset used. However, the reasoning behind the difference in results was not addressed in the paper.
In this paper, we focus on analyzing the eyebrow region in the frequency domain to distinguish between a morphed and bonafide image. We use bonafide and morphed images from two different datasets containing ICAO-compliant printed scanned bonafide images and their morphs created using two morphing algorithms to test our approach. The proposed method is based on single-image detection that can also be used in a differential detection scenario.
## 3 The Proposed Method
The proposed method is a texture-based detection method where the smoothness property of the eyebrow region is studied to distinguish a morphed image from a bonafide image. The eyebrow region due to the presence of hairs is expected to have high-frequency content present in the Fourier domain. The study aims to find if this high-frequency content is lost due to the smoothening effect in a morphed image making it suitable to differentiate morphed images from bonafide images.
The experiments were based on developing a segmentation technique to crop the eyebrow region from a face image and then analyzing the segmented region in the frequency domain. The different steps of the experiment pipeline are described in the following subsections:
### Eyebrow region segmentation
For eyebrow region segmentation, we used Dlib's facial landmark detector to locate the eyebrow region in a face image. The pre-trained shape predictor provided by Dlib was used in this study [4] which is based on the dataset from [8]. The Dlib shape predictor returns an array of length 68 containing coordinates of different facial features including the eyebrows. Eyebrows are marked by 10 array values with each eyebrow represented by 5 coordinates (Fig. 1a). These eyebrow coordinates are then used to find the limits of the eyebrow region and crop a rectangle around the region as shown in Fig. 1. This segmentation technique based on Dlib's landmark detector proved to be very effective for the ICAO-compliant images from the datasets.
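A minimal sketch of this segmentation step is shown below. It is our own illustration, assuming dlib's pre-trained 68-point predictor file is available locally; in the 68-point scheme, indices 17-26 are the two eyebrows (5 points each), and the `margin` parameter is a hypothetical padding not specified in the text.

```python
import cv2
import dlib
import numpy as np

# Minimal sketch of the eyebrow crop (our illustration). It assumes the
# pre-trained 68-point predictor file is available locally; indices 17-26
# are the two eyebrows, and `margin` is a hypothetical padding.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def crop_eyebrows(image, margin=10):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    face = detector(gray, 1)[0]                  # assume one frontal face
    shape = predictor(gray, face)
    pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(17, 27)])
    x0, y0 = np.maximum(pts.min(axis=0) - margin, 0)
    x1, y1 = pts.max(axis=0) + margin            # rectangle around both brows
    return image[y0:y1, x0:x1]

region = crop_eyebrows(cv2.imread("face.png"))
```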
### Pre-processing
In this step, we prepare the cropped region for the frequency domain analysis. First, the image is converted to grayscale to be processed by the next stages. Converting the images to grayscale is important for the system to be used in a differential morph detection setting because some border control cameras only provide the trusted live capture image in grayscale [18].
After converting the images to grayscale, the contrast of the cropped eyebrow region is increased to enhance the variations in the cropped image. Contrast enhancement is done through black clipping and white clipping. It works by converting 1% percentage of the darkest grey pixels to black (black clipping) and 5% of the brightest grey pixels to white (white clipping), and then the rest of the gray pixels are scaled between the highest and the lowest values.
Since bonafide images are expected to have more variations, contrast enhancement is expected to further enhance these variations. Contrary to this, the morphed images, because of their smoothened nature, will be relatively less affected by this step. This phenomenon is also shown in the results in section 4.2.1. Fig. 2 shows the image after going through the pre-processing step.
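The contrast step can be sketched as follows; the 1%/5% clipping percentages follow the text, while the implementation details are our own assumption.

```python
import numpy as np

# Sketch of the contrast step (our implementation of the text's recipe):
# clip the darkest 1% of grey values to black and the brightest 5% to
# white, then rescale the remaining values linearly.
def stretch_contrast(gray, black_pct=1.0, white_pct=5.0):
    lo = np.percentile(gray, black_pct)
    hi = np.percentile(gray, 100.0 - white_pct)
    out = (gray.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```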
### Fourier analysis
As explained earlier, the idea is to differentiate morphed images from bonafide images by observing the smoothening of eyebrows in morphed images. In a sharp image, hairs in the eyebrows can be observed as edges separate from each other. These edges are represented by the high-frequency content in the frequency domain.
Figure 1: Cropping the eyebrow region
Figure 2: Preprocessing the cropped image
Next, the 2D Fourier transform of the preprocessed image is calculated to get the frequency representation of the image. Since the interest here is in the strength of the frequency content, only the magnitude of the Fourier transform is considered in the analysis. Fig. 3 shows the averaged DFT magnitude spectra of the eyebrow region of bonafide images and morphed (FaceFusion) images from the FRGC dataset.
The plots are shifted to move the values associated with zero frequency to the middle so that the frequency increases as we move away from the origin. It can be observed from the DFT magnitude spectra that the outer circle for bonafide images is bigger than the morphed images indicating a wider spread out of high-frequency content in bonafide images. In our approach, we exploit this difference in the frequency content to distinguish between morphed and bonafide images.
### Calculating frequency content
Once we have the Fourier spectrum, the next step is to establish a way of calculating the frequency content. We calculated the frequency content by taking the sum of the complete magnitude spectrum. The normalized sum calculated in our testing is expressed by Eq. (1).
\[sum=\frac{1}{MN}\sum_{n=1}^{N}\sum_{m=1}^{M}f(n,m) \tag{1}\]
Here, \(f\) is an \(M\times N\) array of the 2-dimensional DFT magnitude of the image, and \(M\) and \(N\) are the length and width of the cropped eyebrow region, respectively.
Figure 3: Averaged DFT magnitude spectra of eyebrow regions of bonafide and morphed images
The sum of coefficients is divided by the number of pixels in the cropped region as a normalization, because the size of the cropped region can vary among different people and between images of different resolutions.
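Putting the Fourier step and Eq. (1) together, the per-image score can be sketched as below. This is our own illustration; how the decision threshold on this score is set is an assumption not detailed here.

```python
import numpy as np

# Sketch of the score of Eq. (1) (our illustration): the normalized sum of
# the DFT magnitude of the pre-processed eyebrow crop. Bonafide (sharper)
# regions are expected to score higher; the decision threshold would be
# tuned on a training split.
def frequency_score(gray_crop):
    spectrum = np.abs(np.fft.fft2(gray_crop.astype(np.float64)))
    return spectrum.sum() / gray_crop.size       # (1/MN) * sum |f(n, m)|
```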
## 4 Experimental Results
In the following subsections, the datasets used in the experimentation, evaluation metrics, the experiment setup, and results are presented.
### Datasets
The morphed and bonafide images used in this experimentation are taken from [18]. The bonafide images belong to two different datasets i.e. FERET [12] and FRGCv2 [11]. As described in [18], morphs are created by choosing the two subjects among the same dataset. In addition, the subjects are chosen based on their sex and whether they are wearing glasses.
622 bonafide images from the FERET dataset and 1440 images from the FRGC dataset were used in this experiment. Both the bonafide and morph images are post-processed by passing through a print and scan pipeline to mimic the post-processing steps followed in a passport application process.
The morphed images are created using FaceFusion [5] and UBO Morpher [7]. More information about the images used is provided in Table 1. Figures 5 and 4 show sample images from the FERET and FRGCv2 datasets and their morphs created using FaceFusion and UBO Morpher.
### Experiment setup and results
The experiments were carried out under different settings and with various datasets to fine-tune and evaluate the reliability of the proposed method. The final results are hereby reported in this section.
#### 4.2.1 Effect of increasing contrast
For the preprocessing step, we experimented with increasing the contrast of the cropped eyebrow region. Since image contrast can be used to enhance the differentiation of the textures present in the image, the idea is that this can further increase the frequency content of the bonafide images as compared to the morphed images. Figure 6 shows the DET curves obtained by varying the contrast on the FRGC images. The ISO metrics are presented in Table 2, where it can be seen that the assumption is correct, showing that increasing the contrast helps improve the efficiency of the system.
#### 4.2.2 Effect of cropping low-frequency content
Since the eyebrows are associated with the presence of high-frequency content, we experimented with cropping the low-frequency coefficients in the frequency spectrum. The experiments were conducted by ignoring 0%, 5%, and 10% of the low-frequency region in the calculations. The results are presented in Table 3, which shows that cropping the low-frequency region slightly reduces the performance of morphed image detection. Since the results go against the proposed idea, this step was not incorporated into the final proposed algorithm.
#### 4.2.3 Final results on different datasets
Table 4 shows the results of applying the proposed scheme on the two datasets separately and then combining them. It can be seen that the proposed scheme gives a much better result with the FRGC dataset than with the FERET dataset. The D-EER of 6.5% obtained with the FRGCv2 dataset is considerably lower than the D-EER of 22.2% on the FERET dataset.
On inspecting the images, it was found that the FERET images have lower quality compared to the FRGC dataset. This can also be attributed to the fact that the FERET images are relatively old (from 2011) compared to the FRGCv2 images (from 2014). Hence, due to the lower resolution of the images in the FERET dataset, it is harder to differentiate between a bonafide image and a morphed image. This claim is supported by the sizes of files from both datasets as shown in Table 5. These results are also supported by the findings in [18], where all MAD algorithms based on texture descriptors gave better results for the FRGC dataset compared to FERET.
\begin{table}
\begin{tabular}{c c c c} \hline Crop (\%) & D-EER (\%) & BPCER10 (\%) & BPCER20 (\%) \\ \hline
0 & 6.5 & 4.2 & 9.6 \\
5 & 6.6 & 3.8 & 9.9 \\
10 & 6.7 & 4.1 & 9.8 \\ \end{tabular}
\end{table}
Table 3: Effect of ignoring low-frequency component
Figure 6: DET curves by varying contrast
#### 4.2.4 Comparison with previous work
In this section, the results are presented by comparing them with previous S-MAD techniques. For comparing the results with [10], experiments are performed on the FRGC dataset by dividing the dataset into training and testing subsets. The resulting error rates are presented in Table 6. These also include the ACER, which is the average classification error rate [10]. These error rates are higher than the ones obtained in [10], showing that the proposed scheme does not improve the detection performance. However, if we consider the overall performance on different datasets, the proposed scheme gives a lower D-EER of 22.2% in comparison with the ACER of 38.24% in the other paper.
\begin{table}
\begin{tabular}{l c c c} \hline Dataset & D-EER (\%) & BPCER10 (\%) & BPCER20 (\%) \\ \hline FRGCv2 & 6.5 & 4.2 & 9.6 \\ FERET & 22.2 & 38.2 & 51.7 \\ Combined & 14.2 & 17.02 & 23.2 \\ \end{tabular}
\end{table}
Table 4: Detection performance with different datasets
Figure 7: DET curves obtained by applying the proposed scheme on different datasets
## 5 Conclusion and Discussion
The results indicate the effectiveness of the proposed method in detecting morphed images. The frequency spectrum analysis of the eyebrow region proves to be a promising approach for the detection of morphed images in light of the results. In addition to the S-MAD scenario, the proposed method can also be applied in a D-MAD system where a reduction in error rate is expected. Even though the detection capabilities were found to be dependent on the choice of dataset, these results were expected given the quality of images varied between the datasets, as explained in the results section. However, the detection capabilities were found to be robust against two different kinds of morphing techniques.
There are some limitations of this approach that need to be investigated further. 1) People with certain diseases may have no eyebrows. Our datasets do not contain any such cases, and hence the behavior of the segmentation method and the frequency analysis in these cases can be studied to improve this method. 2) The morphed images used in the experiments were created by automated morphing algorithms. For an attacker to conduct a successful attack, a manually generated image is sufficient. So, further analysis can be made to study the possibility of altering a morphed image to bypass the proposed detection scheme.
|
2308.03082 | Simulation of IBM's kicked Ising experiment with Projected Entangled
Pair Operator | We perform classical simulations of the 127-qubit kicked Ising model, which
was recently emulated using a quantum circuit with error mitigation [Nature
618, 500 (2023)]. Our approach is based on the projected entangled pair
operator (PEPO) in the Heisenberg picture. Its main feature is the ability to
automatically identify the underlying low-rank and low-entanglement structures
in the quantum circuit involving Clifford and near-Clifford gates.
We assess our approach using the quantum circuit with 5+1 trotter steps which
was previously considered beyond classical verification. We develop a Clifford
expansion theory to compute exact expectation values and use them to evaluate
algorithms. The results indicate that PEPO significantly outperforms existing
methods, including the tensor network with belief propagation, the matrix
product operator, and the Clifford perturbation theory, in both efficiency and
accuracy. In particular, PEPO with bond dimension $\chi=2$ already gives
similar accuracy to the CPT with $K=10$ and MPO with bond dimension
$\chi=1024$. And PEPO with $\chi=184$ provides exact results in $3$ seconds
using a single CPU.
Furthermore, we apply our method to the circuit with 20 Trotter steps. We
observe the monotonic and consistent convergence of the results with $\chi$,
allowing us to estimate the outcome with $\chi\to\infty$ through
extrapolations. We then compare the extrapolated results to those achieved in
quantum hardware and with existing tensor network methods. Additionally, we
discuss the potential usefulness of our approach in simulating quantum
circuits, especially in scenarios involving near-Clifford circuits and quantum
approximate optimization algorithms. Our approach is the first use of PEPO in
solving the time evolution problem, and our results suggest it could be a
powerful tool for exploring the dynamical properties of quantum many-body
systems. | Hai-Jun Liao, Kang Wang, Zong-Sheng Zhou, Pan Zhang, Tao Xiang | 2023-08-06T10:24:23Z | http://arxiv.org/abs/2308.03082v1 | # Simulation of IBM's kicked Ising experiment with Projected Entangled Pair Operator
###### Abstract
We perform classical simulations of the 127-qubit kicked Ising model, which was recently emulated using a quantum circuit with error mitigation [Nature 618, 500 (2023)]. Our approach is based on the projected entangled pair operator (PEPO) in the Heisenberg picture. Its main feature is the ability to automatically identify the underlying low-rank and low-entanglement structures in the quantum circuit involving Clifford and near-Clifford gates.
We assess our approach using the quantum circuit with 5+1 trotter steps which was previously considered beyond classical verification. We develop a Clifford expansion theory to compute exact expectation values and use them to evaluate algorithms. The results indicate that PEPO significantly outperforms existing methods, including the tensor network with belief propagation, the matrix product operator, and the Clifford perturbation theory, in both efficiency and accuracy. In particular, PEPO with bond dimension \(\chi=2\) already gives similar accuracy to the CPT with \(K=10\) and MPO with bond dimension \(\chi=1024\). And PEPO with \(\chi=184\) provides exact results in 3 seconds using a single CPU.
Furthermore, we apply our method to the circuit with 20 Trotter steps. We observe the monotonic and consistent convergence of the results with \(\chi\), allowing us to estimate the outcome with \(\chi\rightarrow\infty\) through extrapolations. We then compare the extrapolated results to those achieved in quantum hardware and with existing tensor network methods. Additionally, we discuss the potential usefulness of our approach in simulating quantum circuits, especially in scenarios involving near-Clifford circuits and quantum approximate optimization algorithms. Our approach is the first use of PEPO in solving the time evolution problem, and our results suggest it could be a powerful tool for exploring the dynamical properties of quantum many-body systems.
## I Introduction
A recent experiment [1] provided evidence supporting the utility of quantum computing before fault tolerance. This was accomplished through the zero-noise extrapolated quantum simulation of the kicked Ising model, using up to 127 qubits. By comparing with the matrix product state (MPS) and isometric tensor network state (isoTNS) simulations [1], it was shown that IBM's quantum hardware delivered more accurate results when the expectation values can be verified against exact values with 5 Trotter steps.
Recently, several novel classical algorithms have emerged, aiming to challenge the efficacy of quantum simulations. These include the belief propagation tensor network state (BP-TNS) [2], the Heisenberg matrix product operator (MPO) [3], the Clifford perturbation theory (CPT) [4], the 31-qubit subset simulation [5], and observable's back-propagation on Pauli paths (OBPPP) [6]. These classical algorithms can compute the expectation values more accurately in the verifiable regime with 5 Trotter steps utilizing only moderate computational resources. However, the results of various methods exhibited around a 20% deviation for a quantum circuit with 20 Trotter steps within the regime of \(\pi/8\leq\theta_{h}\leq 3\pi/8\) [3]. This discrepancy suggests that the accurate results with \(\theta_{h}\) away from \(\pi/2\) remain unclear, and it is difficult to assess the accuracy of IBM's quantum hardware in that parameter regime.
In this work, we map the expectation computation to the contraction problem of a tensor network with the observable operator in the middle of the network. We propose to contract the tensor network based on the PEPO representation of the Heisenberg evolution operator. It applies the single-qubit and two-qubit rotation gates to the operator layer by layer from the middle to the boundary of the tensor network. Compared with other tensor-network methods, our approach can automatically detect the light-cone structure (i.e., the funnel shape) of the tensor network, the intrinsic low-rank structure of the circuit involving Clifford ZZ-rotation gates, and the low-entanglement structure when the X-rotation gates are close to the Clifford limit. It completely avoids the use of long-range operators and swap operations. Consequently, our approach can accurately simulate IBM's 127-qubit quantum circuit.
To quantitatively demonstrate the performance of our method, we use IBM's kicked Ising model with \(5+1\) Trotter steps (corresponding to Fig. 4a in Ref. [1]) as a benchmark. For this particular system, we propose an exact Clifford expansion theory to simplify the quantum circuit and manage to obtain exact results for different expectation values. Remarkably, the quantum circuit with \(5+1\) Trotter steps has been considered to lie beyond the classically verifiable regime and thus has not been used for evaluating algorithms in previous works, due to the lack of exact results. Based on this benchmark, we further show that our method is significantly more accurate than the quantum hardware with error mitigation and other existing tensor network algorithms. Our approach reaches results exact up to rounding error in less than 3 seconds on a single CPU.
The paper is organized as follows. In Sec. II, we describe the kicked Ising model and IBM's quantum circuits. In Sec. III, we introduce our PEPO method and compare it with other tensor network approaches. In Sec. IV, we apply the method to IBM's quantum circuit and present the results for the quantum circuits with \(5+1\) and \(20\) Trotter steps. We conclude in Sec. V.
## II IBM's kicked Ising experiment
A recent experiment was carried out by IBM in simulating the dynamics of the transverse-field Ising model (kicked Ising model) on a two-dimensional heavy-hexagon lattice (as illustrated in Fig. 1) using a 127-qubit quantum circuit [1]. The experiment demonstrated evidence for the utility of quantum computing before fault tolerance using error mitigation. The quantum circuit simulates the kicked Ising model with \(T\) steps of unitary evolutions
\[U_{T}(\theta_{h})=\left[R_{ZZ}R_{\mathrm{X}}(\theta_{h})\right]^{T}, \tag{1}\]
where in each step the unitary evolution is composed of the Clifford gates on each edge \(\langle i,j\rangle\), and the X-rotation gates on each qubit
\[R_{\mathrm{ZZ}} =\prod_{\langle i,j\rangle}\exp\left(\mathrm{i}\tfrac{\pi}{2}Z_ {i}Z_{j}\right), \tag{2}\] \[R_{\mathrm{X}}(\theta_{h}) =\prod_{i}\exp\left(-\mathrm{i}\tfrac{\theta_{h}}{2}X_{i}\right). \tag{3}\]
Notice that the X-rotation gates are not Clifford except at \(\theta_{h}=k\pi/2\) with \(k\) an integer.
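As a concrete illustration of Eqs. (1)-(3), the following minimal numpy sketch builds one Trotter step restricted to a single edge of the lattice; the value of \(\theta_{h}\) is an arbitrary placeholder, and the full 127-qubit circuit simply repeats these gates over all edges and qubits.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def r_x(theta):
    """Single-qubit X rotation exp(-i theta/2 X), Eq. (3)."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X

def r_zz():
    """Two-qubit Clifford gate exp(i pi/2 Z_i Z_j) on one edge, Eq. (2).

    Since (ZZ)^2 = I, exp(i pi/2 ZZ) = cos(pi/2) I + i sin(pi/2) ZZ = i ZZ.
    """
    return 1j * np.kron(Z, Z)

theta_h = np.pi / 4  # placeholder value away from the Clifford points
step = r_zz() @ np.kron(r_x(theta_h), r_x(theta_h))  # one step on one edge

assert np.allclose(step.conj().T @ step, np.eye(4))  # unitarity check
```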
In Ref. [1], the authors simulated the expectation values using the quantum hardware with error mitigation and compared the results against tensor network algorithms in _three settings_:
1. The circuit is shallow, with depth restricted to \(T=5\) Trotter steps, and the observable is carefully chosen such that the expectation value can be computed exactly. By comparison with the exact results, Ref. [1] shows that the results of quantum hardware are very close to the exact ones, much more accurate than results obtained using MPS and isoTNS even with large bond dimensions.
2. The circuit has 5 Trotter steps with an additional layer of \(R_{X}\) gates and effectively simulates the time evolution after 6 Trotter steps; we therefore refer to it as a circuit with \(5+1\) Trotter steps. In this case, the expectation values are much more difficult to compute than in the system with 5 steps, and previous studies [1; 2; 3; 4] considered the circuit to be beyond exact verification.
3. The circuit is deep, with \(T=20\) Trotter steps. This setting is not classically verifiable. In Ref. [1], a large deviation between the experimental data and the results of MPS and isoTNS is observed.
It was reported that in settings 2 and 3, the hardware results for the expectation values significantly deviate from the tensor network results, demonstrating the utility of near-term quantum devices with error mitigation in the regime of strong entanglement, where canonical tensor network methods break down. Soon after Ref. [1] was published, several novel classical algorithms were proposed [2; 3; 4], reporting that advanced tensor network algorithms outperform the canonical tensor network methods used in Ref. [1] in setting 1. However, it has been noted that there is a large discrepancy among different methods in setting 3. The accuracy of the hardware results in settings 2 and 3 also remains unknown.
## III Heisenberg PEPO evolution
The time-dependent expectation of an operator \(\langle\hat{O}(t)\rangle\) can be calculated in the Schrodinger picture
\[\langle\hat{O}(t)\rangle=\langle\Psi(t)|\hat{O}|\Psi(t)\rangle, \tag{4}\]
or in the Heisenberg picture
\[\langle\hat{O}(t)\rangle=\langle\Psi|\hat{O}(t)|\Psi\rangle, \tag{5}\]
where \(|\Psi(t)\rangle=e^{-i\hat{H}t}|\Psi\rangle\) is the time-dependent quantum state, and \(\hat{O}(t)=e^{+i\hat{H}t}\hat{O}e^{-i\hat{H}t}\) is the time-dependent Heisenberg operator. In both pictures, the quantum state \(|\Psi(t)\rangle\) or the Heisenberg operator \(\hat{O}(t)\) can be represented using a tensor network such as an MPS or an MPO, and is evolved using an algorithm such as the time-evolving block decimation (TEBD) [7; 8], simple-update [9], or full-update [10] methods. When the entanglement of the tensor network becomes large enough, one needs to adopt approximate truncations on the virtual bonds of the tensor network to reduce the computational complexity of the algorithm. In the case of IBM's kicked Ising experiments, the MPS [1], isoTNS [1], and BP-TNS [2] methods belong to the Schrodinger picture, and the MPO method of [3] is conducted in the Heisenberg picture.
Figure 1: Layout of IBM’s 127-qubit quantum processor.
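The equivalence of Eqs. (4) and (5) is easy to check numerically. The following sketch, which is purely illustrative and not part of the original calculation, verifies it for a random three-qubit Hamiltonian and observable:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, t = 8, 0.7  # three-qubit Hilbert space, arbitrary evolution time

A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                       # random Hermitian Hamiltonian
B = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
O = (B + B.conj().T) / 2                       # random Hermitian observable
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)                     # random normalized state

w, v = np.linalg.eigh(H)
U = v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T  # U = exp(-i H t)

schrodinger = np.vdot(U @ psi, O @ (U @ psi))        # <Psi(t)|O|Psi(t)>
heisenberg = np.vdot(psi, U.conj().T @ O @ U @ psi)  # <Psi|O(t)|Psi>

assert np.allclose(schrodinger, heisenberg)
```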
Both pictures can be regarded as different contraction schemes of a (d+1)-dimensional tensor network corresponding to the time evolution of the d-dimensional quantum system. In the case of IBM's kicked Ising model, the qubits are located on a two-dimensional heavy-hexagon lattice, so the tensor network for computing the expectation value of an observable is a three-dimensional tensor network \(\mathcal{T}\) with the observable in the middle. The expectation value is computed by contracting this three-dimensional tensor network.
In the Schrodinger picture, the contraction is carried out from the boundary (corresponding to the initial state) to the middle (corresponding to the observable), while in the Heisenberg picture, the contraction is carried out from the middle to the two boundaries, which means that at each time step the evolution operator and its conjugate are applied simultaneously. The two pictures are mathematically equivalent if no approximation is introduced.
However, in practice, each picture has its advantages and disadvantages. The tensor network state in the Schrodinger picture typically has a much lower space complexity than the tensor network operator in the Heisenberg picture, allowing it to employ a much larger virtual bond dimension and obtain more accurate results. The Heisenberg picture, on the other hand, exploits the intrinsic structure in the form \(UOU^{\dagger}\) and may greatly simplify the tensor network calculation in some situations, e.g., when the entanglement generated by the unitary and its inverse partially cancels.
In this work, we propose to represent the observable in the three-dimensional tensor network \(\mathcal{T}\) using a PEPO in the Heisenberg picture and to contract \(\mathcal{T}\) by evolving the PEPO from the middle to the two boundaries. The compression of the tensors is performed using the simple update [9], together with an exact contraction of the final tensor network to obtain the expectation value. Compared with the BP-TNS approach [2], which introduces uncontrolled approximations due to the message passing, the error of our method is controlled via the truncation error of the singular value decompositions, and the method reproduces the exact results as \(\chi\to\infty\).
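The elementary compression step of the simple update is a truncated singular value decomposition on each virtual bond. A generic sketch of this operation is shown below; the tensor shape and bond dimension are placeholders, not the values used in our calculations.

```python
import numpy as np

def truncate_bond(theta, chi):
    """Split a bond matrix into two factors, keeping chi singular values.

    The discarded weight controls the truncation error of the simple update
    and vanishes as chi grows, so exact results are recovered at large chi.
    """
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    keep = min(chi, len(s))
    discarded = np.sum(s[keep:] ** 2) / np.sum(s ** 2)
    sqrt_s = np.sqrt(s[:keep])
    return u[:, :keep] * sqrt_s, sqrt_s[:, None] * vh[:keep], discarded

theta = np.random.default_rng(1).normal(size=(16, 16))
left, right, eps = truncate_bond(theta, chi=4)
print(left.shape, right.shape, eps)  # (16, 4) (4, 16) <discarded weight>
```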
The computational cost of each step of the evolution is \(\mathcal{O}(L\chi^{4})\), with \(L=144\) the number of edges of the heavy-hexagon lattice and \(\chi\) the virtual bond dimension. The computational cost of the exact tensor network contraction at the final step is \(\mathcal{O}(\chi^{6})\). Although at first glance the computational complexity with respect to the bond dimension \(\chi\) looks much higher than that of MPS [1], isoTNS [1], BP-TNS [2], and MPO [3], we observe that the computation is more effective than MPS, isoTNS, BP-TNS, and MPO for two reasons.
1. PEPO reflects the two-dimensional geometry of the heavy-hexagon lattice, so all the time-evolution operators are local. In contrast, in the one-dimensional MPO representation [3], some of the time evolution operators are long-ranged, so the use of SWAP operations is inevitable in MPO, reducing its efficiency and accuracy.
2. In addition to the cancellation effect of conjugate unitary gates, the PEPO in the Heisenberg picture can automatically capture the intrinsic low-rank structure due to the presence of Clifford \(ZZ\)-rotation gates and the approximate low-entanglement structure induced by the \(X\)-rotation gates with \(\theta_{h}\) close to \(\pi/2\). This can dramatically reduce the computational cost and enhance the algorithm's effectiveness. As a simple example, PEPO with \(\chi=1\) can obtain exact results at the Clifford points, for instance at \(\theta_{h}=\pi/2\), as illustrated in Fig. 2 (right). In contrast, the MPS, isoTNS, and BP-TNS methods never encounter the form \(UOU^{\dagger}\) and hence cannot detect the low-rank structure at all, giving completely wrong results at the Clifford points [1]. A one-qubit illustration of this low-rank structure is sketched below.
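In the Heisenberg picture, one X rotation maps \(Z\) to \(\cos\theta\,Z+\sin\theta\,Y\), a sum of two Pauli strings in general but a single Pauli string exactly at the Clifford point \(\theta_{h}=\pi/2\), where a bond dimension of one suffices. The following one-qubit check is illustrative only:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def r_x(theta):
    """Single-qubit X rotation exp(-i theta/2 X)."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X

for theta in [np.pi / 8, np.pi / 4, np.pi / 2]:
    # Heisenberg evolution of Z under one X-rotation gate: R^dagger Z R
    z_evolved = r_x(theta).conj().T @ Z @ r_x(theta)
    assert np.allclose(z_evolved, np.cos(theta) * Z + np.sin(theta) * Y)

# at the Clifford point the operator collapses to the single Pauli Y
assert np.allclose(r_x(np.pi / 2).conj().T @ Z @ r_x(np.pi / 2), Y)
```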
## IV Results
### Circuit with \(5+1\) Trotter steps
We first present the results obtained on shallow circuits, focusing on Setting 2: the shallow circuits with \(5+1\) Trotter steps. We choose Setting 2 because it is more difficult to compute than Setting 1, so the differences between algorithms can be demonstrated more clearly. In previous studies, only Setting 1 was used to compare errors because Setting 2 was considered unverifiable.
Here we show that Setting 2 is also verifiable. We propose an exact Clifford expansion theory (CET) to reduce the depth of the circuit, followed by an exact contraction of the corresponding tensor network using the tensor slicing technique [11; 12]. Details about CET can be found in the Appendix. This technique allows us to rigorously compute the expectation value of \(\tilde{W}_{17}\) for this particular circuit with \(5+1\) steps. We do not invoke this technique in the PEPO tensor-network calculations.
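The tensor slicing technique replaces one large contraction by a sum of smaller contractions over the fixed values of a chosen bond index, trading memory for (parallelizable) compute. The toy example below illustrates only the idea, not the actual circuit contraction:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(8, 32))
B = rng.normal(size=(32, 8))

direct = A @ B  # contract the shared bond in one shot

# sliced contraction: fix the shared index, contract each slice, sum up;
# each slice needs far less memory and the slices can run in parallel
sliced = sum(np.outer(A[:, k], B[k, :]) for k in range(32))

assert np.allclose(direct, sliced)
```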
Figure 2 shows the calculated results. In the left panel, we see that the IBM measurement, MPS, and CPT results deviate clearly from the exact results, while our PEPO results obtained with just a small bond dimension \(\chi=2\) already agree better with the exact values. The right panel of Fig. 2 compares the absolute errors of the results obtained with different algorithms, showing clearly that PEPO with \(\chi=2\) has accuracy similar to MPO with \(\chi=1024\) and CPT with \(K=10\). This indicates that our Heisenberg PEPO method can automatically detect the intrinsic structure of the Clifford gates. Furthermore, by taking \(\chi=184\), we find that the errors of the PEPO results already fall below the rounding error of double-precision floating-point numbers. Precisely at the Clifford point \(\theta_{h}=\pi/2\), the error of the \(\chi=2\) PEPO result drops to zero, indicating that our approach perfectly captures the low-rank structure of the Clifford gates. The computation time for each point is less than 3 seconds for \(\chi=184\) using one CPU. In our PEPO calculation, we directly evolve the tensor-network operator using the simple update, starting from the original circuit and without using any information obtained from the manual Clifford expansions. This clearly shows that the PEPO method can detect the low-entanglement structure of the circuit automatically.
### Circuit with \(20\) Trotter steps
Here we conduct numerical experiments on deep circuits with \(20\) Trotter steps. Figure 3 (left) shows the expectation value \(\langle Z_{62}\rangle\) computed using PEPO with different \(\chi\). We find that \(\langle Z_{62}\rangle\) converges very quickly with increasing \(\chi\) and becomes nearly \(\chi\)-independent in the regimes \(\theta_{h}\leq\pi/8\) and \(\theta_{h}\geq 5\pi/16\). In the intermediate regime, \(\pi/8<\theta_{h}<5\pi/16\), \(\langle Z_{62}\rangle\) shows visible variation with \(\chi\) due to the rapidly increasing entanglement of the PEPO with the number of Trotter steps. However, in this regime \(\langle Z_{62}\rangle\) varies monotonically with increasing \(\chi\), unlike the results obtained with MPO, allowing us to reliably estimate the value of \(\langle Z_{62}\rangle\) by extrapolation to the limit \(\chi\rightarrow\infty\).
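The extrapolation can be done, for example, with a linear fit in \(1/\chi\) of the largest bond dimensions; the sketch below uses hypothetical \((\chi,\langle Z_{62}\rangle)\) pairs purely for illustration, not our actual data, and the linear ansatz is an assumption.

```python
import numpy as np

# hypothetical (chi, <Z_62>) pairs in the intermediate regime -- not real data
chis = np.array([64.0, 96.0, 128.0, 184.0])
vals = np.array([0.110, 0.118, 0.122, 0.126])

# assumed ansatz <Z_62>(chi) ~ a/chi + b; the intercept b estimates chi -> infinity
a, b = np.polyfit(1.0 / chis, vals, deg=1)
print(f"extrapolated <Z_62> at chi -> infinity: {b:.4f}")
```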
Figure 3 (right) compares our results with the IBM measurement data after error mitigation and with those published by other calculations. In the regime \(\theta_{h}\leq\pi/8\), the results of all approximate algorithms agree well with each other. In the regime \(\theta_{h}>5\pi/16\), the results of PEPO, the Google 31-qubit simulation, CPT, and MPO all converge to \(0\), while the isoTNS, BP-TNS, and MPS results deviate from zero significantly. In this regime, the computation is relatively easy because the X-rotation gates are close to the Clifford limit. The deviation of MPS, isoTNS, and BP-TNS arises because these methods cannot detect the entanglement structure even in the near-Clifford limit. In the intermediate regime, \(\pi/8<\theta_{h}<5\pi/16\), a discrepancy is observed between the results obtained with different methods. The classical simulation becomes challenging in this regime because the X-rotation gates deviate significantly from the Clifford limit, and the entanglement becomes strong. Notably, PEPO gives considerably greater values than the other results in this regime. Before extrapolation, the PEPO results increase with increasing \(\chi\), and their differences with the CPT and IBM measurement results also grow with increasing \(\chi\). However, due to the strong entanglement and the non-verifiable nature of this regime, we cannot tell which method is more accurate here.
## V Discussion and conclusion
We have developed an accurate and efficient approach for simulating the discretized dynamics of the kicked Ising model first investigated on a 127-qubit quantum circuit in Ref. [1]. Our algorithm is based on the PEPO representation of the evolution operator in the Heisenberg picture. It automatically identifies the low-rank and low-entanglement structures in the circuit, which reduces the computational cost while increasing the accuracy of the simulation. For the quantum circuit with 5+1 Trotter steps, which was previously considered unverifiable, we propose an exact Clifford expansion scheme to evaluate the expectation values exactly. This expansion theory outperforms all other simulation methods for this system. Furthermore, we find that the PEPO method with a bond dimension \(\chi=2\) can already give results as accurate as CPT with \(K=10\) and MPO with bond dimension \(\chi=1024\). Finally, we apply the PEPO method to the deep circuit with 20 Trotter steps.
Figure 2: _Left_: The expectation value of the modified weight-17 stabilizer \(\tilde{W}_{17}\) obtained by PEPO, MPS [1], CPT [4], and IBM's quantum hardware with error mitigation (IBM) [1], compared against exact results, on the quantum circuit with 5+1 Trotter steps (5 Trotter steps with an additional layer of rotation gates, corresponding to Fig. 4(a) in Ref. [1]). _Right_: The absolute errors with respect to the exact results. The computation time of the PEPO method with \(\chi=184\) for obtaining a data point is less than 3 seconds using a single Intel Xeon Gold 6326 CPU.
Our findings reveal the remarkable effectiveness of the Heisenberg PEPO method in the calculation of dynamical evolution. This method shows promise for computing expectation values in quantum systems, and is especially useful for applications such as QAOA circuits, which bear similarities to the kicked Ising model investigated in this study. We intend to delve further into this direction in the future.
###### Acknowledgements.
An implementation of our algorithm can be found at [13]. We thank Garnet Kin-Lic Chan and Tomislav Begusic for providing the data in [14], and Sajant Anand, Abhinav Kandala, and Michael Zaletel for providing the data in [3]. This work is supported by the National Key Research and Development Project of China (Grants No. 2022YFA1403900 and No. 2017YFA0302901), the National Natural Science Foundation of China (Grants Nos. 11888101, 11874095, and 11974396), the Youth Innovation Promotion Association CAS (Grant No. 2021004), and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant Nos. XDB33010100 and XDB33020300).
|
2306.04241 | BRST Symmetry of Non-Lorentzian Yang-Mills Theory | We explore the realization of BRST symmetry in the non-Lorentzian Yang-Mills
Lagrangian within the context of Galilean and Carrollian Yang-Mills theory.
Firstly we demonstrate the nilpotent property of classical BRST transformations
and construct corresponding conserved charges for both cases. Then we analyze
the algebra of these charges and observe the nilpotent properties at the
algebraic level. The findings of this study contribute to a deeper
understanding of BRST symmetry in non-Lorentzian Yang-Mills Lagrangians and
provide insights into the algebraic properties of related conserved charges. | Minhajul Islam | 2023-06-07T08:31:20Z | http://arxiv.org/abs/2306.04241v2 | # BRST Symmetry of Non-Lorentzian Yang-Mills Theory
###### Abstract
We explore the realization of BRST symmetry in the non-Lorentzian Yang-Mills Lagrangian within the context of Galilean and Carrollian Yang-Mills theory. First, we demonstrate the nilpotent property of the classical BRST transformations and construct the corresponding conserved charges for both cases. Then we analyze the algebra of these charges and observe the nilpotent properties at the algebraic level. The findings of this study contribute to a deeper understanding of BRST symmetry in non-Lorentzian Yang-Mills Lagrangians and provide insights into the algebraic properties of the related conserved charges.
## 1 Introduction
Yang-Mills theories, named after Chen Ning Yang and Robert Mills [1], are quantum field theories (QFTs) that describe the behavior of elementary gauge bosons. They are the main ingredient of the Standard Model of particle physics, which is our best current understanding of the behavior of subatomic particles.
Our current comprehension of the physical world thus heavily relies on the framework of QFT. The textbook formulation of QFT is closely tied to the principles of relativistic physics, including Lorentz and Poincare symmetry. However, when examining real-life systems, it is often necessary to consider approximations and limits of the fundamental theory. In this paper, we will be interested in QFTs where Poincare symmetry is replaced by Galilean and Carrollian symmetries, which constitute the low- and high-energy sectors of relativistic QFTs.
To understand Galilean and Carrollian QFTs, we will adopt a group-theoretic approach, starting from the Poincare algebra and taking the limits of large c (speed of light) and small c. These limits yield two different symmetry algebras: the familiar Galilean algebra and the less familiar Carrollian algebra. In both limits, several counter-intuitive features emerge. The spacetime metrics degenerate, light-cones open up in the non-relativistic theory and close up in the Carrollian theory. Moreover, the symmetry algebra gets enhanced in both cases.
Galilean theories, which correspond to the limit of \(c\to\infty\) (where c is the speed of light), are important for various areas of physics such as condensed matter physics, non-AdS holography, and hydrodynamics. In this limit, the metric of spacetime degenerates, and the structure of spacetime changes from the usual Riemannian structure to a new one called Newton-Cartan spacetime [2; 3; 4; 5; 6]. A systematic route to non-relativistic physics is to start from a Poincare-invariant theory and expand it in a large-c expansion, which provides many insights into non-relativistic physics, such as the enhanced symmetry algebra and the actions at each order [6; 7; 8; 9].
The Carrollian limit is the opposite of the limit mentioned earlier, corresponding to \(c\to 0\). The Carroll algebra was first discussed in [10; 11]. It has recently become important in various applications, particularly in the understanding of flat space holography [12]. AdS/CFT duality is one of the most promising tools for understanding quantum gravity. When the radius of curvature is infinite, AdS spacetime becomes flat spacetime. Correspondingly, on the dual side, sending the speed of light to zero results in a Carrollian conformal field theory [13]. Some important references for holography for asymptotically flat spacetime are [12; 13; 14; 15; 16; 17; 18; 19]. The understanding of flat space holography recently has taken two different directions, viz. Celestial holography and Carrollian holography. Celestial holography relates gravity in 4d asymptotically flat spacetimes to a 2d CFT living on the celestial sphere [20; 21]. On the other hand, Carrollian holography relates 4d asymptotically flat gravity to 3d Carrollian CFTs living on the entire null boundary of 4d bulk spacetime [22; 23; 24; 25; 26; 27; 28; 29; 30]. Some works [31; 32; 33] connect both formalisms.
Carrollian physics appears on any null hypersurface, including the horizon of a black hole [34; 35]. Carrollian gravity may provide a tractable version of general relativity and may be useful in various physical contexts [36; 37]. Carrollian theory is also important in cosmology, inflation [38], for fluids flowing at very high velocities [39], fractons [40; 41; 42], and the study of flat physics in condensed matter systems [43]. The Carrollian limit of the string theory worldsheet leads to the very high energy tensionless regime of strings [44; 45; 46].
Before we begin our investigations of aspects of non-Lorentzian QFTs, we will quickly summarize previous research on Galilean and Carrollian gauge theories. Galilean electrodynamics was first studied a long time ago in [47]. Later, in [48; 49; 50], the authors discovered infinite-dimensional Galilean conformal symmetry in both Galilean abelian and Galilean Yang-Mills theory at the level of the equations of motion. More detailed work has been done on constructing actions for both Galilean abelian [51; 52] and Yang-Mills theory [53]. The quantum properties of Galilean scalar electrodynamics were examined in [54], and those of Galilean QED, scalar QED, and non-linear electrodynamics in [55; 56; 57; 58; 59].
In the Carroll case, conformal structures were studied at the level of the equations of motion in [22; 23; 17]. Ref. [60] presented the Carrollian action for the so-called electric abelian theory, which is an interacting field theory with a scalar field [61; 38]. Using the small-c expansion, the magnetic sector of Carrollian abelian theory has recently been constructed in [38], and the conformal structure of this magnetic action was analyzed. In [62], the authors constructed the off-shell Carrollian Yang-Mills theory in the Hamiltonian formulation. Finally, in [63], the action formulation for Carrollian Yang-Mills theory was constructed. In [64] the authors constructed Carrollian field theory using null reduction techniques.
BRST symmetry, first introduced by Becchi, Rouet, Stora, and Tyutin in the 1970s [65; 66], plays a crucial role in the study of gauge theories. BRST symmetry is a type of ghost symmetry, which means that it involves the introduction of additional degrees of freedom
that are not physical, but help to resolve the ambiguity in the system. BRST symmetry of \(SU(N)\) Yang-Mills theory is defined using a set of operators known as BRST operators, which act on the fields of the system and generate BRST symmetry transformations. BRST operators have the property that they are nilpotent, which means that they square to zero. This property is crucial in the construction of the BRST symmetry, as it ensures that the BRST transformations form a closed algebra. BRST symmetry of \(SU(N)\) Yang-Mills theory is important for the computation of physical observables in the system.
In this paper, we will construct the BRST symmetry for Galilean and Carrollian Yang-Mills theories. We incorporate important results and notation from our recent previous works, namely [53] and [63]. We first provide a brief review of these theories in Section 2. We then proceed to explore the BRST symmetry for Galilean field theories in Section 3.2, and in Section 3.3 we do the same for Carrollian field theories. Finally, in Section 4, we conclude the paper by summarizing our findings and discussing the implications of our results. Overall, our analysis sheds light on BRST symmetry in non-Lorentzian field theories and provides a foundation for future investigations in this area.
## 2 Brief review of Non-Lorentzian Yang-Mills
### Galilean Yang-Mills
In [53], we constructed the action for Galilean Yang-Mills theory by using the null reduction procedure. We give a brief review of it here. We write the Lagrangian density of Yang-Mills theory in \((d+1)\) dimensions in null coordinates:
\[\mathcal{L}_{YM}=-\frac{1}{4}\eta^{\tilde{\mu}\tilde{\rho}}\eta^{ \tilde{\nu}\tilde{\sigma}}F^{a}_{\tilde{\mu}\tilde{\nu}}F^{a}_{\tilde{\rho} \tilde{\sigma}}=-\frac{1}{4}\Big{[}2F^{a}_{ut}F^{a}_{tu}+F^{ija}F^{a}_{ij}+4F^ {a}_{ui}F^{ia}_{t}\Big{]}, \tag{1}\]
and perform null reduction along the null direction parametrized by the coordinate \(u\). We take the gauge field to be independent of the \(u\)-coordinate, _i.e._\(\partial_{u}A^{a}_{\tilde{\mu}}=0\), and decompose its components as
\[A^{a}_{u}=\phi^{a},\quad A^{a}_{t}=a^{a}_{t},\quad A^{a}_{i}=a^{ a}_{i}. \tag{2}\]
Then the null reduction gives the Lagrangian density in \(d\) spacetime dimensions as
\[\mathcal{L}_{GYM} = \Big{[}\frac{1}{2}(\partial_{t}\phi^{a}-gf^{abc}\phi^{b}a^{c}_{t})(\partial_{t}\phi^{a}-gf^{ade}\phi^{d}a^{e}_{t}) \tag{3}\] \[-\frac{1}{4}(\partial^{i}a^{ja}-\partial^{j}a^{ia}+gf^{ade}a^{id}a^{je})(\partial_{i}a^{a}_{j}-\partial_{j}a^{a}_{i}+gf^{abc}a^{b}_{i}a^{c}_{j})\] \[+(\partial_{i}\phi^{a}-gf^{abc}\phi^{b}a^{c}_{i})(\partial_{t}a^{ia}-\partial^{i}a^{a}_{t}+gf^{abc}a^{b}_{t}a^{ic})\Big{]},\]
where the subscript GYM stands for Galilean Yang-Mills. It can also be written in a compact form given by
\[\mathcal{L}_{GYM}=\frac{1}{2}D_{t}\phi^{a}D_{t}\phi^{a}+D_{i} \phi^{a}E^{ia}-\frac{1}{4}W^{ija}W^{a}_{ij}, \tag{4}\]
where \(D_{t}\), \(D_{i}\) are gauge-covariant derivatives and \(E^{ia}\), \(W^{a}_{ij}\) are field strength variables defined as
\[D_{t}\phi^{a}=\partial_{t}\phi^{a}-gf^{abc}\phi^{b}a^{c}_{t},\ \ \ D_{i}\phi^{a}=\partial_{i}\phi^{a}-gf^{abc}\phi^{b}a^{c}_{i}, \tag{5a}\] \[E^{ia}=\partial_{t}a^{ia}-\partial^{i}a^{a}_{t}+gf^{abc}a^{b}_{t}a^{ic},\ \ \ W^{a}_{ij}=\partial_{i}a^{a}_{j}-\partial_{j}a^{a}_{i}+gf^{abc}a^{b}_{i}a^{c}_{j}. \tag{5b}\]
The EOM for the Lagrangian (4) are given by
\[D_{t}D_{t}\phi^{a}+D_{i}E^{ia}=0, \tag{6a}\] \[D_{i}D_{i}\phi^{a}+gf^{abc}\phi^{b}D_{t}\phi^{c}=0,\] (6b) \[D_{t}D_{i}\phi^{a}-D_{j}W^{a}_{ji}-gf^{abc}\phi^{b}E^{ic}=0. \tag{6c}\]
We can also find these equations by doing the procedure of null reduction on relativistic equations.1
Footnote 1: The Lagrangian was also introduced in [67] to derive the EOM for Galilean Yang-Mills theory with \(U(N)\) gauge group obtained as an effective theory from non-relativistic open string theory.
If we put \(\phi^{a}=0\), the equations become
\[D_{i}E^{ia}=0,\ \ D_{j}W^{a}_{ji}=0. \tag{7}\]
An interesting point to re-emphasise is that the EOM, even with the fields \(\phi^{a}\) turned off, are different from the ones obtained by taking limits, as described earlier in the section. It is possible that one needs to consider scalings of the gauge coupling \(g\) in order to obtain these results2.
Footnote 2: A similar phenomenon was observed when constructing the action of Carrollian scalar electrodynamics in [23]
### Carrollian Yang-Mills
In [63], we analyzed the Carrollian limit of Yang-Mills theory and obtained electric and magnetic sectors, where one subsector of each contained non-abelian or self-interaction terms while the other subsector contained copies of the Carrollian abelian theory. In that paper, we obtained the Carrollian Yang-Mills actions by taking a small-\(c\) expansion of the Poincare-invariant Yang-Mills action, where different values of the parameter \(\delta\) lead to different sectors of Carrollian Yang-Mills theory.
All four sectors were found to be invariant under infinite Carrollian conformal algebra in 4 dimensions. Below we provide a brief review of the two non-trivial sectors, electric and magnetic, which are relevant to our current purpose.
#### Electric Action
The electric sector action, which has non-abelian terms, can be written in compact form:
\[\mathcal{L}_{0}=\frac{1}{2}\bigg{(}(\partial_{t}a^{a(0)}_{i}- \partial_{i}a^{a(0)}_{t})(\partial_{t}a^{a(0)}_{i}-\partial_{i}a^{a(0)}_{t})+ 2gf^{abc}(\partial_{t}a^{a(0)}_{i}-\partial_{i}a^{a(0)}_{t})a^{b(0)}_{t}a^{c( 0)}_{i}\] \[+g^{2}f^{abc}f^{ade}a^{b(0)}_{t}a^{c(0)}_{i}a^{d(0)}_{t}a^{e(0)}_ {i}\bigg{)}=\frac{1}{2}E^{a(0)}_{i}E^{a(0)}_{i}, \tag{8}\]
where \(E^{a(0)}_{i}=\partial_{t}a^{a(0)}_{i}-\partial_{i}a^{a(0)}_{t}+gf^{abc}a^{b(0)}_{t}a^{c(0)}_{i}\). The equations of motion following from the action are given by
\[\partial_{i}E^{a(0)}_{i}+gf^{abc}a^{b(0)}_{i}E^{c(0)}_{i} =D^{(0)}_{i}E^{a(0)}_{i}=0, \tag{11a}\] \[\partial_{t}E^{a(0)}_{i}+gf^{abc}a^{b(0)}_{t}E^{c(0)}_{i} =D^{(0)}_{t}E^{a(0)}_{i}=0, \tag{11b}\]
where \(D_{i}\mathcal{O}^{a}=\partial_{i}\mathcal{O}^{a}+gf^{abc}a^{b(0)}_{i} \mathcal{O}^{c}\), \(D_{t}\mathcal{O}^{a}=\partial_{t}\mathcal{O}^{a}+gf^{abc}a^{b(0)}_{t} \mathcal{O}^{c}\).
The gauge transformations under which the action (10) is invariant are given by
\[a^{a(0)}_{t}\to a^{a(0)^{\prime}}_{t}=a^{a(0)}_{t}+\frac{1}{g} \partial_{t}\alpha^{a}+f^{abc}a^{b(0)}_{t}\alpha^{c}, \tag{12a}\] \[a^{a(0)}_{i}\to a^{a(0)^{\prime}}_{i}=a^{a(0)}_{i}+\frac{1}{g} \partial_{i}\alpha^{a}+f^{abc}a^{b(0)}_{i}\alpha^{c}. \tag{12b}\]
This gauge transformation is the same as in the parent theory, but now we cannot write it in covariant form as in the relativistic theory, because, as in the non-relativistic theory, the metrics in Carrollian theory are degenerate and time and space are not on the same footing.
#### Magnetic Action
Now we discuss the magnetic sector. The next-to-leading-order (NLO) Lagrangian, taken from [63], contains both leading-order and NLO fields. From the expansion of the action, we have the NLO Lagrangian (the coefficient of \(c^{0}\)) as
\[\mathcal{L}^{(1)}=\big{(}D^{(0)}_{t}a^{a(1)}_{i}\big{)}E^{a(0)}_{i}-\big{(}D^ {(0)}_{i}a^{a(1)}_{t}\big{)}E^{a(0)}_{i}-\frac{1}{4}f^{ija(0)}f^{a(0)}_{ij}. \tag{13}\]
If we take the variation of the Lagrangian with respect to the next-to-leading-order fields \(a^{a(1)}_{t},a^{a(1)}_{i}\), we get Eq. (11), the leading-order equations of motion, as a property of this formalism. If we take the variation with respect to the leading-order fields \((a^{a(0)}_{t},a^{a(0)}_{i})\), the equations of motion are
\[D^{(0)}_{i}D^{(0)}_{i}a^{a(1)}_{t}-D^{(0)}_{i}D^{(0)}_{t}a^{a(1)} _{i}-gf^{abc}a^{b(1)}_{i}E^{c(0)}_{i} =0, \tag{14a}\] \[D^{(0)}_{t}D^{(0)}_{t}a^{a(1)}_{i}-D^{(0)}_{t}D^{(0)}_{i}a^{a(1) }_{t}-gf^{abc}a^{b(1)}_{t}E^{c(0)}_{i} -D^{(0)}_{k}f^{a(0)}_{ki}=0, \tag{14b}\]
where \(D^{(0)}_{k}f^{a(0)}_{ki}=\partial_{k}f^{a(0)}_{ki}+gf^{abc}a^{b(0)}_{k}f^{c(0)}_{ki}\). Although the action and the equations of motion look nice in compact form, they are not Carroll invariant. To make them Carroll invariant, we have to impose the constraint \(E^{a(0)}_{i}=0\) at the level of the action Eq. (13). Then the action becomes \(-\frac{1}{4}f^{ija(0)}f^{a(0)}_{ij}\) and the equations of motion reduce to \(D^{(0)}_{k}f^{a(0)}_{ki}=0\).
We can derive the Carroll-invariant magnetic sector from the relativistic Yang-Mills action if we introduce a Lagrange multiplier in the relativistic Lagrangian and then take the speed of light to zero. The relativistic Lagrangian with the Lagrange multiplier \(\xi^{a}_{i}\) and explicit factors of \(c\) is given by
\[\mathcal{L}=-\frac{c^{2}}{2}\xi^{a}_{i}\xi^{a}_{i}+\xi^{a}_{i}F^{a}_{0i}-\frac {1}{4}F^{a}_{ij}F^{a}_{ij}. \tag{15}\]
From here, we can get back to the usual Yang-Mills action if we integrate out the \(\xi_{i}\) fields. Now we can see that if we take the small-\(c\) limit here, we get
\[\mathcal{L}^{NLO}=\xi_{i}^{a}(\partial_{t}a_{i}^{a(0)}-\partial_{i }a_{t}^{a(0)})-\frac{1}{4}(\partial_{i}a_{j}^{a}-\partial_{j}a_{i}^{a})( \partial_{i}a_{j}^{a}-\partial_{j}a_{i}^{a})+gf^{abc}a_{t}^{b}a_{i}^{c}\xi_{i} ^{a}\] \[-gf^{abc}a_{i}^{b}a_{j}^{c}\partial_{i}a_{j}^{a}-\frac{1}{4}g^{2} f^{abc}f^{ade}a_{i}^{b}a_{j}^{c}a_{i}^{d}a_{j}^{e}=\xi_{i}^{a}E_{i}^{a}-\frac{1}{4 }f_{ij}^{a}f_{ij}^{a}. \tag{14}\]
The Lagrangian contains non-trivial self-interaction terms or non-abelian terms. The equations of motion of this action are
\[E_{i}^{a}=0,\quad D_{i}\xi_{i}^{a}=0,\quad D_{t}\xi_{i}-D_{j}f_{ji}=0. \tag{15}\]
Here we obtain the constraint \(E_{i}^{a(0)}=0\) as an equation of motion for the Lagrange multiplier \(\xi_{i}^{a}\). Below we will see the full spacetime symmetry of this action.
The action Eq.(14) is invariant under the gauge transformation
\[a_{t}^{a}\to a_{t}^{{}^{\prime}a}=a_{t}^{a}+\frac{1}{g}\partial_ {t}\alpha^{a}+f^{abc}a_{t}^{b}\alpha^{c}, \tag{16a}\] \[a_{i}^{a}\to a_{i}^{{}^{\prime}a}=a_{i}^{a}+\frac{1}{g}\partial_ {i}\alpha^{a}+f^{abc}a_{i}^{b}\alpha^{c},\] (16b) \[\xi_{i}^{a}\to\xi_{i}^{{}^{\prime}a}=\xi_{i}^{a}+f^{abc}\xi_{i}^{b }\alpha^{c}. \tag{16c}\]
The temporal and spatial components of the gauge field transform in the same way as in the electric sector. The Lagrange multiplier \(\xi_{i}^{a}\) transforms as a scalar in the adjoint representation of the underlying gauge group.
## 3 BRST Symmetry
### Yang-Mills theory
One of the most important techniques for the quantization of gauge theories is BRST quantization. The canonical quantization of the modified action obtained after the path integral formulation is important for defining physical states and for eliminating the gauge symmetry when restricting to the physical subspace of the Hilbert space.
The total gauge-fixed Lagrangian of the relativistic Yang-Mills theory including the gauge fixing and ghost terms is
\[\mathcal{L}=-\frac{1}{4}F^{\mu\nu a}F_{\mu\nu}^{a}-\frac{1}{2\xi}\big{(}\partial^{\mu}A_{\mu}^{a}\big{)}^{2}-\partial^{\mu}\bar{c}^{a}D_{\mu}c^{a}, \tag{17}\]
where \(D_{\mu}c^{a}=\partial_{\mu}c^{a}-gf^{abc}A_{\mu}^{b}c^{c}\). This gauge-fixed Lagrangian is not invariant under the gauge symmetry but is invariant under a global symmetry, _i.e._ the BRST symmetry. The BRST transformations are given by
\[\delta A_{\mu}^{a}=\frac{\omega}{g}D_{\mu}c^{a},\quad\delta c^{a}=-\frac{\omega}{2}f^{abc}c^{b}c^{c},\quad\delta\bar{c}^{a}=\frac{\omega}{g\xi}\big{(}\partial_{\mu}A^{\mu a}\big{)}, \tag{18}\]
where \(\omega\) is an anti-commuting constant parameter. The BRST invariance of the Lagrangian leads to many interesting consequences. It enables the covariant quantization of the Yang-Mills theory, including the gauge-fixing and ghost terms of the Lagrangian, and allows one to prove the unitarity of the S-matrix. The BRST invariance also leads to Ward-Takahashi identities; in Yang-Mills theory these are called Slavnov-Taylor identities.
### Galilean Yang-Mills theory
Performing the null reduction of the full Lagrangian (17), we obtain the NR Yang-Mills Lagrangian with the gauge-fixing and ghost terms:
\[\mathcal{L}_{Gfull}=\frac{1}{2}D_{t}\phi^{a}D_{t}\phi^{a}+D_{i} \phi^{a}E^{ia}-\frac{1}{4}f^{ija}f^{a}_{ij}-\frac{1}{2\xi}(\partial_{t}\phi^{a }+\partial^{i}a^{a}_{i})^{2}-gf^{abc}\partial_{t}\bar{c}^{a}c^{b}\phi^{c}\] \[-\partial_{i}\bar{c}^{a}(\partial_{i}c^{a}-gf^{abc}c^{b}a^{c}_{i}). \tag{12}\]
This Galilean Yang-Mills Lagrangian is invariant under the transformations
\[\delta\phi^{a}=\omega f^{abc}\phi^{b}c^{c},\,\delta a^{a}_{t}= \frac{\omega}{g}D_{t}c^{a},\,\delta a^{a}_{i}=\frac{\omega}{g}(D_{i}c^{a}),\] \[\delta c^{a}=-\frac{\omega}{2}f^{abc}c^{b}c^{c},\,\delta\bar{c}^{ a}=\frac{\omega}{g\xi}(\partial_{t}\phi^{a}+\partial_{i}a^{a}_{i}). \tag{13}\]
Here \(c^{a}\) is a Grassmannian variable and represents the ghost field. The transformations of the gauge fields contain the ghost field. We can see that the transformations of the gauge fields are actually the same as the gauge transformations, but with the gauge parameter replaced by the ghost field \(c^{a}\). Thus the Yang-Mills piece of the total Lagrangian is naturally invariant under the above transformations. Under these transformations the gauge-fixing and the ghost terms transform as
\[\delta\big{(}\mathcal{L}_{\mathcal{GF+GH}}\big{)}=gf^{abc}\partial_{t}\big{[} \frac{\omega}{g\xi}\big{(}\partial_{t}\phi^{a}+\partial_{i}a^{a}_{i}\big{)}c^{ b}\phi^{c}\big{]}-\partial_{i}\big{[}\frac{\omega}{g\xi}\big{(}\partial_{t}\phi^{a}+ \partial_{k}a^{a}_{k}\big{)}D_{i}c^{a}\big{]}, \tag{14}\]
which is a total divergence, thus showing invariance of the gauge-fixing and ghost terms.
In relativistic Yang-Mills theories, the BRST transformations are nilpotent, _i.e._\(\delta_{1}\delta_{2}\Phi^{a}=0\), where \(\Phi^{a}\) denotes all the fields in the theory. The nilpotency of the BRST operator is essential for defining physical states.
Let us check the nilpotency of the BRST transformations (13) in Galilean Yang-Mills theory:
\[\delta^{2}\phi^{a}=-\omega\delta(gf^{abc}c^{b}\phi^{c})=0,\quad \text{using Jacobi identity} \tag{15a}\] \[\delta^{2}a^{a}_{t}=\frac{\omega}{g}\delta(D_{t}c^{a})=0,\,\, \delta^{2}a^{a}_{i}=\frac{\omega}{g}\delta(D_{i}c^{a})=0. \tag{15b}\]
To see this we have to use the Jacobi identity for the structure constants
\[f^{cab}f^{ckl}+f^{cal}f^{cbk}+f^{cak}f^{clb}=0, \tag{16}\]
which is derived from the Lie algebra of the underlying gauge group
\[[T^{b},[T^{k},T^{l}]]+[T^{k},[T^{l},T^{b}]]+[T^{l},[T^{b},T^{k}]]=0. \tag{17}\]
For the ghost field, using the Jacobi identity we get
\[\delta^{2}c^{a}=-\frac{\omega}{2}\delta(f^{abc}c^{b}c^{c})=0. \tag{18}\]
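As a quick sanity check, the contracted Jacobi identity (16) can be verified numerically, e.g. for su(2), where \(f^{abc}=\epsilon^{abc}\); this sketch is illustrative and not part of the derivation.

```python
import numpy as np

# su(2) structure constants: f^{abc} = Levi-Civita symbol epsilon^{abc}
f = np.zeros((3, 3, 3))
for (a, b, c), sign in {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
                        (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.items():
    f[a, b, c] = sign

# f^{cab} f^{ckl} + f^{cal} f^{cbk} + f^{cak} f^{clb} = 0 for all a, b, k, l
jacobi = (np.einsum('cab,ckl->abkl', f, f)
          + np.einsum('cal,cbk->abkl', f, f)
          + np.einsum('cak,clb->abkl', f, f))
assert np.allclose(jacobi, 0.0)
```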
However, for the anti-ghost field, the transformations are nilpotent only upon using the equation of motion for the ghost field, _i.e._
\[\delta^{2}\bar{c}^{a}=\delta(\partial_{t}\phi^{a}+\partial^{i}a^{a}_{i})=0\,\, \,\text{upon using}\,\,\,\,\partial_{i}D^{i}c^{a}-gf^{abc}\partial_{t}(c^{b}\phi^{c})=0. \tag{19}\]
We have now seen that the Lagrangian is invariant under the BRST transformations (13) off-shell, but to get a fully nilpotent BRST operator we have to use the ghost equation of motion. To achieve nilpotency off-shell, we introduce an auxiliary field \(F^{a}\). The relevant part of the modified Lagrangian is then
\[\mathcal{L}_{\mathcal{GF+GH}}=\frac{\xi}{2}F^{a}F^{a}+\partial^{t}F^{a}\phi^{a}+\partial^{i}F^{a}a^{a}_{i}-gf^{abc}\partial_{t}\bar{c}^{a}c^{b}\phi^{c}+\partial_{i}\bar{c}^{a}(\partial_{i}c^{a}-gf^{abc}c^{b}a_{i}^{c}). \tag{23}\]
The transformations which keep this Lagrangian invariant are given below. The transformations of the gauge fields and the ghost field are the same as before; however, the transformation of the anti-ghost field is changed and written in terms of \(F^{a}\). The transformations are
\[\delta\phi^{a}=\omega f^{abc}\phi^{b}c^{c},\,\delta a_{t}^{a}= \frac{\omega}{g}D_{t}c^{a},\,\delta a_{i}^{a}=\frac{\omega}{g}(D_{i}c^{a}),\] \[\delta c^{a}=-\frac{\omega}{2}f^{abc}c^{b}c^{c},\,\delta\bar{c}^{ a}=\frac{\omega}{g}F^{a},\,\delta F^{a}=0. \tag{24}\]
Doing the same analysis as before and using the Jacobi identity of the structure constants, we see that these transformations are nilpotent for all the fields without using any equations of motion. The Lagrangian is invariant under these transformations without any total derivative term. Let us now calculate the current for the above transformations
\[J^{t} =-\omega f^{abc}\phi^{b}c^{c}D_{t}\phi^{a}+\frac{\omega}{g}D_{i} \phi^{a}D_{i}c^{a}+\omega F^{a}f^{abc}c^{b}\phi^{c}\] \[=-\omega f^{abc}\phi^{b}c^{c}\Pi_{\phi}^{a}+\frac{\omega}{g}\Pi_{ a_{i}}^{a}D_{i}c^{a}-\frac{\omega}{g}F^{a}\Pi_{\bar{c}}^{a}. \tag{25}\]
Then the BRST charge is
\[Q=\int d^{3}xJ^{t}. \tag{26}\]
The Lagrangian respects another symmetry, called the ghost scaling symmetry. In relativistic Yang-Mills theories, this symmetry is associated with the ghost number. The ghost scaling transformation is given by
\[\delta c^{a}=\epsilon c^{a},\,\delta\bar{c}^{a}=-\epsilon\bar{c}^{a}, \tag{27}\]
where \(\epsilon\) is a constant commuting parameter. The conserved charge corresponding to this symmetry is
\[Q_{c}=\int d^{3}xJ^{t}_{c}=\int d^{3}x\big{(}gf^{abc}\bar{c}^{a}c^{b}\phi^{c} \big{)}=-\int d^{3}x\bar{c}^{a}\Pi_{\bar{c}}^{a}. \tag{28}\]
The usual equal time (anti-) commutation relations are
\[\big{[}\phi^{a},\Pi_{\phi}^{b}\big{]}=i\delta^{ab}\delta^{3}(x-y),\,\big{[}a_{i}^{a},\Pi_{a_{k}}^{b}\big{]}=i\delta^{ab}\delta_{ik}\delta^{3}(x-y)\] \[\big{[}a_{t}^{a},\Pi_{a_{t}}^{b}\big{]}=i\delta^{ab}\delta^{3}(x-y),\,\big{[}F^{a},\Pi_{F}^{b}\big{]}=i\delta^{ab}\delta^{3}(x-y)\] \[\big{\{}c^{a},\Pi_{c}^{b}\big{\}}=i\delta^{ab}\delta^{3}(x-y),\,\big{\{}\bar{c}^{a},\Pi_{\bar{c}}^{b}\big{\}}=i\delta^{ab}\delta^{3}(x-y). \tag{29}\]
Using these we can see that
\[\{Q,Q\}=2Q^{2}=0, \tag{23}\]
which shows the nilpotency of the BRST operator. For the ghost operator \(Q_{c}\) we have
\[\big{[}Q_{c},Q_{c}\big{]}=0. \tag{24}\]
To understand the relation between \(Q\) and \(Q_{c}\), we will do the following analysis:
\[(\delta_{c}\delta_{B}-\delta_{B}\delta_{c})\Phi^{a}=\delta_{B} \Phi^{a}, \tag{25}\]
where \(\Phi^{a}\) is any field (\(\phi^{a}\), \(a_{t}^{a}\), \(a_{i}^{a}\), \(c^{a}\), \(\bar{c}^{a}\), \(F^{a}\)). Also doing a similar computation in terms of the operators, we get \(\big{[}Q,Q_{c}\big{]}\Phi^{a}=Q\Phi^{a}\) from which we conclude that
\[\big{[}Q,Q_{c}\big{]}=Q. \tag{26}\]
Along with its many applications, there is a very solid mathematical foundation for BRST symmetry. Using BRST symmetry in relativistic QFTs, we can define equivalence classes of physical states, which constitute the BRST cohomology. In the future, we want to carry out a similar analysis in Galilean field theories.
### Carrollian Yang-Mills theory
In this section, we will examine how the BRST symmetry is manifested in the Carrollian Yang-Mills theory. Specifically, we will first investigate the realization of the BRST symmetry at the classical level for both the electric and magnetic sectors of the theory. After that, we will proceed to calculate the BRST charge.
Additionally, there is an important symmetry, denoted as \(U(1)\), that exists in the ghost part of the Lagrangian, corresponding to the conservation of the ghost particle number. It is important to note that the ghost field is a Grassmann variable, and therefore the algebra of the associated charges involves anti-commutation relations.
#### Electric Sector
The Lagrangian for the electric sector with the gauge-fixing and ghost terms is discussed in [63]. The full Lagrangian for the electric sector is
\[\mathcal{L}=\frac{1}{2}E_{i}^{a(0)}E_{i}^{a(0)}-\frac{1}{2\chi}\partial_{t}a_{t}^{a}\partial_{t}a_{t}^{a}+\partial_{t}\bar{c}^{a}D_{t}c^{a}. \tag{27}\]
Similar to the relativistic case, this Lagrangian also enjoys a global symmetry. The global transformations under which this Lagrangian is invariant are given by
\[\delta a_{t}^{a}=\frac{\omega}{g}\big{(}D_{t}c\big{)}^{a}\quad\delta a_{i}^{a}=\frac{\omega}{g}\big{(}D_{i}c\big{)}^{a}\quad\delta c^{a}=-\frac{\omega}{2}f^{abc}c^{b}c^{c},\quad\delta\bar{c}^{a}=-\frac{\omega}{g\chi}\partial_{t}a_{t}^{a}, \tag{28}\]
these are the Carrollian BRST transformations for the electric sector. The action is invariant under these transformations up to a total derivative term. One of the properties of BRST transformations is that they are nilpotent. To prove that the above-mentioned
transformations are nilpotent, we have to use the Jacobi identity and the equations of motion of the ghost fields (\(\partial_{t}D_{t}c^{a}=0\)). Under the transformations the Lagrangian changes as
\[\delta\mathcal{L}^{electric}=-\partial_{t}\big{(}\frac{\omega}{g\chi}\partial_{t}a_{t}^{a}D_{t}c^{a}\big{)} \tag{3.24}\]
To derive the above change we have to use the ghost equations of motion. So, for both the nilpotency of the transformations and the invariance of the action, we need to use the equations of motion for the ghost field. We can also construct off-shell BRST transformations that are nilpotent, and under which the action is invariant, without using any equations of motion. To have BRST symmetry without using the ghost equations of motion we need to introduce an auxiliary field. With the auxiliary field the full Lagrangian is
\[\mathcal{L}^{electric}=\frac{1}{2}E_{i}^{a(0)}E_{i}^{a(0)}+\frac{\chi}{2}F^{a}F^{a}+\partial_{t}F^{a}a_{t}^{a}+\partial_{t}\bar{c}^{a}D_{t}c^{a} \tag{3.25}\]
Using the equation of motion of \(F^{a}\) we can go back to the previous Lagrangian. The transformations are now
\[\delta a_{t}^{a}=\frac{\omega}{g}\big{(}D_{t}c\big{)}^{a}\quad\delta a_{i}^{a}=\frac{\omega}{g}\big{(}D_{i}c\big{)}^{a}\quad\delta c^{a}=-\frac{\omega}{2}f^{abc}c^{b}c^{c},\quad\delta\bar{c}^{a}=-\frac{\omega}{g}F^{a},\quad\delta F^{a}=0 \tag{3.26}\]
The action is invariant under these transformations without using any equations of motion; this symmetry of the action is an off-shell symmetry.
There is another symmetry, corresponding to the ghost number, which is a global \(U(1)\) symmetry. The infinitesimal form of the transformation is
\[\delta c^{a}=\epsilon c^{a},\quad\delta\bar{c}^{a}=-\epsilon\bar{c}^{a} \tag{3.27}\]
The Lagrangian (3.25) is invariant under this transformation.
To pass to the quantum theory we need to calculate the charges corresponding to the BRST symmetry and the above-mentioned \(U(1)\) symmetry. After that we will examine the relevant commutation and anti-commutation relations. The charges for the BRST and \(U(1)\) symmetries, respectively, are
\[Q_{BRST}=\int d^{3}x\big{[}\frac{\omega}{g}E_{i}^{a}D_{i}c^{a}-\frac{\omega}{g}F^{a}D_{t}c^{a}+\frac{\omega}{2}f^{abc}c^{b}c^{c}\partial_{t}\bar{c}^{a}\big{]} \tag{3.28}\] \[Q_{U(1)}=\int d^{3}x\big{[}\bar{c}^{a}D_{t}c^{a}+c^{a}\partial_{t}\bar{c}^{a}\big{]} \tag{3.29}\]
To calculate the algebra of these charges it is convenient to write them using the conjugate momenta of the fields. Then we are able to use the usual brackets between the fields and their conjugate momenta. The conjugate momenta corresponding to the different fields are
\[\Pi_{i}^{a}=\frac{\partial\mathcal{L}}{\partial(\partial_{t}a_{i}^{a})}=E_{i}^{a},\,\Pi_{F}^{a}=\frac{\partial\mathcal{L}}{\partial(\partial_{t}F^{a})}=a_{t}^{a},\,\Pi_{c}^{a}=-\partial_{t}\bar{c}^{a},\,\Pi_{\bar{c}}^{a}=D_{t}c^{a} \tag{3.30}\]
In terms of these momenta the charges read
\[Q_{BRST}=\int d^{3}x\big{[}\frac{\omega}{g}\Pi_{i}^{a}D_{i}c^{a} -\frac{\omega}{g}F^{a}\Pi_{\bar{c}}^{a}-\frac{\omega}{2}f^{abc}c^{b}c^{c}\Pi_{ c}^{a}\big{]} \tag{3.31}\] \[Q_{U(1)}=\int d^{3}x\big{[}\bar{c}^{a}\Pi_{\bar{c}}^{a}-c^{a}\Pi _{c}^{a}\big{]} \tag{3.32}\]
Using the field (anti-)commutation relations, the algebra satisfied by these operators is
\[\big{\{}Q_{BRST},Q_{BRST}\big{\}}=0,\quad\big{[}Q_{U(1)},Q_{U(1)}\big{]}=0,\quad\big{[}Q_{BRST},Q_{U(1)}\big{]}=iQ_{BRST}. \tag{3.33}\]
We can also verify these relations using (3.26) and (3.27). From the first relation we can confirm the nilpotency of the BRST charge. From the second relation we see that the ghost number symmetry is abelian. Lastly, from the third relation we see that the BRST charge carries unit ghost number.
#### Magnetic Sector
The full magnetic sector Lagrangian with the gauge-fixing and ghost terms is (details in [63])
\[\mathcal{L}=\xi_{i}^{a}E_{i}^{a}-\frac{1}{4}f_{ij}^{a}f_{ij}^{a} -\frac{1}{2\chi}\partial_{i}a_{i}^{a}\partial_{j}a_{j}^{a}-\partial_{i}\bar{c }^{a}D_{i}c^{a}. \tag{3.34}\]
The full Lagrangian which respects the off-shell BRST transformations is
\[\mathcal{L}_{magnetic}=\xi_{i}^{a}E_{i}^{a}-\frac{1}{4}f_{ij}^{a}f_{ij}^{a}+\frac{\chi}{2}F^{a}F^{a}+\partial_{i}F^{a}a_{i}^{a}+\partial_{i}\bar{c}^{a}D_{i}c^{a} \tag{3.35}\]
The off-shell BRST transformations which keep the above Lagrangian invariant are the same as in Eq. (3.26), along with the transformation of \(\xi_{i}^{a}\) given by \(\delta\xi_{i}^{a}=\omega f^{abc}\xi_{i}^{b}c^{c}\). For the magnetic sector the BRST charge contains only spatial derivatives.
\[Q_{BRST}=\int d^{3}x\big{[}\xi_{i}^{a}D_{i}c^{a}\big{]} \tag{3.36}\]
The Lagrangian is invariant under the \(U(1)\) symmetry mentioned in the previous section, but the charge corresponding to that symmetry is identically zero because the Lagrangian contains no time derivative of the ghost fields. The algebra for this sector is therefore trivially realized.
## 4 Conclusions and Discussion
In this paper, we investigate the BRST symmetry of Galilean and Carrollian Yang-Mills theories, which should be crucial to a more detailed study of gauge theories in the non-Lorentzian regime. We begin by studying the Galilean Yang-Mills theory, a non-relativistic theory that describes the interaction between gauge fields and matter fields in a Galilean-invariant framework. We first realize the BRST symmetry for the Galilean Yang-Mills theory and redefine its Lagrangian to make it more concrete. We then analyze the BRST symmetry at the classical and quantum levels and observe that it is realized in both cases.
Next, we move on to Carrollian Yang-Mills theory, which is another non-Lorentzian theory that describes the interaction between gauge fields and matter fields in a Carrollian-invariant framework. We analyze the non-trivial sectors of the theory, specifically the electric and magnetic sectors. For the magnetic sector, the \(U(1)\) charge is zero, and hence the charge algebra is trivially realized. Again, we observe the BRST symmetry at both the classical and quantum levels.
The study of the BRST symmetry in non-Lorentzian field theories is crucial in understanding the underlying physical properties of these theories. The BRST symmetry provides a way to fix gauge ambiguities, and its realization at the classical and quantum level is an essential aspect in the computation of physical observables.
In future work, we plan to construct different sectors of the Galilean Yang-Mills Lagrangian and analyze their BRST symmetry, as we discussed for the Carrollian case in this paper. We also aim to extend our analysis to construct the BRST cohomology for non-Lorentzian theories. This analysis will help us gain a better understanding of the fundamental symmetries of non-Lorentzian field theories and their physical properties.
Our investigation of the BRST symmetry of Galilean and Carrollian Yang-Mills theories provides valuable insights into the behavior of non-Lorentzian field theories. The BRST symmetry is an essential tool in understanding the physical properties of these theories, and its realization at both the classical and quantum level is a crucial aspect in the computation of physical observables.
We express our heartfelt gratitude to Arjun Bagchi for fruitful discussions, insightful suggestions, and valuable comments on this manuscript and our work. We would also like to thank Nilay Kundu, Rudranil Basu, Kedar Kolekar, Kunal Pal, and Kuntal Pal for productive discussions.
|
2307.04168 | Possible open charm molecular pentaquarks from
$Λ_cK^{(*)}/Σ_cK^{(*)}$ interactions | In this work, we adopt the one-boson-exchange model to study the $Y_cK^{(*)}
(Y_c=\Lambda_c, \Sigma_c)$ interactions. After considering both of the $S-D$
wave mixing effects and the coupled channel effects, we can predict several
possible open-charm molecular pentaquarks, i.e., the single $\Sigma_cK^*$
molecular states with $I(J^P)=1/2(1/2^-)$, $1/2(3/2^-)$ and $3/2(1/2^-)$, the
coupled $\Lambda_cK^*/\Sigma_cK^*$ molecular states with $1/2(1/2^-)$ and
$1/2(3/2^-)$, and the coupled $\Sigma_cK/\Lambda_cK^*/\Sigma_cK^*$ molecular
state with $1/2(1/2^-)$. Meanwhile, we extend our study to the
$Y_c\bar{K}^{(*)}$ interactions, our results suggest the $\Sigma_c\bar{K}$
system with $I(J^P)=1/2(1/2^-)$, the $\Sigma_c\bar{K}^*$ systems with
$1/2(1/2^-)$, $1/2(3/2^-)$, and $3/2(3/2^-)$, the coupled
$\Lambda_c\bar{K}^*/\Sigma_c\bar K^*$ system with $1/2(1/2^-)$, and the
$\Sigma_c\bar{K}/\Lambda_c\bar{K}^*/\Sigma_c\bar K^*$ system with $1/2(1/2^-)$
can be the prime molecular candidates. | Rui Chen, Qi Huang | 2023-07-09T13:30:56Z | http://arxiv.org/abs/2307.04168v1 | Possible open charm molecular pentaquarks from \(\Lambda_{c}K^{(*)}/\Sigma_{c}K^{(*)}\) interactions
###### Abstract
In this work, we adopt the one-boson-exchange model to study the \(Y_{c}K^{(*)}(Y_{c}=\Lambda_{c},\Sigma_{c})\) interactions. After considering both the \(S-D\) wave mixing effects and the coupled channel effects, we can predict several possible open-charm molecular pentaquarks, i.e., the single \(\Sigma_{c}K^{*}\) molecular states with \(I(J^{P})=1/2(1/2^{-})\), \(1/2(3/2^{-})\) and \(3/2(1/2^{-})\), the coupled \(\Lambda_{c}K^{*}/\Sigma_{c}K^{*}\) molecular states with \(1/2(1/2^{-})\) and \(1/2(3/2^{-})\), and the coupled \(\Sigma_{c}K/\Lambda_{c}K^{*}/\Sigma_{c}K^{*}\) molecular state with \(1/2(1/2^{-})\). Meanwhile, we extend our study to the \(Y_{c}\bar{K}^{(*)}\) interactions; our results suggest that the \(\Sigma_{c}\bar{K}\) system with \(I(J^{P})=1/2(1/2^{-})\), the \(\Sigma_{c}\bar{K}^{*}\) systems with \(1/2(1/2^{-})\), \(1/2(3/2^{-})\), and \(3/2(3/2^{-})\), the coupled \(\Lambda_{c}\bar{K}^{*}/\Sigma_{c}\bar{K}^{*}\) system with \(1/2(1/2^{-})\), and the \(\Sigma_{c}\bar{K}/\Lambda_{c}\bar{K}^{*}/\Sigma_{c}\bar{K}^{*}\) system with \(1/2(1/2^{-})\) can be the prime molecular candidates.
pacs: 12.39.Pn, 14.20.Pt, 13.75.Jz
## I Introduction
In the past decades, the observations of \(X/Y/Z/P_{c}/T_{cc}\) structures have stimulated theorists' extensive interest in exploring the properties of exotic states. Among the possible configurations, the hadronic molecular state, which is composed of color-singlet hadrons, plays an important role in explaining the observed exotic structures. The main reason for introducing such a configuration is that many observed \(X/Y/Z/P_{c}/T_{cc}\) structures lie near specific mass thresholds of hadron pairs, which naturally raises the question of whether these observations can be explained within the framework of the molecular state (one can see Refs. [1; 2; 3; 4; 5] for a detailed review). Thus, carrying out the study of the hadronic molecular state has become an active and important research field in hadron physics. It is not only helpful for revealing the underlying structures of these near-threshold \(X/Y/Z/P_{c}/T_{cc}\) structures, but can also improve our knowledge of the non-perturbative behavior of quantum chromodynamics (QCD).
Very recently, the LHCb collaboration continued to report their observations of two open heavy flavor multiquark candidates, \(T_{cs}^{a0}(2900)\) and \(T_{cs}^{a++}(2900)\), where the superscript \(a\) means that their quantum numbers are both \(I(J^{P})=1(0^{+})\)[6; 7]. For the \(T_{cs}^{a0}(2900)\), the discovery channel is \(D_{s}^{+}\pi^{-}\), and the mass and width are \(2892\pm 14\pm 15\) MeV and \(119\pm 26\pm 12\) MeV, respectively, while for the \(T_{cs}^{a++}(2900)\), the discovery channel, the mass, and the width are \(D_{s}^{+}\pi^{+}\), \(2921\pm 17\pm 19\) MeV, and \(137\pm 32\pm 14\) MeV, respectively. According to their channels, mass positions, and quantum numbers, it is easy to infer that the \(T_{cs}^{a0}(2900)\) and \(T_{cs}^{a++}(2900)\) belong to the same isovector triplet. Furthermore, the LHCb collaboration also determined their averaged mass and decay width, which are \(2908\pm 11\pm 20\) MeV and \(136\pm 23\pm 11\) MeV, respectively.
Due to their charged nature, the minimal valence quark content of the \(T_{cs}^{a0(++)}(2900)\) is naturally inferred to be \(c\bar{s}q\bar{q}\) (\(q=u,\ d\)). Since they are very close to the \(D^{*}K^{*}\) mass threshold, it is natural to conjecture whether the \(T_{cs}^{a0(++)}(2900)\) states can be the isovector \(D^{*}K^{*}\) molecules with \(J^{P}=0^{+}\). In fact, in our former work [8], we could not only reproduce the \(D_{s0}^{*}(2317)\) and \(D_{s1}(2460)\) in the \(S-\)wave \(DK\) and \(D^{*}K\) molecular scenario, but also found that the one-boson-exchange (OBE) effective potentials are strong enough to form loosely bound molecular states for the \(D^{*}K^{*}\) systems with \(I(J^{P})=0(0^{+},1^{+},2^{+})\) and \(1(0^{+})\). Therefore, the \(D^{*}K^{*}\) hadronic molecular explanation for the \(T_{cs}^{a0(++)}(2900)\) states cannot be excluded. In addition, there are other theoretical explanations for the \(T_{cs}^{a0(++)}(2900)\) states, like the compact open-charm tetraquark [9; 10; 11] and the \(D^{*}\rho\) molecule [12].
Besides the \(T_{cs}^{a0(++)}(2900)\), another two open-charm states, \(X_{0}(2900)\) and \(X_{1}(2900)\), which were observed by the LHCb collaboration in the \(D^{-}K^{+}\) final state of the \(B^{+}\to D^{+}D^{-}K^{+}\) decay process [13; 14], are also interesting. Their spin-parities \(J^{P}\) are \(0^{+}\) and \(1^{+}\), respectively. Because their mass positions are very close to the \(\bar{D}^{*}K^{*}\) and \(\bar{D}_{1}K\) mass thresholds, respectively, many theorists propose the \(X_{0}(2900)\) and \(X_{1}(2900)\) states as hadronic molecular states [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. At present, the inner structures of the \(T_{cs}^{a0(++)}(2900)\) and \(X_{0,1}(2900)\) are still under discussion (one can see Ref. [5]).
As is well known, the light diquark in the heavy baryons \(Y_{c}=(\Lambda_{c},\Sigma_{c})\) has the same color structure \(\bar{3}_{c}\) as the light anti-quark in the heavy meson \(Q\bar{q}\)[26]. If the \(T_{cs}^{a0(++)}(2900)\) can be assigned as loosely bound hadronic molecular states composed of a charmed meson and a kaon, it is natural to conjecture whether there exist possible open charm molecular pentaquark counterparts of the \(T_{cs}^{a0(++)}(2900)\), near the thresholds of the \(\Lambda_{c}K^{(*)}\) and \(\Sigma_{c}K^{(*)}\), respectively. In this work, we search for such open charm molecular partners composed of \(\Lambda_{c}K^{(*)}\) and \(\Sigma_{c}K^{(*)}\), which can not only enrich the family of exotic states, but also help us to understand the nature of the newly observed \(T_{cs}^{a0(++)}(2900)\).
Apart from searching for possible \(\Lambda_{c}K^{(*)}\) and \(\Sigma_{c}K^{(*)}\) molec |
2306.08453 | Dark Matter from a Radiative Inverse Seesaw Majoron Model | We propose a Majoron-like extension of the Standard Model with an extra
global $U(1)_X$-symmetry where neutrino masses are generated through an inverse
seesaw mechanism at the 1-loop level. In contrast to the tree-level inverse
seesaw, our framework contains dark matter (DM) candidates stabilized by a
residual $\mathcal{Z}_2$-symmetry surviving spontaneous breaking of the
$U(1)_X$-group. We explore the case in which the DM is a Majorana fermion.
Furthermore, we provide parameter space regions allowed by current experimental
constraints coming from the dark matter relic abundance, (in)direct detection,
and charged lepton flavor violation. | Cesar Bonilla, A. E. Cárcamo Hernández, Bastián Díaz Sáez, Sergey Kovalenko, Juan Marchant González | 2023-06-14T11:54:17Z | http://arxiv.org/abs/2306.08453v2 | # Dark Matter from a Radiative Inverse Seesaw Majoron Model
###### Abstract
We propose a Majoron-like extension of the Standard Model with an extra global \(U(1)_{X}\)-symmetry where neutrino masses are generated through an inverse seesaw mechanism at the 1-loop level. In contrast to the tree-level inverse seesaw, our framework contains dark matter (DM) candidates stabilized by a residual \(\mathcal{Z}_{2}\)-symmetry surviving spontaneous breaking of the \(U(1)_{X}\)-group. We explore the case in which the DM is a Majorana fermion. Furthermore, we provide parameter space regions allowed by current experimental constraints coming from the dark matter relic abundance, (in)direct detection, and charged lepton flavor violation.
## I Introduction
It took half a century to experimentally confirm with stunning accuracy every single part of what constitutes a unified description of the strong and electroweak interactions dubbed the Standard Model (SM). Despite this revolution and success in particle physics, the SM is far from being the final description of our universe, leaving many of its fundamental properties unexplained. For instance, it does not account for the origin of neutrino masses, offers no candidate for the roughly 85% of the matter budget of the universe that is "dark" nor any insight into the nature of this hidden sector, and does not explain the baryon asymmetry of the universe. These, among other issues, lead us to think that the SM is at best a low-energy effective field theory that belongs to a bigger framework.
The simplest and most popular realization to explain the smallness of neutrino masses is the Type-I seesaw mechanism [1; 2; 3; 4; 5; 6; 7]. In this approach, Majorana right-handed neutrinos (RHN), \(N_{R}\), are added to the SM. This implies extending the Yukawa Lagrangian by including the terms \(y_{\nu}\bar{L}N_{R}\tilde{H}+M_{R}\overline{N_{R}^{c}}N_{R}\). If we consider order-one Yukawa couplings, i.e. \(y_{\nu}\sim\mathcal{O}(1)\), the size of the neutrino masses is determined by the RHN masses according to \(m_{\nu}\approx v_{\Phi}^{2}/M_{R}\), where \(v_{\Phi}\) is the Higgs vacuum expectation value (vev). Then, for \(m_{\nu}\sim 0.1\) eV, the new-physics scale is expected to be around \(10^{14}\)-\(10^{15}\) GeV, some twelve orders of magnitude above the electroweak one. This feature makes the Type-I seesaw mechanism inaccessible to current experimental sensitivities.
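As a quick arithmetic check of this scale estimate, the following minimal sketch (plain Python; the input values are illustrative choices, not benchmarks from this paper) inverts \(m_{\nu}\approx v_{\Phi}^{2}/M_{R}\) for \(M_{R}\):

```python
# Invert the Type-I seesaw relation m_nu ~ v_Phi^2 / M_R for M_R.
v_phi = 246.0           # Higgs vev in GeV
m_nu = 0.1e-9           # m_nu = 0.1 eV, expressed in GeV
M_R = v_phi**2 / m_nu   # required RHN mass scale for y_nu ~ O(1)
print(f"M_R ~ {M_R:.1e} GeV")  # ~ 6.1e14 GeV, far beyond collider reach
```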
One of the main motivations to explore other scenarios accounting for neutrino masses1 is that some of them manifest new-physics signatures at energies around the TeV scale, i.e. within the reach of current or upcoming experimental searches [9]. One example is the so-called inverse seesaw model, which is characterized by predicting significant lepton flavor violating rates [10; 11; 12; 13]. This model introduces pairs of Majorana fermions, \(N_{R_{i}}\) and \(S_{L_{j}}\) (\(i,j=1,2,3\)), to the SM. These fields transform as singlets under the SM gauge group and carry a lepton number of \(+1\). Then, after electroweak symmetry breaking, the Lagrangian in the neutrino sector is given by
\[\mathcal{L}_{\nu}=m_{D}\overline{\nu}_{L}N_{R}+M\overline{N}_{R}S_{L}+\mu S_{L }^{T}C^{-1}S_{L}+h.c., \tag{1}\]
where \(m_{D}\) and \(M\) are Dirac \(3\times 3\) matrices, while \(\mu\) is a Majorana \(3\times 3\) matrix breaking lepton number explicitly. The latter can have a dynamical origin [4]. As usual, \(C\) denotes the charge conjugation matrix. The neutrino mass matrix in the \(\left(\nu_{L},N_{R},S_{L}\right)\) basis turns out to be
\[M_{\nu}=\left(\begin{array}{ccc}0&m_{D}&0\\ m_{D}^{T}&0&M\\ 0&M^{T}&\mu\end{array}\right). \tag{2}\]
Taking the limit \(\mu_{ij}\ll\left(m_{D}\right)_{ij}\ll M_{ij}\) (\(i,j=1,2,3\)) leads to a \(3\times 3\) matrix for the light neutrinos given by
\[m_{\nu}\simeq m_{D}\frac{1}{M}\mu\frac{1}{M^{T}}m_{D}^{T}. \tag{3}\]
Note that the lightness of neutrinos could be attributed solely to the smallness of \(\mu\), which is naturally protected from large radiative corrections by the \(U(1)_{L}\)-symmetry of lepton number conservation, restored in the limit \(\mu\to 0\) in Eq. (1). Obviously, neutrinos become massless in this limit; nevertheless, lepton flavor violating processes are still allowed [14]. A non-zero but small \(\mu\) can be generated via radiative corrections, opening up the intriguing possibility of relating neutrino mass generation to the dark sector [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29].
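To see the contrast with the Type-I case, here is a hedged one-generation estimate of Eq. (3); the numbers are illustrative choices of ours, not benchmarks from the text:

```python
# One-generation inverse seesaw estimate: m_nu ~ m_D^2 * mu / M^2 (Eq. (3)).
m_D = 10.0     # Dirac mass in GeV (y_nu ~ 0.06)
M = 1000.0     # TeV-scale heavy Dirac mass in GeV
mu = 1e-6      # keV-scale lepton-number-breaking parameter in GeV
m_nu = m_D**2 * mu / M**2
print(f"m_nu ~ {m_nu * 1e9:.2f} eV")  # ~0.1 eV with new physics at the TeV scale
```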
In this work we propose a variant of the inverse seesaw model where the \(\mu\) term is generated at the 1-loop level after the spontaneous breaking of a global \(U(1)_{X}\) symmetry. This Abelian continuous symmetry breaks down to a discrete subgroup \(\mathcal{Z}_{2}\), an exact low-energy symmetry that stabilizes the dark matter candidates of our model. We then explore the viability of having as DM the lightest Majorana fermion involved in the generation of the \(\mu\) term in Eq. (3) at the 1-loop level. We analyze the case in which this thermal relic communicates with the SM mainly via the Higgs portal and provide constraints and prospects for direct and indirect detection in DM searches.
This paper is organized as follows. In section II we provide the details of the model such as the particle content, charge assignments and symmetry breaking. In addition, we describe the scalar potential and mass spectrum. In section III the phenomenology of the fermionic dark matter candidate is studied. The implications of the model in charged lepton flavor violation are discussed in section IV. We state our conclusions in section V.
## II The model
We consider a model that adds to the SM two complex scalars, \(\sigma\) and \(\eta\), and six Majorana fermions \(\nu_{R_{k}}\), \(N_{R_{k}}\) and \(\Omega_{R_{k}}\) (\(k=1,2\)). All these new fields are \(SU(2)\) gauge singlets and are electrically neutral. In addition, the existence of a global \(U(1)_{X}\) symmetry is assumed. This symmetry breaks down to a \(\mathcal{Z}_{2}\) symmetry when the singlet scalar gets a vacuum expectation value (vev) \(\left\langle\sigma\right\rangle=v_{\sigma}\). Table 1 shows the charge assignments of scalars and leptons under the \(SU\left(2\right)_{L}\otimes U\left(1\right)_{Y}\otimes U\left(1\right)_{X}\) symmetry2.
Footnote 2: Note that the Higgs \(\Phi\) and quarks do not transform under the global symmetry \(U\left(1\right)_{X}\).
\begin{table}
\begin{tabular}{|c|c c c|c c c c c|} \hline
 & \(\Phi\) & \(\sigma\) & \(\eta\) & \(L_{L_{i}}\) & \(l_{R_{i}}\) & \(\nu_{R_{k}}\) & \(N_{R_{k}}\) & \(\Omega_{R_{k}}\) \\ \hline \hline
\(SU\left(2\right)_{L}\) & \(2\) & \(1\) & \(1\) & \(2\) & \(1\) & \(1\) & \(1\) & \(1\) \\
\(U\left(1\right)_{Y}\) & \(1/2\) & \(0\) & \(0\) & \(-1/2\) & \(-1\) & \(0\) & \(0\) & \(0\) \\
\(U\left(1\right)_{X}\) & \(0\) & \(-1\) & \(1/2\) & \(-1\) & \(-1\) & \(-1\) & \(1\) & \(-1/2\) \\ \hline
\end{tabular}
\end{table}
Table 1: Charge assignments of the scalar and lepton fields. Here \(i=1,2,3\) and \(k=1,2\).

In fact, after electroweak symmetry breaking the unbroken symmetry is \(SU(3)_{C}\otimes U(1)_{EM}\otimes\mathcal{Z}_{2}\), where \(\mathcal{Z}_{2}\) turns out to be the symmetry that stabilizes the dark matter candidate of the theory. Schematically, the symmetry-breaking chain goes as follows,
\[\begin{array}{c}
{\cal G}=SU(3)_{C}\otimes SU\left(2\right)_{L}\otimes U\left(1\right)_{Y}\otimes U\left(1\right)_{X}\\[2pt]
\Downarrow v_{\sigma}\\[2pt]
SU(3)_{C}\otimes SU\left(2\right)_{L}\otimes U\left(1\right)_{Y}\otimes{\cal Z}_{2}\\[2pt]
\Downarrow v_{\Phi}\\[2pt]
SU(3)_{C}\otimes U\left(1\right)_{EM}\otimes{\cal Z}_{2}
\end{array} \tag{4}\]
where the Higgs vev is represented by \(\langle\Phi^{0}\rangle=v_{\Phi}\).
Given the charge assignments shown in Table 1 we have that the singlet's vev is invariant under the following transformation, \(e^{2\pi i\hat{X}}\langle\sigma\rangle=\langle\sigma\rangle\), where \(\hat{X}\) is the \(U(1)_{X}\) charge operator. This implies the existence of a residual discrete symmetry \((-1)^{2\hat{X}}\in{\cal Z}_{2}\) surviving spontaneous breaking of the global \(U(1)_{X}\) group. Therefore, to all fields are assigned the corresponding \({\cal Z}_{2}\)-parities \((-1)^{2\hat{Q}_{X}}\) according to their \(U(1)_{X}\) charges \(Q_{X}\) in Table 1. The particles \(\eta\) and \(\Omega_{R_{k}}\) (\(k=1,2\)) have odd \({\cal Z}_{2}\)-parities and form the dark sector of the model.
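The parity assignment is mechanical; the following sketch (Python, using the charges of Table 1) simply evaluates \((-1)^{2Q_{X}}\) for each field:

```python
# Residual Z_2 parities (-1)^(2 Q_X) from the U(1)_X charges of Table 1.
charges = {"Phi": 0, "sigma": -1, "eta": 0.5, "L_L": -1,
           "l_R": -1, "nu_R": -1, "N_R": 1, "Omega_R": -0.5}
parities = {field: int((-1) ** round(2 * q)) for field, q in charges.items()}
print(parities)  # only eta and Omega_R are odd (-1): they form the dark sector
```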
### Scalar sector
The scalar potential invariant under the symmetry group \({\cal G}\) is given by
\[V(\Phi,\sigma,\eta) = -\frac{\mu_{\Phi}^{2}}{2}|\Phi|^{2}+\frac{\lambda_{\Phi}}{2}|\Phi|^{4}-\frac{\mu_{\sigma}^{2}}{2}|\sigma|^{2}+\frac{\lambda_{\sigma}}{2}|\sigma|^{4}+\frac{\mu_{\eta}^{2}}{2}|\eta|^{2}+\frac{\lambda_{\eta}}{2}|\eta|^{4} \tag{5}\] \[+ \lambda_{1}|\Phi|^{2}|\sigma|^{2}+\lambda_{2}|\Phi|^{2}|\eta|^{2 }+\lambda_{3}|\sigma|^{2}|\eta|^{2}+\frac{\mu_{4}}{\sqrt{2}}\sigma\eta^{2}+h. c.,\]
where the quartic couplings \(\lambda_{a}\) are dimensionless parameters whereas the \(\mu_{a}\) are dimensionful. In our analysis we will impose perturbativity (\(\lambda_{a}<\sqrt{4\pi}\)) and the boundedness conditions given in Appendix A.
The singlet \(\sigma\) and the neutral component of the doublet \(\Phi=(\phi^{+},\phi^{0})^{T}\) acquire vacuum expectation values (vevs). Here the singlet's vev \(v_{\sigma}\) is responsible for the breaking of the global \(U(1)_{X}\) symmetry, while the doublet's vev \(v_{\Phi}\) triggers electroweak symmetry breaking. Therefore we shift the fields as
\[\phi^{0}=\frac{1}{\sqrt{2}}(v_{\Phi}+\phi_{R}+i\phi_{I}),\quad\sigma=\left( \frac{v_{\sigma}+\sigma_{R}+i\sigma_{I}}{\sqrt{2}}\right). \tag{6}\]
Evaluating the second derivatives of the scalar potential at the minimum one finds the CP-even, \(M_{R}^{2}\), and CP-odd, \(M_{I}^{2}\), mass matrices. The CP-even mass matrix \(M_{R}^{2}\) mixes \(\phi_{R}\) and \(\sigma_{R}\) and its eigenvalues correspond to the squared masses of the physical scalar states. They are given by
\[m_{h_{1},h_{2}}^{2}=\frac{1}{2}\left(\lambda_{\sigma}v_{\sigma}^{2}+\lambda_{ \Phi}v_{\Phi}^{2}\mp\frac{\lambda_{\sigma}v_{\sigma}^{2}-\lambda_{\Phi}v_{\Phi }^{2}}{\cos 2\theta}\right), \tag{7}\]
where we identify \(h_{1}\) with the 125 GeV Higgs boson, \(v_{\Phi}=246\) GeV, and the mixing angle \(\theta\) fulfilling
\[\tan 2\theta=\frac{2\lambda_{1}v_{\Phi}v_{\sigma}}{\lambda_{\sigma}v_{\sigma}^{ 2}-\lambda_{\Phi}v_{\Phi}^{2}}. \tag{8}\]
Moreover, the flavor and physical bases are connected through the following relations,
\[\sigma_{R} = -h_{1}\sin\theta+h_{2}\cos\theta,\] \[\phi_{R} = h_{1}\cos\theta+h_{2}\sin\theta. \tag{9}\]
The CP-odd mass matrix \(M_{I}^{2}\) has two null eigenvalues. One of them corresponds to the would-be Goldstone boson which becomes the longitudinal component of the \(Z\)-boson by virtue of the Higgs mechanism. The other one is the physical Goldstone boson resulting from the spontaneous breaking of the global \(U(1)_{X}\) symmetry, similar to the singlet Majoron model of Ref. [4]3. This is what defines our Majoron model variant. The masses of the \({\cal Z}_{2}\)-odd scalar components \(\eta=\eta_{R}+i\eta_{I}\) turn out to be
Footnote 3: As a matter of fact, a massless boson can contribute to dark radiation in the early universe. BBN and CMB constraints can be evaded if this boson gets a mass. This may be achieved by considering that the \(U(1)_{X}\) symmetry is softly broken (e.g. by adding to Eq. (5) a term like \(\mu^{2}\sigma^{2}+h.c.\)) [30].
\[m_{\eta_{R}}^{2} = \mu_{\eta}^{2}+\frac{1}{2}\left(\lambda_{2}v_{\Phi}^{2}+\lambda_ {3}v_{\sigma}^{2}\right)+\mu_{4}v_{\sigma}, \tag{10}\] \[m_{\eta_{I}}^{2} = \mu_{\eta}^{2}+\frac{1}{2}\left(\lambda_{2}v_{\Phi}^{2}+\lambda_ {3}v_{\sigma}^{2}\right)-\mu_{4}v_{\sigma}, \tag{11}\]
where the mass splitting of these two components can be recast as \(\mu_{4}=(m_{\eta_{R}}^{2}-m_{\eta_{I}}^{2})/(2v_{\sigma})\). Note that \(\eta_{R}\) and \(\eta_{I}\) are degenerate when \(\mu_{4}\to 0\). The lightest of these two components, \(\eta_{R}\) or \(\eta_{I}\), can be the stable dark matter.
### Neutrino sector
Using Table 1, the invariant lepton Yukawa Lagrangian is given by
\[-\mathcal{L}_{Y}^{(l)} = \sum_{i=1}^{3}\sum_{j=1}^{3}\left(y_{l}\right)_{ij}\overline{L}_{ L_{i}}l_{R_{j}}\Phi+\sum_{i=1}^{3}\sum_{k=1}^{2}\left(y_{\nu}\right)_{ik} \overline{L}_{L_{i}}\nu_{R_{k}}\widetilde{\Phi}+\sum_{n=1}^{2}\sum_{k=1}^{2}M_ {nk}\overline{\nu}_{R_{n}}N_{R_{k}}^{c}\] \[+\sum_{n=1}^{2}\sum_{k=1}^{2}\left(y_{N}\right)_{nk}\overline{N}_ {R_{n}}\Omega_{R_{k}}^{c}\eta+\sum_{n=1}^{2}\sum_{k=1}^{2}\left(y_{\Omega}\right)_ {nk}\overline{\Omega}_{R_{n}}\Omega_{R_{k}}^{c}\sigma+h.c.,\]
where \(\psi^{c}=C\overline{\psi}^{T}\) and \(\widetilde{\Phi}=i\sigma_{2}\Phi^{*}\). After spontaneous symmetry breaking (SSB), the neutrino mass matrix has the form,
\[M_{\nu}=\left(\begin{array}{ccc}0_{3\times 3}&m_{D}&0_{3\times 2}\\ m_{D}^{T}&0_{2\times 2}&M\\ 0_{2\times 3}&M^{T}&\mu\end{array}\right), \tag{12}\]
where \(m_{D}\) is the tree-level Dirac mass term
\[\left(m_{D}\right)_{ik}=\left(y_{\nu}\right)_{ik}\frac{v_{\Phi}}{\sqrt{2}}, \tag{13}\]
with \(i=1,2,3\) and \(k=1,2\). The submatrix \(\mu\) in Eq. (12) is generated at one-loop level,
\[\mu_{sp} = \sum_{k=1}^{2}\frac{\left(y_{N}\right)_{sk}\left(y_{N}^{T}\right) _{kp}m_{\Omega_{k}}}{16\pi^{2}}\left[\frac{m_{\eta_{R}}^{2}}{m_{\eta_{R}}^{2} -m_{\Omega_{k}}^{2}}\ln\left(\frac{m_{\eta_{R}}^{2}}{m_{\Omega_{k}}^{2}}\right) -\frac{m_{\eta_{I}}^{2}}{m_{\eta_{I}}^{2}-m_{\Omega_{k}}^{2}}\ln\left(\frac{m _{\eta_{I}}^{2}}{m_{\Omega_{k}}^{2}}\right)\right], \tag{14}\]
with \(s,p=1,2\). The Feynman diagram of \(\mu\) is depicted in Figure 1. One can see from Eq. (14) that the \(\mu\) term vanishes when the scalars \(\eta_{R}\) and \(\eta_{I}\) are degenerate. This implies that neutrino masses go to zero in the limit \(\mu\to 0\). Then one has that active light neutrino masses are generated via an inverse seesaw mechanism at the one-loop level. Physical neutrino mass matrices are given by4:
Footnote 4: The diagonalization of the neutrino mass matrix in Eq. (12) can be followed from Ref. [31]
\[\widetilde{M}_{\nu} = m_{D}\left(M^{T}\right)^{-1}\mu M^{-1}m_{D}^{T}, \tag{15}\] \[M_{\nu}^{(-)} = -\frac{1}{2}\left(M+M^{T}\right)+\frac{1}{2}\mu,\] (16) \[M_{\nu}^{(+)} = \frac{1}{2}\left(M+M^{T}\right)+\frac{1}{2}\mu. \tag{17}\]
Now \(\widetilde{M}_{\nu}\) is the mass matrix for active light neutrinos (\(\nu_{a}\)), whereas \(M_{\nu}^{(-)}\) and \(M_{\nu}^{(+)}\) are the mass matrices for sterile neutrinos. From Eq. (15) one can see that active light neutrinos are massless in the limit \(\mu\to 0\) which implies that lepton number is a conserved quantity. Eqs. (16) and (17) tell us that the smallness of the parameter \(\mu\) (small mass splitting) induces pseudo-Dirac pairs of sterile neutrinos.
From Eq. (15), one can see that a sub-eV neutrino mass scale can be linked to a small lepton-number-breaking parameter \(\mu\), which depends quadratically on the Yukawa \(y_{N}\), on \(y_{\Omega}\) (through \(m_{\Omega}\)), and on the masses of the particles running in the loop \((m_{\eta_{R}},m_{\eta_{I}},m_{\Omega})\). This parameter is further suppressed by the loop factor, see Eq. (14). Figure 2 shows the allowed parameter space regions for fixed Yukawa couplings \(y_{N}\) and masses of the \(\mathcal{Z}_{2}\)-odd Majorana fermions \(\Omega_{k}\). Each plot is generated using Eq. (14), varying the masses \((m_{\eta_{R}},m_{\eta_{I}})\) and fixing \(m_{\Omega}\) and \(y_{N}\). Then, from left to right, Figure 2 shows the parameter space that fulfills \(-10\) keV \(\leq\mu\leq 10\) keV in the \((m_{\eta_{R}},m_{\eta_{I}})\)-plane, considering \(m_{\Omega}=100,500\), and \(1000\) GeV, respectively. In all panels, \(y_{N}=0.01,0.05\), and \(0.1\); the smaller the Yukawa value, the lighter the region. The discontinuity appears when the mass spectrum in Eq. (14) is degenerate.
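For concreteness, a minimal NumPy sketch of Eq. (14) is given below; the input point is chosen close to the benchmarks used in Figure 2, but the specific numbers and function names are ours:

```python
import numpy as np

def mu_one_loop(yN, m_eta_R, m_eta_I, m_Omega):
    """2x2 lepton-number-breaking matrix mu of Eq. (14). Masses in GeV."""
    def g(m_eta, m_om):  # m_eta^2/(m_eta^2 - m_om^2) * ln(m_eta^2/m_om^2)
        return m_eta**2 / (m_eta**2 - m_om**2) * np.log(m_eta**2 / m_om**2)
    mu = np.zeros((2, 2))
    for k in range(2):
        loop = m_Omega[k] / (16 * np.pi**2) * (g(m_eta_R, m_Omega[k])
                                               - g(m_eta_I, m_Omega[k]))
        mu += np.outer(yN[:, k], yN[:, k]) * loop  # (y_N)_{sk} (y_N^T)_{kp}
    return mu

# A nearly degenerate eta_R / eta_I pair suppresses mu down to the keV scale:
mu = mu_one_loop(yN=0.1 * np.eye(2), m_eta_R=2000.0, m_eta_I=2001.0,
                 m_Omega=np.array([100.0, 500.0]))
print(np.diag(mu) * 1e6)  # keV-scale entries, negative since m_eta_I > m_eta_R
```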
As we have mentioned, the dark sector is formed by the \(\mathcal{Z}_{2}\)-odd particles, see Table 1. The dark matter candidate of the model is the lightest component of either the singlet scalar \(\eta\) or the Majorana fermions \(\Omega\). The phenomenological consequences of having the lightest component of the scalar singlet \(\eta\) as the dark matter candidate are similar to what has been discussed in Refs. [20; 23; 28; 32; 33]. For this reason, in what follows we discuss only the constraints and projections of the model for the case in which the DM candidate is the Majorana fermion \(\Omega\).
Figure 1: One-loop Feynman diagram contributing to the Majorana neutrino mass in Eq. (12).
Figure 2: Parameter space fulfilling \(-10\) keV \(\leq\mu\leq 10\) keV, for the DM masses indicated above each plot. The color in each plot, from light to dark, represents \(y_{N}=0.01,0.05\) and \(0.1\), respectively. Here we assume that \(m_{\Omega_{2}}\gg m_{\Omega}\). The discontinuity appears when a degenerate mass spectrum is reached in Eq. (14).
## III Fermion dark matter
For simplicity, we consider the case in which the Yukawa matrix \(y_{\Omega}\) is diagonal and assume that \(\Omega_{R_{1}}\) is the lightest \(\mathcal{Z}_{2}\)-odd state. That is, \(\Omega_{R_{1}}\) is the fermion DM candidate accounting for about \(85\%\) of the matter content of the universe. According to the Planck collaboration, the DM relic abundance is [34]
\[\Omega_{c}h^{2}=0.1200\pm 0.0012\ \ \text{at}\ \ 68\%\text{C.L.} \tag{18}\]
In our setup the Lagrangian providing the relevant interactions of \(\Omega_{R_{1}}\) is given by
\[\mathcal{L}\ \supset\ y_{N_{1}}\bar{N}_{R}\Omega_{R_{1}}^{c}\eta+y_{\Omega_{1}} \overline{\Omega}_{R_{1}}\Omega_{R_{1}}^{c}\sigma+h.c. \tag{19}\]
After SSB Eq. (19) becomes
\[\mathcal{L}\supset(y_{N1}\bar{N}_{R}\Omega_{1R}^{c}\eta+h.c.)+m_{\Omega 1}\overline{\Omega}\Omega+y_{\Omega 1}\overline{\Omega}\Omega(-h_{1}\sin\theta+h_{2}\cos \theta)+y_{\Omega 1}i\overline{\Omega}\gamma^{5}\Omega\chi, \tag{20}\]
where \(m_{\Omega_{1}}=y_{\Omega_{1}}v_{\sigma}/\sqrt{2}\). We have defined \(\Omega\equiv(\Omega_{1R})^{c}+\Omega_{1R}\), \(N_{R}\equiv(-N_{R}^{+}+N_{R}^{-})/\sqrt{2}\) and \(\chi\equiv\sigma_{I}\).
Taking into account the assumptions above, the relic abundance of \(\Omega\) is determined by the annihilation channels shown in Figure 3. Given that \(\Omega\) is the DM candidate of the theory, we require \(m_{\Omega}<\min(m_{N_{R}},m_{\eta_{R}},m_{\eta_{I}})\). In this case, the main annihilation channels are the s-channels depicted by diagrams (a), (b) and (c) in Figure 3. We further simplify the analysis by considering that the annihilation channels mediated by the Higgs via the dimensionless parameters \(\lambda_{2}\) and \(\lambda_{3}\) are subleading. That is, we set \(\lambda_{2}=\lambda_{3}=0\). Therefore, the independent parameters to be used in the numerical analysis turn out to be \((m_{\Omega},m_{\eta_{R}},m_{\eta_{I}},m_{h_{2}},m_{N_{R}},y_{N_{1}},y_{\Omega _{1}},\theta)\).
Let us note that in this inverse seesaw model, the Majorana dark matter candidate can interact with nucleons at tree level. For this reason, our model gets restricted by direct detection constraints. As a matter of fact, these constraints come from the \(t\)-channel exchange of \(h_{1}\) and \(h_{2}\) shown in Figure 3-(h). The spin-independent (SI) tree-level DM-nucleon scattering cross section is then, approximately, [30; 35]
\[\sigma_{\Omega}\approx\frac{f_{p}^{2}m_{N}^{4}m_{\Omega}^{2}}{4\pi v_{\Phi}^{ 2}(m_{\Omega}+m_{N})^{2}}\left(\frac{1}{m_{h_{1}}^{2}}-\frac{1}{m_{h_{2}}^{2} }\right)^{2}(y_{\Omega_{1}}\sin 2\theta)^{2}, \tag{21}\]
where \(m_{N}\) denotes the nucleon mass and the nuclear matrix element is \(f_{p}\approx 0.27\). The approximation given in Eq. (21) does not take into account the finite widths of the two Higgs scalars, although the outputs of this expression match the numerical results from the Micromegas code v5.3.35 [36], which do include these finite widths.
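A direct transcription of Eq. (21) is straightforward; the sketch below (Python, with our own variable names and example inputs) converts the natural-units result to cm\({}^{2}\):

```python
import numpy as np

GEV2_TO_CM2 = 3.894e-28  # hbar^2 c^2: 1 GeV^-2 = 0.3894 mb = 3.894e-28 cm^2

def sigma_SI(m_Omega, y_Omega1, theta, m_h1=125.0, m_h2=120.0,
             m_N=0.939, f_p=0.27, v_phi=246.0):
    """Tree-level SI DM-nucleon cross section of Eq. (21), in cm^2.
    Masses in GeV; finite Higgs widths are neglected, as in the text."""
    prefac = f_p**2 * m_N**4 * m_Omega**2 / (4 * np.pi * v_phi**2
                                             * (m_Omega + m_N)**2)
    return GEV2_TO_CM2 * prefac * (1 / m_h1**2 - 1 / m_h2**2)**2 \
        * (y_Omega1 * np.sin(2 * theta))**2

# The cross section vanishes as m_h2 -> m_h1 (the inverted peak in Figure 4)
# and is suppressed by sin^2(2 theta) for small doublet-singlet mixing.
print(sigma_SI(m_Omega=200.0, y_Omega1=1.0, theta=0.1))
```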
Figure 3: _Diagrams (a)-(g) are relevant for the freeze-out of \(\Omega\). Diagram (h) is relevant for direct detection, with \(N\) representing the nucleons. Here, to simplify notation, we have used \(\sigma\) to denote any of \((h_{1},h_{2},\chi)\)._
### Analysis and results
In what follows, we compute the relic abundance of the Majorana fermion \(\Omega\) assuming the freeze-out mechanism, the direct detection cross section via non-relativistic scattering, and the indirect detection prospects today. For our calculations, we make use of the Micromegas code v5.3.35 [36].
Footnote 5: Collider searches of additional scalars restrict the doublet-singlet mixing angle to be \(\theta\lesssim 0.2\)[37].
As mentioned, here we explore the case where \(\lambda_{1}\neq 0\), \(\lambda_{2}=\lambda_{3}=0\). Therefore, the contribution to the relic abundance coming from the annihilation of \(\Omega\) into SM particles happens only via the Higgs portal associated to \(\lambda_{1}\). The left panel in Figure 4 shows the relic abundance of \(\Omega\) as a function of \(y_{\Omega}\), for \(m_{\Omega}=200\) (solid blue) and \(500\) GeV (solid green), assuming \(m_{h_{2}}=120\) GeV and \(\theta=0.1\). This benchmark considers \(y_{N}=0.1\), \(m_{\eta_{R}}=2000\) GeV, \(m_{\eta_{I}}=2001\) GeV and \(m_{N}=300\) GeV (these parameters will be fixed to such values from now on). The limit on the DM relic abundance given in Eq. (18) is represented by the red dashed line. One can see from Figure 4 that the relic abundance has a strong dependence on the DM mass and the parameter \(y_{\Omega}\). For a small Yukawa coupling \(y_{\Omega}\lesssim 10^{-2}\) and \(m_{\Omega}>m_{N}\), the process \(\Omega\Omega\to NN\) is kinematically allowed and dominates over the other annihilation channels. In the case of \(m_{\Omega}<m_{N}\), the leading contributions to the relic abundance are those processes which involve the fields \((\chi,h_{1},h_{2})\) (diagrams \(b\) and \(c\) in Figure 3). In such a case, the relic abundance turns out to be inversely proportional to \(y_{\Omega}^{2}\). For this reason, as shown in the left panel of Figure 4, the solid blue (green) curve decreases when the value of the Yukawa coupling increases. It is evident that, for a given \(y_{\Omega}\), the relic abundance grows as the DM mass decreases. This behaviour is expected since \(\Omega_{\Omega}h^{2}\) is inversely proportional to the annihilation cross section, which depends on the center-of-mass energy of the colliding non-relativistic DM particles, i.e. \(s\approx 4m_{\Omega}^{2}\). Here, we are focusing on the case of small doublet-singlet mixing, \(\theta\lesssim 0.1\). For this reason, the DM relic abundance turns out to be "blind" to this parameter and is completely determined by the interactions of fields belonging to the dark sector (namely, \(\chi\) and/or \(h_{2}\)). The DM annihilation channel into a pair of \(\chi\) is always present unless further assumptions are made to suppress it.
In contrast to the situation previously described, the DM direct detection, given by the SI cross section \(\sigma_{\Omega}\) in Eq. (21), is sensitive to the value of the doublet-singlet mixing \(\theta\). This is shown in the right panel of Figure 4, where \(\sigma_{\Omega}\) is depicted as a function of the mass of the second CP-even scalar \(m_{h_{2}}\) (considering \(m_{\Omega}=200\) and \(500\) GeV, each case with \(\theta=0.1,0.01\)). The blue and green curves represent the points in the parameter space fulfilling the correct relic abundance, while the red, yellow and gray dashed horizontal lines correspond to the experimental limits provided by XENON1T [38] and LUX-ZEPLIN (LZ) [39], and the projections of XENONnT [40], respectively. From Eq. (21) one can notice that the cross section rests on the existence of a mixing between the scalar doublet \(\Phi\) and the singlet \(\sigma\). This dependence can be observed in Figure 4, which shows the sensitivity of the cross section to variations of \(\theta\). Furthermore, we can see that when the CP-even scalars \(h_{1}\) and \(h_{2}\) are (semi-)degenerate, i.e. \(m_{h_{2}}\approx m_{h_{1}}\), there is a numerical cancellation that generates the inverted peak in the SI cross section. This allows the model to evade the experimental bounds when \(h_{1}\) and \(h_{2}\) are close in mass. Another way to relax the experimental constraints, including the one coming from XENONnT, is to shrink the value of the doublet-singlet mixing, as depicted in Figure 4. This limit of the parameter space is reached, for instance, when lepton number gets broken at energies much higher than the electroweak scale.
Let us note that this model is characterized by the presence of the process \(\Omega\bar{\Omega}\to\chi h_{1,2}\), with \(h_{1,2}\) decaying into SM particles [30]. These s-wave processes are velocity independent channels not present in other models [20; 23] and give good prospects for indirect DM detection especially when \(\chi\) is a pseudo-Goldstone boson, i.e. \(m_{\chi}\neq 0\). For this reason, we look into regions of the parameter space where the processes \(\Omega\bar{\Omega}\to\chi h_{1,2}\) dominate the DM annihilation and provide the limits coming from the Alpha Magnetic Spectrometer (AMS) experiment as well as the future sensitivities of the Cherenkov Telescope Array (CTA) experiment. For convenience, we assume \(m_{\chi}>m_{h_{2}}/2\) so that the decay \(h_{2}\to 2\chi\) is kinematically disallowed. In this way, the DM candidate does not annihilate primarily into invisible channels.
Using Eq. (9), one can express the \(h_{2}\) branching fraction into SM particles as [41],
\[\text{BR}(h_{2}\to\text{SM})=\sin^{2}\theta\left[\frac{\Gamma(h_{2}\to\text{ SM})}{\Gamma_{\text{tot}}}\right] \tag{22}\]
where \(\Gamma(h_{2}\to\text{SM})\) corresponds to the partial decay width of the scalar boson \(h_{2}\) (with mass \(m_{h_{2}}\)) into SM states,
and the total decay width is given by
\[\Gamma_{\rm tot}=\sin^{2}\theta\times\Gamma_{\rm tot}^{\rm SM}+\Gamma(h_{2}\to 2h_{1})+ \Gamma(h_{2}\to 2\Omega)+\Gamma(h_{2}\to 2\chi). \tag{23}\]
Here \(\Gamma_{\rm tot}^{\rm SM}\) corresponds to the total decay width of \(h_{2}\) into SM states [42]. For simplicity, we focus on the case in which \(m_{h_{2}}<2m_{\Omega}\). Furthermore, in order to ensure observability via \(h_{2}\) decaying into SM particles, we assume \(m_{\chi}\neq 0\) as well as \(m_{h_{2}}<2m_{\chi}\). Therefore, the third and fourth terms in Eq. (23) are not present in our study.
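The rescaling in Eqs. (22) and (23) can be packaged as below (a minimal sketch; the width inputs are placeholders to be taken from [42] and from the model's \(h_{2}\to 2h_{1}\) partial width):

```python
import numpy as np

def br_h2_to_SM(theta, gamma_partial_SM, gamma_tot_SM, gamma_h2_to_2h1):
    """Visible branching fraction of h2 into a given SM final state,
    Eqs. (22)-(23), with h2 -> 2*Omega and h2 -> 2*chi kinematically closed.
    All widths in GeV (placeholder inputs)."""
    gamma_tot = np.sin(theta)**2 * gamma_tot_SM + gamma_h2_to_2h1
    return np.sin(theta)**2 * gamma_partial_SM / gamma_tot

# For gamma_h2_to_2h1 = 0 the mixing angle cancels and BR reduces to the
# SM-like ratio gamma_partial_SM / gamma_tot_SM:
print(br_h2_to_SM(theta=0.1, gamma_partial_SM=2.0, gamma_tot_SM=4.0,
                  gamma_h2_to_2h1=0.0))  # 0.5
```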
Taking into account the considerations previously stated, we perform a numerical analysis. Figure 5 shows the predictions for indirect detection signals coming from DM annihilation into a pseudo-Goldstone \(\chi\) and \(h_{2}\). The CP-even scalar subsequently decays into SM particles, see Eq. (22). The DM annihilation chain is then \(\Omega\bar{\Omega}\to\chi h_{2}\to\chi\,\text{SM}\). The assumed thermal value \(2\times 10^{-26}\) cm\({}^{3}\)/s\(\leq\left<\sigma v\right>_{\Omega\bar{\Omega}\to\chi h_{2}}\leq 3\times 10^{-26}\) cm\({}^{3}\)/s is depicted by the red region at the top of each panel in Figure 5. The left panels consider \(m_{\Omega}=200\) GeV for \(\theta=0.1\) and \(0.01\), while the panels on the right take \(m_{\Omega}=500\) GeV for the same values of \(\theta\). In all panels, the Yukawa coupling is \(y_{N}=0.1\), and the dark scalar and pseudoscalar masses are fixed to \(m_{\eta_{R}}=2000\) GeV and \(m_{\eta_{I}}=2001\) GeV, respectively. The solid blue (green) line corresponds to DM annihilation into \(b\bar{b}\) (\(W^{+}W^{-}\)). The dashed blue horizontal line is the upper bound of AMS-02 [43] for DM annihilation into \(b\bar{b}\), whereas the dashed green horizontal one represents the future sensitivity of CTA [44] in the \(W^{+}W^{-}\) channel. We also consider the bound projected by CTA for DM annihilation searches with \(W^{+}W^{-}\) in the final state assuming a gNFW profile with a slope parameter \(\gamma=1.26\)[44]. Figure 5 includes direct detection bounds at fixed \(\theta\) and DM mass. The dark orange area is the exclusion region coming from the LZ results on DD searches. The light orange area represents the future sensitivity of XENONnT. Therefore, only the light orange and white parts of each panel are allowed by current DM direct detection constraints. In addition, the dark gray area in the left panels of Figure 5 satisfies \(m_{h_{2}}>2m_{\chi}\) and is therefore forbidden. This is because we are working under the assumption that \(m_{h_{2}}<2m_{\chi}\), with \(m_{\chi}=2m_{\Omega}-m_{h_{2}}>\frac{2}{3}m_{\Omega}\), i.e. \(h_{2}\) does not decay into \(2\chi\). This guarantees that the fermion DM candidate \(\Omega\) annihilates into observable modes.
Figure 5 shows the fraction of the parameter space that CTA would be able to test in the future by looking for \(W^{+}W^{-}\) products. One can see that the CTA projections do not reach the model predictions if the DM density distribution follows an Einasto profile. On the other hand, if the DM possesses a gNFW profile, CTA searches could probe masses \(m_{h_{2}}\gtrsim 150\) GeV. In addition, AMS-02 data impose restrictions on the parameter space from searches for DM annihilation into \(b\bar{b}\). Notice that, as expected from Eq. (21), the direct detection constraints get weaker when the singlet-doublet mixing takes smaller values.
Figure 4: Left-panel: Relic abundance as a function of \(y_{\Omega}\) for \(m_{\Omega}=200\) and \(500\) GeV. Here we have set \(m_{h_{2}}=120\) GeV and \(\theta=0.1\), with the rest of the parameters specified in the text. Right-panel: Direct detection cross section as a function of \(m_{h_{2}}\) and different combinations of \((m_{\Omega}[\text{GeV}],\tan\theta)\). The horizontal red and orange lines are the current DD upper limits for each value of \(m_{\Omega}\), whereas the grey curves are the XENONnT projections. The solid curves depicted here fulfill the correct relic abundance.
## IV Charged lepton flavor violation
In this section we analyze charged lepton flavor violation (cLFV) processes present due to the mixing between active and heavy sterile neutrinos. Here we focus on the one-loop decays \(l_{i}\to l_{j}\gamma\), whose branching ratios are given by [45; 46; 47]
\[\text{BR}\left(l_{i}\to l_{j}\gamma\right) = \frac{\alpha_{W}^{3}s_{W}^{2}m_{l_{i}}^{5}}{256\pi^{2}m_{W}^{4} \Gamma_{i}}\left|G_{ij}\right|^{2} \tag{24}\] \[G_{ij} \simeq \sum_{k=1}^{3}\left(\left[\left(1-RR^{\dagger}\right)U_{\nu} \right]^{*}\right)_{ik}\left(\left(1-RR^{\dagger}\right)U_{\nu}\right)_{jk}G_{ \gamma}\left(\frac{m_{\nu_{k}}^{2}}{m_{W}^{2}}\right)+2\sum_{l=1}^{2}\left(R^{ *}\right)_{il}\left(R\right)_{jl}G_{\gamma}\left(\frac{m_{N_{R_{l}}}^{2}}{m_{W }^{2}}\right),\] (25) \[G_{\gamma}(x) = \frac{10-43x+78x^{2}-49x^{3}+18x^{3}\ln x+4x^{4}}{12\left(1-x \right)^{4}},\]
where \(\Gamma_{\mu}=3\times 10^{-19}\) GeV is the total muon decay width, \(U_{\nu}\) is the matrix that diagonalizes the light neutrino mass matrix which, in our case, is equal to the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix since the charged lepton mixing matrix is equal to the identity, \(U_{\ell}=\mathbb{I}\).

Figure 5: Parameter space restrictions from direct and indirect dark matter searches. The thermal value of \(\langle\sigma v\rangle_{\Omega\Omega\to h_{2}\chi}\) is given by the red band on top of the plots. It is assumed that \(h_{2}\) decays into either \(b\bar{b}\) (blue solid line) or \(W^{+}W^{-}\) (green solid line). The dashed blue horizontal line represents the latest bound on the DM annihilation cross section into \(b\bar{b}\) from AMS-02. The green dashed and dot-dashed lines are the projected sensitivities of CTA depending on the DM profile. The dark and light orange areas correspond to the LZ exclusion region and the future sensitivity of XENONnT, respectively. The dark gray area in the left panels satisfies \(m_{h_{2}}>2m_{\chi}\) and is therefore forbidden.

In addition, the matrix \(R\) is given by
\[R=\frac{1}{\sqrt{2}}m_{D}^{*}M^{-1}, \tag{26}\]
where \(M\) and \(m_{D}\) are the heavy Majorana mass matrix and the Dirac neutrino mass matrix, respectively. We provide in Appendix B all the assumptions made for computing \(R\) in Eq. (26). Table 3 contains the benchmark points used to compute \(\mu\to e\gamma\).
Then, feeding Eq. (24) with the values of the dimensionless parameters and masses given in Table 3, one gets the following branching fractions,
\[\text{BR}^{(a)}(\mu\to e\gamma)\simeq 2.02\times 10^{-13}\ \ \text{and}\ \ \text{BR}^{(b)}(\mu\to e\gamma)\simeq 1.13\times 10^{-13}, \tag{27}\]
where \((a)\) is for \(m_{\Omega}=200\) GeV and \((b)\) is for \(m_{\Omega}=500\) GeV.
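The loop function entering Eq. (25) is easy to evaluate numerically; a minimal sketch (with our own sample arguments) is:

```python
import numpy as np

def G_gamma(x):
    """Loop function G_gamma of Eq. (25); x = m^2 / m_W^2."""
    return (10 - 43*x + 78*x**2 - 49*x**3 + 18*x**3*np.log(x)
            + 4*x**4) / (12 * (1 - x)**4)

# G_gamma(x) -> 10/12 for x -> 0 and -> 1/3 for x -> infinity, so the
# rates in Eq. (24) are driven mainly by the mixing factors in R.
print(G_gamma(1e-6), G_gamma((500.0 / 80.4)**2))  # ~0.833, ~0.4
```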
Figure 6 shows the correlation between the branching ratio \(\text{BR}\left(\mu\to e\gamma\right)\) and the mass of the lightest RH Majorana neutrino \(N_{R}\). One observes that the branching ratio decreases as the mass of \(N_{R}\) increases. In both plots, the red horizontal line and the shaded region represent the latest experimental constraint provided by the MEG [48] collaboration,
\[\text{BR}\left(\mu\to e\gamma\right)^{\text{exp}}<4.2\times 10^{-13}. \tag{28}\]
The black stars in Figure 6 correspond to the branching ratios predicted, Eq. (27), by the best-fit points of the model for \(m_{\Omega}=200\) GeV (left panel) and \(m_{\Omega}=500\) GeV (right panel). The scatter plots come from a random variation of the dimensionless parameters up to 30% around the best-fit values. The green points are compatible with current neutrino oscillation experimental limits at \(3\sigma\). One can see that neutrino oscillation data restrict the lightest RH neutrino mass to be in the range \(436.9\text{ GeV}\leq m_{N_{R}}\leq 996.5\text{ GeV}\) for \(m_{\Omega}=200\text{ GeV}\), and \(204.5\text{ GeV}\leq m_{N_{R}}\leq 649.9\text{ GeV}\) for \(m_{\Omega}=500\text{ GeV}\). All orange points are out of the \(3\sigma\) range and, hence, excluded by neutrino oscillation data.
Figure 6: Branching ratio \(\text{BR}\left(\mu\to e\gamma\right)\) as a function of the mass of the lightest RH Majorana neutrino \(N_{R}\). The shaded region is excluded by MEG [48]. The black star corresponds to the prediction of the best-fit point of the model, for \(m_{\Omega}=200\text{ GeV}\) (left panel) and \(m_{\Omega}=500\text{ GeV}\) (right panel). The green points are compatible with current neutrino oscillation experimental limits at \(3\sigma\). The orange points are out of the \(3\sigma\) range and, hence, excluded by neutrino oscillation data.
## V Conclusions
We have built an SM extension where the tiny masses of the light active neutrinos are generated from an inverse seesaw mechanism at the one-loop level. This model adds two scalar singlets and six right-handed Majorana fermions to the SM particle content. In addition, the SM gauge group is enlarged by a global \(U(1)_{X}\) symmetry, which is spontaneously broken down to an Abelian discrete subgroup, \(\mathcal{Z}_{2}\subset U(1)_{X}\). The latter is an exact low-energy symmetry guaranteeing a one-loop realization of the inverse seesaw mechanism and the existence of stable scalar and fermionic dark matter candidates. We focused our study on the case in which the dark matter is the stable Majorana fermion.
We have found that our model, besides providing a dynamical origin for the inverse seesaw mechanism, successfully reproduces the measured value of the DM relic density. We have identified the parameter space regions sensitive to (in)direct DM searches. In the case of direct detection searches, we show the parameter space restrictions on the model (imposed by the XENON1T and LZ results) and the future prospects of the XENONnT experiment. Furthermore, we compare the model predictions coming from Majorana fermion annihilation into \(b\bar{b}\) and \(W^{+}W^{-}\) with AMS-02 bounds and CTA projections. We find that our benchmarks could be tested by CTA if the DM has a gNFW profile.
We have also analysed the lepton flavor violating \(\mu\to e\gamma\) process and provided the model predictions in simplified scenarios that are in agreement with current neutrino oscillation data.
###### Acknowledgements.
The work of C.B. was supported by FONDECYT grant No. 11201240. A.E.C.H is supported by ANID-Chile FONDECYT 1210378, ANID PIA/APOYO AFB220004 and Milenio-ANID-ICN2019_044. S.K is supported by ANID-Chile FONDECYT 1230160 and Milenio-ANID-ICN2019_044. J.M. is supported by ANID Programa de Becas Doctorado Nacional code 21212041. B.D.S has been funded by ANID Grant No. 74200120. C.B. would like to acknowledge the hospitality and support of the ICTP through the Associates Programme (2023-2028).
## Appendix A Boundedness Conditions
The boundedness conditions of the scalar potential in Eq. (5) are derived assuming that the quartic terms dominate at high energies. In order to do so, we define the following bilinears,
\[a=|\Phi|^{2}\quad;\quad b=|\sigma|^{2}\quad;\quad c=|\eta|^{2}, \tag{10}\]
and rewrite the quartic terms of the scalar potential. Then, using the expressions in Eq. (10) one gets
\[V_{q} = \frac{1}{2}(\sqrt{\lambda_{\Phi}}a-\sqrt{\lambda_{\sigma}}b)^{2} +\frac{1}{2}(\sqrt{\lambda_{\Phi}}a-\sqrt{\lambda_{\eta}}c)^{2}+\frac{1}{2}( \sqrt{\lambda_{\sigma}}b-\sqrt{\lambda_{\eta}}c)^{2}+(\lambda_{1}+\sqrt{ \lambda_{\Phi}\lambda_{\sigma}})ab \tag{11}\] \[+(\lambda_{2}+\sqrt{\lambda_{\Phi}\lambda_{\eta}})ac+(\lambda_{ 3}+\sqrt{\lambda_{\sigma}\lambda_{\eta}})bc-\frac{1}{2}(\lambda_{\Phi}a^{2}+ \lambda_{\sigma}b^{2}+\lambda_{\eta}c^{2}).\]
Following Refs. [49; 50] the boundedness conditions of the model turn out to be,
\[\lambda_{\Phi} \geq 0\quad;\quad\lambda_{\sigma}\geq 0\quad;\quad\lambda_{\eta}\geq 0 \tag{12}\] \[\lambda_{1}+\sqrt{\lambda_{\Phi}\lambda_{\sigma}} \geq 0\quad;\quad\lambda_{2}+\sqrt{\lambda_{\Phi}\lambda_{\eta}}\geq 0 \quad;\quad\lambda_{3}+\sqrt{\lambda_{\sigma}\lambda_{\eta}}\geq 0 \tag{13}\]
## Appendix B Benchmarks for \(\mu\to e\gamma\)
The cLFV process \(\mu\to e\gamma\) is computed using Eq. (25) and taking as inputs the model outputs that minimize the \(\chi^{2}\) function given by,
\[\chi^{2}=\frac{\left[\Delta m_{21}^{2\,(\exp)}-\Delta m_{21}^{2\,(\text{th})} \right]^{2}}{\sigma_{\Delta m_{21}^{2}}^{2}}+\frac{\left[\Delta m_{31}^{2\,( \exp)}-\Delta m_{31}^{2\,(\text{th})}\right]^{2}}{\sigma_{\Delta m_{31}^{2}} ^{2}}+\sum_{i<j}\frac{\left[s_{ij}^{(\exp)}-s_{ij}^{(\text{th})}\right]^{2}}{ \sigma_{s_{ij}}^{2}}+\frac{\left[\delta_{CP}^{(\exp)}-\delta_{CP}^{(\text{th})} \right]^{2}}{\sigma_{\delta_{CP}}^{2}}\, \tag{14}\]
where \(s_{ij}\equiv\sin\theta_{ij}\) (with \(i,j=1,2,3\)), \(\delta_{CP}\) is the leptonic CP violating phase, the label (th) identifies the model outputs, the label (exp) corresponds to the experimental values, and \(\sigma_{a}\) represents the experimental errors. Table 2 shows the best-fit values and \(1\sigma\)-\(3\sigma\) intervals reported by neutrino oscillation global fits [51]1.
Footnote 1: Table 2 corresponds to normal neutrino mass ordering. The inverted mass ordering can be consulted in [51]. For other fits of neutrino oscillation parameters we refer the reader to Refs. [52; 53].
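For reference, a minimal implementation of the \(\chi^2\) above, loaded with symmetrized \(1\sigma\) errors and taking the \(\sin^2\theta_{ij}\) columns of Table 2 as the fitted observables; the dictionary keys are our own naming:

```python
# chi^2 of the fit above, with best fits and symmetrized 1-sigma errors
# taken from Table 2 (normal ordering).
exp = {"dm21": (7.50e-5, 0.21e-5), "dm31": (2.55e-3, 0.025e-3),
       "s12": (0.318, 0.016), "s23": (0.574, 0.014),
       "s13": (0.02200, 0.00066), "dcp": (194.0, 23.0)}

def chi2(model):
    """model maps the same keys to the theoretical predictions."""
    return sum(((exp[k][0] - model[k]) / exp[k][1]) ** 2 for k in exp)

print(chi2({k: best for k, (best, _) in exp.items()}))  # 0.0 at the best fit
```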
In order to compute all the neutrino oscillation parameters, we perform a random scan of the free parameters in the lepton sector and make assumptions about the flavor structure of the Dirac mass matrix \(m_{D}\) and the Majorana matrices \(M\) and \(\mu\) in Eq. (12). Using the Casas-Ibarra parametrization [54], the matrix \(m_{D}\) in Eq. (13) reads as follows [54; 55; 56; 57; 58; 59],
\[m_{D}=\frac{y_{\nu}v_{\Phi}}{\sqrt{2}}=U_{\rm PMNS}\left(\hat{m}_{\nu} \right)^{1/2}\hat{R}\mu^{-1/2}M\;, \tag{12}\]
where \(U_{\rm PMNS}\equiv U_{\ell}^{\dagger}U_{\nu}\), \(\hat{m}_{\nu}={\rm diag}(m_{1},m_{2},m_{3})\) is the diagonal neutrino mass matrix and \(\hat{R}\) is a rotation matrix given by,
\[\hat{R}=\left(\begin{array}{cc}0&0\\ \cos\hat{\theta}&\sin\hat{\theta}\\ -\sin\hat{\theta}&\cos\hat{\theta}\end{array}\right)\;\;\mbox{with}\;\;\;\hat{ \theta}\in[0,2\pi]. \tag{13}\]
The matrices \(M\) and \(\mu\) are assumed to be diagonal,
\[M=\left(\begin{array}{cc}M_{1}&0\\ 0&M_{2}\end{array}\right)\;\;\mbox{and}\;\;\mu=\left(\begin{array}{cc}\mu_{ 1}&0\\ 0&\mu_{2}\end{array}\right). \tag{14}\]
For the matrix \(M\) in Eq. (14) we varied \(M_{1}\) within the range \(100\;{\rm GeV}\leq M_{1}\leq 1\;{\rm TeV}\) and considered \(M_{2}=10M_{1}\), where \(M_{1}\) is the mass of the lightest RH neutrino, \(M_{1}=m_{N_{R}}\). In the case of the \(\mu\) matrix, we used the one-loop expression of Eq. (14) in the main text to compute \(\mu_{1}\) and assumed \(\mu_{2}=10\mu_{1}\). This matrix depends on \(y_{N}\), \(m_{\Omega}\), \(m_{\eta_{R}}\) and \(m_{\eta_{I}}\). Then, following the study made in section III, we analyze two situations: one with a DM mass of \(m_{\Omega}=200\) GeV and the other with \(m_{\Omega}=500\;{\rm GeV}\). In both cases, the Yukawa \(y_{N}\) was varied in the range \(10^{-2}\leq y_{N}\leq 1\) and the masses of the \({\cal Z}_{2}\)-odd scalars were fixed to
\[m_{\eta_{R}}=2000\;{\rm GeV},\;\;m_{\eta_{I}}=2001\;{\rm GeV}. \tag{15}\]
We also take the charged lepton mass matrix as a diagonal matrix, i.e. \(M_{l}={\rm diag}(m_{e},m_{\mu},m_{\tau})\), which implies \(U_{\rm PMNS}=U_{\nu}\) in Eq. (12).
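A compact sketch of the Casas-Ibarra relation just given, under the above assumptions (diagonal \(M\) and \(\mu\), \(U_{\rm PMNS}=U_{\nu}\)); the function name and array conventions are ours, and the shapes follow the \(3\times 2\) structure of \(m_{D}\):

```python
import numpy as np

def m_dirac(U_pmns, m_light, theta_hat, mu_diag, M_diag):
    """Casas-Ibarra form of m_D (3x2): U_PMNS m_hat^{1/2} R_hat mu^{-1/2} M.
    m_light: three light-neutrino masses (GeV); mu_diag, M_diag: 2-vectors (GeV)."""
    R_hat = np.array([[0.0, 0.0],
                      [np.cos(theta_hat), np.sin(theta_hat)],
                      [-np.sin(theta_hat), np.cos(theta_hat)]])
    return (U_pmns @ np.diag(np.sqrt(m_light)) @ R_hat
            @ np.diag(1.0 / np.sqrt(mu_diag)) @ np.diag(M_diag))

# With the first row of R_hat vanishing, the lightest neutrino stays massless,
# as expected with only two pairs of heavy singlets.
```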
Table 3 shows the dimensionless parameters that minimize the \(\chi^{2}\) function defined above and that are used to compute \(\mu\to e\gamma\) via Eq. (25). The best fits of the model for \(m_{\Omega}=200\;{\rm GeV}\) and \(m_{\Omega}=500\;{\rm GeV}\) are presented in Table 4.
\begin{table}
\begin{tabular}{l|c c c c c c} \hline \hline
Observable & \(\Delta m_{21}^{2}\,[10^{-5}\;{\rm eV^{2}}]\) & \(\Delta m_{31}^{2}\,[10^{-3}\;{\rm eV^{2}}]\) & \(\sin^{2}\theta_{12}/10^{-1}\) & \(\sin^{2}\theta_{23}/10^{-1}\) & \(\sin^{2}\theta_{13}/10^{-2}\) & \(\delta_{\rm CP}/^{\circ}\) \\ \hline \hline
Best fit \(\pm 1\sigma\) & \(7.50^{+0.22}_{-0.20}\) & \(2.55^{+0.02}_{-0.03}\) & \(3.18\pm 0.16\) & \(5.74\pm 0.14\) & \(2.200^{+0.069}_{-0.062}\) & \(194^{+24}_{-22}\) \\
\(3\sigma\) range & \(6.94-8.14\) & \(2.47-2.63\) & \(2.71-3.69\) & \(4.34-6.10\) & \(2.00-2.405\) & \(128-359\) \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Neutrino oscillation parameters from global fits [51].
\begin{table}
\begin{tabular}{l|l} \hline \hline
\multicolumn{2}{c}{**Dimensionless parameters (\(a\))**} \\ \hline \hline
\(y_{\nu_{11}}=0.0109e^{1.87i}\) & \(y_{\nu_{31}}=0.0124e^{1.49i}\) \\
\(y_{\nu_{12}}=0.0105e^{-1.31i}\) & \(y_{\nu_{32}}=0.0808e^{1.77i}\) \\
\(y_{\nu_{21}}=0.0270e^{1.70i}\) & \(y_{N}=2.04\times 10^{-2}\) \\
\(y_{\nu_{22}}=0.0463e^{1.56i}\) & \(\hat{\theta}=1.303\;{\rm rad}\) \\ \hline \hline
\end{tabular}
\qquad
\begin{tabular}{l|l} \hline \hline
\multicolumn{2}{c}{**Dimensionless parameters (\(b\))**} \\ \hline \hline
\(y_{\nu_{11}}=0.00525e^{-1.51i}\) & \(y_{\nu_{31}}=0.0177e^{1.572i}\) \\
\(y_{\nu_{12}}=0.0151e^{-1.59i}\) & \(y_{\nu_{32}}=0.00424e^{1.61i}\) \\
\(y_{\nu_{21}}=0.0166e^{1.57i}\) & \(y_{N}=1.22\times 10^{-2}\) \\
\(y_{\nu_{22}}=0.0354e^{-1.58i}\) & \(\hat{\theta}=-4.98\;{\rm rad}\) \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Dimensionless parameters used to compute \({\rm BR}(\mu\to e\gamma)\), compatible with neutrino oscillation data. Case (a) considers \(m_{\Omega}=200\;{\rm GeV}\) and case (b) is for \(m_{\Omega}=500\;{\rm GeV}\).
2307.07449 | Differentially Private Clustering in Data Streams | The streaming model is an abstraction of computing over massive data streams,
which is a popular way of dealing with large-scale modern data analysis. In
this model, there is a stream of data points, one after the other. A streaming
algorithm is only allowed one pass over the data stream, and the goal is to
perform some analysis during the stream while using as small space as possible.
Clustering problems (such as $k$-means and $k$-median) are fundamental
unsupervised machine learning primitives, and streaming clustering algorithms
have been extensively studied in the past. However, since data privacy becomes
a central concern in many real-world applications, non-private clustering
algorithms are not applicable in many scenarios.
In this work, we provide the first differentially private streaming
algorithms for $k$-means and $k$-median clustering of $d$-dimensional Euclidean
data points over a stream with length at most $T$ using $poly(k,d,\log(T))$
space to achieve a constant multiplicative error and a $poly(k,d,\log(T))$
additive error. In particular, we present a differentially private streaming
clustering framework which only requires an offline DP coreset or clustering
algorithm as a blackbox. By plugging in existing results from DP clustering
Ghazi, Kumar, Manurangsi 2020 and Kaplan, Stemmer 2018, we achieve (1) a
$(1+\gamma)$-multiplicative approximation with
$\tilde{O}_\gamma(poly(k,d,\log(T)))$ space for any $\gamma>0$, and the
additive error is $poly(k,d,\log(T))$ or (2) an $O(1)$-multiplicative
approximation with $\tilde{O}(k^{1.5} \cdot poly(d,\log(T)))$ space and
$poly(k,d,\log(T))$ additive error. In addition, our algorithmic framework is
also differentially private under the continual release setting, i.e., the
union of outputs of our algorithms at every timestamp is always differentially
private. | Alessandro Epasto, Tamalika Mukherjee, Peilin Zhong | 2023-07-14T16:11:22Z | http://arxiv.org/abs/2307.07449v2 | # Differentially Private Clustering in Data Streams
###### Abstract
The streaming model is an abstraction of computing over massive data streams, which is a popular way of dealing with large-scale modern data analysis. In this model, there is a stream of data points, one after the other. A streaming algorithm is only allowed one pass over the data stream, and the goal is to perform some analysis during the stream while using as small space as possible.
Clustering problems (such as \(k\)-means and \(k\)-median) are fundamental unsupervised machine learning primitives, and streaming clustering algorithms have been extensively studied in the past. However, since data privacy becomes a central concern in many real-world applications, non-private clustering algorithms are not applicable in many scenarios.
In this work, we provide the first differentially private streaming algorithms for \(k\)-means and \(k\)-median clustering of \(d\)-dimensional Euclidean data points over a stream with length at most \(T\) using \(\operatorname{poly}(k,d,\log(T))\) space to achieve a _constant_ multiplicative error and a \(\operatorname{poly}(k,d,\log(T))\) additive error. In particular, we present a differentially private streaming clustering framework which only requires an offline DP coreset algorithm as a blackbox. By plugging in existing DP coreset results [29, 41], we achieve (1) a \((1+\gamma)\)-multiplicative approximation with \(\tilde{O}_{\gamma}(\operatorname{poly}(k,d,\log(T)))\) space for any \(\gamma>0\), and the additive error is \(\operatorname{poly}(k,d,\log(T))\) or (2) an \(O(1)\)-multiplicative approximation with \(\tilde{O}(k\cdot\operatorname{poly}(d,\log(T)))\) space and \(\operatorname{poly}(k,d,\log(T))\) additive error.
In addition, our algorithmic framework is also differentially private under the continual release setting, i.e., the union of outputs of our algorithms at every timestamp is always differentially private.
## 1 Introduction
In real-world applications, a major challenge in dealing with large-scale data is that the entire dataset is too large to be stored in the computing system. The need to address this challenge and the success of large-scale systems (such as Spark Streaming [43]) that process data in streams have driven the study of the _streaming model_, introduced in the seminal work [3]. In this model, there is a stream of data points. During the stream, a data point arrives at each timestamp, and a streaming algorithm can only access these data points in a single pass. The goal is to output an (approximate) solution to a problem with respect to the set of data points that have arrived, while using as little space as possible. If an algorithm is required to output at every timestamp when a new data point arrives, then we are in the _continual release_ setting. Otherwise, the algorithm only needs to output at the end of the stream, which is called the non-continual release or _one-shot_ setting. The streaming model has attracted a lot of attention from different areas in the past decades. In particular, streaming clustering algorithms have been extensively studied in the clustering literature.
Clustering is an essential primitive in unsupervised machine learning, and its geometric formulations, such as \(k\)-means and \(k\)-median, have been studied extensively, e.g., [4, 11, 33, 12, 13, 5, 40, 37, 2]. Euclidean
\(k\)-clustering problem is stated as follows. Given a point set \(P\subset\mathbb{R}^{d}\), the goal is to output a set of centers \(C\subset\mathbb{R}^{d}\) with size \(|C|=k\) such that the \(k\)-clustering cost \(\mathsf{cost}(C,P)=\sum_{p\in P}\min_{c\in C}\|p-c\|_{2}^{z}\) is minimized, where \(z=1\) and \(z=2\) correspond to the \(k\)-median and \(k\)-means problem respectively. There is a long list of work (e.g., a subset includes [33, 32, 14, 26, 8, 18, 20]) studying the \(k\)-means and \(k\)-median problem in the streaming setting. In the streaming \(k\)-clustering problem, there is a stream of points in \(\mathbb{R}^{d}\), and the goal is to output a set of \(k\) centers at each timestamp \(t\), minimizing the \(k\)-clustering cost with respect to the data points arrived before \(t\). The state-of-the-art result is achieved by [20], which uses \(\tilde{O}\left(\frac{kd}{\gamma^{2}}\right)\cdot\min\left(\frac{1}{\gamma^{2}},k\right)\cdot\text{poly}(\log\log T)\) space to obtain a \((1+\gamma)\)-approximation with probability \(0.9\) at the end of the stream. However, none of these algorithms are private, which means they are not applicable when the dataset involves personal information, while privacy is a major concern in real-world applications.
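As a reference point for the notation, the objective can be written in a few lines (a non-private sketch; the array conventions are ours):

```python
import numpy as np

def clustering_cost(C, P, z=2):
    """k-clustering cost: sum_p min_c ||p - c||_2^z over the points P.
    z=1 gives k-median, z=2 gives k-means. C: (k, d) centers, P: (n, d)."""
    dists = np.linalg.norm(P[:, None, :] - C[None, :, :], axis=2)  # (n, k)
    return float(np.sum(dists.min(axis=1) ** z))
```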
Differential privacy (DP) [21] has become the de facto standard for preserving data privacy due to its compelling privacy guarantees and mathematically rigorous definition. DP \(k\)-clustering algorithms in the offline setting have been studied for years [38, 25, 28, 31, 6, 35, 39, 41, 29, 15], where the main focus is to improve the approximation ratio and achieve fast sequential running times. The DP \(k\)-clustering problem has also been studied in other computational models which are more relevant to large-scale computations, such as sublinear-time [7], massively parallel computing (MPC) [16, 17], and the distributed computing setting [42, 10]. However, the landscape of the DP \(k\)-clustering problem in the streaming model is still mysterious. In fact, to the best of our knowledge, there is no previously known DP streaming algorithm achieving \(O(1)\)-multiplicative error using sublinear space, even in the one-shot setting.
In this work, we present the _first_ DP streaming algorithms for Euclidean \(k\)-means and \(k\)-median clustering using \(\text{poly}(k,d,\log(T))\) space to achieve an \(O(1)\)-multiplicative error and a \(\text{poly}(k,d,\log(T))\)-additive error. In addition, our algorithms are DP under the continual release setting, which means that the union of all historical outputs of our algorithm is DP. Note that any DP algorithm under the continual release setting is always DP under the one-shot setting.
### Computation Model and Differential Privacy
In this section, we formally define differential privacy and the streaming model under continual release setting. The input is a stream of points \(x_{1},x_{2},\cdots,x_{T}\in\mathbb{R}^{d}\), where each \(x_{i}\in\mathbb{R}^{d}\) satisfies \(\|x_{i}\|_{2}\leq\Lambda\), i.e., we assume all input points are within a ball of radius \(\Lambda\). Streams \(\mathcal{S}=(x_{1},\ldots,x_{T})\) and \(\mathcal{S}^{\prime}=(x_{1}^{\prime},\ldots,x_{T}^{\prime})\) are _neighboring_ if there exists at most one timestamp \(t^{*}\in[T]\) for which \(x_{t^{*}}\neq x_{t^{*}}^{\prime}\) and \(x_{t}=x_{t}^{\prime}\) for all \(t\neq t^{*}\). In this paper, we consider every streaming algorithm \(\mathcal{A}\) under the continual release setting, i.e., the entire output of \(\mathcal{A}\) is \((s_{1},s_{2},\cdots,s_{T})\) where \(s_{t}\) is the output of \(\mathcal{A}\) at timestamp \(t\) with respect to the data arrived no later than \(t\). If we do not specify the timestamp, then the output of \(\mathcal{A}\) indicates the entire output of \(\mathcal{A}\) over the stream.
**Definition 1** (Differential privacy [21]).: _A randomized algorithm \(\mathcal{A}\) is \((\varepsilon,\delta)\)-DP if for every pair of neighboring datasets \(D\sim D^{\prime}\), and for all sets \(\mathcal{S}\) of possible outputs, we have that \(\Pr[\mathcal{A}(D)\in\mathcal{S}]\leq e^{\varepsilon}\Pr[\mathcal{A}(D^{ \prime})\in\mathcal{S}]+\delta\). When \(\delta=0\) we simply say that the algorithm is \(\varepsilon\)-DP._
### Our Results
We present a general framework for DP \(k\)-clustering in the streaming setting which utilizes an offline DP coreset (see Definition 2 for the definition of coreset) algorithm as a black-box. Using existing results from the DP coreset literature, the cost of the resulting clustering output by our framework achieves (1) a \((1+\gamma)\)-multiplicative error with space complexity having a \(\text{poly}(k)\) dependency -- using the DP coreset algorithm from [29] (see Corollary 1), or (2) an \(O(1)\)-multiplicative error with space complexity having a linear dependency on \(k\) -- using the DP coreset algorithm from [41] (see Corollary 2).
We state our results for \(k\)-means in the sequel, but note that our results easily generalize to \(k\)-median. In the following results, we assume we are given a non-DP algorithm in the offline setting that can compute a \(\rho\)-approximation to \(k\)-means--many such algorithms exist with constant approximation (e.g. [2]). As is
standard in DP clustering literature, we assume \(\Lambda\) is an upper bound on the diameter of the space of input points.
**Theorem 1** (Informal version of Corollary 14).: _Given dimension \(d\), clustering parameter \(k\), \((1+\gamma)\)-approx non-DP coreset, offline \(\varepsilon\)-DP coreset from [29], and \(\rho\)-approx non-DP \(k\)-means clustering algorithm. Let \(\mathcal{S}:=\{x_{1},\ldots,x_{T}\}\) be the stream of input points in Euclidean space. There exists a streaming algorithm \(\mathcal{A}^{\prime}\) for \(k\)-means that outputs a set of \(k\) centers \(\mathcal{C}_{\hat{\mathcal{Y}}}\) at every timestep \(t\in[T]\) such that_
1. _(Privacy)_ \(\mathcal{A}^{\prime}\) _is_ \(5\varepsilon\)_-DP under the continual release setting._
2. _(Accuracy) With high probability,_ \[\mathsf{cost}(\mathcal{C}_{\hat{\mathcal{Y}}},\mathcal{S})\leq(1+\gamma^{ \prime})\mathsf{cost}(\mathcal{C}_{\mathcal{S}}^{opt},\mathcal{S})+O_{\gamma, \rho}(1)\cdot V^{\prime}(d,k,\varepsilon,T,\Lambda)\] (1) _where_ \(\gamma^{\prime}=2\gamma+\gamma^{2}+\frac{2\rho(1+\gamma)^{4}}{(1-\gamma)^{3}}\)_, and_ \(V^{\prime}(d,k,\varepsilon,T,\Lambda)=\Lambda^{2}\cdot\mathrm{poly}(k,d,\frac{ 1}{\varepsilon},\log(T))\)_._
3. _(Space)_ \(\mathcal{A}^{\prime}\) _consumes_ \(2^{O_{\gamma}(d^{\prime})}\cdot\mathrm{poly}(k,d,\frac{1}{\varepsilon},\log(T))\) _space where_ \(d^{\prime}=O_{\gamma}(\log k)\)_._
**Theorem 2** (Informal version of Corollary 13).: _Given dimension \(d\), clustering parameter \(k\), \((1+\gamma)\)-approx non-DP coreset, \((\varepsilon,\delta)\)-DP coreset from [41], and \(\rho\)-approx non-DP \(k\)-means clustering algorithm. Let \(\mathcal{S}:=\{x_{1},\ldots,x_{T}\}\) be the stream of input points in Euclidean space. There exists a streaming algorithm \(\mathcal{A}^{\prime}\) for \(k\)-means that outputs a set of centers \(\mathcal{C}_{\hat{\mathcal{Y}}}\) at every timestep \(t\in[T]\) such that_
1. _(Privacy)_ \(\mathcal{A}^{\prime}\) _is_ \((5\varepsilon,\delta)\)_-DP under the continual release setting._
2. _(Accuracy) With high probability,_ \[\mathsf{cost}(\mathcal{C}_{\hat{\mathcal{Y}}},\mathcal{S})\leq O_{\gamma, \rho}(1)\cdot\mathsf{cost}(\mathcal{C}_{\mathcal{S}}^{opt},\mathcal{S})+O_{ \gamma,\rho}(1)\cdot V^{\prime}(d,k,\varepsilon,\delta,T,\Lambda)\] (2) _where_ \(V^{\prime}(d,k,\varepsilon,\delta,T,\Lambda)=\Lambda^{2}\cdot\mathrm{poly}( \log(T),\log(1/\delta),d,\frac{1}{\varepsilon},k)\)_._
3. _(Space)_ \(\mathcal{A}^{\prime}\) _consumes_ \(\tilde{O}(\mathrm{poly}(\log(T),\log(\frac{1}{\delta}),d,\frac{1}{\varepsilon}, k))\) _space._
The notation \(O_{x}(\cdot)\) ignores factors involving \(x\).
### Comparison with Concurrent Work [36]
[36] concurrently released results on differentially private \(k\)-means clustering under continual observation. However, their work is mostly incomparable with ours: their main focus is to optimize the approximation ratio under continual observation, but their algorithm uses large space. In contrast, our main focus is to optimize the approximation ratio1 using as little space as possible, and the fact that our algorithm is DP under the continual observation (or, equivalently, continual release) setting is a side outcome of our algorithm.
Footnote 1: We note that [36] cites a previous version of our paper which only achieved \(d^{O(1)}\)-multiplicative error.
### Technical Overview
Our techniques apply to both \(k\)-means and \(k\)-median clustering, but we assume we are dealing with \(k\)-means for simplicity. Before discussing our algorithm in more detail, we first outline the challenges to designing a DP \(k\)-means clustering algorithm in the continual release setting.
**Vanilla Merge and Reduce approaches fail.** A natural idea is to follow a standard approach called Merge-and-Reduce [27] from the streaming clustering literature. A coreset of a set of points is a weighted subset of the points such that the clustering cost of the coreset with respect to any \(k\) centers \(C\) is a good approximation of the clustering cost of the original point set with respect to \(C\); we refer readers to Definition 2 for a formal definition. According to [18], any point set has a \((1+\gamma)\)-coreset of size \(k\cdot\operatorname{poly}(1/\gamma)\). The Merge-and-Reduce approach suggests the following. We maintain \(C_{1},C_{2},\cdots,C_{L}\) for \(L=\log(T)\) during the stream such that \(C_{1}\cup C_{2}\cup\cdots\cup C_{L}\) is a \((1+\gamma)\)-coreset of the points that have arrived so far. We choose some threshold \(M=k\operatorname{poly}(\log(T))\). When a new point \(x\) arrives, we set \(C_{1}\gets C_{1}\cup\{x\}\). Then we do the following check: if there is \(i\in[L]\) such that \(|C_{i}|\geq M\), we compute a \((1+1/\operatorname{poly}(\log(T)))\)-coreset \(C^{\prime}\) of \(C_{i}\), and we set \(C_{i}\leftarrow\emptyset\) and \(C_{i+1}\gets C_{i+1}\cup C^{\prime}\). We repeat this check until every \(C_{i}\) has size \(|C_{i}|\leq M\). It is clear that at any time, \(C_{i}\) is a coreset of a subset \(S_{i}\) of input points and \(S_{1}\cup S_{2}\cup\cdots\cup S_{L}\) is the entire set of input points. By induction, \(C_{i}\) must be a \((1+1/\operatorname{poly}(\log(T)))^{i}\)-coreset of \(S_{i}\), and thus \(C_{1}\cup C_{2}\cup\cdots\cup C_{L}\) is a \((1+1/\operatorname{poly}(\log(T)))^{L}=(1+O(\log(T))/\operatorname{poly}(\log(T)))\)-coreset of the entire input point set.
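For concreteness, here is a minimal Python sketch of this (non-DP) Merge-and-Reduce bookkeeping. The routine `offline_coreset` is a hypothetical stand-in for any offline \((1+\gamma)\)-coreset construction (e.g., the one from [18]); the sketch only illustrates the cascading merge/reduce structure, not a full implementation.

```
# Minimal sketch of (non-DP) Merge-and-Reduce bookkeeping.
# `offline_coreset` is a hypothetical placeholder for an offline
# (1+gamma)-coreset routine of size k * poly(1/gamma).

def offline_coreset(points, gamma):
    return points  # placeholder: a real routine returns a weighted subset

class MergeAndReduce:
    def __init__(self, num_levels, threshold, gamma):
        self.levels = [[] for _ in range(num_levels)]  # C_1, ..., C_L
        self.M = threshold                             # overflow threshold
        self.gamma = gamma

    def insert(self, x):
        self.levels[0].append(x)                       # C_1 <- C_1 + {x}
        i = 0
        # Whenever level i overflows, compress it and push one level up.
        while i + 1 < len(self.levels) and len(self.levels[i]) >= self.M:
            compressed = offline_coreset(self.levels[i], self.gamma)
            self.levels[i + 1].extend(compressed)      # merge
            self.levels[i] = []                        # reduce
            i += 1

    def union(self):
        # C_1 u ... u C_L: a (1+gamma)^L-coreset of all points so far.
        return [p for level in self.levels for p in level]
```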
However, this approach fails when a DP coreset is used for \(C_{1}\). The reason is that any DP coreset must introduce some additive error \(\eta=\Omega(\Lambda^{z})\), where \(z\) is the clustering exponent (see e.g., [30]). In particular, suppose we follow the above Merge and Reduce process but, when \(|C_{1}|\geq M\), compute a DP coreset \(C^{\prime}\) of \(C_{1}\) instead. Then \(C^{\prime}\) is merged into \(C_{2}\), and thus \(C_{2}\) is a coreset of \(S_{2}\) with some additive error \(\eta\). Now suppose \(\hat{C}\) is a coreset of \(\hat{S}\) with additive error \(\hat{\eta}\) and \(\bar{C}\) is a coreset of \(\bar{S}\) with additive error \(\bar{\eta}\). Then even if we compute a non-DP coreset \(\tilde{C}\) of \(\hat{C}\cup\bar{C}\) with no relative error, \(\tilde{C}\) is a coreset of \(\hat{S}\cup\bar{S}\) with additive error \(\bar{\eta}+\hat{\eta}\). This example shows that the additive error accumulates: \(C_{i}\) is a coreset of \(S_{i}\) with additive error \(2^{O(i)}\cdot\Omega(\Lambda^{z})\), which can be as large as \(T^{\Omega(1)}\cdot\Lambda^{z}\).
To overcome this issue, we develop a novel DP Merge and Reduce method: instead of running Merge and Reduce on all points together, we first partition the space \(\mathbb{R}^{d}\) so that close points are likely to fall in the same group, and then run Merge and Reduce in a DP way on the points in each group. In this way, we can show that the additive error introduced is always bounded by the optimal clustering cost up to a small multiplicative factor. In the remainder of this section, we briefly discuss the high-level ideas of our approach.
**Our Approach.** For every timestamp \(t\in[T]\), our algorithm can be split into the following main steps: (1) compute a set of centers \(\mathcal{F}\) in an online fashion that satisfies a bicriteria approximation to \(k\)-means (see Theorem 3); (2) maintain DP coresets of the points assigned to centers in \(\mathcal{F}\) in parallel via a DP Merge-and-Reduce framework; (3) output the union of these coresets as a "semi-coreset" called \(\hat{\mathcal{Y}}\). In a post-processing step, we run a \(\rho\)-approximate non-DP \(k\)-means algorithm on \(\hat{\mathcal{Y}}\). We call \(\hat{\mathcal{Y}}\) a semi-coreset because it has an additive error proportional to the optimal cost of clustering the stream \(\mathcal{S}\), which prevents it from being a standard coreset.
**Bicriteria Approximation.** The bicriteria approximation uses two main ingredients: quadtrees and heavy hitters. A quadtree creates a nested series of grids that partitions \(\mathbb{R}^{d}\) and can be used to embed input points into a Hierarchically Separated Tree (HST) metric, which often makes computing the \(k\)-means cost simpler. We use this embedding to map every input point to the center of a grid cell at every quadtree level. For a fixed level, our goal is to approximately choose the \(O(k)\) cells that contain the most points, i.e., we want to find the "heaviest" cells in a DP fashion and store them as candidate centers in the set \(\mathcal{F}\). We achieve this by hashing the cells into \(O(k)\) substreams and running a continual-release black-box DP heavy hitter algorithm on each hash substream. Since, with large enough probability, the heavy cells do not collide, this achieves our goal. Note that since we need to do this over logarithmically many levels of the quadtree, we end up with a bicriteria approximation (see Theorem 7 for details).
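To illustrate the embedding step, the sketch below (with hypothetical helper names) maps a point to its cell at each level of a randomly shifted quadtree and hashes that cell into one of \(O(k)\) substreams; the DP heavy-hitter instances themselves are abstracted away.

```
import random

# Sketch of the quadtree-cell and hashing step (helper names hypothetical).
# Level ell partitions the (shifted) domain into cells of side Lambda / 2^ell.

def cell_id(x, level, Lambda, shift):
    side = Lambda / 2 ** level
    return (level,) + tuple(int((xi + si) // side) for xi, si in zip(x, shift))

def bucket(cell, w, seed=0):
    return hash((seed, cell)) % w  # stand-in for a pairwise-independent hash

Lambda, d, k = 1024.0, 2, 10
shift = [random.uniform(0, Lambda) for _ in range(d)]  # random quadtree shift
x = (300.5, 711.2)
for level in range(11):  # 0 <= ell <= log(Lambda)
    c = cell_id(x, level, Lambda, shift)
    j = bucket(c, w=4 * k)
    # feed cell c to the continual-release DP-HH instance of substream j
```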
**Theorem 3** (Bicriteria approximation).: _There exists an \(\varepsilon\)-DP algorithm that for every timestamp \(t\in[T]\) computes a weighted set of \(O(k\log^{2}(k)\log(\Lambda)\log T)\) centers with \(d^{O(1)}\)-multiplicative error to the best \(k\)-means (or \(k\)-median) clustering and \(\tilde{O}(\frac{k\rho\Lambda^{2}}{\varepsilon}\cdot(d\log T)^{O(1)})\)-additive error in \(O(k\log^{2}\Lambda\log(k)\operatorname{poly}\left(\log\left(T\Lambda k\right)\right))\) space._
**Reducing Multiplicative Error.** At this point, one idea is to assign input points to candidate centers in \(\mathcal{F}\) obtained from the bicriteria approximation in an online fashion and release a weighted DP coreset at every timestep. The postprocessing step of applying an offline non-DP clustering algorithm to this weighted DP coreset would yield a DP set of \(k\) centers; however, the clustering cost of the resulting centers incurs a multiplicative error of \(d^{O(1)}\) (see Theorem 3), which can be quite large, especially in high-dimensional Euclidean space.
Inspired by [17], who recently used techniques introduced by [14] to obtain a near-optimal DP clustering algorithm in the MPC setting, we define rings based on the distance to the current centers in the set \(\mathcal{F}\) and map input points to these rings. The main idea is to compute DP coresets for the points in each ring; taking the union of these coresets over all rings then yields a DP coreset for the stream of points seen so far. Since the DP coreset algorithms we use [29, 41] achieve constant multiplicative error, this technique intuitively results in a constant multiplicative error for the resulting DP coreset. In particular, to compute DP coresets for the points in each ring we run Merge and Reduce in a DP manner.
**DP Merge and Reduce.** We design an online differentially private variant of the celebrated Merge and Reduce framework [34, 1] to compute DP coresets for the points in each ring. Intuitively, the (non-DP) Merge and Reduce technique partitions the input stream into blocks, computes a coreset for each block, takes the union of the resulting coresets (merge step), and computes a coreset of the union (reduce step) in a tree-like fashion. On a high level, our framework computes DP coresets at the base level (of the tree) using a DP coreset algorithm (e.g., [41, 29]) and then computes coresets for subsequent levels using a non-DP coreset algorithm (e.g., [19]). Thus, using our DP Merge and Reduce framework, at any given timestep the number of input points we actually need to store is roughly the size of a block at the base level (denoted \(M\)): as soon as the number of stored input points exceeds \(M\), we can compute a DP coreset of this block. We perform this check privately via the sparse vector technique [23]. Note that the base-level block size controls the height of the tree, i.e., a larger \(M\) results in a smaller tree height, which in turn results in a smaller additive error of the resulting DP coreset. However, we show that we can store a sublinear number of input points and still achieve reasonable additive error for a fixed instance of DP Merge and Reduce (see Theorem 10 for a formal statement).
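As a schematic of the private overflow check, one can compare a noisy running count against a noisy copy of the threshold \(M\), loosely in the style of AboveThreshold; the noise scales below are illustrative assumptions, not the exact mechanism or constants of Algorithm 5.

```
import math
import random

def laplace(scale):
    # Sample Laplace(0, scale) noise via the inverse CDF.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

class PrivateOverflowTest:
    # AboveThreshold-style sketch of the test `count >= M`.
    # Noise scales are illustrative; Algorithm 5 fixes the actual mechanism.

    def __init__(self, M, eps):
        self.noisy_M = M + laplace(2.0 / eps)  # noise the threshold once
        self.eps = eps
        self.count = 0

    def observe(self, is_real_point):
        if is_real_point:   # empty updates (the symbol "bot") are ignored
            self.count += 1
        # Fresh noise on the count for each comparison.
        return self.count + laplace(4.0 / self.eps) >= self.noisy_M
```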
**Charging Additive Error to Multiplicative Error.** Although we show that a fixed instance of DP Merge and Reduce for a fixed ring releases a DP coreset with reasonably small additive error, the additive error of the union of these DP coresets can still be large. We use ideas similar to those of [17] to address this challenge: by using the properties of the bicriteria solution and carefully choosing \(M\) for the DP Merge and Reduce instances, we show that the large additive error obtained by summing the clustering cost over all rings can be charged to a small constant-factor blow-up in the multiplicative error (see Theorem 12).
Finally, observe that the bicriteria solution \(\mathcal{F}\) can change over time, i.e., new centers can be added to \(\mathcal{F}\). Thus, whenever new centers are added to \(\mathcal{F}\), we redefine the rings (based on the distance to the set \(\mathcal{F}\)) and initiate new DP Merge and Reduce instances for each ring. See Algorithm 1 for more details.
## 2 Preliminaries
**Norms and heavy hitters.** Let \(p\geq 1\); the \(\ell_{p}\)-norm of a vector \(\mathbf{x}=(x_{1},\ldots,x_{t})\) is defined as \(\|\mathbf{x}\|_{p}=(\sum_{i=1}^{t}|x_{i}|^{p})^{1/p}\). Given a multiset \(\mathcal{S}\), denote the frequency of an item \(x\) in \(\mathcal{S}\) by \(f(x)\). We say that an item \(x\) is a \(\theta\)-heavy hitter (\(\theta\)-HH for short) if \(f(x)\geq\theta\|\mathcal{S}\|_{1}\).
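For intuition, the exact (offline, non-private) \(\theta\)-heavy hitters of a stream can be computed as follows; the continual-release DP algorithm of Theorem 7 approximates this set privately.

```
from collections import Counter

def heavy_hitters(stream, theta):
    # Exact theta-heavy hitters: items with frequency >= theta * ||S||_1.
    f = Counter(stream)
    total = sum(f.values())
    return {x for x, c in f.items() if c >= theta * total}

# Example: heavy_hitters("aababc", 0.3) == {"a", "b"}
```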
**Theorem 4** (Binary Mechanism BM[9, 22]).: _Let \(\varepsilon>0\) and \(\xi\in(0,0.5)\). There is an \(\varepsilon\)-DP algorithm for the sum of the stream in the continual release model. With probability \(1-\xi\), the additive error of the output for every timestamp \(t\in[T]\) is always at most \(O(\frac{1}{\varepsilon}\log^{2.5}(T)\log(\frac{1}{\xi}))\), and the algorithm uses \(O(\log T)\) space._
**Theorem 5** (Theorem 5.2 in [41]).: _There is an \((\varepsilon,\delta)\)-DP algorithm that given a database \(S\) containing \(n\) points in the \(d\)-dimensional ball \(\mathcal{B}(0,\Lambda)\), identifies with probability \(1-\beta\), a \((\kappa,\eta)\)-coreset of \(S\) where \(\kappa=O(1)\) and \(\eta=\operatorname{poly}\bigl{(}\log(n),\log(\frac{1}{\beta}),\log(\frac{1}{ \delta}),d,\frac{1}{\varepsilon},k\bigr{)}\)._
**Theorem 6** (Lemma 16 in [29]).: _There is a \(2^{O_{\gamma}(d^{\prime})}\operatorname{poly}(n)\)-time \(\varepsilon\)-DP algorithm that with probability 0.99 produces a \(\left(\gamma,O_{\gamma}\left(\frac{k^{2}2^{O_{\gamma}(d^{\prime})}}{\varepsilon}\operatorname{poly}\log n\right)\right)\)-coreset for \(k\)-means (and \(k\)-median), where \(d^{\prime}=O_{\gamma}(\log k)\). The size of the coreset is \(2^{O_{\gamma}(d^{\prime})}\cdot\operatorname{poly}(k,\log n)\)._
**Theorem 7** (Dp-Hh algorithm [24]).: _Let \(\varepsilon>0\), \(\gamma_{h}\in(0,0.5)\), \(0<\theta<1\), \(\xi\in(0,0.5)\). There is an \(\varepsilon\)-DP algorithm in the streaming continual release model such that with probability at least \(1-\xi\), it always outputs a set \(H\subseteq\mathcal{U}\) and a function \(\hat{f}:H\to\mathbb{R}\) for every timestamp \(t\in[T]\) such that_
1. \(\forall a\in H\)_,_ \(\hat{f}(a)\in(1\pm\gamma_{h})\cdot f_{a}\) _where_ \(f_{a}\) _is the frequency of_ \(a\) _in the stream_ \(\mathcal{S}=(a_{1},a_{2},\ldots,a_{t})\)__
2. \(\forall a\in\mathcal{U}\)_, if_ \(f_{a}\geq\frac{1}{\varepsilon\gamma_{h}}\operatorname{poly}\bigl{(}\log\bigl{(}\frac{T\cdot|\mathcal{U}|}{\theta\xi\gamma_{h}}\bigr{)}\bigr{)}\) _and_ \(f_{a}\geq\theta\|\mathcal{S}\|_{1}\) _then_ \(a\in H\)_
3. _The size of_ \(H\) _is at most_ \(O((\log(T/\xi)+\log|\mathcal{U}|)\cdot(\frac{1+\gamma_{h}}{1-\gamma_{h}})\cdot \frac{1}{\theta})\)__
_The algorithm uses \(\frac{1}{\gamma_{h}^{2}\theta^{3}}\operatorname{poly}\left(\log\left(\frac{T \cdot|\mathcal{U}|}{\xi\theta}\right)\right)\) space._
**Clustering.** For points \(x,y\in\mathbb{R}^{d}\), we let \(d(x,y)=\|x-y\|_{2}\) be the Euclidean distance between \(x\) and \(y\). Given a set \(\mathcal{C}\), we define \(d(x,\mathcal{C}):=\min_{c\in\mathcal{C}}d(x,c)\).
For a set of centers \(C\), we define the cost of clustering for the set \(\mathcal{S}\) wrt \(C\) as
\[cost(C,\mathcal{S})=\sum_{x\in\mathcal{S}}d^{z}(x,C)\]
where \(z=1\) for \(k\)-median, and \(z=2\) for \(k\)-means.
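This definition translates directly into code; the following brute-force Python sketch (ours, for illustration only) computes the cost for small inputs.

```
def clustering_cost(C, S, z=2):
    # cost(C, S) = sum_{x in S} min_{c in C} ||x - c||_2^z,
    # with z = 2 for k-means and z = 1 for k-median.
    def dist(x, c):
        return sum((xi - ci) ** 2 for xi, ci in zip(x, c)) ** 0.5
    return sum(min(dist(x, c) for c in C) ** z for x in S)

# Example: two centers, three points in the plane.
# clustering_cost([(0, 0), (10, 0)], [(1, 0), (9, 0), (0, 2)]) == 1 + 1 + 4
```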
Our goal in DP clustering is to produce a set of \(k\) centers \(C_{\mathcal{S}}\) for input stream \(\mathcal{S}\) such that (1) \(C_{\mathcal{S}}\) is \((\varepsilon,\delta)\)-DP wrt \(\mathcal{S}\), and (2) \(cost(C_{\mathcal{S}},\mathcal{S})\leq\alpha\cdot cost(C_{\mathcal{S}}^{opt}, \mathcal{S})+\beta\).
**Definition 2** (\((\kappa,\eta)\)-coreset).: _Given a point set \(P\) in \(\mathbb{R}^{d}\), a weighted subset \(Q\subseteq P\) is a \((\kappa,\eta)\)-coreset of \(P\) for \(k\)-clustering (\(k\)-means or \(k\)-median) if_
\[\frac{1}{\kappa}\cdot\textsf{cost}(C,P)-\eta\leq\textsf{cost}(C,Q)\leq\kappa \cdot\textsf{cost}(C,P)+\eta\]
_for any set of \(k\)-centers \(C\subseteq\mathbb{R}^{d}\)._
In particular, given a point set \(P\) in \(\mathbb{R}^{d}\), a weighted subset \(Q\subseteq P\) is a \((1+\gamma,\eta)\)-coreset of \(P\) for \(k\)-clustering (\(k\)-means or \(k\)-median) if
\[(1-\gamma)\cdot\textsf{cost}(C,P)-\eta\leq\textsf{cost}(C,Q)\leq(1+\gamma) \cdot\textsf{cost}(C,P)+\eta\]
for any set of \(k\)-centers \(C\subseteq\mathbb{R}^{d}\).
**Theorem 8** ([19]).: _There exists a non-DP \((1+\gamma)\)-coreset of size \(O(k\log k\cdot(\gamma^{-2-\max(2,z)})\cdot 2^{O(z\log z)}\cdot\operatorname{poly}\log( \gamma^{-1}))\) for \((k,z)\)-clustering in Euclidean spaces._
## 3 Differentially Private Clustering Framework
Our Differentially Private Clustering Framework in the continual release setting is given by Algorithm 1. Recall that there is a postprocessing step of applying a \(\rho\)-approximation non-DP clustering algorithm to the output of Algorithm 1. Notably, our framework allows us to plug in any existing offline DP coreset algorithm to obtain a corresponding DP clustering algorithm in the continual release setting.
**Definition 3** (Ring centered at a Set).: _Let \(r\in\mathbb{R}\). Ring \(R_{r}\) for set \(\mathcal{F}\) contains the set of points \(\{x_{i}\}_{i\in[T]}\) such that \(2^{r-1}\leq d(x_{i},\mathcal{F})<2^{r}\)._
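Concretely, the ring index of a point can be computed with the small helper below (our illustration; it assumes \(d(x,\mathcal{F})>0\), and Algorithm 1 only instantiates rings with \(1\leq r\leq\log\Lambda\)).

```
import math

def ring_index(x, F):
    # Return r with 2^(r-1) <= d(x, F) < 2^r, per Definition 3.
    # Assumes d(x, F) > 0 and Euclidean distances (Python >= 3.8).
    d_xF = min(math.dist(x, f) for f in F)
    return math.floor(math.log2(d_xF)) + 1

# Example: ring_index((5, 0), [(0, 0), (100, 0)]) == 3, since 4 <= 5 < 8.
```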
```
0: Stream \(\mathcal{S}\) of points \(x_{1},\ldots,x_{T}\in\mathbb{R}^{d}\), Privacy parameters \(\varepsilon,\delta\)
1: Initialize bicriteria solution \(\mathcal{F}=\emptyset\), and DP coreset \(\hat{\mathcal{Y}}=\emptyset\)
2:for when new point \(x_{t}\) arrives do
3:\(\mathcal{F}_{t}\leftarrow\)Update\((x_{t})\) of BicriteriaDPCenters\((\varepsilon)\)\(\triangleright\) See Algorithm 2
4:if\(\mathcal{F}_{t}\neq\emptyset\) and \(|\mathcal{F}\cap\mathcal{F}_{t}|<|\mathcal{F}_{t}|\)then\(\triangleright\) New centers added to \(\mathcal{F}\) -- need to redefine rings
5:\(\mathcal{F}\leftarrow\mathcal{F}\cup\mathcal{F}_{t}\)
6: Run non-DP coreset algorithm on the coreset \(\hat{\mathcal{Y}}\) of points so far and store resulting coreset
7: Delete existing input points (if any) in memory
8:for\(1\leq r\leq\log\Lambda\), run the following in parallel do
9: Initialize \(\hat{\mathcal{Y}}_{r}=\emptyset\)
10: Let \(R_{r}\) represent the ring centered at \(\mathcal{F}\) (see Definition 3)
11: Create new instance DP-Merge-Reduce\({}_{R_{r}}\) of DP-Merge-Reduce\((\varepsilon,\delta)\)
12:if\(x_{t}\in R_{r}\)then
13:\(\hat{\mathcal{Y}}_{r}\leftarrow\)Update\((x_{t})\) of DP-Merge-Reduce\({}_{R_{r}}\)\(\triangleright\) See Algorithm 4
14:else
15:\(\hat{\mathcal{Y}}_{r}\leftarrow\)Update\((\bot)\) of DP-Merge-Reduce\({}_{R_{r}}\)
16: Set \(\hat{\mathcal{Y}}\leftarrow\hat{\mathcal{Y}}\cup(\cup_{r}\hat{\mathcal{Y}}_{r})\)
17: Output the coreset computed by running a non-DP coreset algorithm on \(\hat{\mathcal{Y}}\)
18:if\(|\mathcal{F}\cap\mathcal{F}_{t}|=|\mathcal{F}_{t}|\)then\(\triangleright\) No new centers -- the rings are already defined wrt existing centers
19:\(\mathcal{F}\leftarrow\mathcal{F}\cup\mathcal{F}_{t}\)
20:for\(1\leq r\leq\log\Lambda\), run the following in parallel do
21: Let \(R_{r}\) represent the ring centered at \(\mathcal{F}\) (see Definition 3)
22:if\(x_{t}\in R_{r}\)then\(\triangleright\) DP-Merge-Reduce\({}_{R_{r}}\) has already been created
23:\(\hat{\mathcal{Y}}_{r}\leftarrow\)Update\((x_{t})\) of DP-Merge-Reduce\({}_{R_{r}}\)
24:else
25:\(\hat{\mathcal{Y}}_{r}\leftarrow\)Update\((\bot)\) of DP-Merge-Reduce\({}_{R_{r}}\)
26: Set \(\hat{\mathcal{Y}}\leftarrow\hat{\mathcal{Y}}\cup(\cup_{r}\hat{\mathcal{Y}}_{r})\)
27: Output the coreset computed by running a non-DP coreset algorithm on \(\hat{\mathcal{Y}}\)
```
**Algorithm 1** Main Algorithm
**Main Algorithm.** When a new point \(x_{t}\) arrives, our algorithm does the following
1. Update the bicriteria solution \(\mathcal{F}\) (see Algorithm 2)
2. Create rings according to Definition 3 for the set \(\mathcal{F}\) and add \(x_{t}\) to the appropriate ring.2 For each ring \(1\leq r\leq\log(\Lambda)\), maintain an instance of DP-Merge-Reduce\({}_{r}\) (see Algorithm 4), which outputs a DP coreset per ring. If no new centers have been added to \(\mathcal{F}\) in this timestep, then instead of creating new rings, the algorithm adds \(x_{t}\) to an existing ring (and the corresponding DP-Merge-Reduce instance). Footnote 2: Notice that in the pseudocode of Algorithm 1 the symbol \(\bot\) represents an empty update that is effectively ignored. This is needed for technical reasons to ensure DP by preventing the value of the input from affecting the number of events in the sub-streams.
3. Release the union of these DP coresets as \(\hat{\mathcal{Y}}\) at each timestep.
If new centers have been added to \(\mathcal{F}\), then in order to keep our space usage small, before creating the new rings we apply a \((1+\gamma)\)-approximation non-DP coreset algorithm to the existing union of ring coresets \(\hat{\mathcal{Y}}\). We do the same thing, i.e., apply a \((1+\gamma)\)-approximation non-DP coreset algorithm, before releasing \(\hat{\mathcal{Y}}\) at every timestep in order to keep the size of \(\hat{\mathcal{Y}}\) small.
Finally, in an offline postprocessing step, we apply a \(\rho\)-approximate non-DP clustering algorithm to \(\hat{\mathcal{Y}}\).
**Analysis.** For the sake of the analysis, we split the entire stream of points into _epochs_ dictated by the addition of _new_ centers to the bicriteria solution \(\mathcal{F}\). Let \(T_{1},\ldots,T_{e}\) be the epochs such that for a fixed \(i\), the set of bicriteria centers \(\mathcal{F}\) is fixed over the timesteps \(t\in T_{i}\). Clearly \(T_{1}\cup\ldots\cup T_{e}=[T]\) and the epochs are pairwise disjoint.
Although the output of Algorithm 1 is technically a semi-coreset, as it has an additive error proportional to the optimal cost of clustering the stream \(\mathcal{S}\) (which prevents it from being a standard coreset), we may refer to it as a coreset in the following proofs and theorem statements for simplicity.
We first state the theoretical guarantees of BicriteriaDPCenters and DP-Merge-Reduce, as we need these results to state the guarantees of our Main Algorithm (see Algorithm 1). The proofs of these statements can be found in Section 4.1 and Section 5.1, respectively.
**Theorem 9**.: _[BicriteriaDPCenters] Let \(\mathcal{S}:=\{x_{1},\ldots,x_{T}\}\) be the stream of input points in Euclidean space. For \(t\in[T]\), let \(\mathcal{F}_{t}\) be the set of centers until time step \(t\). Let \(cost(\mathcal{F},\mathcal{S}):=\sum_{t=1}^{T}cost(\mathcal{F}_{t})\) where \(cost(\mathcal{F}_{t}):=\min_{f\in\mathcal{F}_{t}}dist^{2}(x_{t},f)\). There exists an algorithm BicriteriaDPCenters (see Algorithm 2) that outputs a set of centers \(\mathcal{F}\) at every timestep \(t\in[T]\) such that_
1. _(Privacy)_ BicriteriaDPCenters _is_ \(3\varepsilon\)_-DP under the continual release setting._
2. _(Accuracy) With probability at least_ \(1-1/k^{O(\operatorname{poly}(k,\log(\Lambda)))}\)_,_ \[cost(\mathcal{F},\mathcal{S})\leq O(d^{3})cost(C_{\mathcal{S}}^{opt},\mathcal{S})+\tilde{O}\left(\frac{d^{2}\Lambda^{2}k}{\varepsilon}\operatorname{poly}\left(\log\left(T\cdot k\cdot\Lambda\right)\right)\right)\] _where_ \(cost(C_{\mathcal{S}}^{opt},\mathcal{S})\) _is the optimal_ \(k\)_-means cost for_ \(\mathcal{S}\)_._
3. _(Space)_ BicriteriaDPCenters _uses_ \(O(k\log(\Lambda)\log^{2}(k)\operatorname{poly}\left(\log\left(T\Lambda k\right) \right))\) _space._
4. _(Size)_ \(\mathcal{F}\) _has at most_ \(O(k\log^{2}(k)\log(\Lambda)\log T)\) _centers._
Recall that our Main Algorithm runs multiple instances of DP-Merge-Reduce at any given timestep, so \(N\) in the theorem statement below represents the number of points seen by a specific instance of DP-Merge-Reduce. We use a non-DP coreset algorithm in DP-Merge-Reduce, which we fix in the analysis to be the current state-of-the-art construction given by Theorem 8.
**Theorem 10** (DP-Merge-Reduce).: _Let \(0<\xi<1\), \(T\) be the length of the entire stream, \(\varepsilon,\delta\) be privacy parameters, \(M\) be an arbitrary parameter such that \(M>\frac{12}{\varepsilon}\log(\frac{2T}{\xi})\), and \(P\) be a sub-stream of non-empty points with length \(N\)._
_Suppose we are given black-box access to an offline \((\varepsilon,\delta)\)-DP algorithm \(\mathcal{A}\) that computes a \((\kappa,\eta)\)-coreset of \(X\subseteq\mathbb{R}^{d}\) of size \(SZ_{\mathcal{A}}(N,k,d,\varepsilon,\delta,\kappa,\eta,\xi_{A})\) using space \(S_{\mathcal{A}}(N,k,d,\varepsilon,\delta,\kappa,\eta,\xi_{A})\) with failure probability \(\xi_{A}\), and black-box access to an offline non-DP algorithm \(\mathcal{B}\) (see Theorem 8) that computes a \((1+\gamma)\)-coreset of \(X\subseteq\mathbb{R}^{d}\) of size \(SZ_{\mathcal{B}}(N,k,d,\gamma,\xi_{B})\) using space \(S_{\mathcal{B}}(N,k,d,\gamma,\xi_{B})\) with failure probability \(\xi_{B}\). Then there exists an algorithm DP-Merge-Reduce (see Algorithm 4) in the streaming model such that_
* _(Privacy)_ DP-Merge-Reduce _is_ \((2\varepsilon,\delta)\)_-DP under the continual release setting._
* _(Accuracy) With probability_ \(1-\xi_{A}-\xi_{B}-\xi\)_, the coreset released by_ DP-Merge-Reduce _is a_ \(((1+\gamma)\kappa,(\frac{4N}{M}-1)(1+\gamma)\eta+\tilde{M})\)_-coreset of_ \(P\)_, where_ \(\tilde{M}:=M+\frac{6}{\varepsilon}\log(\frac{2T}{\xi})\)_._
* _(Space)_ DP-Merge-Reduce _requires_ \(S_{\mathcal{A}}(M,k,d,\varepsilon,\delta,\kappa,\eta)\)__\(+\)__\(S_{\mathcal{B}}(SZ_{\mathcal{A}}(M,k,d,\varepsilon,\delta,\kappa,\eta),k,d,\gamma)\)__\(+\)__\(\lceil\log(2N/M)\rceil\cdot S_{\mathcal{B}}(SZ_{\mathcal{B}}(M,k,d,\gamma),k,d,\gamma)+3M/2\) _space._
* _(Size) The resulting coreset has size at most_ \(O(k\log k\cdot\gamma^{-4})\)_._
**Remark 1**.: The parameter \(M\) denotes the block size of the base level in DP-Merge-Reduce (see Algorithm 4). We treat \(M\) as an arbitrary parameter in Theorem 10, Lemma 2, and Lemma 4. However, we set \(M\) to be a function of \(\alpha\) (the multiplicative approximation error of the bicriteria solution), \(\eta\) (the additive approximation error of the DP coreset algorithm), and a parameter \(C_{M}\) in the proof of Theorem 11 (see Equation 14). We assign an appropriate value to \(C_{M}\), and consequently to \(M\), in order to obtain an \(O(1)\)-multiplicative approximation in Corollary 13 and a \((1+\gamma^{\prime})\)-approximation in Corollary 14.
**Privacy.** We first show that the output of our main algorithm is indeed differentially private. We note that the set of candidate centers from our bicriteria approximation algorithm satisfies pure DP, and if we use a DP coreset algorithm that also satisfies pure DP, e.g., [29], then our main algorithm satisfies pure DP.
**Lemma 1**.: _Algorithm 1 is \((5\varepsilon,\delta)\)-DP under the continual release setting._
Proof.: We first observe that a fixed point \(x_{t}\) is first used to update the subroutine BicriteriaDPCenters, after which it is assigned to an appropriate ring \(R_{r}\) and added to the corresponding DP-Merge-Reduce instance of that ring. Since the rings partition the input space by definition, the corresponding DP-Merge-Reduce instances operate on disjoint inputs. The claim then follows from the facts that BicriteriaDPCenters is \(3\varepsilon\)-DP (by Theorem 9) and DP-Merge-Reduce is \((2\varepsilon,\delta)\)-DP (by Theorem 10), together with basic composition.
**Space and Size.** We present the space (see Lemma 2) and coreset size (see Lemma 3) of Algorithm 1 in terms of the space and coreset size of the underlying DP coreset algorithm \(\mathcal{A}\) and non-DP coreset algorithm \(\mathcal{B}\) used in DP-Merge-Reduce. We use the algorithm from [19] as our non-DP coreset algorithm \(\mathcal{B}\) whose guarantees are given by Theorem 8.
**Lemma 2**.: _Algorithm 1 consumes_
\[\log(\Lambda)\cdot\left(S_{\mathcal{A}}(M,k,d,\varepsilon,\delta,\kappa,\eta)+S_{\mathcal{B}}(SZ_{\mathcal{A}}(M,k,d,\varepsilon,\delta,\kappa,\eta),k,d,\gamma)\right)+O(\log(\Lambda)(2+\log(T)+3M/2-\log(M)))\] \[+O(k\log(k)\log^{2}(\Lambda)\log T)+O(k\log(\Lambda)\log^{2}(k)\operatorname{poly}\left(\log(T\Lambda k)\right)) \tag{3}\]
_space._
Proof.: The last term in Equation 3 is the total space used by BicriteriaDPCenters and the second last term is the space used to store the bicriteria solution \(\mathcal{F}\) (see Theorem 7). We focus on proving that the DP-Merge-Reduce instances consume the space specified by the first term of Equation 3.
We sometimes abuse notation and omit the input of the non-DP coreset size \(SZ_{\mathcal{B}}\), since by Theorem 8 we know that \(SZ_{\mathcal{B}}(\cdot)=\tilde{O}(k\log(k)\cdot\gamma^{-4})\). For a fixed epoch \(T_{i}\), recall that the set of centers \(\mathcal{F}_{t}\) is fixed for \(t\in T_{i}\). According to Algorithm 1, at timestep \(t\in T_{i}\) we run \(\log(\Lambda)\) instances of DP-Merge-Reduce in parallel. By Theorem 10, since the space required to compute the coreset \(\hat{\mathcal{Y}}_{r}^{(t)}\) using DP-Merge-Reduce is \(S_{\mathcal{A}}(M,k,d,\varepsilon,\delta,\kappa,\eta)+S_{\mathcal{B}}(SZ_{\mathcal{A}}(M,k,d,\varepsilon,\delta,\kappa,\eta),k,d,\gamma)+\lceil\log(2N/M)\rceil\cdot S_{\mathcal{B}}(SZ_{\mathcal{B}}(\cdot),k,d,\gamma)+3M/2\), the total space at timestep \(t\in T_{i}\) is
\[\sum_{r=1}^{\log(\Lambda)}\left(S_{\mathcal{A}}(M,k,d,\varepsilon,\delta,\kappa,\eta)+S_{\mathcal{B}}(SZ_{\mathcal{A}}(M,k,d,\varepsilon, \delta,\kappa,\eta),k,d,\gamma)+\lceil\log(2N_{r}^{(t)}/M)\rceil\cdot S_{ \mathcal{B}}(SZ_{\mathcal{B}}(\cdot),k,d,\gamma)+3M/2\right)\] \[=\log(\Lambda)\cdot\left(S_{\mathcal{A}}(M,k,d,\varepsilon, \delta,\kappa,\eta)+S_{\mathcal{B}}(SZ_{\mathcal{A}}(M,k,d,\varepsilon, \delta,\kappa,\eta),k,d,\gamma)+3M/2\right)\] \[+S_{\mathcal{B}}(SZ_{\mathcal{B}}(\cdot),k,d,\gamma)\sum_{r=1}^{ \log(\Lambda)}\lceil\log(2N_{r}^{(t)}/M)\rceil \tag{4}\]
We focus on the last term in Equation 4. In particular, since the space used by \(\mathcal{B}\) is linear in the coreset size, we have that \(S_{\mathcal{B}}(SZ_{\mathcal{B}}(\cdot),k,d,\gamma)=\tilde{O}(k\log k\cdot \gamma^{-4})\). Next we simplify the adjoining sum
\[\sum_{r=1}^{\log(\Lambda)}\lceil\log(\frac{2N_{r}^{(t)}}{M}) \rceil\leq\sum_{r=1}^{\log(\Lambda)}\left(\log(\frac{2N_{r}^{(t)}}{M})+1 \right)=\log\left(\frac{2^{\log(\Lambda)}}{M^{\log(\Lambda)}}\cdot\prod_{r=1}^ {\log(\Lambda)}N_{r}^{(t)}\right)+\log(\Lambda)\] \[<\log\left(\frac{\Lambda\cdot 2^{\log(\Lambda)}}{M^{\log( \Lambda)}}\cdot T^{\log(\Lambda)}\right)=\log(\Lambda)(2+\log(T)-\log(M)) \tag{5}\]
where in the second-to-last step we use the fact that the number of points \(N_{r}^{(t)}\) in any ring cannot be larger than \(T\). Plugging Equation 5 into Equation 4 yields the claimed bound.
**Lemma 3**.: _Algorithm 1 releases a coreset of size at most \(\tilde{O}(k\log(k)\cdot\gamma^{-4})\)._
Proof.: Consider a fixed epoch \(T_{i}\). By Theorem 10, the coreset \(\hat{\mathcal{Y}}_{r}^{(t)}\) has size \(\tilde{O}(k\log(k)\cdot\gamma^{-4})\) for \(t\in T_{i}\). Since we run a non-DP coreset algorithm at the end of each epoch (whose guarantees are given by Theorem 8), the size of the coreset at the end of the epoch is also \(\tilde{O}(k\log(k)\cdot\gamma^{-4})\). Recall that a new epoch is created every time a new center is added to \(\mathcal{F}\); therefore the total size of the union of coresets is \(|\mathcal{F}|\cdot\tilde{O}(k\log(k)\cdot\gamma^{-4})\). But since we apply another non-DP coreset algorithm to \(\hat{\mathcal{Y}}\) before releasing it, the released coreset has size \(\tilde{O}(k\log(k)\cdot\gamma^{-4})\).
**Accuracy.** We analyze the accuracy of Algorithm 1 in the sequel. We first state the accuracy guarantee of the DP coreset \(\hat{\mathcal{Y}}_{r}^{(t)}\) released by DP-Merge-Reduce, for each ring \(R_{r}^{(t)}\) at timestep \(t\in[T]\) in Lemma 4.
**Lemma 4**.: _Given a non-DP \((1+\gamma)\)-coreset and a DP \((\kappa,\eta)\)-coreset (e.g., [29, 41]), let \(\hat{\mathcal{Y}}_{r}^{(t)}\) be the output of DP-Merge-Reduce (see Algorithm 1) and let \(N_{r}^{(t)}\) be the number of (non-empty) points in \(R_{r}^{(t)}\) at timestep \(t\in[T]\). Then \(\hat{\mathcal{Y}}_{r}^{(t)}\) is a DP \(((1+\gamma)\kappa,(\frac{4N_{r}^{(t)}}{M}-1)(1+\gamma)\eta+\tilde{M})\)-coreset of \(R_{r}^{(t)}\) at timestep \(t\), where \(\tilde{M}:=M+\frac{6}{\varepsilon}\log(\frac{2T}{\xi})\). In other words, for any set of \(k\) centers \(\mathcal{C}\), with probability \(1-3\xi\),_
\[\frac{(1-\gamma)}{\kappa}\cdot\mathsf{cost}(\mathcal{C},R_{r}^{( t)})-((\frac{4N_{r}^{(t)}}{M}-1)(1+\gamma)\eta{+}\tilde{M})\cdot(2^{r})^{2} \leq\mathsf{cost}(\mathcal{C},\hat{\mathcal{Y}}_{r}^{(t)})\] \[\leq(1+\gamma)\kappa\cdot\mathsf{cost}(\mathcal{C},R_{r}^{(t)})+ ((\frac{4N_{r}^{(t)}}{M}-1)(1+\gamma)\eta{+}\tilde{M})\cdot(2^{r})^{2} \tag{6}\]
Proof.: The lemma follows from the accuracy guarantees of DP-Merge-Reduce (see Theorem 10) and the fact that the additive cost incurred is proportional to the squared radius \((2^{r})^{2}\) of the ring \(R_{r}\).
Next, we show that the union of the DP coresets over all rings \(\hat{\mathcal{Y}}\) is a semi-coreset for the stream \(\mathcal{S}\), i.e., it has an additive error proportional to the optimal cost of clustering wrt \(\mathcal{S}\).
**Theorem 11**.: _Given dimension \(d\), clustering parameter \(k\), arbitrary parameter \(C_{M}\), non-DP \((1+\gamma)\)-coreset, \(O(d^{3})\)-approximate bicriteria solution from Theorem 9, DP \((\kappa,\eta)\)-coreset (e.g., [29, 41]), and privacy parameter \(\varepsilon\). Let \(\hat{\mathcal{Y}}\) be the output of Algorithm 1 for stream \(\mathcal{S}=\{x_{1},\ldots,x_{T}\}\). Then for any set of \(k\) centers \(\mathcal{C}\), with probability \(1-\frac{1}{T^{2}}-\frac{1}{k^{O(\operatorname{poly}(k,\log(\Lambda)))}}\), the following holds_
\[\frac{(1-\gamma)^{2}}{\kappa}\cdot\mathsf{cost}(\mathcal{C}, \mathcal{S})-4(1-\gamma^{2})C_{M}\cdot\mathsf{cost}(\mathcal{C}^{opt}_{ \mathcal{S}},\mathcal{S})-4(1-\gamma^{2})C_{M}\cdot V^{\prime}(d,k,\varepsilon, T,\Lambda)-(1-\gamma)V^{\prime\prime}(d,k,\varepsilon,T,\Lambda)\] \[\leq\mathsf{cost}(\mathcal{C},\hat{\mathcal{Y}})\] \[\leq(1+\gamma)^{2}\kappa\cdot\mathsf{cost}(\mathcal{C},\mathcal{ S})+4(1+\gamma)^{2}C_{M}\cdot\mathsf{cost}(\mathcal{C}^{opt}_{\mathcal{S}}, \mathcal{S})+4(1+\gamma)^{2}C_{M}\cdot V^{\prime}(d,k,\varepsilon,T,\Lambda)+( 1+\gamma)V^{\prime\prime}(d,k,\varepsilon,T,\Lambda) \tag{7}\]
_where \(V^{\prime}(d,k,\varepsilon,T,\Lambda)=O\left(\frac{\Lambda^{2}k^{2}}{d\varepsilon}\log^{2}(\Lambda)\log^{4}(k)\log(T)\operatorname{poly}\log\left(T\cdot k\cdot\Lambda\right)\right)\), \(V^{\prime\prime}(d,k,\varepsilon,T,\Lambda)=O(k\Lambda^{2}\log^{2}(k)\log^{2}(\Lambda)\log T)\cdot\left(\frac{\alpha\eta}{C_{M}}+\tilde{O}\left(\frac{1}{\varepsilon}\cdot\operatorname{poly}(\log(T\cdot k\cdot\Lambda))\right)\right)\), and \(\alpha:=O(d^{3})\)._
Proof.: Fix an epoch \(T_{i}\). Let \(\hat{\mathcal{Y}}^{(t)}\) be the output of Algorithm 1 at timestep \(t\in T_{i}\). Since the centers in \(\mathcal{F}|_{T_{i}}\) and consequently the rings \(R_{r}\) are fixed, we can compute the cost of \(\hat{\mathcal{Y}}^{(t)}\) by summing Equation 6 from Lemma 4 over all rings \(R_{r}\) where \(1\leq r\leq\log(\Lambda)\) as follows:
\[\frac{(1-\gamma)}{\kappa}\sum_{r}\mathsf{cost}(\mathcal{C},R_{r}^ {(t)})-(1+\gamma)\eta\sum_{r}(\frac{4N_{r}^{(t)}}{M}-1)\cdot(2^{r})^{2}-\sum_{r} \tilde{M}\cdot(2^{r})^{2}\leq\mathsf{cost}(\mathcal{C},\hat{\mathcal{Y}}^{(t)})\] \[\leq(1+\gamma)\kappa\sum_{r}\mathsf{cost}(\mathcal{C},R_{r}^{(t)}) +(1+\gamma)\eta\sum_{r}(\frac{4N_{r}^{(t)}}{M}-1)\cdot(2^{r})^{2}+\sum_{r} \tilde{M}\cdot(2^{r})^{2} \tag{8}\]
**Claim 1**.: _If \(x_{i}\) is assigned to the ring \(R_{r}\), let \(r(x_{i}):=2^{r}\). Then_
\[\sum_{x_{i}\in\mathcal{S}}r(x_{i})^{2}\leq\sum_{x_{i}\in\mathcal{S}}d(\mathcal{F },x_{i})^{2}=\textsf{cost}(\mathcal{F},\mathcal{S})\leq\alpha\cdot\textsf{cost} (\mathcal{C}^{opt}_{\mathcal{S}},\mathcal{S})+V(d,k,\varepsilon,T,\Lambda) \tag{9}\]
_where \(\alpha:=O(d^{3})\) and \(V(d,k,\varepsilon,T,\Lambda)=O\left(\frac{d^{2}\Lambda^{2}k\log(\Lambda)\log^{ 2}(k)}{\varepsilon}\operatorname{poly}\bigl{(}\log\bigl{(}T\cdot k\cdot\Lambda \bigr{)}\bigr{)}\right)\) are the multiplicative/additive factors from the bicriteria approximation (see Theorem 9)._
Proof.: The statement follows immediately from the definition of the ring \(R_{r}\) (Definition 3) and the guarantees of the bicriteria solution (Theorem 9).
Next, we take the union of \(\hat{\mathcal{Y}}^{(t)}\) over all timesteps in epoch \(T_{i}\). Note that for a fixed epoch and fixed ring \(R_{r}\), the additive error \(\tilde{M}\) is incurred at most once since this error stems from privately testing if the number of points at the base level is larger than the block size \(M\) (see Algorithm 5) in \(\mathsf{DP}\textsf{-Merge}\textsf{-Reduce}_{r}\). Thus summing Equation 8 over all timesteps in \(T_{i}\) gives us
\[\frac{(1-\gamma)}{\kappa}\cdot\textsf{cost}(\mathcal{C},\mathcal{ S}|_{T_{i}})-(1+\gamma)\eta\sum_{t\in T_{i}}\sum_{r}(\frac{4N_{r}^{(t)}}{M}-1) \cdot(2^{r})^{2}-\sum_{r}\tilde{M}\cdot(2^{r})^{2}\leq\textsf{cost}(\mathcal{ C},\hat{\mathcal{Y}}^{(t)}|_{t\in T_{i}})\] \[\leq(1+\gamma)\kappa\cdot\textsf{cost}(\mathcal{C},\mathcal{S}|_ {T_{i}})+(1+\gamma)\eta\sum_{t\in T_{i}}\sum_{r}(\frac{4N_{r}^{(t)}}{M}-1) \cdot(2^{r})^{2}+\sum_{r}\tilde{M}\cdot(2^{r})^{2} \tag{10}\]
where \(\mathcal{S}|_{T_{i}}\) denotes the input points in stream \(\mathcal{S}\) restricted to epoch \(T_{i}\) and \(\hat{\mathcal{Y}}^{(t)}|_{t\in T_{i}}\) is defined analogously.
Observe that \(\sum_{t\in T_{i}}\sum_{r}N_{r}^{(t)}\cdot(2^{r})^{2}=\sum_{x\in\mathcal{S}|_{ T_{i}}}r(x)^{2}\), thus we can use Claim 1 to simplify the additive error in Equation 10 to obtain
\[\frac{(1-\gamma)}{\kappa}\cdot\textsf{cost}(\mathcal{C},\mathcal{ S}|_{T_{i}})-\frac{4}{M}(1+\gamma)\eta(\alpha\textsf{cost}(\mathcal{C}^{opt}_{ \mathcal{S}},\mathcal{S}|_{T_{i}})+V(d,k,\varepsilon,T,\Lambda))-\sum_{r}\tilde {M}\cdot(2^{r})^{2}\leq\textsf{cost}(\mathcal{C},\hat{\mathcal{Y}}^{(t)}|_{t \in T_{i}})\] \[\leq(1+\gamma)\kappa\cdot\textsf{cost}(\mathcal{C},\mathcal{S}|_ {T_{i}})+\frac{4}{M}(1+\gamma)\eta(\alpha\textsf{cost}(\mathcal{C}^{opt}_{ \mathcal{S}},\mathcal{S}|_{T_{i}})+V(d,k,\varepsilon,T,\Lambda))+\sum_{r}\tilde {M}\cdot(2^{r})^{2} \tag{11}\]
Recall that a new epoch starts whenever a new center is added to the bicriteria solution \(\mathcal{F}\), and that we run a \((1+\gamma)\)-approx non-DP coreset algorithm on the coreset \(\hat{\mathcal{Y}}\) at the beginning of a new epoch; denote the resulting coreset by \(\hat{\mathcal{Y}}_{T_{i}}\). We state the guarantees of \(\hat{\mathcal{Y}}_{T_{i}}\) below.
\[\frac{(1-\gamma)^{2}}{\kappa}\cdot\textsf{cost}(\mathcal{C},\mathcal{S}|_{T_{i}})-\frac{4}{M}(1-\gamma)(1+\gamma)\eta(\alpha\textsf{cost}(\mathcal{C}^{opt}_{\mathcal{S}},\mathcal{S}|_{T_{i}})+V(d,k,\varepsilon,T,\Lambda))-(1-\gamma)\sum_{r}\tilde{M}\cdot(2^{r})^{2}\] \[\leq\textsf{cost}(\mathcal{C},\hat{\mathcal{Y}}_{T_{i}})\] \[\leq(1+\gamma)^{2}\kappa\cdot\textsf{cost}(\mathcal{C},\mathcal{S}|_{T_{i}})+\frac{4}{M}(1+\gamma)^{2}\eta(\alpha\textsf{cost}(\mathcal{C}^{opt}_{\mathcal{S}},\mathcal{S}|_{T_{i}})+V(d,k,\varepsilon,T,\Lambda))+(1+\gamma)\sum_{r}\tilde{M}\cdot(2^{r})^{2} \tag{12}\]
Observe that the total number of epochs is bounded by the total number of centers in \(\mathcal{F}\) at the end of the stream. By Theorem 9 we know that \(|\mathcal{F}|=O(k\log^{2}(k)\log(\Lambda)\log T)\). Therefore, we can sum Equation 12 over all epochs to have that for any set of \(k\) centers \(\mathcal{C}\):
\[\frac{(1-\gamma)^{2}}{\kappa}\cdot\textsf{cost}(\mathcal{C}, \mathcal{S})-\frac{4}{M}(1-\gamma)(1+\gamma)\eta(\alpha\textsf{cost}(\mathcal{C} ^{opt}_{\mathcal{S}},\mathcal{S})+V^{\prime}(d,k,\varepsilon,T,\Lambda))-| \mathcal{F}|\cdot\Lambda^{2}\log(\Lambda)\cdot(1-\gamma)\cdot\tilde{M}\] \[\leq\textsf{cost}(\mathcal{C},\hat{\mathcal{Y}})\] \[\leq(1+\gamma)^{2}\kappa\cdot\textsf{cost}(\mathcal{C},\mathcal{ S})+\frac{4}{M}(1+\gamma)^{2}\eta(\alpha\textsf{cost}(\mathcal{C}^{opt}_{ \mathcal{S}},\mathcal{S})+V^{\prime}(d,k,\varepsilon,T,\Lambda))+|\mathcal{F}| \cdot\Lambda^{2}\log(\Lambda)\cdot(1+\gamma)\cdot\tilde{M} \tag{13}\]
where \(V^{\prime}(d,k,\varepsilon,T,\Lambda)=O\left(\frac{d^{2}\Lambda^{2}k^{2}}{ \varepsilon}\log^{2}(\Lambda)\log^{4}(k)\log(T)\operatorname{poly}\left(\log \left(T\cdot k\cdot\Lambda\right)\right)\right)\).
Recall that \(\eta\) is the additive error of the DP coreset algorithm used as a black box in DP-Merge-Reduce (see Theorem 10) and \(\alpha=O(d^{3})\) is the multiplicative error of the bicriteria approximation. We set
\[M:=\frac{\alpha\eta}{C_{M}} \tag{14}\]
where \(C_{M}\) is a parameter chosen in the sequel (see Remark 1). Simplifying Equation 13 and taking a union bound over all rings and epochs, the desired claim in the theorem statement holds with probability \(1-3\xi\log(\Lambda)|\mathcal{F}|-\frac{1}{k^{O(\operatorname{poly}(k,\log(\Lambda)))}}\). Thus we set \(\xi:=\frac{1}{3|\mathcal{F}|\log(\Lambda)T^{2}}\), which makes the first error term \(\frac{1}{T^{2}}\).
Finally, Theorem 12 gives the clustering cost guarantee for the output \(\hat{\mathcal{Y}}\) after the offline postprocessing step is executed. Recall that the postprocessing step consists of running a \(\rho\)-approximation non-DP clustering algorithm on \(\hat{\mathcal{Y}}\). We present the clustering guarantee in terms of the \(\rho\)-approx non-DP clustering algorithm, the non-DP \((1+\gamma)\)-coreset algorithm, the DP \((\kappa,\eta)\)-coreset algorithm, and the parameter \(C_{M}\). Corollary 13 and Corollary 14 are obtained by plugging in the specific guarantees of the DP coreset algorithms from [41] and [29] and choosing an appropriate value of \(C_{M}\).
**Theorem 12**.: _Given dimension \(d\), clustering parameter \(k\), arbitrary parameter \(C_{M}\), non-DP \((1+\gamma)\)-coreset, \(O(d^{3})\)-approximate bicriteria solution from Theorem 9, DP \((\kappa,\eta)\)-coreset (e.g., [29, 41]), and privacy parameter \(\varepsilon\). Let \(\mathcal{C}_{\hat{\mathcal{Y}}}\) be the set of \(k\) centers obtained from running the offline \(\rho\)-approx non-DP \(k\)-means algorithm on \(\hat{\mathcal{Y}}\). Then,_
\[\mathsf{cost}(\mathcal{C}_{\hat{\mathcal{Y}}},\mathcal{S})\] \[\leq\frac{\kappa}{(1-\gamma)^{3}}\cdot(\rho(1+\gamma)^{3}(\kappa +4C_{M})+4C_{M}(1-\gamma)^{2}(1+\gamma))\cdot\mathsf{cost}(\mathcal{C}^{opt}_ {\mathcal{S}},\mathcal{S})\] \[+\frac{\kappa}{(1-\gamma)^{3}}\cdot 4C_{M}\cdot\left((1-\gamma)^{2 }(1+\gamma)+\frac{\kappa}{(1-\gamma)^{3}}\cdot\rho(1+\gamma)^{3}\right)\cdot V ^{\prime}(d,k,T,\Lambda)\] \[+\frac{\kappa}{(1-\gamma)^{3}}\cdot((1-\gamma)^{2}+(1+\gamma)^{ 2}\rho)\cdot V^{\prime\prime}(d,k,\varepsilon,T,\Lambda) \tag{15}\]
_where \(V^{\prime}(d,k,\varepsilon,T,\Lambda)=O\left(\frac{\Lambda^{2}k^{2}}{d\varepsilon}\log^{2}(\Lambda)\log^{4}(k)\log(T)\operatorname{poly}\log\left(T\cdot k\cdot\Lambda\right)\right)\), \(V^{\prime\prime}(d,k,\varepsilon,T,\Lambda)=O(k\Lambda^{2}\log^{2}(k)\log^{2}(\Lambda)\log T)\cdot\left(\frac{\alpha\eta}{C_{M}}+\tilde{O}\left(\frac{1}{\varepsilon}\cdot\operatorname{poly}(\log(T\cdot k\cdot\Lambda))\right)\right)\), and \(\alpha:=O(d^{3})\)._
Proof.: Recall that we release the coreset \(\hat{\mathcal{Y}}\) after running a non-DP \((1+\gamma)\)-coreset algorithm on it. Thus, the guarantee of the resulting coreset, which we call \(\hat{\mathcal{Y}}_{off}\), is: for any set of \(k\) centers \(\mathcal{C}\),
\[\frac{(1-\gamma)^{3}}{\kappa}\cdot\mathsf{cost}(\mathcal{C},\mathcal{S})-4(1-\gamma)^{2}(1+\gamma)C_{M}\cdot\mathsf{cost}(\mathcal{C}^{opt}_{\mathcal{S}},\mathcal{S})-4(1-\gamma)^{2}(1+\gamma)C_{M}\cdot V^{\prime}(d,k,\varepsilon,T,\Lambda)\] \[-(1-\gamma)^{2}V^{\prime\prime}(d,k,\varepsilon,T,\Lambda)\] \[\leq\mathsf{cost}(\mathcal{C},\hat{\mathcal{Y}}_{off})\] \[\leq(1+\gamma)^{3}\kappa\cdot\mathsf{cost}(\mathcal{C},\mathcal{S})+4(1+\gamma)^{3}C_{M}\cdot\mathsf{cost}(\mathcal{C}^{opt}_{\mathcal{S}},\mathcal{S})+4(1+\gamma)^{3}C_{M}\cdot V^{\prime}(d,k,\varepsilon,T,\Lambda)\] \[+(1+\gamma)^{2}V^{\prime\prime}(d,k,\varepsilon,T,\Lambda) \tag{16}\]
Since we compute a \(\rho\)-approximation non-DP clustering algorithm on \(\hat{\mathcal{Y}}_{off}\) in the offline setting, we have the following guarantee,
\[\mathsf{cost}(\mathcal{C}_{\hat{\mathcal{Y}}},\hat{\mathcal{Y}}_{off})\leq \rho\cdot\mathsf{cost}(\mathcal{C}^{opt}_{\hat{\mathcal{Y}}},\hat{\mathcal{Y}}_{ off}) \tag{17}\]
Since Equation 16 holds for any set of \(k\) centers, in particular it holds for \(\mathcal{C}_{\hat{\mathcal{Y}}}\), which gives us the following
\[\frac{(1-\gamma)^{3}}{\kappa}\cdot\mathsf{cost}(\mathcal{C}_{\hat{ \mathcal{Y}}},\mathcal{S})-4(1-\gamma)^{2}(1+\gamma)C_{M}\cdot\mathsf{cost}( \mathcal{C}_{\mathcal{S}}^{opt},\mathcal{S})-4(1-\gamma)^{2}(1+\gamma)C_{M} \cdot V^{\prime}(d,k,\varepsilon,T,\Lambda)\] \[-(1-\gamma)^{2}V^{\prime\prime}(d,k,\varepsilon,T,\Lambda)\leq \mathsf{cost}(\mathcal{C}_{\hat{\mathcal{Y}}},\hat{\mathcal{Y}}_{off})\leq(1+ \gamma)^{3}\kappa\cdot\mathsf{cost}(\mathcal{C}_{\hat{\mathcal{Y}}},\mathcal{S})\] \[+4(1+\gamma)^{3}C_{M}\cdot\mathsf{cost}(\mathcal{C}_{\mathcal{S}} ^{opt},\mathcal{S})+4(1+\gamma)^{3}C_{M}\cdot V^{\prime}(d,k,\varepsilon,T, \Lambda)+(1+\gamma)^{2}V^{\prime\prime}(d,k,\varepsilon,T,\Lambda) \tag{18}\]
Using Equation 17 and Equation 18,
\[\frac{(1-\gamma)^{3}}{\kappa}\mathsf{cost}(\mathcal{C}_{\hat{\mathcal{Y}}},\mathcal{S}) \tag{19}\] \[\leq\mathsf{cost}(\mathcal{C}_{\hat{\mathcal{Y}}},\hat{\mathcal{Y}}_{off})+4(1-\gamma)^{2}(1+\gamma)C_{M}\cdot\mathsf{cost}(\mathcal{C}_{\mathcal{S}}^{opt},\mathcal{S})+4(1-\gamma)^{2}(1+\gamma)C_{M}\cdot V^{\prime}(d,k,T,\Lambda)\] \[+(1-\gamma)^{2}V^{\prime\prime}(d,k,\varepsilon,T,\Lambda)\] (20) \[\leq\rho\cdot\mathsf{cost}(\mathcal{C}_{\hat{\mathcal{Y}}}^{opt},\hat{\mathcal{Y}}_{off})+4(1-\gamma)^{2}(1+\gamma)C_{M}\cdot\mathsf{cost}(\mathcal{C}_{\mathcal{S}}^{opt},\mathcal{S})+4(1-\gamma)^{2}(1+\gamma)C_{M}\cdot V^{\prime}(d,k,T,\Lambda)\] (21) \[+(1-\gamma)^{2}V^{\prime\prime}(d,k,\varepsilon,T,\Lambda)\] (22) \[\leq\rho\cdot\mathsf{cost}(\mathcal{C}_{\mathcal{S}}^{opt},\hat{\mathcal{Y}}_{off})+4(1-\gamma)^{2}(1+\gamma)C_{M}\cdot\mathsf{cost}(\mathcal{C}_{\mathcal{S}}^{opt},\mathcal{S})+4(1-\gamma)^{2}(1+\gamma)C_{M}\cdot V^{\prime}(d,k,T,\Lambda)\] (23) \[+(1-\gamma)^{2}V^{\prime\prime}(d,k,\varepsilon,T,\Lambda)\] \[\leq(\rho(1+\gamma)^{3}(\kappa+4C_{M})+4C_{M}(1-\gamma)^{2}(1+\gamma))\mathsf{cost}(\mathcal{C}_{\mathcal{S}}^{opt},\mathcal{S})\] \[+4C_{M}\cdot((1-\gamma)^{2}(1+\gamma)+\rho(1+\gamma)^{3})\cdot V^{\prime}(d,k,T,\Lambda)\] \[+((1-\gamma)^{2}+(1+\gamma)^{2}\rho)V^{\prime\prime}(d,k,\varepsilon,T,\Lambda) \tag{24}\]

where the first inequality rearranges Equation 18, the second applies Equation 17, the third uses the optimality of \(\mathcal{C}_{\hat{\mathcal{Y}}}^{opt}\) for \(\hat{\mathcal{Y}}_{off}\), and the last applies the upper bound of Equation 16 with \(\mathcal{C}=\mathcal{C}_{\mathcal{S}}^{opt}\). Dividing by \(\frac{(1-\gamma)^{3}}{\kappa}\) yields the theorem statement.
**Corollary 13** (Using [41] as blackbox DP coreset Algorithm).: _Given dimension \(d\), clustering parameter \(k\), non-DP \((1+\gamma)\)-coreset, \((\varepsilon,\delta)\)-DP coreset from [41], and \(\rho\)-approx non-DP clustering algorithm. Let \(\mathcal{S}:=\{x_{1},\ldots,x_{T}\}\) be the stream of input points in Euclidean space. There exists an algorithm \(\mathcal{A}^{\prime}\) that outputs a set of \(k\) centers \(\mathcal{C}_{\hat{\mathcal{Y}}}\) at every timestep \(t\in[T]\) such that_
1. _(Privacy)_ \(\mathcal{A}^{\prime}\) _is_ \((5\varepsilon,\delta)\)_-DP under the continual release setting._
2. _(Accuracy) With probability_ \(1-\frac{1}{T^{2}}-\frac{1}{k^{O(\operatorname{poly}(k,\log(\Lambda)))}}\)_,_ \[\mathsf{cost}(\mathcal{C}_{\hat{\mathcal{Y}}},\mathcal{S})\leq O_{\gamma,\rho}(1)\cdot\mathsf{cost}(\mathcal{C}_{\mathcal{S}}^{opt},\mathcal{S})+O_{\gamma,\rho}(1)\cdot V^{\prime}(d,k,\varepsilon,T,\Lambda)+O_{\gamma,\rho}(1)\cdot V^{\prime\prime}(d,k,\varepsilon,\delta,T,\Lambda)\] (25) _where_ \(V^{\prime}(d,k,\varepsilon,T,\Lambda)=\tilde{O}(\frac{\Lambda^{2}k^{2}}{d\varepsilon}\operatorname{poly}\log(T))\)_,_ \(V^{\prime\prime}(d,k,\varepsilon,\delta,T,\Lambda)=\tilde{O}(k\Lambda^{2}\operatorname{poly}(\log(T),\log(\frac{1}{\delta}),d,\frac{1}{\varepsilon},k))\)_._
3. _(Space)_ \(\mathcal{A}^{\prime}\) _consumes_ \(\tilde{O}(\operatorname{poly}(\log(T),\log(\frac{1}{\delta}),d,\frac{1}{ \varepsilon},k))\) _space._
Proof.: The privacy guarantee follows from Lemma 1. The accuracy guarantee follows from Theorem 11. Recall that \(M=\frac{\alpha\eta}{C_{M}}\) (see Equation 14), where \(\alpha=O(d^{3})\) by Theorem 9 and \(\eta=\operatorname{poly}\bigl{(}\log(T),\log(\frac{1}{\beta}),\log(\frac{1}{\delta}),d,\frac{1}{\varepsilon},k\bigr{)}\) by Theorem 5 with \(\beta=1/T\). We set \(C_{M}=100\). This makes \(M=O(\operatorname{poly}(\log(T),\log(\frac{1}{\delta}),d,\frac{1}{\varepsilon},k))\).
The space usage follows from Lemma 2; note that we take \(S_{\mathcal{A}}(N)=\operatorname{poly}(N,d,k)\) and the total coreset size \(SZ_{\mathcal{A}}(\cdot)=O(k)\). The coreset size claim follows from Lemma 3.
**Corollary 14** (Using [29] as blackbox DP coreset Algorithm).: _Given dimension \(d\), clustering parameter \(k\), non-DP \((1+\gamma)\)-coreset, \(\varepsilon\)-DP coreset from [29], and \(\rho\)-approx non-DP clustering algorithm. Let \(\mathcal{S}:=\{x_{1},\ldots,x_{T}\}\) be the stream of input points in Euclidean space. There exists an algorithm \(\mathcal{A}^{\prime}\) that outputs a set of \(k\) centers \(\mathcal{C}_{\hat{\mathcal{Y}}}\) at every timestep \(t\in[T]\) such that_
1. _(Privacy)_ \(\mathcal{A}^{\prime}\) _is_ \(5\varepsilon\)_-DP under the continual release setting._
2. _(Accuracy) With probability_ \(1-\frac{1}{T^{2}}-\frac{1}{k^{O(\operatorname{poly}(k,\log(\Lambda)))}}\)_,_ \[\mathsf{cost}(\mathcal{C}_{\hat{\mathcal{Y}}},\mathcal{S})\leq(1+\gamma^{\prime})\mathsf{cost}(\mathcal{C}_{\mathcal{S}}^{opt},\mathcal{S})+((1+\gamma)^{2}+\frac{(1+\gamma)^{5}}{(1-\gamma)^{5}}\cdot\rho)\cdot V^{\prime}(d,k,\varepsilon,T,\Lambda)\] \[+(\frac{(1+\gamma)(1-\gamma)^{2}}{(1-\gamma)^{3}}+\frac{(1+\gamma)^{3}\rho}{(1-\gamma)^{3}})\cdot V^{\prime\prime}(d,k,\varepsilon,T,\Lambda)\] (26) _where_ \(\gamma^{\prime}=2\gamma+\gamma^{2}+\frac{2\rho(1+\gamma)^{4}}{(1-\gamma)^{3}}\)_,_ \(V^{\prime}(d,k,\varepsilon,T,\Lambda)=\tilde{O}(\frac{\Lambda^{2}k^{2}}{d\varepsilon}\operatorname{poly}\log(T))\)_,_ \(V^{\prime\prime}(d,k,\varepsilon,T,\Lambda)=\tilde{O}_{\gamma}(\frac{k^{3}\Lambda^{2}d^{3}}{\varepsilon}\operatorname{poly}(\log(T)))\)_._
3. _(Space)_ \(\mathcal{A}^{\prime}\) _consumes_ \(\tilde{O}\left(\operatorname{poly}\left(O_{\gamma}\left(\frac{d^{3}k^{2}2^{O_{ \gamma}(d^{\prime})}}{\varepsilon}\operatorname{poly}\log(T)\right)\right),k,d\right)\) _space._
Proof.: The privacy guarantee follows from Lemma 1. The accuracy guarantee follows from Theorem 11. Recall that \(M=\frac{\alpha\eta}{C_{M}}\) (see Equation 14), where \(\alpha=O(d^{3})\) by Theorem 9 and \(\eta=O_{\gamma}\left(\frac{k^{2}2^{O_{\gamma}(d^{\prime})}}{\varepsilon}\operatorname{poly}\log(T)\right)\) by Theorem 6 with \(d^{\prime}=O_{\gamma}(\log k)\). We set \(C_{M}=(1-\gamma)/4\). This makes \(M=O_{\gamma}\left(\frac{d^{3}k^{2}2^{O_{\gamma}(d^{\prime})}}{\varepsilon}\operatorname{poly}\log T\right)\).
The space usage follows from Lemma 2; note that we take \(S_{\mathcal{A}}(n)=\operatorname{poly}(n,d,k)\), where \(n\) is the input size specified by Algorithm 4, and the total coreset size is \(SZ_{\mathcal{A}}(\cdot)=2^{O_{\gamma}(d^{\prime})}\cdot\operatorname{poly}(k,\log T)\). The coreset size claim follows from Lemma 3.
## 4 Bicriteria Approximation in Continual Release Setting
We describe our bicriteria approximation algorithm and analysis in more detail here.
```
0: Privacy parameter \(\varepsilon\), Stream \(\mathcal{S}\) of points \(x_{1},\ldots,x_{T}\in\mathbb{R}^{d}\)
Initialize\((\varepsilon)\):
1:\(\varepsilon^{\prime}:=\frac{\varepsilon}{2\log(\Lambda)\log^{2}(k)}\)
2: Parallel quadtrees \(Q_{1},\ldots,Q_{\log(k)}\) such that: each quadtree \(Q_{q}\) has \(\log(\Lambda)\) levels with the bottom level having grid size \(\Theta(1)\)
3:for\(0\leq\ell\leq\log(\Lambda)\) and \(1\leq q\leq\log(k)\)do
4: Initialize DPFindCenters\({}_{\ell,q}\) of DPFindCenters\((\varepsilon^{\prime})\)
5: Set of candidate centers \(\mathcal{F}:=\emptyset\)
Update\((x_{t})\):
6:for\(0\leq\ell\leq\log(\Lambda)\) and \(1\leq q\leq\log(k)\)do
7:\(\hat{\mathcal{F}}_{t}\leftarrow\) Update\((x_{t})\) of DPFindCenters\({}_{\ell,q}\)\(\triangleright\) See Algorithm 3
8:\(\mathcal{F}\leftarrow\mathcal{F}\cup\hat{\mathcal{F}}_{t}\)
9: Output \(\mathcal{F}\)
```
**Algorithm.** Our bicriteria approximation algorithm is given by Algorithm 2, which initializes \(\log(k)\) parallel instances of randomly shifted quadtrees. Each input point \(x_{t}\) is assigned to a cell in every level of every quadtree. For a fixed quadtree \(1\leq q\leq\log(k)\) and fixed level \(0\leq\ell\leq\log(\Lambda)\), the subroutine DPFindCenters (see Algorithm 3) returns a candidate set of centers \(\hat{\mathcal{F}}_{t}\), which is added to the current set of candidate centers \(\mathcal{F}\).
The DPFindCenters subroutine (see Algorithm 3) finds the approximately heaviest \(O(k)\) cells in a fixed level of a fixed quadtree. It achieves this by first hashing the cell containing the current point to a bucket; note that there are \(w:=O(k)\) buckets. For each hash bucket \(j\in[w]\), the algorithm maintains a continual-release \(\theta\)-heavy hitter instance DP-HH\({}_{j}\). We use the \(\ell_{1}\)-heavy hitter algorithm from [24] as DP-HH -- it returns a set \(H\) of \(\theta\)-heavy hitters and their approximate counts \(\hat{f}(\mathbf{c})\) for all \(\mathbf{c}\in H\). Since we store the centerpoints of all cells marked as heavy hitters as candidate centers, we need to ensure that we do not store too many false positives, i.e., cells whose counts are much smaller than \(\theta\|\mathcal{B}_{j}\|_{1}\). To address this challenge, we add a pruning step that eliminates any cell \(\mathbf{c}\) whose approximate count is less than \(\Theta(\theta)\cdot\hat{T}_{h(\mathbf{c})}\), where \(\hat{T}_{j}\) denotes the DP count of hash bucket \(j\in[w]\) at timestep \(t\in[T]\). We keep track of \(\hat{T}_{j}\) via an instance of the Binary Mechanism [22], denoted \(\mathbf{BM}_{j}\), for each \(j\in[w]\). Finally, only the centerpoints of cells that pass this pruning step are added as candidate centers to the set \(\hat{\mathcal{F}}_{t}\).
```
0: Privacy parameter \(\varepsilon^{\prime}\), Stream of points \(x_{1},\ldots,x_{T}\)
0: Set of candidate centers \(\hat{\mathcal{F}}_{t}\) at every timestep \(t\in[T]\)
0: Initialize\((\varepsilon^{\prime})\): \(\triangleright\) where \(\varepsilon^{\prime}:=\frac{\varepsilon}{2\log(\Lambda)\log^{2}(k)}\)
1:\(\varepsilon^{\prime}\leftarrow\varepsilon^{\prime}\)
2:\(w=O(k)\)
3: Hash function \(h:[2^{\ell}]\rightarrow[w]\) s.t. \(\forall\) cells \(\mathbf{c},\forall j\in[w],\Pr[h(\mathbf{c})=j]=\frac{1}{w}\)
4:\(\hat{T}_{1}=0,\ldots,\hat{T}_{w}=0\)\(\triangleright\) DP Count for the size of hash bucket
5: Initialize \(\mathbf{BM}_{1},\ldots,\mathbf{BM}_{w}\) of BinaryMechanism\((T,\varepsilon^{\prime})\)\(\triangleright\) See Theorem 4[22]
6: Initialize \(\mathbf{DP-HH}_{1},\ldots,\mathbf{DP-HH}_{w}\) of \(\mathbf{DP-HH}(T,\varepsilon^{\prime})\)\(\triangleright\) See Theorem 7 [24]
\(\mathbf{Update}(x_{t})\):
7: Initialize \(\hat{\mathcal{F}}_{t}=\emptyset\)
8: Let \(x_{t}\) be mapped to cell \(\mathbf{c}^{*}\)\(\triangleright\) Recall DPFindCenters is initialized per level \(\ell\) of quadtree instance \(q\)
9:for\(p=1,\ldots,L\), where \(L:=\log(k^{2})\) run in parallel do
10:for\(j\in[w]\)do\(\triangleright\) Update the DP count of each hash bucket
11:if\(j=h(\mathbf{c}^{*})\)then
12:\(\hat{T}_{j}\leftarrow\mathbf{Update}(1)\) of \(\mathbf{BM}_{j}\)
13:else
14:\(\hat{T}_{j}\leftarrow\mathbf{Update}(0)\) of \(\mathbf{BM}_{j}\)
15:for\(j\in[w]\)do\(\triangleright\) Update the DP HHs of each hash bucket
16:if\(h(\mathbf{c}^{*})=j\)then
17:\(\hat{f},H\leftarrow\mathbf{Update}(\mathbf{c}^{*})\) of \(\mathbf{DP-HH}_{j}\)
18:else
19:\(\hat{f},H\leftarrow\mathbf{Update}(\bot)\) of \(\mathbf{DP-HH}_{j}\)
20:for\(\mathbf{c}\in H\)do
21:if\(\hat{f}(\mathbf{c})\geq\frac{\theta}{1000}\cdot\hat{T}_{h(\mathbf{c})}\)then
22: Add centerpoint of \(\mathbf{c}\) to \(\hat{\mathcal{F}}_{t}\) as a center
23: Return \(\hat{\mathcal{F}}_{t}\)
```
**Algorithm 3** DPFindCenters
### Proof of Theorem 9
**Privacy.** Since we output the centerpoints of the cells marked as heavy hitters, we only need to show that DP is maintained with respect to these centers and the hash substreams. For a fixed timestep \(t\), an input point \(x_{t}\) is assigned to a specific cell at each level of a quadtree, and cells at the same level are disjoint. Since there are \(\log(\Lambda)\) levels per quadtree, point \(x_{t}\) is a member of \(\log(\Lambda)\) cells per quadtree. Since there are \(2\log^{2}(k)\) parallel processes (considering \(\log(k)\) quadtrees and \(\log(k^{2})\) parallel processes per quadtree), a single point participates in \(2\log(\Lambda)\log^{2}(k)\) total calls to DP-HH. Note that we do not account for the \(O(k)\) buckets that the cells are hashed into, as DP-HH is called on disjoint inputs for each bucket. Thus calling each DP-HH instance with a privacy budget of \(\frac{\varepsilon}{2\log(\Lambda)\log^{2}(k)}\) preserves \(\varepsilon\)-DP. We use the Binary Mechanism to keep track of the size of each hash substream \(\mathcal{B}_{j}\), \(\forall j\in[w]\). Since the input cells (and the corresponding points within cells) are disjoint in each substream due to hashing, this preserves \(\frac{\varepsilon}{2\log(\Lambda)\log^{2}(k)}\)-DP per instance, which over the \(2\log(\Lambda)\log^{2}(k)\) parallel processes preserves \(\varepsilon\)-DP. Finally, we release the number of points per center via the Binary Mechanism, where each point contributes to only a single cell count, which preserves \(\varepsilon\)-DP. Therefore, by composition, we get \(3\varepsilon\)-DP for the entire algorithm.
**Accuracy.** We first state some geometric properties of the cells within the quadtree construction.
**Proposition 1**.: _[_17_]_ _Let \(\mathcal{B}\) be an \(\ell_{\infty}\) ball of radius \(r\) contained in \([-\Lambda,\Lambda]^{d}\) (it forms a \(d\)-dimensional cube with each side length \(2r\)). Then for a randomly shifted quadtree and any level \(\ell\) with grid size at least \(r^{\prime}\geq 2r\), \(\mathcal{B}\) is split by the grid in each dimension \(j\in[d]\) independently with probability \(\frac{2r}{r^{\prime}}\)._
Let \(C_{\mathcal{S}}^{opt}=\{c_{1},\ldots,c_{k}\}\) be the optimal set of \(k\) centers for the input set \(\mathcal{S}=\{x_{1},\ldots,x_{T}\}\). For any radius \(r\), define \(n_{r}\) as the number of points \(x\in\mathcal{S}\) such that \(d(x,C_{\mathcal{S}}^{opt})\geq r\). Note that the optimal cost of \(k\)-means (resp. \(k\)-median) is given by \(\sum_{p\in\mathbb{Z}}2^{2p}\cdot n_{2^{p}}\) (resp. \(\sum_{p\in\mathbb{Z}}2^{p}\cdot n_{2^{p}}\)), up to an \(O(1)\)-approximation.
Fix some radius \(r=2^{p}\) where \(p\in\mathbb{Z}\) and consider a randomly shifted grid of size \(20rd\). The following lemma characterizes cells containing \(\cup_{i=1}^{k}\mathcal{B}(c_{i},r)\) with respect to the grid size.
**Lemma 5** ([17]).: _\(\cup_{i=1}^{k}\mathcal{B}(c_{i},r)\) is contained in at most \(4k\) cells of grid length \(20rd\) at the corresponding level of the quadtree with probability at least 1/2._
Let \(G_{\ell}\) where \(0\leq\ell\leq\log(\Lambda)\) be the set of \(4k\) good cells of length \(20rd\) (equivalently \(\ell_{2}\)-radius of \(10rd^{3/2}\)) at level \(\ell\). Let the number of points in \(\mathcal{S}\) uncovered by \(G_{\ell}\) be \(n_{G_{\ell}}\). Observe that by Lemma 5, since \(G_{\ell}\) contains \(\cup_{i=1}^{k}\mathcal{B}(c_{i},r)\) with probability at least \(1/2\), we have that \(n_{G_{\ell}}\leq n_{r}\). It follows that
\[\sum_{\ell=0}^{\log(\Lambda)}(\text{grid length at level }\ell)^{2} \cdot n_{G_{\ell}}\] \[\leq O(d^{3})\sum_{p\in\mathbb{Z}\ :\ r=2^{p}\leq\Lambda}r^{2} \cdot n_{r}\leq O(d^{3})\cdot cost(C_{\mathcal{S}}^{opt},\mathcal{S}) \tag{27}\]
Observe that we can define a one-to-one mapping between the level \(\ell\) and the radius \(r\): the radius \(r\) (ranging from \(1\) to \(\Lambda\)) maps to the grid length of a cell, which is at most \(\Lambda/2^{\ell}\) (the level \(\ell\) ranges from \(\log(\Lambda)\) to \(0\)). Since the grid length of a cell in \(G_{\ell}\) at level \(\ell\) is \(20rd\), which maps to \(20d\frac{\Lambda}{2^{\ell}}\), we can replace the leftmost term in the expression above as follows
\[O(d^{2})\sum_{\ell=0}^{\log(\Lambda)}(\Lambda/2^{\ell})^{2}n_{G_{\ell}}\leq O( d^{3})\sum_{p\in\mathbb{Z}\ :\ r=2^{p}\leq\Lambda}r^{2}\cdot n_{r}\leq O(d^{3})\cdot cost(C_{\mathcal{S}}^{ opt},\mathcal{S}) \tag{28}\]
Recall that we define \(\mathcal{F}_{t}\) as the set of centers up to timestep \(t\). For a fixed level \(\ell\), denote the set of cells that DP-HH marks as heavy at timestep \(t\) at level \(\ell\) by \(H_{\ell,t}\). Note that although there is an extra pruning step in DPFindCenters after the cells are marked heavy by DP-HH, we do not account for it here: if a cell is a \(\theta\)-HH, is marked heavy by DP-HH, and survives the pruning step, it is still a \(\theta\)-HH. Then,
\[cost(\mathcal{F}_{t})\leq O(d^{2})\sum_{\ell=0}^{\log(\Lambda)}(\Lambda/2^{\ell })^{2}\cdot\mathbb{1}[x_{t}\text{ uncovered by }H_{\ell,t}]\]
Observe that
\[cost(\mathcal{F})\] \[=\sum_{t=1}^{T}cost(\mathcal{F}_{t})\] \[\leq O(d^{2})\,\sum_{\ell=0}^{\log(\Lambda)}(\Lambda/2^{\ell})^{2} \cdot\sum_{t=1}^{T}\mathbbm{1}[x_{t}\text{ uncovered by }H_{\ell,t}] \tag{29}\]
**Lemma 6**.: _For a fixed level \(\ell\), with probability at least \(1-\frac{12}{k}\),_
\[\sum_{t=1}^{T}\mathbbm{1}[x_{t}\text{ uncovered by }H_{\ell,t}]\leq(1+\theta)n_{G_{\ell}}+\frac{8k\log(\Lambda)\log^{2}(k)}{\varepsilon\gamma_{h}}\operatorname{poly}\left(\log\left(\frac{T\cdot k\cdot 2^{\ell}}{\theta\gamma_{h}}\right)\right) \tag{30}\]
Proof.: Observe that
\[\sum_{t=1}^{T}\mathbbm{1}[x_{t}\text{ uncovered by }H_{\ell,t}]\] \[=\sum_{t=1}^{T}(\mathbbm{1}[(x_{t}\text{ uncovered by }H_{\ell,t}) \wedge(x_{t}\text{ uncovered by }G_{\ell})]+\mathbbm{1}[(x_{t}\text{ uncovered by }H_{\ell,t}) \wedge(x_{t}\text{ covered by }G_{\ell})])\] \[=\sum_{t=1}^{T}\mathbbm{1}[(x_{t}\text{ uncovered by }H_{\ell,t}) \wedge(x_{t}\text{ uncovered by }G_{\ell})]+\sum_{t=1}^{T}\mathbbm{1} \left[(x_{t}\text{ uncovered by }H_{\ell,t})\wedge(x_{t}\text{ covered by }G_{\ell})\right]\]
The first sum in the above expression can be upper bounded by \(n_{G_{\ell}}\), thus it remains to bound the second sum. In order to bound the second sum, we will need some properties of good cells that are hashed to buckets in \(\mathsf{DPFindCenters}\). In the sequel, we denote \(N_{\ell,\mathbf{c}}\) as the number of points in the cell \(\mathbf{c}\) at level \(\ell\). For simplicity, we consider the number of hash buckets \(w:=40k\). We first show that for any good cell \(\mathbf{c}\), it is unlikely that the bucket it is hashed to contains another good cell \(\mathbf{c}^{\prime}\neq\mathbf{c}\).
**Claim 2**.: _Let \(\mathbf{c}\in G_{\ell}\), then with probability at least \(1/2\), for any \(\mathbf{c}^{\prime}\in G_{\ell}\) such that \(\mathbf{c}^{\prime}\neq\mathbf{c}\), we have that \(h(\mathbf{c}^{\prime})\neq h(\mathbf{c})\)._
In the next claim we give a bound on the size of the hash bucket in terms of the size of a good cell that is hashed to it and \(n_{r}\).
**Claim 3**.: _For each \(\mathbf{c}\in G_{\ell}\), suppose the hash bucket \(\mathcal{B}_{j}\) where \(j\in[w]\), contains only one good cell which is \(\mathbf{c}\). Let \(N_{\ell,\mathbf{c}}:=y\). Then with probability at least 1/2, \(|\mathcal{B}_{j}|\leq 2(y+\frac{n_{G_{\ell}}}{40k})\)._
Note that since the hashing procedure is run \(\log(k^{2})\) times in parallel, we can boost the success probabilities in the above claims to be \(1-1/k^{2}\).
Observe that for a fixed hash bucket \(\mathcal{B}_{j}\), any cell \(\mathbf{c}\) such that \(N_{\ell,\mathbf{c}}\geq\theta\cdot 2(y+\frac{n_{G_{\ell}}}{40k})\) qualifies as a \(\theta\)-heavy hitter, since \(N_{\ell,\mathbf{c}}\geq\theta\cdot 2(y+\frac{n_{G_{\ell}}}{40k})\geq\theta|\mathcal{B}_{j}|\) (by Claim 3). In particular, for a good cell \(\mathbf{c}_{y}\) with \(N_{\ell,\mathbf{c}_{y}}=y\), if \(\mathbf{c}_{y}\) is a \(\theta\)-HH then \(y\geq\theta\cdot 2(y+\frac{n_{G_{\ell}}}{40k})\), which implies \(y\geq\frac{\theta n_{G_{\ell}}}{20k}\). We formalize this intuition in the claim below, where we use the accuracy guarantees of \(\mathsf{DP-HH}\) given by Theorem 7 to characterize the good cells that are reported as \(\theta\)-HHs.
**Claim 4**.: _Let \(\mathbf{c}\in G_{\ell}\). If \(N_{\ell,\mathbf{c}}\geq\frac{\theta n_{G_{\ell}}}{20k}\), and \(N_{\ell,\mathbf{c}}\geq\frac{2\log(\Lambda)\log^{2}(k)}{\varepsilon\gamma_{h}}\operatorname{poly}(\log(\frac{T\cdot k\cdot 2^{\ell}}{\theta\gamma_{h}}))\), then with probability at least \(1-\frac{12}{k}\), \(\mathbf{c}\) is reported as a \(\theta\)-heavy hitter by \(\mathsf{DP-HH}\)._
Finally, we give an upper bound for the number of points that are covered by good cells but for which \(\mathsf{DP-HH}\) fails to report as heavy.
**Claim 5**.: _With probability \(1-\frac{12}{k}\),_
\[\sum_{t=1}^{T}\mathbb{1}[(x_{t}\text{ uncovered by }H_{\ell,t})\wedge(x_{t}\text{ covered by }G_{\ell})]\leq\theta n_{G_{\ell}}+\frac{8k\log(\Lambda)\log^{2}(k)}{ \varepsilon\gamma_{h}}\operatorname{poly}\left(\log\left(\frac{T\cdot k\cdot 2^{ \ell}}{\theta\gamma_{h}}\right)\right)\]
Thus by combining Claim 5 with our observation about the first sum being upper bounded by \(n_{G_{\ell}}\) in the decomposition of \(\sum_{t=1}^{T}\mathbb{1}[x_{t}\text{ uncovered by }H_{\ell,t}]\), we obtain our desired statement in Lemma 6.
Note that we have shown Lemma 6 is true with probability at least \(1-\frac{12}{k}\) for a fixed level. Since we have \(\log(\Lambda)\) levels in a specific quadtree and \(\log(k)\) quadtree instances in parallel, we can boost our probability of success to be sufficiently high. It remains to bound the total \(k\)-means cost for the set of centers \(\mathcal{F}\) output by our algorithm. Combining Equation 27, Equation 28 and Equation 29 along with Lemma 6, we obtain the following.
\[cost(\mathcal{F},\mathcal{S}) =\sum_{t=1}^{T}cost(\mathcal{F}_{t},\mathcal{S})\] \[\leq O(d^{2})\sum_{\ell=0}^{\log(\Lambda)}(\Lambda/2^{\ell})^{2}\cdot\sum_{t=1}^{T}\mathbb{1}[x_{t}\text{ uncovered by }H_{\ell,t}]\] \[\leq O(d^{2})\sum_{\ell=0}^{\log(\Lambda)}(\Lambda/2^{\ell})^{2}\cdot\Big((1+\theta)n_{G_{\ell}}+\frac{8k\log(\Lambda)\log^{2}(k)}{\varepsilon\gamma_{h}}\operatorname{poly}\big(\log\big(\tfrac{T\cdot k\cdot 2^{\ell}}{\theta\gamma_{h}}\big)\big)\Big)\] \[=O(d^{2})\sum_{\ell=0}^{\log(\Lambda)}(\Lambda/2^{\ell})^{2}\cdot(1+\theta)n_{G_{\ell}}+O(d^{2})\sum_{\ell=0}^{\log(\Lambda)}(\Lambda/2^{\ell})^{2}\cdot\frac{8k\log(\Lambda)\log^{2}(k)}{\varepsilon\gamma_{h}}\operatorname{poly}\big(\log\big(\tfrac{T\cdot k\cdot 2^{\ell}}{\theta\gamma_{h}}\big)\big)\] \[\leq O(d^{3})(1+\theta)\sum_{p\in\mathbb{Z}\,:\,r=2^{p}\leq\Lambda}r^{2}\cdot n_{r}+O(d^{2})\sum_{\ell=0}^{\log(\Lambda)}(\Lambda/2^{\ell})^{2}\cdot\frac{8k\log(\Lambda)\log^{2}(k)}{\varepsilon\gamma_{h}}\operatorname{poly}\big(\log\big(\tfrac{T\cdot k\cdot 2^{\ell}}{\theta\gamma_{h}}\big)\big)\] \[\leq O(d^{3})(1+\theta)cost(C_{\mathcal{S}}^{opt},\mathcal{S})+O(d^{2})\sum_{\ell=0}^{\log(\Lambda)}(\Lambda/2^{\ell})^{2}\cdot\frac{8k\log(\Lambda)\log^{2}(k)}{\varepsilon\gamma_{h}}\operatorname{poly}\big(\log\big(\tfrac{T\cdot k\cdot 2^{\ell}}{\theta\gamma_{h}}\big)\big)\] \[=O(d^{3})(1+\theta)cost(C_{\mathcal{S}}^{opt},\mathcal{S})+O\left(d^{2}\Lambda^{2}\frac{k\log(\Lambda)\log^{2}(k)}{\varepsilon\gamma_{h}}\operatorname{poly}\big(\log\big(\tfrac{T\cdot k\cdot\Lambda}{\theta\gamma_{h}}\big)\big)\right)\]
Finally, we can set \(\theta\) (threshold for HHs) and \(\gamma_{h}\) (approximation factor for frequency of a cell marked as heavy from Theorem 7) to appropriate constants. The accuracy claim in Theorem 9 follows.
**Space.** We analyze the total space usage of \(\mathsf{DP}\)-\(\mathsf{HH}\) in Algorithm 3, as it dominates the space usage of the entire algorithm. From Theorem 7, one instance of \(\mathsf{DP}\)-\(\mathsf{HH}\) uses \(\operatorname{poly}\left(\log\left(T\Lambda k\right)\right)\) space. Since we run \(\mathsf{DP}\)-\(\mathsf{HH}\) on \(O(k)\) hash substreams and \(2\log(\Lambda)\log^{2}(k)\) parallel processes, the total space is \(O(k\log(\Lambda)\log^{2}(k)\operatorname{poly}\left(\log\left(T\Lambda k\right)\right))\).
**Claim 6** (Upper bound on size of \(\mathcal{F}\)).: _For all \(j\in[w]\), suppose \(|\mathcal{B}_{j}|=\Omega(\frac{\log(\Lambda)\log^{3}(k)}{\varepsilon}\log^{2.5}T)\) then with high probability, the total number of historical heavy hitters at the end of the stream is \(O(k\log^{2}(k)\log(\Lambda)\log T)\)._
Proof.: The algorithm runs independent instances of the \(\mathsf{DP}\)-\(\mathsf{HH}\) algorithm for each bucket of each level in each instantiation of the quadtree, thus it is sufficient to first show that for a fixed quadtree \(Q\), a fixed level \(\ell\), and a fixed bucket \(\mathcal{B}_{j}\) where \(j\in[w]\), the total number of historical heavy hitters is at most \(O(\frac{(1+\gamma_{h})}{\theta}\log T)\).
Let the timestamps of points that end up in \(\mathcal{B}_{j}\) be \(t_{i}=2^{i}\), where \(0\leq i\leq\log(T)\). Let the state of the hash bucket at time step \(t\) be \(\mathcal{B}_{j}^{(t)}\). We set the failure probability in Theorem 4 as \(\xi:=\frac{1}{k^{2}}\). From
Theorem 4 we know that with probability \(1-\frac{1}{k^{2}}\), the DP count of the hash bucket \(\hat{T}_{j}\) at timestep \(t\) has additive error \(O(\frac{\log(\Lambda)\log^{3}(k)}{\varepsilon}\log^{2.5}(T))\). Thus for a fixed timestamp \(t\), if \(|\mathcal{B}_{j}^{(t)}|=\Omega(\frac{\log(\Lambda)\log^{3}(k)}{\varepsilon} \log^{2.5}(T))\), then we can see that a cell \(\mathbf{c}\) is added to \(\mathcal{F}_{t}\) only if \(\hat{f}(\mathbf{c})>\frac{2\theta}{1000}|\mathcal{B}_{j}^{(t)}|\). Recall from Condition 1 of Theorem 7 that \(\hat{N}_{\ell,\mathbf{c}}\in(1\pm\gamma_{h})N_{\ell,\mathbf{c}}\). Thus if \(\mathbf{c}\in\mathcal{F}_{t}\) and \(|\mathcal{B}_{j}^{(t)}|=\Omega(\frac{\log(\Lambda)\log^{3}(k)}{\varepsilon} \log^{2.5}(T))\) then it must be the case that with probability \(1-\frac{1}{k^{2}}\), \(N_{\ell,\mathbf{c}}\geq\frac{2\theta}{1000(1+\gamma_{h})}\|\mathcal{B}_{j}^{ (t)}\|_{1}\geq\frac{2\theta}{1000(1+\gamma_{h})}t_{i-1}\).
Now, suppose for a contradiction, that the number of heavy hitters between \(t_{i-1}\) and \(t_{i}\) is at least \(\frac{1000(1+\gamma_{h})}{\theta}\). Then for each such cell \(\mathbf{c}\), we have that \(N_{\ell,\mathbf{c}}\geq\frac{2\theta}{1000(1+\gamma_{h})}t_{i-2}\). Since there are at least \(\frac{2000(1+\gamma_{h})}{2\theta}\) such cells, this implies that the total number of points between \(t_{i-1}\) and \(t_{i}\) is \(\geq\frac{2\theta}{1000(1+\gamma_{h})}t_{i-2}\frac{2000(1+\gamma_{h})}{2 \theta}=2^{i}=t_{i}\), which is a contradiction. Thus there must be at most \(\frac{1000(1+\gamma_{h})}{\theta}\) cells marked as heavy hitters between consecutive intervals, and since there are \(\log(T)\) such intervals, we have that the total number of historical \(\ell_{1}\) heavy hitters for a fixed bucket is \(O(\frac{(1+\gamma_{h})}{\theta}\log T)\).
Boosting the success probability over \(O(k)\) buckets, \(\log(k^{2})\) parallel processes, \(\log(\Lambda)\) quadtree levels, and \(\log(k)\) parallel processes of the quadtree instantiation, accounting for the additional number of historical HHs, and taking \(\theta\) and \(\gamma_{h}\) as appropriate constants, we obtain the claim as stated.
## 5 DP Merge And Reduce Algorithm
We give a differentially-private variant of the well-known Merge and Reduce framework [34, 1, 27] that is used to efficiently release a coreset for a stream of points. The main idea behind the Merge and Reduce technique is to partition the input stream into blocks, compute a coreset for each block, take the union of the resulting coresets (merge step), and compute a coreset of the union (reduce step). The merging and reducing of the coresets is done in a tree-like fashion; to keep the error introduced by merging small, the number of levels in the tree must be small. At a high level, our framework computes coresets at the base level (of the tree) using a DP coreset algorithm \(\mathcal{A}\) (e.g. [41, 29]) and then computes coresets for subsequent levels using a non-DP coreset algorithm \(\mathcal{B}\) (e.g. [19]).
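The tree bookkeeping can be sketched in a few lines. The snippet below is our own non-private illustration of the merge-and-reduce pattern, assuming a generic `coreset()` routine standing in for algorithm \(\mathcal{B}\); the DP variant replaces the base-level call with algorithm \(\mathcal{A}\) and keeps the same tree structure.

```python
def merge_and_reduce(stream, block_size, coreset):
    levels = []                              # levels[i] holds at most one coreset
    block = []
    for x in stream:
        block.append(x)
        if len(block) == block_size:         # base of the tree: summarize one block
            q, i = coreset(block), 0
            block = []
            while i < len(levels) and levels[i] is not None:
                q = coreset(levels[i] + q)   # merge two same-level coresets, reduce
                levels[i], i = None, i + 1
            if i == len(levels):
                levels.append(None)
            levels[i] = q
    tail = [x for lvl in levels if lvl for x in lvl] + block
    return coreset(tail)                     # union of all levels -> final coreset

# toy usage with a placeholder "coreset" that keeps every other point
print(merge_and_reduce(list(range(100)), 8, lambda pts: list(pts)[::2]))
```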
First, we show that our coreset definition satisfies the Merge and Reduce properties, i.e., the union of a coreset is a coreset and the coreset of a union of coresets is a valid coreset for the underlying points.
**Lemma 7**.:
1. _(Merge) If_ \(Q\) _is a_ \((1+\gamma,\eta)\)_-coreset of_ \(P\)_,_ \(Q^{\prime}\) _is a_ \((1+\gamma,\eta)\)_-coreset of_ \(P^{\prime}\) _and_ \(P,P^{\prime}\) _are disjoint, then_ \(Q\cup Q^{\prime}\) _is a_ \((1+\gamma,2\eta)\)_-coreset of_ \(P\cup P^{\prime}\)_._
2. _(Reduce) If_ \(R\) _is a_ \((1+\gamma)\)_-coreset of_ \(Q\cup Q^{\prime}\)_, then_ \(R\) _is a_ \(((1+\gamma)^{2},(1+\gamma)2\eta)\)_-coreset of_ \(P\cup P^{\prime}\)_._
Proof.: We first prove the merge property.
\[\mathsf{cost}(Q\cup Q^{\prime}) \leq\mathsf{cost}(Q)+\mathsf{cost}(Q^{\prime})\] \[\leq(1+\gamma)\mathsf{cost}(P)+\eta+(1+\gamma)\mathsf{cost}(P^{ \prime})+\eta\] \[\leq(1+\gamma)\mathsf{cost}(P\cup P^{\prime})+2\eta\]
Next we prove the reduce property below.
\[\mathsf{cost}(R) \leq(1+\gamma)\mathsf{cost}(Q\cup Q^{\prime})\] \[\leq(1+\gamma)((1+\gamma)\mathsf{cost}(P\cup P^{\prime})+2\eta)\] \[\leq(1+\gamma)^{2}\mathsf{cost}(P\cup P^{\prime})+(1+\gamma)2\eta\]
Proof.: First observe that at any timestep the total set of input points seen so far (denoted by \(P\)) is partitioned into subsets \(P_{0},\ldots,P_{u}\), where some \(P_{i}\)'s can be empty and \(u=\lceil\log(2N/M)\rceil+1\). Note that we simulate this step of partitioning \(P\) into the \(P_{i}\)'s in Algorithm 4 solely for the analysis; it is not necessary to store \(P_{1},\ldots,P_{u}\) explicitly in the actual algorithm.
```
0: Size of set \(P_{0}\) denoted as \(p_{0}\), Block size \(M\), Privacy parameter \(\varepsilon\)
0:\(\top\) if the size of set \(P_{0}\) exceeds noisy threshold and \(\bot\) otherwise
1:\(\hat{M}=M+\mathsf{Lap}(2/\varepsilon)\)
2:\(\nu=\mathsf{Lap}(4/\varepsilon)\)
3:if\(p_{0}+\nu\geq\hat{M}\)then
4: return \(\top\)
5:else
6: return \(\bot\)
```
**Algorithm 4** DP-Merge-Reduce
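For concreteness, here is a direct Python rendering of the noisy-threshold pseudocode above (a sketch under our own naming, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def level_zero_above_threshold(p0: int, M: int, eps: float) -> bool:
    """eps-DP test of whether the level-0 block size p0 exceeds the block size M."""
    M_hat = M + rng.laplace(scale=2.0 / eps)   # noisy threshold, Lap(2/eps)
    nu = rng.laplace(scale=4.0 / eps)          # query noise, Lap(4/eps)
    return p0 + nu >= M_hat

print(level_zero_above_threshold(p0=120, M=100, eps=1.0))  # True with high probability
```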
We first prove some claims about \(\mathsf{LevelZero}\)-\(\mathsf{AboveThreshold}\).
**Lemma 8** (Privacy).: _\(\mathsf{LevelZero}\)-\(\mathsf{AboveThreshold}\) is \(\varepsilon\)-DP._
Proof.: Recall that \(\mathsf{LevelZero}\)-\(\mathsf{AboveThreshold}\) checks whether the size of the set \(P_{0}\) is above a certain threshold. In other words, it checks whether the total count of the _group_ of elements \(x_{i}\in P_{0}\) is above the given threshold.
Once a positive response is returned by \(\mathsf{LevelZero-AboveThreshold}\), elements in \(P_{0}\) are deleted and the process is repeated with a new group of elements. This algorithm is equivalent to grouping a stream of counts in [24]. In particular, the proof of privacy is identical except that in our algorithm we do not release the actual noisy counts but instead we just release a positive/negative response to the same query.
**Lemma 9** (Accuracy of \(\mathsf{LevelZero-AboveThreshold}\)).: _For all \(t\in[T]\), with probability \(1-\xi\), we have that_
1. \(|\nu|<\frac{4}{\varepsilon}\log(\frac{2T}{\xi})\)__
2. \(|\hat{M}-M|<\frac{2}{\varepsilon}\log(\frac{2T}{\xi})\)__
Proof.: This follows from a standard application of tail bounds for the Laplace distribution and a union bound over all \(t\in[T]\).
**Lemma 10**.: _If \(p_{0}+\nu\geq\hat{M}\) then with probability \(1-\xi\), we have that \(p_{0}\geq M/2\)._
Proof.: We can simplify \(p_{0}+\nu\geq\hat{M}\) by applying the noise bounds from Lemma 9. Thus with probability at least \(1-\xi\), we have that \(p_{0}\geq M-\frac{6}{\varepsilon}\log(\frac{2T}{\xi})\). Finally using the assumption about \(M>\frac{12}{\varepsilon}\log(\frac{2T}{\xi})\), the statement follows.
**Claim 7**.: _The number of levels is given by \(u=\lceil\log(2N/M)\rceil+1\)._
Proof.: By Lemma 10, \(p_{0}\geq M/2\), thus the total number of blocks at level \(0\) is \(\leq\frac{2N}{M}\). The statement follows.
### Proof of Theorem 10
**Lemma 11** (Privacy).: _DP-Merge-Reduce framework is \((2\varepsilon,\delta)\)-DP._
Proof.: First, by Lemma 8, \(\mathsf{LevelZero-AboveThreshold}\) is \(\varepsilon\)-DP. Since the coresets computed by \(\mathcal{A}\) at level \(1\) are \((\varepsilon,\delta)\)-DP, we can release these DP coresets as they are computed. Subsequent computations on these DP coresets via algorithm \(\mathcal{B}\) preserve DP by postprocessing. Since each DP-Merge-Reduce instance is called on mutually disjoint subsets of the stream, this preserves \((2\varepsilon,\delta)\)-DP over all timesteps.
**Lemma 12** (Accuracy).: _With probability at least \(1-\xi-\xi_{A}-\xi_{B}\), the DP-Merge-Reduce framework releases a \(((1+\gamma)\kappa,(\frac{4N}{M}-1)(1+\gamma)\eta+\hat{M})\)-coreset of \(P\), where \(\hat{M}:=M+\frac{6}{\varepsilon}\log(\frac{2T}{\xi})\)._
Proof.: Recall that \(P\) is partitioned by \(P_{1},\ldots,P_{u}\). We first prove the following claim about the coreset for a non-empty subset \(P_{r}\subseteq P\).
**Claim 8**.: _Suppose \(P_{r}\) is non-empty. Then \(Q_{r}\) is a \(((1+\gamma/3)\kappa,(1+\gamma/3)2^{r-1}\eta)\)-coreset of \(P_{r}\)._
Proof.: We will first prove, by induction, the claim that \(Q_{r}\) is a \((\kappa\prod_{j=1}^{r-1}(1+\gamma_{j}),\prod_{j=1}^{r-1}(1+\gamma_{j})2^{r-1}\eta)\)-coreset for \(P_{r}\) where \(r\geq 2\). Note that \(Q_{0}=P_{0}\) for \(p_{0}+\nu<\hat{M}\). For \(p_{0}+\nu\geq\hat{M}\), \(P_{1}=P_{1}\cup P_{0}\). Since we apply the DP coreset algorithm \(\mathcal{A}\) to \(P_{1}\), the resulting coreset \(Q_{1}\) is a \((\kappa,\eta)\)-coreset for \(P_{1}\).
**Base Case.** By Lemma 7, \(Q_{1}\cup Q_{1}^{\prime}\) is a \((\kappa,2\eta)\)-coreset for \(P_{1}=P_{0}\cup P_{0}^{\prime}\) and \(Q_{2}\) is a \((\kappa(1+\gamma_{1}),(1+\gamma_{1})2\eta)\)-coreset of \(Q_{1}\cup Q_{1}^{\prime}\).
**Inductive Hypothesis.** Suppose the claim is true for \(r=i\), i.e., \(Q_{i}\) is a \((\kappa\prod_{j=1}^{i}(1+\gamma_{j}),\prod_{j=1}^{i}(1+\gamma_{j})2^{i-1}\eta)\)-coreset for \(P_{i}\) and \(Q_{i}^{\prime}\) is a \((\kappa\prod_{j=1}^{i}(1+\gamma_{j}),\prod_{j=1}^{i}(1+\gamma_{j})2^{i-1}\eta)\)-coreset for \(P_{i}^{\prime}\).
By Lemma 7, the Merge step implies
\[\mathsf{cost}(Q_{i}\cup Q_{i}^{\prime})\leq\kappa\prod_{j=1}^{i-1}(1+\gamma_{ j})\mathsf{cost}(P_{i}\cup P_{i}^{\prime})+\prod_{j=1}^{i-1}(1+\gamma_{j})2^{i-1}\eta \tag{31}\]
The Reduce step implies that the resulting \((1+\gamma_{i+1})\)-coreset of \(Q_{i}\cup Q_{i}^{\prime}\) denoted as \(Q_{i+1}\) is such that
\[\mathsf{cost}(Q_{i+1})\leq\kappa\prod_{j=1}^{i}(1+\gamma_{j})\mathsf{cost}(P_{i+ 1})+\prod_{j=1}^{i}(1+\gamma_{j})2^{i-1}\eta \tag{32}\]
Finally, provided \(c\) is large enough, we have:
\[\prod_{j=1}^{i}(1+\gamma_{j})\leq\prod_{j=1}^{i}\exp(\frac{\gamma}{cj^{2}})= \exp(\frac{\gamma}{c}\sum_{j=1}^{i}\frac{1}{j^{2}})\leq\exp(\frac{\gamma}{c} \cdot\frac{\pi^{2}}{6})\leq 1+\gamma/3 \tag{33}\]
The statement follows.
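As a quick numerical sanity check of the product bound above (our own illustration, with arbitrary sample values of \(\gamma\) and \(c\)):

```python
import math

gamma, c = 0.5, 10.0
prod = 1.0
for j in range(1, 100_000):
    prod *= 1.0 + gamma / (c * j * j)          # prod_j (1 + gamma/(c j^2))
bound = math.exp(gamma * math.pi ** 2 / (6.0 * c))
assert prod <= bound <= 1.0 + gamma / 3.0      # matches the chain of inequalities
print(prod, bound, 1.0 + gamma / 3.0)
```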
Finally, we release a \((1+\gamma/3)\)-coreset of \(\cup_{i\leq u}Q_{i}\), which covers \(\cup_{i\leq u}P_{i}\) for all non-empty \(P_{i}\); by arguments similar to those above, it is a \(((1+\gamma)\kappa,(1+\gamma)\eta(2^{u}-1))\)-coreset of \(P\). The statement follows by plugging in the value of \(u\).
Note that if \(p_{0}+\nu<\hat{M}\) then we do not release anything, so we have to account for an additional additive error of \(M+\frac{6}{\varepsilon}\log(\frac{2T}{\xi})\) with probability \(1-\xi\) in this case. If \(p_{0}+\nu\geq\hat{M}\), then we proceed by computing a DP coreset using Algorithm \(\mathcal{A}\) with failure probability \(\xi_{A}\). We also compute all the coresets past the first level using Algorithm \(\mathcal{B}\) and a failure probability of \(\xi_{B}/2u\), where \(u\) is the number of levels. Thus for a fixed run of DP-Merge-Reduce, by a union bound, the total failure probability for this part is at most \(\xi_{B}\).
**Lemma 13** (Space).: _DP-Merge-Reduce framework uses \(S_{\mathcal{A}}(M,k,d,\varepsilon,\delta,\kappa,\eta)+S_{\mathcal{B}}(SZ_{ \mathcal{A}}(M,k,d,\varepsilon,\delta,\kappa,\eta),k,d,\gamma)+\lceil\log(2N/M) \rceil\cdot S_{\mathcal{B}}(SZ_{\mathcal{B}}(M,k,d,\gamma),k,d,\gamma)+3M/2\) space._
Proof.: First we need an upper bound on the size of the blocks at level \(0\), i.e., on \(p_{0}\), which is given by the contrapositive of the claim below.
**Claim 9**.: _If \(p_{0}\geq\frac{3M}{2}\) then with probability \(1-\xi\), \(p_{0}+\nu\geq\hat{M}\)._
Proof.: By applying the noise bounds from Lemma 9 and using the assumption that \(p_{0}\geq\frac{3M}{2}\), we have that with probability \(1-\xi\),
\[p_{0}+\nu\geq\frac{3M}{2}-\frac{4}{\varepsilon}\log(\frac{2T}{\xi})>\frac{3M} {2}-\frac{M}{3}=M+\frac{M}{6}>M+\frac{2}{\varepsilon}\log(\frac{2T}{\xi})>\hat {M}\]
Thus, we only need to store at most \(3M/2\) points for the block of input points at level \(0\), plus the additional space required to execute the coreset constructions. Note that the coreset computation for level-\(0\) blocks consumes space \(S_{\mathcal{A}}(M,k,d,\varepsilon,\delta,\kappa,\eta)\). Since the largest coreset construction with respect to the non-DP algorithm \(\mathcal{B}\) is the union of at most \(u\) coresets that is reduced to a single coreset, and the largest input to the non-DP coreset algorithm is the resulting DP coreset size, the additional storage is at most \(S_{\mathcal{B}}(SZ_{\mathcal{A}}(M,k,d,\varepsilon,\delta,\kappa,\eta),k,d,\gamma)+(u-1)\cdot S_{\mathcal{B}}(SZ_{\mathcal{B}}(M,k,d,\gamma),k,d,\gamma)\). Note that the resulting coreset size for the non-DP coreset algorithm \(\mathcal{B}\) is independent of the input set size.
**Lemma 14** (Coreset Size).: _The resulting coreset has size at most \(\tilde{O}(k\log(k)\cdot\gamma^{-4})\)._
Proof.: Since \(Q_{1}\leftarrow\mathcal{A}(P_{0},k,d,\varepsilon,\delta,\kappa,\eta)\), the size of \(Q_{1}\) is \(SZ_{\mathcal{A}}(M,k,d,\varepsilon,\delta,\kappa,\eta)\). Now \(Q_{2},\ldots,Q_{u}\) are obtained by running the non-DP algorithm \(\mathcal{B}\). To simplify our notation, we invoke the state-of-the-art non-DP coreset algorithm as \(\mathcal{B}\) (see Theorem 8). Thus the coreset size of \(Q_{i}\) for \(2\leq i\leq u\) is \(\tilde{O}(k\log k\cdot\gamma^{-4})\). Finally, we take the union of the coresets \(\cup_{1\leq i\leq u}Q_{i}\), which has size at most \(SZ_{\mathcal{A}}(M,k,d,\varepsilon,\delta,\kappa,\eta)+(u-1)\,O(k\log k\cdot\gamma^{-4})\), and then apply the non-DP algorithm \(\mathcal{B}\) to obtain a coreset of size at most \(\tilde{O}(k\log k\cdot\gamma^{-4})\).
2308.01865 | Renormalizing Love: tidal effects at the third post-Newtonian order | We present the conservative effective two-body Hamiltonian at the third order
in the post-Newtonian expansion with gravitoelectric quadrupolar dynamical
tidal-interactions. Our derivation of the effective two-body Lagrangian is
based on the diagrammatic effective field theory approach and it involves
Feynman integrals up to three loops, which are evaluated within the dimensional
regularization scheme. The elimination of the divergent terms occurring in the
effective Lagrangian requires the addition of counterterms to ensure finite
observables, thereby introducing a renormalization group flow to the
post-adiabatic Love number. As a limiting case of the renormalized dynamical
effective Hamiltonian, we also derive the effective Hamiltonian for adiabatic
tides, and, in this regime, calculate the binding energy for a circular orbit,
and the scattering angle in a hyperbolic scattering. | Manoj K. Mandal, Pierpaolo Mastrolia, Hector O. Silva, Raj Patil, Jan Steinhoff | 2023-08-03T16:40:41Z | http://arxiv.org/abs/2308.01865v2 | # Renormalizing Love: tidal effects at the third post-Newtonian order
###### Abstract
We present the conservative effective two-body Hamiltonian at the third order in the post-Newtonian expansion with gravitoelectric quadrupolar dynamical tidal-interactions. Our derivation of the effective two-body Lagrangian is based on the diagrammatic effective field theory approach and it involves Feynman integrals up to three loops, which are evaluated within the dimensional regularization scheme. The elimination of the divergent terms occurring in the effective Lagrangian requires the addition of counterterms to ensure finite observables, thereby introducing a renormalization group flow to the post-adiabatic Love number. As a limiting case of the renormalized dynamical effective Hamiltonian, we also derive the effective Hamiltonian for adiabatic tides, and, in this regime, calculate the binding energy for a circular orbit, and the scattering angle in a hyperbolic scattering.
###### Contents
* 1 Introduction
* 2 An EFT description of tides
* 3 Computational algorithm
* 3.1 The effective Lagrangian
* 3.2 Removal of higher order time derivatives
* 3.3 Removal of spurious divergences
* 4 Renormalization
* 4.1 Analysis of divergent terms
* 4.2 Beta-function for post-adiabatic Love number
* 4.3 Radiation from a single neutron star
* 5 Dynamical tides
* 6 Adiabatic tides
* 6.1 Effective Hamiltonian
* 6.2 Binding energy
* 6.3 Scattering angle
* 7 Conclusions
## 1 Introduction
A new era in astronomy and cosmology has begun with the recent gravitational wave (GW) detections by the LIGO-Virgo-KAGRA collaboration [1]. The worldwide network of ground-based [2; 3; 4; 5; 6; 7], as well as space-based GW detectors [8] continues to grow, and will grant access to an ever broader frequency band with higher sensitivity.
Among the most significant sources of GWs are the neutron star (NS) binaries [9; 10; 11], which provide insights into the physics of dense nuclear matter within these stars. In a binary system, a NS develops a quadrupole moment due to the tidal interaction with its companion [12]. The imprint of such tidal interactions was observed in the GW signal GW170817 [9] and led to constraints on the underlying NS equation of state (EOS) [13; 14; 15; 16]. These tidal interactions also give rise to oscillation modes of NS [17; 18; 19; 20; 21], in particular, the f-mode dynamical tides [22], which have been argued to be important in inferring the NS EOS [23] in upcoming observing runs of present GW detectors. Neglecting the dynamical tidal effects can introduce significant biases in the estimation of the tidal deformability, consequently impacting the accuracy of the EOS inferences. Moreover, the inclusion of dynamical tides is essential to improve the agreement between GW models and numerical relativity simulations [24; 25; 26; 27] and it is necessary to fulfill the scientific objectives of the next generation ground-based GW observatories [7; 28; 29]. In this article, we aim to model the effects of such dynamic oscillations of NS on the dynamics of the binary system.
We begin by considering the relativistic Lagrangian developed in Ref. [24] for the quadrupole \(f\)-mode oscillation of a NS coupled to an external tidal field, describing the dynamical tidal (DT) effects,1
Footnote 1: This representation of the Lagrangian is not unique, as it is subject to field redefinition: for instance, the last term could be equivalently replaced with a redefined \(\lambda\) along with terms involving \((E_{\mu\nu})^{2}\) and \((\mathrm{d}E_{\mu\nu}/\mathrm{d}\tau)^{2}\).
\[\mathcal{L}_{\mathrm{DT}}=\frac{z}{4\lambda\omega_{f}^{2}}\left[\frac{c^{2}}{z^ {2}}\frac{\mathrm{d}Q_{\mu\nu}}{\mathrm{d}\tau}\frac{\mathrm{d}Q^{\mu\nu}}{ \mathrm{d}\tau}-\omega_{f}^{2}Q_{\mu\nu}Q^{\mu\nu}\right]-\frac{z}{2}Q^{\mu\nu }E_{\mu\nu}-\kappa_{d}\frac{G_{d}^{2}m^{2}}{c^{6}}\frac{z}{2}Q^{\mu\nu}\frac{ \mathrm{d}^{2}E_{\mu\nu}}{\mathrm{d}\tau^{2}}\,, \tag{1}\]
where \(\omega_{f}\) is the frequency of the mode, \(\lambda\) is the tidal deformability [12], \(Q_{\mu\nu}\) is a symmetric trace-free tensor2 that models the relativistic quadrupole moment of the star, \(u^{\mu}=\mathrm{d}x^{\mu}/\mathrm{d}\tau\) is the four-velocity, \(E_{\mu\nu}=-c^{2}R_{\mu\alpha\nu\beta}u^{\alpha}u^{\beta}/z^{2}\) is the gravitoelectric field and \(z=\sqrt{u^{2}}\) is the redshift factor. We work with the gravitational constant in \((d+1)\) spacetime dimensions written as \(G_{d}=(\sqrt{4\pi\exp(\gamma_{\mathrm{E}})}\,R)^{\epsilon}\,G_{N}\). The last term in the above equation is the first non-minimal coupling that starts contributing to the conservative sector from 3PN order and \(\kappa_{d}=(\sqrt{4\pi\exp(\gamma_{\mathrm{E}})}\,R)^{-2\epsilon}\kappa\) is the _post-adiabatic Love number_ in \((d+1)\) spacetime dimensions. We express \(G_{d}\) and \(\kappa_{d}\) in this particular form because later on we will employ the modified minimal subtraction scheme [30], and hence the appearance of the \(4\pi\), the Euler-Mascheroni constant \(\gamma_{\mathrm{E}}\), and the (arbitrary) lengthscale \(R\). In the adiabatic limit [31; 32; 33], where \(\omega_{f}\to\infty\), the tides do not oscillate independently and are instead locked to the external tidal field as
Footnote 2: We also impose a supplementary condition \(Q_{\mu\nu}u^{\mu}=0\) on the quadrupole to project out the unphysical degrees of freedom.
\[Q^{\mu\nu}=-\lambda E^{\mu\nu}-\lambda\kappa_{d}\frac{G_{d}^{2}m^{2}}{c^{6}} \frac{\mathrm{d}^{2}E^{\mu\nu}}{\mathrm{d}\tau^{2}}\,. \tag{2}\]
The first term is the leading-order term in the small-frequency expansion of the conservative response function of the quadrupole [34] when sourced by an external tidal field, and the second term is its next-to-leading-order correction, known as the post-adiabatic term. Substituting the above equation into Eq. (1), we obtain the Lagrangian for (post-)adiabatic tides (AT), which is given by,
\[\mathcal{L}_{\mathrm{AT}}=\frac{z\lambda}{4}E_{\mu\nu}E^{\mu\nu}+\lambda \kappa_{d}\frac{G_{d}^{2}m^{2}}{c^{6}}\frac{z}{2}E_{\mu\nu}\frac{\mathrm{d}^{ 2}E^{\mu\nu}}{\mathrm{d}\tau^{2}}\,. \tag{3}\]
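The adiabatic locking and its first frequency correction can be checked with a few lines of computer algebra. The sketch below (our own illustration, in flat space with \(z=1\) and a single quadrupole component, not code from this work) drives the f-mode oscillator that follows from the dynamical-tides Lagrangian above with a monochromatic tidal field and expands the steady-state response at small driving frequency:

```python
import sympy as sp

t, lam, w_f, Om, E0 = sp.symbols('t lambda omega_f Omega E0', positive=True)
E = E0 * sp.cos(Om * t)                     # monochromatic external tidal driving
Q = -lam * w_f**2 * E / (w_f**2 - Om**2)    # steady-state response of the f-mode

# Q solves the oscillator equation implied by the dynamical-tides Lagrangian
# (flat space, z = 1):  d^2Q/dt^2 + omega_f^2 Q = -lambda omega_f^2 E
assert sp.simplify(Q.diff(t, 2) + w_f**2 * Q + lam * w_f**2 * E) == 0

# small-frequency expansion of the response: the leading term is the adiabatic
# locking Q = -lambda E; the Omega^2/omega_f^2 correction carries the d^2E/dtau^2
# structure that the post-adiabatic term parameterizes (note d^2E/dt^2 = -Omega^2 E)
print(sp.series(Q / E, Om, 0, 5))   # -lambda - lambda*Omega**2/omega_f**2 - ...
```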
We can also write a similar adiabatic Lagrangian for the higher multipole moments, which are studied in Ref. [35]. See also Refs. [32; 33; 34; 35; 36; 37; 38; 39; 40]. In general relativity, in addition to the relativistic gravitoelectric tides, we also get a new sector of gravitomagnetic tides [41; 42; 43; 44; 45; 46; 47; 48] that are coupled to the odd-parity normal modes of the NS, modeled by the current-type multipole moments. For the adiabatic limit of the gravitomagnetic sector, see Ref. [35]. In a systematic EFT construction of dynamical tides, further couplings should be included, as outlined in [48], and applied to gravitomagnetic tides, which we leave for future work.
In this article, we quantify the influence of the dynamic tides on the behavior of a compact binary system. To achieve this goal, we employ effective field theory (EFT) techniques [49] to analyze the inspiral phase of the binary, which occurs when its components are moving at nonrelativistic velocities and the orbital separation is large compared to the length scale associated with each compact object. In this context, we apply a perturbative approach that involves a series expansion in powers of \(v/c\), where \(v\) represents the orbital velocity of the binary and \(c\) is the speed of light. The virial theorem dictates that the kinetic energy is \((-1/2)\) times the potential energy of a bound system. Thus, we set up the _post-Newtonian_ (PN) analysis, expanding in two perturbative parameters: \(v/c\) and \(G_{N}\), where \(G_{N}\) denotes Newton's constant. Here, terms of the form \((v/c)^{n}\) are referred to
as being of \((n/2)\)PN order. The PN analysis of the binary dynamics can be categorized into two sectors: the conservative sector, where emitted radiation is neglected and the orbital separation remains constant, and the radiative sector, where radiation carries away energy and momentum. At higher PN orders, these sectors can interact due to tail effects, originating from radiation being scattered by the orbital background curvature and affecting the orbital dynamics (see, e.g., Ref. [50]). By employing the EFT approach, we can determine observables at any given PN order using diagrammatic methods, first proposed in [49], and modern integration methods [51, 52], turning the problem into the determination of scattering amplitudes, which can be systematically obtained through the calculation of corresponding Feynman diagrams (see Refs. [53, 54, 55] for reviews). Following the same computational strategy as developed in [56, 57, 58], in order to compute the effective Lagrangian we make use of an automated in-house code, interfaced to QGRAF [59] for the diagram generation, to xTensor [60] for tensor algebra manipulation, and to LiteRed [61] for the integral decomposition.
The effective Hamiltonian for conservative dynamical gravitoelectric tides was first computed in Refs. [62, 24] up to 1PN and then extended up to 2PN in Ref. [58]. The effects of spin and tides were analyzed together in Refs. [63, 48, 64] for gravitomagnetic tides. In the adiabatic limit, the 2PN effective Hamiltonian was computed in Refs. [35, 40] for both gravitoelectric and gravitomagnetic tides. Other works in PN theory can be found in Refs. [65, 66, 67, 35, 62]. In the post-Minkowskian (PM) expansion, where the perturbative series is controlled by \(G_{N}\) alone, the adiabatic tidal corrections were studied to 3PM order in Refs. [68, 69]. See also Refs. [70, 71, 72, 73, 74, 75, 76]. Adiabatic tidal effects were also included in effective-one-body waveform models [77, 78] in Refs. [79, 80, 35, 78, 77, 81], and in Refs. [63, 24, 25] for the case of dynamical tides. In this paper, we extend the state of the art of the _analytic calculations of dynamical and adiabatic gravitoelectric tides in the conservative sector to 3PN order_.
Within the diagrammatic EFT approach, higher-order perturbative corrections to the scattering amplitudes may contain divergent contributions. By adopting the dimensional regularization scheme, the divergent terms of multi-loop Feynman integrals are parameterised by poles in \(\epsilon=(d-3)\), where \(d\) is the continuous number of space-time dimensions. The so-called ultraviolet (UV) divergences can be absorbed in the redefinition of the free parameters of the theory, known as _renormalization_, thus yielding finite results. The process of renormalization can be understood as a coarse-graining transformation on the system [82]. One can understand the effects of renormalization in a classical context using Kadanoff's block-spin transformations [83]. See Refs. [84, 85] for renormalization in classical field theories coupled to external spatial sources.
Besides the UV divergences, the computation of the conservative potential for a binary system may also involve infrared (IR) divergences. In the point-particle sector, starting from the 4PN order, we encounter IR divergences in the conservative sector, which are due to the artificial separation of conservative and dissipative dynamics [86]. However, in the computation of 3PN dynamical tides, we encounter only UV divergences. Part of these divergent terms can be removed by adding counterterms based on worldline operators, which can eventually be removed with field redefinitions. This procedure is equivalent to finding a suitable canonical transformation (in the Hamiltonian picture) or adding a total derivative (in the Lagrangian picture) [86]. We dub these divergences _spurious_, as they do not affect any physical observable. Additionally, we encounter UV divergences that are not spurious, which can be eliminated by the renormalization procedure, namely by introducing a new counterterm in the point-particle Lagrangian, eventually yielding finite results. This counterterm affects the source coupling of the quadrupole moment with the gravitons. Hence, the renormalization procedure introduces the running of the post-adiabatic Love number \(\kappa\).
Furthermore, we note that the same counterterm can also remove the UV divergences arising from the radiative sector [87, 88]. The appearance of an identical divergent term in both the conservative
and radiative sectors was observed in Ref. [49] only for the case of spurious divergences, which can be removed completely through field redefinitions.
Analogously, in the recent computation of 4PM observables due to scalar interactions [89], the renormalization of one of the scalar tidal Love numbers was necessary to obtain finite results. Renormalization and running of conservative and dissipative Love numbers were also seen in Ref. [90], by determining the counterterms in the tidal response using black-hole perturbation theory (BHPT) for the case of Kerr black holes. In this article, _we present the renormalization of the post-adiabatic Love number as a consequence of genuine divergences appearing in the conservative sector of the gravitational two-body system_.
The paper is organized as follows. In Section 2, we review the description of tidally-interacting binaries in the EFT formalism. In Section 3, we present the algorithm used to compute the 3PN dynamic tidal potential. In Section 4, we present the procedure of renormalization required for the post-adiabatic Love number to get a finite interaction Hamiltonian. Our main result, the effective dynamical tidal Hamiltonian (5.4), is presented in Section 5. In Section 6, we consider the adiabatic limit, and derive an effective adiabatic tidal Hamiltonian. We compute two gauge-independent observables: (i) the binding energy of a circular binary and (ii) the scattering angle for the hyperbolic encounter of two stars. Finally, we present our conclusions and avenues for future work in Section 7. This work is supplemented with three ancillary files: Hamiltonian-DT.m, containing the analytic expression of the Hamiltonian for the dynamic tides, Hamiltonian-AT.m, containing the analytic expression of the Hamiltonian for the adiabatic tides and Poincare.Algebra.m, containing the result for the center of mass of the system which completes the Poincare algebra and, hence, validates our results.
_Notation -_ The mostly negative signature for the metric is employed. Bold-face characters are used for three-dimensional variables, and normal-face font for four-dimensional variables. The subscript \((a)\) labels the binary components on all the corresponding variables, like their position \(\mathbf{x}_{(a)}\) and quadrupole moment \(\mathbf{Q}_{(a)}\). An overdot indicates a time derivative, e.g., \(\mathbf{v}_{(a)}=\dot{\mathbf{x}}_{(a)}\) is the velocity, \(\mathbf{a}_{(a)}=\ddot{\mathbf{x}}_{(a)}\) the acceleration, and \(\dot{\mathbf{Q}}=\mathrm{d}\mathbf{Q}/\mathrm{d}t\). The separation between the two objects is denoted by \(\mathbf{r}=\mathbf{x}_{(1)}-\mathbf{x}_{(2)}\), with absolute value \(r=|\mathbf{r}|\), and the unit vector along the separation is \(\mathbf{n}=\mathbf{r}/r\).
## 2 An EFT description of tides
In this section, we review the formalism developed to model the dynamic tidal oscillations of compact objects in Ref. [24]. The Lagrangian given in equation (1) can be written as
\[\mathcal{L}_{\mathrm{DT}(a)} =cP_{(a)\mu\nu}\frac{\mathrm{d}Q_{(a)}^{\mu\nu}}{\mathrm{d}\tau}-z_{(a)}\left[\lambda_{(a)}\omega_{f(a)}^{2}P_{(a)}^{\mu\nu}P_{(a)\mu\nu}+\frac{1}{4\lambda_{(a)}}Q_{(a)}^{\mu\nu}Q_{(a)\mu\nu}\right]\] \[\quad-\frac{z_{(a)}}{2}Q_{(a)}^{\mu\nu}E_{\mu\nu}-\kappa_{d(a)}\frac{G_{d}^{2}m_{(a)}^{2}}{c^{6}}\frac{z_{(a)}}{2}Q_{(a)}^{\mu\nu}\frac{\mathrm{d}^{2}E_{\mu\nu}}{\mathrm{d}\tau^{2}}\,, \tag{2.1}\]
where the conjugate momentum \(P_{\mu\nu}\) with respect to the quadrupole moment is defined as,
\[P_{(a)\mu\nu}=\frac{1}{c}\frac{\partial\mathcal{L}}{\partial(\mathrm{d}Q_{(a) }^{\mu\nu}/\mathrm{d}\tau)}=\frac{c}{2\lambda\omega_{f}^{2}z_{(a)}}\frac{ \mathrm{d}Q_{(a)\mu\nu}}{\mathrm{d}\tau}\,. \tag{2}\]
The action for dynamical tides, written explicitly in terms of the physical degrees of freedom3 \(\mathbf{Q}^{ij}_{(a)}\) and \(\mathbf{P}^{ij}_{(a)}\), can be obtained by expressing the dynamical variables in the rest frame of each body4. This gives us the effective point-particle ("pp") action
Footnote 3: The supplementary condition for the dynamical degrees of freedom in the rest frame of the star becomes \(Q^{A0}_{(a)}=0\) and \(P^{A0}_{(a)}=0\), where we now explicitly see that \(Q^{AB}_{(a)}\) and \(P^{AB}_{(a)}\) are spatial tensors that encode only the physical degrees of freedom. Thus, we define the spatial tensors \(Q^{AB}_{(a)}\delta^{i}_{A}\delta^{j}_{B}=\mathbf{Q}^{ij}_{(a)}\) and \(P^{AB}_{(a)}\delta^{i}_{A}\delta^{j}_{B}=\mathbf{P}^{ij}_{(a)}\).
Footnote 4: We use different reference frames: (i) the general coordinate frame (denoted by Greek indices), (ii) the local Lorentz frame (denoted by lowercase Latin indices), and (iii) the rest frame of the compact objects (denoted by capital Latin indices); the Lorentz transformation \(B\) boosts between the local Lorentz frame and the rest frame of the body.
\[S_{\text{pp}}=\sum_{a=1,2}\int\frac{\mathrm{d}\tau}{c}\left[-m_{(a)}z_{(a)}c^{2}+\mathcal{L}_{\text{FD}(a)}+\mathcal{L}_{\text{MQ}(a)}+\mathcal{L}_{\text{EQ}(a)}+\mathcal{L}_{\ddot{\text{E}}\text{Q}(a)}\right]\,. \tag{2.3}\]
The first term is simply the action for a point particle, while the remaining terms originate from the Lagrangian (1) as follows. The first term in Eq. (1) gives rise to,
\[\mathcal{L}_{\text{FD}(a)}=\mathbf{P}^{ij}_{(a)}\dot{\mathbf{Q}}^{ij}_{(a )}+c\left[-u^{\mu}_{(a)}\omega^{ij}_{\mu}\left(\frac{\mathbf{S}^{ij}_{Q(a)}}{2}- \frac{\mathbf{S}^{ik}_{Q(a)}u^{k}_{(a)}u^{j}_{(a)}}{z_{(a)}(z_{(a)}+u^{a}_{(a)} \delta^{0}_{a})}\right)-u^{\mu}_{(a)}\omega^{ai}_{\mu}\delta^{0}_{a}\mathbf{S}^{ij}_ {Q(a)}\frac{u^{j}_{(a)}}{z_{(a)}}\right.\] \[\left.+\frac{\mathbf{S}^{ij}_{Q(a)}u^{i}_{(a)}}{z_{(a)}(z_{(a)}+u^{a}_ {(a)}\delta^{0}_{a})}\frac{\mathrm{d}u^{j}_{(a)}}{\mathrm{d}\tau}\right]\,, \tag{4}\]
which describes frame-dragging ("FD") effects on the quadrupole moment of each binary component. Here the "tidal spin" tensor \(\mathbf{S}^{ij}_{Q(a)}=2\left(\mathbf{Q}^{ki}_{(a)}\mathbf{P}^{jk}_{(a)}-\mathbf{Q}^{kj}_{(a)}\mathbf{P}^{ik}_{(a)}\right)\) describes the angular momentum carried by the dynamical quadrupole moment. The second term in Eq. (1) yields,
\[\mathcal{L}_{\text{MQ}(a)}=-z_{(a)}\left[\lambda_{(a)}\omega^{2}_{f(a)}\mathbf{P} ^{ij}_{(a)}\mathbf{P}^{ij}_{(a)}+\frac{1}{4\lambda_{(a)}}\mathbf{Q}^{ij}_{(a)}\mathbf{Q}^{ ij}_{(a)}\right]=-z_{(a)}\mathbf{M}_{Q(a)}\,. \tag{5}\]
This term governs the dynamics of the quadrupole moment which, by the second equality, can be described as a time-dependent effective mass term for the quadrupole moment ("MQ"). Finally, the last two terms in Eq. (1) result in,
\[\mathcal{L}_{\text{EQ}(a)}=-\frac{z_{(a)}}{2}\mathbf{Q}^{ij}_{(a)}\mathbf{E}^{ij}_{(a)}\qquad\text{and}\qquad\mathcal{L}_{\ddot{\text{E}}\text{Q}(a)}=-\kappa_{d(a)}\frac{G^{2}_{d}m^{2}_{(a)}}{c^{6}}\frac{z_{(a)}}{2}\mathbf{Q}^{ij}_{(a)}\ddot{\mathbf{E}}^{ij}_{(a)}\,. \tag{2.6}\]
These terms act as a driving source for the quadrupole moment's dynamics and are induced on each of the binary components by the gravitoelectric tidal field \(\mathbf{E}^{ij}_{(a)}=B^{a}_{(a)i}B^{b}_{(a)j}e^{\mu}_{a}e^{\nu}_{b}E_{\mu\nu}\) of its companion.
Here the Poisson bracket of the dynamical quantities, i.e., the dynamic quadrupole \(\mathbf{Q}^{ij}\) and its conjugate momentum \(\mathbf{P}^{ij}\), is given by
\[\{\mathbf{Q}^{ij},\mathbf{P}^{kl}\}=\frac{1}{2}\left(\delta^{ik}\delta^{jl}+\delta^{ il}\delta^{jk}\right)-\frac{1}{3}\delta^{ij}\delta^{kl}\,, \tag{7}\]
which then implies an SO(3) angular momentum algebra for the tidal spin tensor \(\mathbf{S}^{ij}_{Q}\).
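As a small numerical cross-check (our own, not from this work), the bracket kernel above is the symmetric trace-free projector on rank-2 tensors, as can be verified directly:

```python
import itertools
import numpy as np

d = np.eye(3)
P = np.zeros((3, 3, 3, 3))
for i, j, k, l in itertools.product(range(3), repeat=4):
    P[i, j, k, l] = 0.5 * (d[i, k] * d[j, l] + d[i, l] * d[j, k]) - d[i, j] * d[k, l] / 3

assert np.allclose(P, P.transpose(1, 0, 2, 3))             # symmetric in (i, j)
assert np.allclose(np.einsum('iikl->kl', P), 0)            # trace-free in (i, j)
assert np.allclose(np.einsum('ijab,abkl->ijkl', P, P), P)  # idempotent projector
```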
## 3 Computational algorithm
### 3.1 The effective Lagrangian
In this section, we present the computational algorithm used. The resulting potential will then be used in the next section to obtain the effective two-body Hamiltonian. The dynamics of the gravitational field \(g_{\mu\nu}\) is given by the Einstein-Hilbert action along with a harmonic gauge-fixing term in \(d+1\) spacetime dimensions,
\[S_{\rm EH}=-\frac{c^{4}}{16\pi G_{d}}\int\mathrm{d}^{d+1}x\ \sqrt{g}\ \mathcal{R}+\frac{c^{4}}{32\pi G_{d}}\int \mathrm{d}^{d+1}x\ \sqrt{g}\ g_{\mu\nu}\ \Gamma^{\mu}\Gamma^{\nu}\,, \tag{10}\]
where \(\Gamma^{\mu}=\Gamma^{\mu}_{\rho\sigma}g^{\rho\sigma}\), \(\Gamma^{\mu}_{\rho\sigma}\) is the Christoffel symbol, \(\mathcal{R}\) is the Ricci scalar, and \(g\) is the metric determinant.
For the conservative dynamics of the system, we decompose the metric as \(g_{\mu\nu}=\eta_{\mu\nu}+H_{\mu\nu}\), where \(H_{\mu\nu}\) is the potential graviton. We use the Kaluza-Klein parametrization to decompose the metric, where the 10 degrees of freedom of \(H_{\mu\nu}\) are encoded in three fields: a scalar \(\mathbf{\phi}\), a three-dimensional vector \(\mathbf{A}\) and a three-dimensional symmetric rank two tensor \(\mathbf{\sigma}\)[91; 92]. In this parametrization, we write the metric as
\[g_{\mu\nu}=\begin{pmatrix}e^{2\mathbf{\phi}/c^{2}}&-e^{2\mathbf{\phi}/c^{2}}\mathbf{A}_{j} /c^{2}\\ -e^{2\mathbf{\phi}/c^{2}}\mathbf{A}_{i}/c^{2}&-e^{-2\mathbf{\phi}/((d-2)c^{2})}\mathbf{\gamma} _{ij}+e^{2\mathbf{\phi}/c^{2}}\mathbf{A}_{i}\mathbf{A}_{j}/c^{4}\end{pmatrix}\,,\quad\text{ with}\quad\mathbf{\gamma}_{ij}=\mathbf{\delta}_{ij}+\mathbf{\sigma}_{ij}/c^{2}\,. \tag{11}\]
We can now obtain the effective action for the binary by integrating out the gravitational degrees of freedom as follows,
\[\exp\left[\mathrm{i}\,\int\mathrm{d}t\ \mathcal{L}_{\rm eff}\right]=\int \mathrm{D}\mathbf{\phi}\,\mathrm{D}\mathbf{A}_{i}\,\mathrm{D}\mathbf{\sigma}_{ij}\,\exp[ \mathrm{i}\,(S_{\rm EH}+S_{\rm pp})]\,, \tag{12}\]
where the Einstein-Hilbert action is given by Eq. (10) and the point-particle action is given by Eq. (3). To perform this integration, we first decompose the effective Lagrangian \(\mathcal{L}_{\rm eff}\) as
\[\mathcal{L}_{\rm eff}=\mathcal{K}_{\rm eff}-\mathcal{V}_{\rm eff}\,, \tag{13}\]
where \(\mathcal{K}_{\rm eff}\) is an effective kinetic term, which does not depend on any integration over potential gravitons (i.e., it is independent of the integration over \(\mathbf{\phi}\), \(\mathbf{A}\), and \(\mathbf{\sigma}\)). We can compute \(\mathcal{K}_{\rm eff}\) directly up to the required PN order. Explicitly, we decompose \(\mathcal{K}_{\rm eff}\) into a point-particle, a frame-dragging, and a "quadrupole-mass" contribution, i.e., \(\mathcal{K}_{\rm eff}=\mathcal{K}_{\rm pp}+\mathcal{K}_{\rm FD}+\mathcal{K}_{\rm MQ}\),
\[\mathcal{K}_{\rm pp} =\sum_{a=1,2}m_{(a)}\left[\frac{1}{2}\mathbf{v}_{(a)}^{2}+\frac{1}{8}\mathbf{v}_{(a)}^{4}\left(\frac{1}{c^{2}}\right)+\frac{1}{16}\mathbf{v}_{(a)}^{6}\left(\frac{1}{c^{4}}\right)+\frac{5}{128}\mathbf{v}_{(a)}^{8}\left(\frac{1}{c^{6}}\right)\right]+\mathcal{O}\left(\frac{1}{c^{8}}\right)\,, \tag{3.5a}\] \[\mathcal{K}_{\rm FD} =\sum_{a=1,2}\left\{\mathbf{P}_{(a)}^{ij}\dot{\mathbf{Q}}_{(a)}^{ij}+\mathbf{S}_{Q(a)}^{ij}\mathbf{v}_{(a)}^{i}\mathbf{a}_{(a)}^{j}\left[\frac{1}{2}\left(\frac{1}{c^{2}}\right)+\frac{3}{8}\mathbf{v}_{(a)}^{2}\left(\frac{1}{c^{4}}\right)+\frac{5}{16}\mathbf{v}_{(a)}^{4}\left(\frac{1}{c^{6}}\right)\right]\right\}+\mathcal{O}\left(\frac{1}{c^{8}}\right)\,, \tag{3.5b}\] \[\mathcal{K}_{\rm MQ} =\sum_{a=1,2}\mathbf{M}_{Q(a)}\left[1+\frac{1}{2}\mathbf{v}_{(a)}^{2}\left(\frac{1}{c^{2}}\right)+\frac{1}{8}\mathbf{v}_{(a)}^{4}\left(\frac{1}{c^{4}}\right)+\frac{1}{16}\mathbf{v}_{(a)}^{6}\left(\frac{1}{c^{6}}\right)\right]+\mathcal{O}\left(\frac{1}{c^{8}}\right)\,. \tag{3.5c}\]
The terms that are obtained after performing the explicit integral are collectively denoted by the potential \(\mathcal{V}_{\rm eff}\). These terms are computed by summing over the connected Feynman diagrams without graviton loops, as shown below,
\[\mathcal{V}_{\rm eff}=\mathrm{i}\,\lim_{d\to 3}\int\frac{\mathrm{d}^{d}\mathbf{p}}{(2\pi)^{d}}\ e^{\mathrm{i}\,\mathbf{p}\cdot(\mathbf{x}_{(1)}-\mathbf{x}_{(2)})}\ \times\ \Big(\text{sum of connected Feynman diagrams without graviton loops}\Big)\,,\]
where \(\mathbf{p}\) is the linear momentum transferred between the two bodies. For this, we begin by generating all the topologies that correspond to graviton exchanges between the worldlines of the two compact objects. There is 1 topology at tree level (\(G_{N}\)), 2 topologies at one loop (\(G_{N}^{2}\)), 9 topologies at two loops (\(G_{N}^{3}\)), and 32 topologies at three loops (\(G_{N}^{4}\)). We then dress these topologies with the Kaluza-Klein fields \(\mathbf{\phi}\), \(\mathbf{A}\) and \(\mathbf{\sigma}\). The number of diagrams5 appearing in the point-particle sector is given in Table 1(a), whereas those in the tidal sector are given in Tables 1(b), 1(c), 1(d) and 1(e). We use an in-house code that uses tools from EFTofPNG [93] and xTensor [60] for the tensor algebra manipulation, and LiteRed [61] for the integration-by-parts reduction, to compute these Feynman diagrams. This reduction recasts the Feynman diagrams in terms of two-point massless master integrals [52], as shown in Fig. 1.
Table 1: Number of Feynman diagrams contributing to the different sectors.
Footnote 5: The diagrams that can be obtained by the relabeling \(1\leftrightarrow 2\) are not counted as separate diagrams.
Once the exact expressions for the master integrals are substituted, we perform a Fourier transform to obtain the position-space effective potential \(\mathcal{V}_{\rm eff}\). The details of the algorithm and the expressions for the master integrals up to three loops can be found in Ref. [56].
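The simplest instance of this last step is the tree-level (one-graviton-exchange) transform; the following minimal sympy check (ours, not the paper's code; the \(d\)-dimensional master formulas of Ref. [56] generalize it) recovers the Newtonian \(1/r\) behavior from the momentum-space propagator \(1/\mathbf{p}^{2}\):

```python
import sympy as sp

p, r = sp.symbols('p r', positive=True)
# int d^3p/(2 pi)^3 e^{i p.r} / p^2  =  (1/(2 pi^2 r)) int_0^oo sin(p r)/p dp
radial = sp.integrate(sp.sin(p * r) / p, (p, 0, sp.oo))   # = pi/2
assert sp.simplify(radial - sp.pi / 2) == 0
print(sp.simplify(radial / (2 * sp.pi**2 * r)))           # 1/(4*pi*r)
```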
After carrying out all these steps, the effective potential can be decomposed into a point-particle and a dynamical tide contribution, i.e., \(\mathcal{V}_{\rm eff}=\mathcal{V}_{\rm pp}+\mathcal{V}_{\rm DT}\), where
\[\mathcal{V}_{\rm pp} =\mathcal{V}_{\rm N}+\left(\frac{1}{c^{2}}\right)\mathcal{V}_{1\rm PN}+\left(\frac{1}{c^{4}}\right)\mathcal{V}_{2\rm PN}+\left(\frac{1}{c^{6}}\right)\mathcal{V}_{3\rm PN}+\mathcal{O}\left(\frac{1}{c^{8}}\right)\,, \tag{3.7a}\] \[\mathcal{V}_{\rm DT} =\sum_{n=0}^{3}\left(\frac{1}{c^{2}}\right)^{n}\left(\mathcal{V}_{n\rm PN}^{\rm EQ}+\mathcal{V}_{n\rm PN}^{\rm FD}+\mathcal{V}_{n\rm PN}^{\rm MQ}\right)+\left(\frac{1}{c^{6}}\right)\mathcal{V}_{3\rm PN}^{\ddot{\rm E}\rm Q}+\mathcal{O}\left(\frac{1}{c^{8}}\right)\,, \tag{3.7b}\]
and we remark that \(\mathcal{V}_{\rm DT}\) has contributions due to the driving source, the "quadrupole-mass" and the frame-dragging terms, and the post-adiabatic term.
### 3.2 Removal of higher order time derivatives
The potential \(\mathcal{V}_{\rm eff}\) computed in the previous section is a function of the dynamical variables \(\mathbf{x}_{(a)}\), \(\mathbf{Q}_{(a)}\), \(\mathbf{S}_{Q(a)}\), and \(\mathbf{M}_{Q(a)}\), and of their higher-order time derivatives. The first and higher-order time derivatives of \(\mathbf{Q}_{(a)}\), \(\mathbf{S}_{Q(a)}\), and \(\mathbf{M}_{Q(a)}\) are removed using integration by parts, while the second and higher-order time derivatives of \(\mathbf{x}_{(a)}\) are removed using a coordinate transformation \(\mathbf{x}_{(a)}\to\mathbf{x}_{(a)}+\delta\mathbf{x}_{(a)}\)[94; 95; 96; 97; 98]. This coordinate transformation changes the Lagrangian as
\[\delta\mathcal{L}=\frac{\delta\mathcal{L}}{\delta\mathbf{x}_{(a)}^{i}}\,\delta\bm {x}_{(a)}^{i}+\mathcal{O}(\delta\mathbf{x}_{(a)}^{2})\,, \tag{14}\]
Figure 1: The diagrammatic correspondence between the four-point EFT-Gravity graphs and the two-point quantum-field-theory (QFT) graphs.
where \(\delta\mathbf{x}_{(a)}\) is chosen such that it removes the undesirable terms from our final Lagrangian. In our case, the process of removing the higher order time derivatives using a coordinate transformation is equivalent to the substitution of the equation of motion for the acceleration \(\mathbf{a}_{(a)}\) and its higher order time derivatives back into the Lagrangian [96]. This procedure converts the Lagrangian \(\mathcal{L}_{\rm eff}\) into the Lagrangian \(\bar{\mathcal{L}}_{\rm eff}\) which depends only on \(\mathbf{x}_{(a)}\), \(\mathbf{v}_{(a)}\), and \(\mathbf{Q}_{(a)}\), but still contains divergent terms.
### 3.3 Removal of spurious divergences
After the removal of the higher-order time derivatives, the effective Lagrangian \(\bar{\mathcal{L}}_{\rm eff}\) contains several divergent terms. These divergent terms are generated by the two topologies shown in Fig. 2. Part of these divergent terms can be removed by adding counterterms built from worldline operators, which can ultimately be removed using field redefinitions. This procedure is therefore equivalent to adding total-derivative terms to the Lagrangian or to finding a canonical transformation of the corresponding Hamiltonian. As a result, these divergences do not have any effect on physical observables and are dubbed spurious divergences.6 For the removal of the 3PN spurious divergence in the point-particle sector, following [100], we can add a total derivative term
Footnote 6: The spurious divergences appearing at 3PN order can also be removed from the equations of motion [99].
\[\mathcal{L}_{\rm TD1}=\left(\frac{1}{c^{6}}\right)\frac{\mathrm{d}}{\mathrm{d }t}\left[\frac{G_{N}^{3}}{r^{2}}\Big{(}c_{1}\left(\mathbf{v}_{(1)}\cdot\mathbf{n} \right)+c_{2}\left(\mathbf{v}_{(2)}\cdot\mathbf{n}\right)\Big{)}\right]\,, \tag{3.9}\]
and gaining insights from [94], we choose another total derivative term
\[\mathcal{L}_{\rm TD2}=\left(\frac{1}{c^{6}}\right)\frac{\mathrm{d}}{\mathrm{d}t}\Bigg\{\frac{G_{N}^{3}}{r^{4}}\left[c_{3}\Big(\mathbf{Q}_{(1)}^{ij}\mathbf{v}_{(1)}^{i}\mathbf{n}^{j}\Big)+c_{4}\Big(\mathbf{Q}_{(1)}^{ij}\mathbf{v}_{(2)}^{i}\mathbf{n}^{j}\Big)+c_{5}\Big(\mathbf{Q}_{(2)}^{ij}\mathbf{v}_{(1)}^{i}\mathbf{n}^{j}\Big)+c_{6}\Big(\mathbf{Q}_{(2)}^{ij}\mathbf{v}_{(2)}^{i}\mathbf{n}^{j}\Big)\right]\] \[\qquad+\frac{G_{N}^{3}}{r^{4}}\Bigg[\Big(\mathbf{Q}_{(1)}^{ij}\mathbf{n}^{i}\mathbf{n}^{j}\Big)\Big(c_{7}\left(\mathbf{v}_{(1)}\cdot\mathbf{n}\right)+c_{8}\left(\mathbf{v}_{(2)}\cdot\mathbf{n}\right)\Big)+\Big(\mathbf{Q}_{(2)}^{ij}\mathbf{n}^{i}\mathbf{n}^{j}\Big)\Big(c_{9}\left(\mathbf{v}_{(1)}\cdot\mathbf{n}\right)+c_{10}\left(\mathbf{v}_{(2)}\cdot\mathbf{n}\right)\Big)\Bigg]\Bigg\}\,, \tag{3.10}\]
to remove the spurious divergence in the 3PN tidal sector. Here the coefficients are represented in the notation,
\[c_{n}\equiv c_{\epsilon n}\frac{1}{\epsilon}+c_{Ln}\log\left(\frac{r}{R}\right)\,,\quad\text{for }n=1,2,\ldots,10\,, \tag{3.11}\]
and their particular values are given as,
\[c_{\epsilon 1} =\frac{1}{3}\left(4m_{(1)}^{3}m_{(2)}-m_{(1)}m_{(2)}^{3}\right)\,,\qquad c_{\epsilon 2}=\frac{1}{3}\left(m_{(1)}^{3}m_{(2)}-4m_{(1)}m_{(2)}^{3}\right)\,,\] \[c_{\epsilon 3} =-\frac{107}{35}m_{(1)}^{2}m_{(2)}^{2}\,,\qquad c_{\epsilon 4}=\frac{107}{35}m_{(1)}^{3}m_{(2)}\,,\] \[c_{\epsilon 5} =-\frac{107}{35}m_{(1)}m_{(2)}^{3}\,,\qquad c_{\epsilon 6}=\frac{107}{35}m_{(1)}^{2}m_{(2)}^{2}\,,\] \[c_{\epsilon 7} =\frac{107}{14}m_{(1)}^{2}m_{(2)}^{2}\,,\qquad c_{\epsilon 8}=-\frac{107}{14}m_{(1)}^{3}m_{(2)}\,,\] \[c_{\epsilon 9} =\frac{107}{14}m_{(1)}m_{(2)}^{3}\,,\qquad c_{\epsilon 10}=-\frac{107}{14}m_{(1)}^{2}m_{(2)}^{2}\,, \tag{3.12}\]
and
\[c_{Li} =-3c_{\epsilon i}\quad\text{for }i=1,2\,,\qquad c_{Li} =-2c_{\epsilon i}\quad\text{for }i=3,\dots,10\,. \tag{28}\]
Now the modified Lagrangian \(\bar{\mathcal{L}}_{\text{eff}}+\mathcal{L}_{\text{TD1}}+\mathcal{L}_{\text{TD2}}\) is free of all spurious divergences, but it still contains divergent terms in the 3PN tidal sector; these require renormalization, which we analyse in the next section.
## 4 Renormalization
In this section, we first analyze the divergent terms present in the effective Lagrangian, which cannot be eliminated through the addition of total derivatives. These divergent terms are generated by the two topologies shown in Fig. 2. We show that to eliminate these remaining divergences, a renormalization procedure is necessary, where the divergent contributions are absorbed into the bare post-adiabatic Love number \(\kappa^{\text{B}}\). As a consequence, we obtain a renormalized post-adiabatic Love number \(\kappa(R)\), which exhibits a nontrivial renormalization group flow.
### Analysis of divergent terms
First, we conduct a complete validity check of our computation, following the observation in Ref. [24]. Specifically, the Hamiltonians \(\widetilde{\mathcal{H}}^{\text{EQ}}\) and \(\widetilde{\mathcal{H}}^{\text{FD}}\) exhibit similarities to the spin-induced quadrupole (\(\widetilde{\mathcal{H}}^{\text{ES}^{2}}\)) and spin-orbit (\(\widetilde{\mathcal{H}}^{\text{SO}}\)) Hamiltonians when certain replacements are applied. Our analysis starts with \(\bar{\mathcal{L}}_{\text{eff}}+\mathcal{L}_{\text{TD1}}\), from which we derive the corresponding Hamiltonians \(\widetilde{\mathcal{H}}^{\text{EQ}}_{\text{3PN}}\) and \(\widetilde{\mathcal{H}}^{\text{FD}}_{\text{3PN}}\). These Hamiltonians are free of divergent and logarithmic terms in the point particle sector but contain such terms in the tidal sector. On the other hand, following [56; 57], we obtain \(\widetilde{\mathcal{H}}^{\text{SO}}_{\text{N}^{2}\text{LO}}\) and \(\widetilde{\mathcal{H}}^{\text{ES}^{2}}_{\text{N}^{3}\text{LO}}\), which are likewise free of divergent and logarithmic terms in the point particle sector but contain such terms in the spinning sector. Further investigation shows that these Hamiltonians are
equivalent to each other up to a canonical transformation, with \(\widetilde{\mathcal{H}}^{\text{EQ}}_{\text{3PN}}\) being equivalent to \(\widetilde{\mathcal{H}}^{\text{ES}^{2}}_{\text{N}^{3}\text{LO}}\) and \(\widetilde{\mathcal{H}}^{\text{FD}}_{\text{3PN}}\) being equivalent to \(\widetilde{\mathcal{H}}^{\text{SO}}_{\text{N}^{2}\text{LO}}\), provided certain replacements are applied. This result serves as a robust consistency check, confirming the accuracy of our computation of the effective Lagrangian and of the procedure for removing the higher-order time derivatives. However, an important distinction between the tidal and the spinning sectors arises while eliminating the divergent terms using a canonical transformation. Notably, the kinetic term of \(\mathbf{Q}^{ij}\) begins at 0PN order, whereas the kinetic term of \(\mathbf{S}^{ij}\) starts at 0.5PN order. This discrepancy has a crucial consequence for the canonical transformation: the contributions from the Poisson bracket of the spin tensor at N\({}^{3}\)LO can be ignored (see Ref. [57], Sec. 4.3.2), whereas the equivalent contributions from the Poisson bracket of the quadrupole tensor (Eq. (7)) at 3PN order cannot. This distinction accounts for the origin of the residual divergent terms in the tidal sector, while all the divergent pieces in the spinning sector can be removed by a suitable canonical transformation.
After the removal of all the spurious divergent terms, we obtain the following residual divergent term
\[\Big{(}\bar{\mathcal{L}}_{\text{eff}}+\mathcal{L}_{\text{TD1}}+\mathcal{L}_{\text{TD2}}\Big{)}_{1/\epsilon}=\Bigg{(}\frac{107}{105}m_{(1)}^{2}G_{N}^{2}\frac{1}{c^{6}}\frac{1}{\epsilon}\Bigg{)}\Bigg{(}\frac{3}{2}\frac{G_{N}m_{(2)}}{r^{3}}\Big{(}\ddot{\mathbf{Q}}_{(1)}^{ij}\mathbf{n}^{i}\mathbf{n}^{j}\Big{)}\Bigg{)}+(1\leftrightarrow 2)\,, \tag{10}\]
where we have performed some algebraic manipulations to present the expression in compact form. We observe that the divergent terms in Eq. (10) cannot be expressed as a total derivative, and that the tail effects for dynamic tides only start at 4PN order. We therefore absorb the divergence into one of the coupling constants of the Lagrangian given in Eq. (1) by designing a counterterm, which eventually leads to finite results. By examining the structure of the terms in Eq. (10), we identify the following counterterm, which we add to the point particle action (3):
\[\mathcal{L}_{\text{CT(a)}}=-\delta_{\kappa(a)}\frac{G_{N}^{2}m_{(a)}^{2}}{c^{6}}\frac{z_{(a)}}{2}\mathbf{Q}_{(a)}^{ij}\ddot{\mathbf{E}}_{(a)}^{ij}\,, \tag{11}\]
where \(\delta_{\kappa(a)}\) contains the divergence. The counterterm Lagrangian contributes through a tree-level diagram at this order, as shown in Fig. 3, which eventually renders the result finite. The particular value of \(\delta_{\kappa(a)}\) that cancels the divergent terms in the effective Lagrangian (10) is
\[\delta_{\kappa(a)}=-\frac{107}{105}\frac{1}{\epsilon}\,. \tag{12}\]
We note that the counterterm has a structure similar to \(\mathcal{L}_{\text{EQ($a$)}}\), while the divergence produced in Eq. (10) is itself sourced by \(\mathcal{L}_{\text{EQ($a$)}}\). The addition of the counterterm to the point particle action
Figure 2: The topologies that give divergent contributions in the tidal sector at 3PN. Solid lines represent the compact objects' worldlines and the dashed lines represent the potential gravitons \(H_{\mu\nu}\).
in Eq. (3) leads to the renormalization of the post-adiabatic Love number
\[\kappa_{d(a)}=\left(\sqrt{4\pi\exp(\gamma_{\rm E})}\,R\right)^{-2\epsilon}\, \left(\kappa_{(a)}+\delta_{\kappa(a)}\right), \tag{43}\]
where the renormalized post-adiabatic Love number \(\kappa_{(a)}\) depends on the external scale \(R\), which we analyse in detail in Sec. 4.2. One important thing to note is that the Lagrangian presented in Eq. (1) can be changed by a field redefinition, which in turn redefines the various Love numbers. Hence, the above procedure of renormalizing the post-adiabatic Love number \(\kappa\) is particular to this representation of the Lagrangian.
Moreover, the counterterm presented in Eq. (41) has an intriguing form, as it turns out to be identical to the counterterms introduced to eliminate the divergences in the radiation emitted by each NS, as discussed in [87; 88]. In Sec. 4.3, we discuss further the connection of this counterterm with radiation. A similar set of counterterms was also observed in Refs. [89; 90].
### Beta-function for post-adiabatic Love number
Both the gravitational constant and the post-adiabatic Love number now depend on the scale \(R\). However, any physical quantity should be independent of the arbitrary scale \(R\). Indeed, by demanding the scale independence of the bare couplings in the Lagrangian, we obtain the corresponding renormalization group equations.
For the \(d\)-dimensional gravitational coupling, we obtain,
\[0=\frac{\mathrm{d}G_{d}}{\mathrm{d}R}=\frac{\mathrm{d}}{\mathrm{d}R}\left[ \left(\sqrt{4\pi e^{\gamma_{\rm E}}}R\right)^{\epsilon}G_{N}\right]\,, \tag{44}\]
which implies a trivial beta function for \(G_{N}\) given by,
\[\beta(G_{N})\equiv R\,\frac{\mathrm{d}G_{N}}{\mathrm{d}R}=-\epsilon\left( \sqrt{4\pi e^{\gamma_{\rm E}}}R\right)^{\epsilon}G_{N}\stackrel{{ \epsilon\to 0}}{{=}}0\,. \tag{45}\]
Similarly, for the post-adiabatic coupling, we obtain
\[0=\frac{\mathrm{d}\kappa_{d(a)}}{\mathrm{d}R}=\frac{\mathrm{d}}{\mathrm{d}R} \left[\left(\sqrt{4\pi\exp(\gamma_{\rm E})}\,R\right)^{-2\epsilon}\left(\kappa _{(a)}+\delta_{\kappa_{(a)}}\right)\right]\,, \tag{46}\]
which results in the following beta function, 7
Footnote 7: A similar beta function can be found for the parameter \(\lambda_{8}\) of the following scalar theory,
\[\mathcal{L}=\frac{1}{2}(\partial_{\mu}\phi)^{2}-\frac{1}{2}m^{2}\phi^{2}- \frac{1}{6!}\lambda_{6}\phi^{6}-\frac{1}{8!}\lambda_{6}^{2}\lambda_{8}\phi^{8}\]
\[\beta(\kappa_{(a)})\equiv R\,\frac{\mathrm{d}\kappa_{(a)}}{\mathrm{d}R}=- \frac{214}{105}\,. \tag{47}\]
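As a quick cross-check of this beta function, one can impose the scale independence of the bare coupling in Eq. (43) symbolically. The following sketch (assuming Python with sympy; it is not part of the computation described in this paper) reproduces Eq. (47):

```python
import sympy as sp

R, eps = sp.symbols('R epsilon', positive=True)
kap = sp.Function('kappa')(R)
c0 = sp.sqrt(4*sp.pi*sp.exp(sp.EulerGamma))        # scheme constant of Eq. (43)

# Bare coupling of Eq. (43), with delta_kappa = -(107/105)/epsilon from Eq. (12)
bare = (c0*R)**(-2*eps) * (kap - sp.Rational(107, 105)/eps)

# Demanding d(bare)/dR = 0 fixes the derivative of the renormalized coupling
dkap = sp.solve(sp.diff(bare, R), kap.diff(R))[0]
print(sp.expand(R*dkap).subs(eps, 0))              # -> -214/105, i.e. Eq. (47)
```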
Figure 3: Counterterm diagram. The cross symbol stands for the insertion of the counterterm constant \(\delta_{\kappa(a)}\). Solid lines represent the compact objects' worldlines and the dashed lines represent the potential gravitons \(H_{\mu\nu}\).
The solution of the above beta function is
\[\kappa_{(a)}(R)=\kappa_{(a)}(R_{0})-\frac{214}{105}\log\left(\frac{R}{R_{0}} \right)\,, \tag{41}\]
where \(R_{0}\) is the integration constant. The idea for choosing \(R\) is the following [49]. In principle, we are allowed to choose the scale \(R\) to be whatever value we want. But as shown in Eq. (39), the \(R\)-dependence enters the effective Hamiltonian via \(\kappa(R)\) and logarithmic terms of the form \(\log\left(r/R\right)\).
When matching the point particle theory to an overarching theory for NSs, we should choose the scale \(R\) somewhere around the cutoff of the point particle theory, i.e., \(R\approx R_{0}\approx R_{\rm NS}\), where \(R_{\rm NS}\) is some characteristic length scale of the NS. This removes all the logarithmic terms from the matching procedure, and we obtain a value for \(\kappa(R_{0}\approx R_{\rm NS})\). Then, when we analyse the two-body system, we have \(R_{\rm orb}\approx r\), which is much larger than \(R\approx R_{\rm NS}\), where \(R_{\rm orb}\) is the characteristic length scale of the binary; hence the logarithm in the Lagrangian blows up. In this case we therefore pick \(R\approx R_{\rm orb}\), so that the logarithm is small and we obtain \(\kappa(R_{\rm orb})\) in the effective Lagrangian. Hence the logarithmic terms play a role only in flowing \(\kappa\) from \(\kappa(R_{\rm NS})\) to \(\kappa(R_{\rm orb})\) using the RG solution in Eq. (41). Something peculiar to observe is that \(\kappa(R_{\rm orb})\to-\infty\) when \(R_{\rm orb}\to\infty\), i.e., when the radius of the binary is large. This is acceptable because, as \(R_{\rm orb}\to\infty\), the factors of \(1/r^{m}\) multiplying \(\kappa(R_{\rm orb})\) in the effective Lagrangian suppress this logarithmic growth.
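To make the flow concrete, a minimal numerical sketch (assuming Python with NumPy; the numbers below are purely illustrative and are not matched values from this paper) evolves \(\kappa\) from the NS scale to the orbital scale using the RG solution in Eq. (41):

```python
import numpy as np

def kappa_run(R, R0, kappa0):
    """RG flow of the post-adiabatic Love number, Eq. (41)."""
    return kappa0 - (214.0/105.0) * np.log(R / R0)

# Illustrative inputs only: a matching value at the NS scale, flowed to an
# orbital scale two orders of magnitude larger.
R_NS, R_orb, kappa_NS = 12.0, 1.2e3, 0.1   # e.g. lengths in km
print(kappa_run(R_orb, R_NS, kappa_NS))    # kappa(R_orb) entering the Lagrangian
```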
Using the above procedure, we will see that the observables are independent of the arbitrary scale \(R\) only after performing the matching to an overarching UV theory of NSs. An example of this can be seen in Ref. [87], where the authors computed the flux emitted by a binary and obtained scale-dependent logarithmic terms similar to those in Eq. (41). Such terms drop out once one includes the terms obtained by matching the quadrupole of the binary to the dynamics of its two point particle constituents [50], and the total flux is then independent of the arbitrary scale.
### Radiation from a single neutron star
In this section, we explore the relationship between the renormalization procedure used in the previous section and the computation of radiative observables from a single NS with a dynamical quadrupole oscillating at a frequency \(\omega\). This is intriguing, as the counterterm required to achieve a finite result in our 3PN conservative computation is the same counterterm that addresses the divergences in the radiative sector.
The radiative sector from the \(\mathcal{L}_{\rm EQ(a)}\) term in the Lagrangian was computed in Refs. [87; 88]. The radiative observable was expressed in terms of a matrix element of a single graviton emitted from the NS. The expressions for these amplitudes up to two loops are given in Eq. (38) of [88], where it was found that these amplitudes contain UV divergent terms, which can be removed by modifying the dynamics of the quadrupole as
\[\mathbf{Q}^{B}_{ij}(\omega)=\mathbf{Q}_{ij}(\omega,\mu)\Bigg{[}1+\frac{107}{105}\left( \frac{G_{N}m_{(a)}\omega}{c^{3}}\right)^{2}\frac{1}{\epsilon}\Bigg{]}\,. \tag{42}\]
If we only consider the \(\mathcal{L}_{\rm EQ(a)}\) term, the above is equivalent to adding counterterms of the form shown in Eq. (40). This can be seen easily as follows,
\[-\frac{z_{(a)}}{2}\mathbf{E}^{ij}_{(a)}\mathbf{Q}^{\rm Bij}_{(a)} =-\frac{z_{(a)}}{2}\mathbf{Q}^{ij}_{(a)}\Bigg{[}1+\frac{107}{105}\left( \frac{G_{N}m_{(a)}}{c^{3}}\right)^{2}\left(-\frac{\mathrm{d}^{2}}{\mathrm{d}t^ {2}}\right)\frac{1}{\epsilon}\Bigg{]}\mathbf{E}^{ij}_{(a)}\] \[=-\frac{z_{(a)}}{2}\mathbf{Q}^{ij}_{(a)}\mathbf{E}^{ij}_{(a)}-\delta_{ \kappa(a)}\frac{G_{N}^{2}m_{(a)}^{2}}{c^{6}}\frac{z_{(a)}}{2}\mathbf{Q}^{ij}_{(a)} \ddot{\mathbf{E}}^{ij}_{(a)}\,. \tag{43}\]
In our case, since the quadrupole is assigned a definite dynamics by the \(\mathcal{L}_{\rm FD(a)}\) and \(\mathcal{L}_{\rm MQ(a)}\) terms, the renormalization of the quadrupole as done in Refs. [87; 88] introduces extra divergent terms. So, rather than renormalizing the quadrupole, we renormalize the post-adiabatic Love number by adding the counterterm given in Eq. (4.2), which also removes the divergence obtained in Eq. (38) of [88] in the radiative sector.
Additionally, in Ref. [49], it was observed that when starting with the point particle action (non-spinning and without tides), the stress-energy tensor and, consequently, the metric produced by a single compact object contain divergent terms. When this analysis is extended to a binary system comprising two such point particles, the divergent metric generated by the first object affects the second object, leading to the emergence of divergences in the two-body interaction potential. The divergent terms from both the metric and the interaction potential were removed by introducing a counterterm in the point particle action, and ultimately these counterterms vanished upon applying a field redefinition in the Lagrangian. We see an analogous situation for the 3PN tidal interaction potential. The radiative sector from the \(\mathcal{L}_{\rm EQ(a)}\) term in the Lagrangian contains divergent pieces; if one computes the metric from this, it should also be divergent, and hence we should expect the same divergence to show up in the two-body tidal interaction potential. The only difference is that the counterterm that removes the tidal divergence cannot be eliminated by a field redefinition, as opposed to its counterpart in the point particle sector.
## 5 Dynamical tides
In this section, we present the result of the effective two-body Hamiltonian with dynamical gravitoelectric tides. We compute the Hamiltonian \(\mathcal{H}\) from the Lagrangian obtained in the previous section using a Legendre transformation
\[\mathcal{H}(\mathbf{x},\mathbf{p},\mathbf{Q})=\sum_{a=1,2}\Big{(}\mathbf{p}^{i}_{(a)}\mathbf{v}^{i}_{(a)}+\mathbf{P}^{ij}_{(a)}\dot{\mathbf{Q}}^{ij}_{(a)}\Big{)}-\mathcal{L}(\mathbf{x},\mathbf{v},\mathbf{Q}). \tag{5.1}\]
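The structure of this transformation can be illustrated on a toy Lagrangian with one orbital and one internal degree of freedom (a sketch assuming Python with sympy; the toy potential \(V\) and the coupling constant are placeholders, not the Lagrangian of this paper):

```python
import sympy as sp

t = sp.symbols('t')
x, Q = sp.Function('x')(t), sp.Function('Q')(t)
xd, Qd = x.diff(t), Q.diff(t)
m, alpha, p, P = sp.symbols('m alpha p P', positive=True)
V = sp.Function('V')

# Toy Lagrangian: one orbital and one internal (quadrupole-like) variable
L = sp.Rational(1, 2)*m*xd**2 + sp.Rational(1, 2)*alpha*Qd**2 - V(x, Q)

# Invert the momenta p = dL/d(xdot), P = dL/d(Qdot) and Legendre-transform
sol = sp.solve([p - sp.diff(L, xd), P - sp.diff(L, Qd)], [xd, Qd], dict=True)[0]
H = sp.simplify((p*xd + P*Qd - L).subs(sol))
print(H)   # -> p**2/(2*m) + P**2/(2*alpha) + V(x(t), Q(t))
```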
We express this Hamiltonian in terms of the total mass of the binary, denoted by \(M=m_{(1)}+m_{(2)}\), the reduced mass by \(\mu=m_{(1)}m_{(2)}/M\), the mass ratio by \(q=m_{(1)}/m_{(2)}\), the symmetric mass ratio \(\nu=\mu/M\), and the antisymmetric mass ratio \(\delta=(m_{(1)}-m_{(2)})/M\), which are related to each other by,
\[\nu=\frac{m_{(1)}m_{(2)}}{M^{2}}=\frac{\mu}{M}=\frac{q}{(1+q)^{2} }=\frac{(1-\delta^{2})}{4}\,. \tag{5.2}\]
We express the results in the center-of-mass (COM) frame of reference and define the momentum in the COM frame as \(\mathbf{p}\equiv\mathbf{p}_{(1)}=-\mathbf{p}_{(2)}\). In the COM frame, the orbital angular momentum is defined as \(\mathbf{L}=\mathbf{r}\times\mathbf{p}\). Hence, we can write \(p^{2}=p_{r}^{2}+L^{2}/r^{2}\), where \(p_{r}=\mathbf{p}\cdot\mathbf{n}\), \(p=|\mathbf{p}|\) and \(L=|\mathbf{L}|\). We write the Hamiltonian in terms of dimensionless quantities, which are obtained by rescaling all the variables as follows
\[\widetilde{\mathbf{p}}=\frac{1}{c}\frac{\mathbf{p}}{\mu}\,,\quad\widetilde {\mathbf{r}}=\frac{c^{2}}{G_{N}}\frac{\mathbf{r}}{M}\,,\quad\widetilde{\mathbf{L}}=\frac{ c}{G_{N}}\frac{\mathbf{L}}{M\mu}\,,\quad\widetilde{\mathcal{H}}=\frac{1}{c^{2}} \frac{\mathcal{H}}{\mu}\,,\quad\widetilde{\lambda}=\frac{c^{10}}{G_{N}^{4}} \frac{\lambda}{M^{5}}\,,\] \[\widetilde{\mathbf{Q}}_{(a)}=\frac{c^{4}}{G_{N}^{2}}\frac{\mathbf{Q}_{(a )}}{M^{2}\mu}\,,\quad\widetilde{\mathbf{S}}_{Q(a)}=\frac{c}{G_{N}}\frac{\mathbf{S}_{Q(a )}}{M\mu}\,,\quad\text{and}\quad\widetilde{\mathbf{M}}_{Q(a)}=\frac{1}{c^{2}}\frac{ \mathbf{M}_{Q(a)}}{\mu}\,. \tag{5.3}\]
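For numerical evaluations these rescalings are mechanical; the helper below (a minimal sketch assuming Python with SI inputs; the function name and interface are illustrative, not from the paper's ancillary files) maps the dimensionful variables to the tilde variables of Eq. (5.3), with the remaining quantities following the same pattern:

```python
def to_dimensionless(p, r, L, H, lam, m1, m2, G=6.674e-11, c=2.998e8):
    """Rescalings of Eq. (5.3): p~ = p/(mu c), r~ = c^2 r/(G M),
    L~ = c L/(G M mu), H~ = H/(mu c^2), lam~ = c^10 lam/(G^4 M^5)."""
    M = m1 + m2
    mu = m1 * m2 / M
    return {
        'p': p / (mu * c),
        'r': c**2 * r / (G * M),
        'L': c * L / (G * M * mu),
        'H': H / (mu * c**2),
        'lam': c**10 * lam / (G**4 * M**5),
    }
```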
The total EFT Hamiltonian in the dimensionless parameters is given by
\[\widetilde{\mathcal{H}}=\widetilde{\mathcal{H}}_{\rm pp}+ \widetilde{\mathcal{H}}_{\rm DT}\, \tag{5.4}\]
where
\[\widetilde{\mathcal{H}}_{\rm pp} =\widetilde{\mathcal{H}}_{\rm 0PN}+\left(\frac{1}{c^{2}}\right)\widetilde{\mathcal{H}}_{\rm 1PN}+\left(\frac{1}{c^{4}}\right)\widetilde{\mathcal{H}}_{\rm 2PN}+\left(\frac{1}{c^{6}}\right)\widetilde{\mathcal{H}}_{\rm 3PN}+\mathcal{O}\left(\frac{1}{c^{8}}\right)\,, \tag{5.5a}\] \[\widetilde{\mathcal{H}}_{\rm DT} =\sum_{n=0}^{3}\left(\frac{1}{c^{2}}\right)^{n}\,\left(\widetilde{\mathcal{H}}_{n\rm PN}^{\rm EQ}+\widetilde{\mathcal{H}}_{n\rm PN}^{\rm FD}+\widetilde{\mathcal{H}}_{n\rm PN}^{\rm MQ}\right)+\left(\frac{1}{c^{6}}\right)\widetilde{\mathcal{H}}_{\rm 3PN}^{\rm\ddot{E}Q}+\mathcal{O}\left(\frac{1}{c^{8}}\right)\,. \tag{5.5b}\]
The point particle Hamiltonian up to 3PN is presented in the same gauge in Appendix C.1 of Ref. [56], the tidal Hamiltonian up to 2PN is presented in Ref. [58], and the novel 3PN result is given as
\[\widetilde{\mathcal{H}}_{\rm 3PN}^{\rm EQ+\tilde{E}Q}= \left(\widetilde{\mathcal{Q}}_{(1)}^{ij}\widetilde{\mathcal{F}}^ {i}\widetilde{\mathcal{F}}^{j}\right)\left\{\widetilde{L}^{6}\left(-\frac{15 \nu^{4}}{32\widetilde{r}^{11}}-\frac{135\nu^{3}}{32\widetilde{r}^{11}}+\frac{45 \nu^{2}}{32\widetilde{r}^{11}}\right)+\widetilde{L}^{4}\left(\widetilde{p}_{r} ^{2}\left(-\frac{15\nu^{4}}{8\widetilde{r}^{9}}-\frac{585\nu^{3}}{32\widetilde {r}^{9}}+\frac{327\nu^{2}}{32\widetilde{r}^{9}}\right)\right.\right.\] \[\left.\left.+\frac{75\nu^{3}}{4\widetilde{r}^{10}}+\frac{137\nu^ {2}}{4\widetilde{r}^{10}}-\frac{101\nu}{32\widetilde{r}^{10}}\right)+ \widetilde{L}^{2}\left[\widetilde{p}_{r}^{4}\left(-\frac{51\nu^{4}}{16 \widetilde{r}^{7}}-\frac{225\nu^{3}}{8\widetilde{r}^{7}}+\frac{1527\nu^{2}}{3 2\widetilde{r}^{7}}-\frac{105\nu}{8\widetilde{r}^{7}}\right)\right.\right.\] \[\left.\left.+\widetilde{p}_{r}^{2}\left(\frac{1977\nu^{3}}{32 \widetilde{r}^{8}}-\frac{383\nu^{2}}{16\widetilde{r}^{8}}-\frac{79\nu}{32 \widetilde{r}^{8}}\right)+\frac{\left(-3763200\;\bar{\kappa}_{(1)}+77175\pi^{ 2}-35396224\right)\nu^{2}}{501760\widetilde{r}^{9}}\right.\right.\] \[\left.+\frac{\left(3675\;\bar{\kappa}_{(1)}+16651\nu\right)}{490 \widetilde{r}^{9}}-\frac{1269\nu^{3}}{112\widetilde{r}^{9}}\right]+ \widetilde{p}_{r}^{2}\left(\frac{3\left(3763200\;\bar{\kappa}_{(1)}-77175\pi^{ 2}+11490272\right)\nu^{2}}{627200\widetilde{r}^{7}}\right.\right.\] \[\left.\left.-\frac{3(117600\;\bar{\kappa}_{(1)}+679671\right)\nu}{ 19600\widetilde{r}^{7}}-\frac{153\nu^{3}}{7\widetilde{r}^{7}}\right)+ \widetilde{p}_{r}^{4}\left(\frac{477\nu^{3}}{32\widetilde{r}^{8}}-\frac{291\nu ^{2}}{8\widetilde{r}^{6}}+\frac{327\nu}{32\widetilde{r}^{6}}\right)\] \[\left.\left.+\widetilde{p}_{r}^{6}\left(-\frac{6\nu^{4}}{\widetilde {r}^{5}}+\frac{45\nu^{3}}{2\widetilde{r}^{5}}-\frac{135\nu^{2}}{8\widetilde{r} ^{5}}+\frac{105\nu}{32\widetilde{r}^{5}}+\frac{\left(705600\;\bar{\kappa}_{(1 )}-1157625\pi^{2}+14115256\right)\nu^{2}}{156800\widetilde{r}^{8}}\right.\right.\] \[\left.\left.-\frac{3(14700\;\bar{\kappa}_{(1)}+44197)\nu}{9800 \widetilde{r}^{8}}+\frac{1}{q}\right[\widetilde{L}^{6}\left(-\frac{15\nu^{4}}{3 2\widetilde{r}^{11}}-\frac{45\nu^{3}}{16\widetilde{r}^{11}}+\frac{105\nu^{2}} {32\widetilde{r}^{11}}-\frac{21\nu}{32\widetilde{r}^{11}}\right)\right.\right.\] \[\left.\left.+\widetilde{L}^{4}\left(\widetilde{p}_{r}^{2}\left(- \frac{15\nu^{4}}{8\widetilde{r}^{9}}-\frac{399\nu^{3}}{32\widetilde{r}^{9}}+ \frac{159\nu^{2}}{32\widetilde{r}^{9}}-\frac{39\nu}{32\widetilde{r}^{9}}\right) +\frac{591\nu^{3}}{32\widetilde{r}^{10}}+\frac{973\nu^{2}}{32\widetilde{r}^{1 0}}-\frac{15\nu}{2\widetilde{r}^{10}}\right)\right.\] \[\left.\left.+\widetilde{L}^{2}\left(\widetilde{p}_{r}^{4}\left(- \frac{51\nu^{4}}{16\widetilde{r}^{7}}-\frac{321\nu^{3}}{16\widetilde{r}^{7}}+ \frac{399\nu^{2}}{32\widetilde{r}^{7}}-\frac{3\nu}{32\widetilde{r}^{7}}\right) +\widetilde{p}_{r}^{2}\left(\frac{1977\nu^{3}}{32\widetilde{r}^{8}}+\frac{933 \nu^{2}}{32\widetilde{r}^{8}}-\frac{6\nu}{\widetilde{r}^{8}}\right)\right.\right.\] \[\left.\left.+\frac{\left(77175\pi^{2}-1536(2450\;\bar{\kappa}_{(1 )}+28169)\right)\nu^{2}}{501760\widetilde{r}^{9}}-\frac{1269\nu^{3}}{112 \widetilde{r}^{9}}-\frac{369\nu}{8\widetilde{r}^{9}}\right)\right.\] \[\left.+\widetilde{p}_{r}^{2}\left(-\frac{3\left(-3763200\;\bar{ \kappa}_{(1)}+77175\pi^{2}-659872\right)\nu^{2}}{627200\widetilde{r}^{7}}- \frac{153\nu^{3}}{7\widetilde{r}^{7}}+\frac{117\nu}{8\widetilde{r}^{7}}\right)\right.\] 
\[\left.\left.+\widetilde{p}_{r}^{4}\left(\frac{219\nu^{3}}{8 \widetilde{r}^{6}}-\frac{735\nu^{2}}{32\widetilde{r}^{6}}+\frac{9\nu}{2 \widetilde{r}^{6}}\right)+\widetilde{p}_{r}^{6}\left(-\frac{6\nu^{4}}{ \widetilde{r}^{5}}+\frac{27\nu^{3}}{2\widetilde{r}^{5}}-\frac{45\nu^{2}}{8 \widetilde{r}^{5}}+\frac{15\nu}{32\widetilde{r}^{5}}\right)\right.\right.\] \[\left.\left.+\frac{\left(705600\;\bar{\kappa}_{(1)}-1157625\pi^{ 2}+15841456\right)\nu^{2}}{156800\widetilde{r}^{8}}+\frac{63\nu}{2 \widetilde{r}^{8}}\right]\right\}\right\}\] \[\left.+\left(\widetilde{\mathcal{Q}}_{(1)}^{ij}\widetilde{L}^{ i}\widetilde{L}^{j}\right)\left\{\widetilde{p}_{r}^{4}\left(-\frac{21\nu^{4}}{16 \widetilde{r}^{5}}-\frac{45\nu^{3}}{4\widetilde{r}^{5}}+\frac{24\nu^{2}}{ \widetilde{r}^{5}}-\frac{105\nu}{16\widetilde{r}^{5}}\right)+\widetilde{p}_{r} ^{2}\left(\frac{225\nu^{3}}{16\widetilde{r}^{6}}+\frac{577\nu^{2}}{8 \widetilde{r}^{6}}+\frac{77\nu}{4\widetilde{r}^{6}}\right)\right.\right.\] \[\left.\left.+\widetilde{L}^{4}\left(\frac{15\nu^{2}}{16\widetilde{ r}^{9}}-\frac{45\nu^{3}}{16\widetilde{r}^{9}}\right)+\widetilde{L}^{2}\left( \widetilde{p}_{r}^{2}\left(-\frac{3\nu^{4}}{8\widetilde{r}^{7}}-\frac{45\nu^{3}}{4 \widetilde{r}^{7}}+\frac{21\nu^{2}}{4\widetilde{r}^
\[+\frac{1}{q}\left[\widetilde{L}^{4}\left(-\frac{45\nu^{3}}{16 \widetilde{r}^{9}}+\frac{45\nu^{2}}{16\widetilde{r}^{9}}-\frac{9\nu}{16 \widetilde{r}^{9}}\right)+\widetilde{L}^{2}\left(\widetilde{p}_{r}^{2}\left(- \frac{3\nu^{4}}{8\widetilde{r}^{7}}-\frac{12\nu^{3}}{\widetilde{r}^{7}}+\frac {39\nu^{2}}{8\widetilde{r}^{7}}-\frac{3\nu}{4\widetilde{r}^{7}}\right)\right.\] \[\left.+\frac{9\nu^{3}}{2\widetilde{r}^{8}}+\frac{325\nu^{2}}{16 \widetilde{r}^{8}}-\frac{6\nu}{\widetilde{r}^{8}}\right)+\widetilde{p}_{r}^{4 }\left(-\frac{21\nu^{4}}{16\widetilde{r}^{5}}-\frac{237\nu^{3}}{16\widetilde{r }^{5}}+\frac{69\nu^{2}}{8\widetilde{r}^{5}}-\frac{3\nu}{16\widetilde{r}^{5}}\right)\] \[+\widetilde{p}_{r}^{2}\left(\frac{15\nu^{3}}{\widetilde{r}^{6}}+ \frac{1065\nu^{2}}{16\widetilde{r}^{8}}-\frac{3\nu}{\widetilde{r}^{6}}\right) +\frac{\left(384(9800~{}\bar{\kappa}_{(1)}-56157)-77175\pi^{2}\right)\nu^{2}} {1254400\widetilde{r}^{7}}-\frac{3\nu^{3}}{2\widetilde{r}^{7}}-\frac{123\nu}{4 \widetilde{r}^{7}}\right]\Bigg{\}}\] \[+\left(\widetilde{\mathcal{Q}}_{(1)}^{ij}\widetilde{r}^{\,i} \widetilde{L}^{j}\right)\widetilde{p}_{r}\Bigg{\{}\widetilde{L}^{4}\left(\frac {15\nu^{4}}{16\widetilde{r}^{9}}+\frac{45\nu^{3}}{16\widetilde{r}^{9}}-\frac{9 \nu^{2}}{16\widetilde{r}^{9}}\right)+\widetilde{L}^{2}\left(\widetilde{p}_{r}^ {2}\left(\frac{63\nu^{4}}{16\widetilde{r}^{7}}+\frac{45\nu^{3}}{8\widetilde{r} ^{7}}-\frac{45\nu^{2}}{8\widetilde{r}^{7}}\right)\right.\] \[\left.-\frac{105\nu^{3}}{8\widetilde{r}^{8}}+\frac{5\nu^{2}}{2 \widetilde{r}^{8}}-\frac{\nu}{2\widetilde{r}^{8}}\right)+\widetilde{p}_{r}^{4 }\left(\frac{123\nu^{4}}{16\widetilde{r}^{5}}-\frac{45\nu^{3}}{2\widetilde{r}^ {5}}+\frac{129\nu^{2}}{16\widetilde{r}^{5}}\right)+\widetilde{p}_{r}^{2}\left( -\frac{81\nu^{3}}{8\widetilde{r}^{6}}-\frac{229\nu^{2}}{2\widetilde{r}^{6}}+ \frac{453\nu}{8\widetilde{r}^{6}}\right)\] \[+\frac{\left(77175\pi^{2}-384(9800~{}\bar{\kappa}_{(1)}+8103) \right)\nu^{2}}{156800\widetilde{r}^{7}}+\frac{\left(235200~{}\bar{\kappa}_{( 1)}+2662147\right)\nu}{9800\widetilde{r}^{7}}+\frac{93\nu^{3}}{7\widetilde{r} ^{7}}\] \[+\frac{1}{q}\left[\widetilde{L}^{4}\left(\frac{15\nu^{4}}{16 \widetilde{r}^{9}}+\frac{3\nu^{3}}{\widetilde{r}^{9}}-\frac{33\nu^{2}}{16 \widetilde{r}^{9}}+\frac{3\nu}{16\widetilde{r}^{9}}\right)+\widetilde{L}^{2} \left(\widetilde{p}_{r}^{2}\left(\frac{63\nu^{4}}{16\widetilde{r}^{7}}+\frac{ 147\nu^{3}}{16\widetilde{r}^{7}}-\frac{45\nu^{2}}{8\widetilde{r}^{7}}+\frac{9 \nu}{8\widetilde{r}^{7}}\right)\right.\right.\] \[\left.\left.-\frac{15\nu^{3}}{\widetilde{r}^{8}}+\frac{3\nu^{2}} {8\widetilde{r}^{8}}+\frac{3\nu}{\widetilde{r}^{8}}\right)+\widetilde{p}_{r}^{4 }\left(\frac{123\nu^{4}}{16\widetilde{r}^{5}}-\frac{81\nu^{3}}{16\widetilde{r }^{5}}-\frac{57\nu^{2}}{16\widetilde{r}^{5}}+\frac{15\nu}{16\widetilde{r}^{5}} \right)+\widetilde{p}_{r}^{2}\left(-\frac{171\nu^{3}}{8\widetilde{r}^{6}}- \frac{555\nu^{2}}{8\widetilde{r}^{6}}+\frac{9\nu}{\widetilde{r}^{6}}\right)\right.\] \[\left.\left.+\frac{\left(-3763200~{}\bar{\kappa}_{(1)}+77175\pi^{ 2}-5256352\right)\nu^{2}}{156800\widetilde{r}^{7}}+\frac{93\nu^{3}}{7 \widetilde{r}^{7}}\right]\right\}\] \[+\left(1\leftrightarrow 2\right),\] (5.6a) \[\widetilde{\mathcal{H}}_{3\text{PN}}^{\text{FD}}= \left(\widetilde{\mathcal{S}}_{Q(1)}\cdot\widetilde{\mathcal{L}} \right)\Bigg{\{}\widetilde{L}^{4}\left(\frac{11\nu^{3}}{8\widetilde{r}^{7}}+ \frac{9\nu^{2}}{16\widetilde{r}^{7}}-\frac{\nu}{4\widetilde{r}^{7}}\right)+ 
\widetilde{L}^{2}\left(\widetilde{p}_{r}^{2}\left(\frac{23\nu^{3}}{4\widetilde{r} ^{5}}+\frac{21\nu^{2}}{8\widetilde{r}^{5}}-\frac{2\nu}{\widetilde{r}^{5}} \right)-\frac{17\nu^{3}}{16\widetilde{r}^{6}}\right.\] \[\left.-\frac{275\nu^{2}}{16\widetilde{r}^{6}}-\frac{17\nu}{4 \widetilde{r}^{6}}\right)+\widetilde{p}_{r}^{2}\left(-\frac{63\nu^{3}}{16 \widetilde{r}^{4}}-\frac{207\nu^{2}}{8\widetilde{r}^{4}}+\frac{9\nu}{2 \widetilde{r}^{4}}\right)+\widetilde{p}_{r}^{4}\left(\frac{10\nu^{3}}{\widetilde {r}^{3}}-\frac{81\nu^{2}}{8\widetilde{r}^{3}}+\frac{2\nu}{\widetilde{r}^{3}} \right)+\frac{75\nu^{2}}{8\widetilde{r}^{5}}+\frac{25\nu}{2\widetilde{r}^{5}}\] \[+\frac{1}{q}\left[\widetilde{L}^{4}\left(\frac{17\nu^{3}}{16 \widetilde{r}^{7}}-\frac{9\nu^{2}}{4\widetilde{r}^{7}}+\frac{7\nu}{16 \widetilde{r}^{7}}\right)+\widetilde{L}^{2}\left(\widetilde{p}_{r}^{2}\left( \frac{73\nu^{3}}{16\widetilde{r}^{5}}-\frac{3\nu^{2}}{\widetilde{r}^{5}}+ \frac{7\nu}{8\widetilde{r}^{5}}\right)-\frac{17\nu^{3}}{16\widetilde{r}^{6}}- \frac{133\nu^{2}}{16\widetilde{r}^{6}}+\frac{27\nu}{8\widetilde{r}^{6}}\right)\] \[+\widetilde{p}_{r}^{2}\left(-\frac{63\nu^{3}}{16\widetilde{r}^{4}} -\frac{411\nu^{2}}{16\widetilde{r}^{4}}+\frac{27\nu}{8\widetilde{r}^{4}}\right)+ \widetilde{p}_{r}^{4}\left(\frac{131\nu^{3}}{16\widetilde{r}^{3}}-\frac{9\nu^{2 }}{2\widetilde{r}^{3}}+\frac{7\nu}{16\widetilde{r}^{3}}\right)+\frac{43\nu^{2}}{8 \widetilde{r}^{5}}+\frac{21\nu}{2\widetilde{r}^{5}}\right]\Bigg{\}}\] \[+\left(1\leftrightarrow 2\right),\] (5.6b) \[\widetilde{\mathcal{H}}_{3\text{PN}}^{\text{MQ}}= \widetilde{\boldsymbol{M}}_{Q(1)}\left\{\widetilde{L}^{6}\left( \frac{5\nu^{2}}{16\widetilde{r}^{6}}-\frac{5\nu^{3}}{8\widetilde{r}^{6}} \right)+\widetilde{L}^{4}\left(\widetilde{p}_{r}^{2}\left(\frac{15\nu^{2}}{16 \widetilde{r}^{4}}-\frac{15\nu^{3}}{8\widetilde{r}^{4}}\right)+\frac{3\nu^{3} }{8\widetilde{r}^{5}}+\frac{\nu^{2}}{\widetilde{r}^{5}}\right)\right.\] \[\left.+\widetilde{L}^{2}\left(\widetilde{p}_{r}^{2}\left(\frac {\nu^{3}}{\widetilde{r}^{3}}+\frac{\nu^{2}}{4\widetilde{r}^{3}}+\frac{\nu}{ \widetilde{r}^
\[+\widetilde{p}_{r}^{6}\left(-\frac{15\nu^{3}}{16}+\frac{5\nu^{2}}{4}- \frac{5\nu}{16}\right)+\widetilde{p}_{r}^{4}\left(\frac{\nu^{3}}{\widetilde{r}} +\frac{5\nu^{2}}{\widetilde{r}}-\frac{15\nu}{8\widetilde{r}}\right)+\frac{3\nu ^{2}}{4\widetilde{r}^{3}}-\frac{\nu}{2\widetilde{r}^{3}}\right]\right\}\] \[+\left(1\leftrightarrow 2\right). \tag{100c}\]
where
\[\bar{\kappa}_{(1)}(R)=\kappa_{(1)}(R)-\frac{214}{105}\log\left( \frac{r}{R}\right)\,, \tag{101}\]
combines the terms that depend on the external length scale \(R\). The effective Hamiltonian in a general reference frame is provided in the ancillary file Hamiltonian-DT.m.
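By construction, \(\bar{\kappa}_{(1)}\) is independent of the arbitrary scale \(R\) once \(\kappa_{(1)}(R)\) obeys the RG solution in Eq. (41); a minimal numerical check (assuming Python with NumPy, with purely illustrative numbers) makes this explicit:

```python
import numpy as np

def kappa_run(R, R0, kappa0):
    # RG solution of Eq. (41)
    return kappa0 - (214.0/105.0) * np.log(R / R0)

# kappa_bar = kappa(R) - (214/105) log(r/R) should come out the same for any R
R0, kappa0, r = 12.0, 0.1, 500.0           # illustrative values only
for R in (10.0, 120.0, 3.0e3):
    print(kappa_run(R, R0, kappa0) - (214.0/105.0)*np.log(r/R))
```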
## 6 Adiabatic tides
In this section, we present our results for the adiabatic tides; that is, we take the \(\omega_{f}\to\infty\) limit of the Hamiltonian (100). This eliminates the dependence of the Hamiltonian on the variables \(\mathbf{Q}_{(a)}\), \(\mathbf{S}_{Q(a)}\), and \(\mathbf{M}_{Q(a)}\), and hence simplifies the subsequent calculations. We then compute the binding energy and scattering angle using the adiabatic Hamiltonian and compare them against known results in the literature.
The adiabatic limit physically refers to the quadrupole mode being locked to the external tidal field induced by the binary companion. In this case, the equation of motion for the \(\mathbf{Q}_{(a)}^{ij}\) is given by
\[\mathbf{Q}_{(a)}^{ij}=-\lambda_{(a)}\mathbf{E}_{(a)}^{ij}-\lambda_{(a)}\kappa_{d(a)}\frac{G_{d}^{2}m_{(a)}^{2}}{c^{6}}\ddot{\mathbf{E}}_{(a)}^{ij}\,. \tag{102}\]
We can then substitute Eq. (102) into the Hamiltonian (100) to obtain the effective Hamiltonian for adiabatic tides.
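A minimal sketch of this substitution (assuming Python with NumPy, taking the \(\epsilon\to 0\) limit where \(\kappa_{d(a)}\to\kappa_{(a)}\) and \(G_{d}\to G_{N}\); the tidal field and its second time derivative are supplied as \(3\times 3\) arrays) reads:

```python
import numpy as np

def Q_adiabatic(E, Eddot, lam, kappa, m, G=6.674e-11, c=2.998e8):
    """Adiabatic quadrupole of Eq. (102): the mode is locked to the external
    tidal field E_ij, with a post-adiabatic correction proportional to kappa."""
    return -lam*np.asarray(E) - lam*kappa*(G**2 * m**2 / c**6)*np.asarray(Eddot)
```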
Computationally, it is more efficient to compute the dynamic tides and then take the adiabatic limit than to compute the adiabatic tides directly from the Lagrangian (3). This is because all the Feynman diagrams generated by (3) are factorizable due to the \(E^{2}\) terms. Hence, in this case we can compute 4-loop (\(G_{N}^{5}\)) terms in the adiabatic observables by performing a 3-loop computation with the dynamic tides. The counterterm for an adiabatic calculation starting from the Lagrangian given in Eq. (3) can be obtained by using Eq. (101) in Eq. (3).
### Effective Hamiltonian
The effective adiabatic Hamiltonian is given by8,
Footnote 8: At leading order, adiabatic tides contribute at 5PN, which can easily be seen by writing the Hamiltonian in terms of the dimensionless Love number, as shown in Eq. (103). We show here the relative scaling with respect to the 5PN-order contribution.
\[\widetilde{\mathcal{H}}=\widetilde{\mathcal{H}}_{\text{pp}}+ \widetilde{\mathcal{H}}_{\text{AT}}\, \tag{103}\]
where
\[\widetilde{\mathcal{H}}_{\text{pp}} =\widetilde{\mathcal{H}}_{\text{0PN}}+\left(\frac{1}{c^{2}}\right)\widetilde{\mathcal{H}}_{\text{1PN}}+\left(\frac{1}{c^{4}}\right)\widetilde{\mathcal{H}}_{\text{2PN}}+\left(\frac{1}{c^{6}}\right)\widetilde{\mathcal{H}}_{\text{3PN}}+\mathcal{O}\left(\frac{1}{c^{8}}\right)\,, \tag{104a}\] \[\widetilde{\mathcal{H}}_{\text{AT}} =\widetilde{\mathcal{H}}_{\text{0PN}}^{\text{AT}}+\left(\frac{1}{c^{2}}\right)\widetilde{\mathcal{H}}_{\text{1PN}}^{\text{AT}}+\left(\frac{1}{c^{4}}\right)\widetilde{\mathcal{H}}_{\text{2PN}}^{\text{AT}}+\left(\frac{1}{c^{6}}\right)\widetilde{\mathcal{H}}_{\text{3PN}}^{\text{AT}}+\mathcal{O}\left(\frac{1}{c^{8}}\right). \tag{104b}\]
The adiabatic Hamiltonian up to 2PN is presented in [58], and the novel 3PN result is given as
\[\widetilde{\mathcal{H}}^{\text{AT}}_{\text{3PN}}= \widetilde{\lambda}_{(1)}\left\{\widetilde{L}^{6}\left(-\frac{45 \nu^{3}}{32\widetilde{r}^{12}}-\frac{15\nu^{2}}{4\widetilde{r}^{12}}-\frac{33 \nu}{32\widetilde{r}^{12}}\right)+\widetilde{L}^{4}\left(\widetilde{p}_{r}^{2 }\left(-\frac{189\nu^{3}}{32\widetilde{r}^{10}}+\frac{99\nu^{2}}{8\widetilde{r }^{10}}+\frac{711\nu}{32\widetilde{r}^{10}}\right)-\frac{9\nu^{2}}{\widetilde{r }^{11}}+\frac{15\nu}{4\widetilde{r}^{11}}\right)\] \[+\widetilde{L}^{2}\left[\widetilde{p}_{r}^{2}\left(-\frac{9\nu^{3 }}{2\widetilde{r}^{9}}-\frac{2775\nu^{2}}{16\widetilde{r}^{9}}-\frac{72\nu}{ \widetilde{r}^{9}}\right)+\widetilde{p}_{r}^{4}\left(-\frac{99\nu^{3}}{32 \widetilde{r}^{8}}+\frac{108\nu^{2}}{\widetilde{r}^{8}}-\frac{1431\nu}{32 \widetilde{r}^{8}}\right)\right.\] \[\left.+\frac{3(117600\ \bar{\kappa}_{(1)}+1584771)\nu}{19600 \widetilde{r}^{10}}+\frac{93\nu^{2}}{2\widetilde{r}^{10}}\right]+\widetilde{p }_{r}^{2}\left(\frac{9993\nu^{2}}{56\widetilde{r}^{8}}-\frac{3(117600\ \bar{\kappa}_{(1)}+626121)\nu}{9800\widetilde{r}^{8}}\right)\] \[+\widetilde{p}_{r}^{4}\left(-\frac{45\nu^{3}}{\widetilde{r}^{7}}- \frac{729\nu^{2}}{8\widetilde{r}^{7}}+\frac{777\nu}{16\widetilde{r}^{7}}\right) +\widetilde{p}_{r}^{6}\left(\frac{1485\nu^{3}}{32\widetilde{r}^{6}}-\frac{465 \nu^{2}}{8\widetilde{r}^{6}}+\frac{465\nu}{32\widetilde{r}^{6}}\right)-\frac{3 (29400\ \bar{\kappa}_{(1)}+429119)\nu}{9800\widetilde{r}^{9}}\] \[+\frac{1}{q}\Bigg{[}\widetilde{L}^{6}\left(-\frac{15\nu^{3}}{8 \widetilde{r}^{12}}-\frac{75\nu^{2}}{8\widetilde{r}^{12}}-\frac{99\nu}{16 \widetilde{r}^{12}}+\frac{33}{32\widetilde{r}^{12}}\right)+\widetilde{L}^{4} \left(\widetilde{p}_{r}^{2}\left(-\frac{9\nu^{3}}{\widetilde{r}^{10}}-\frac{ 171\nu^{2}}{8\widetilde{r}^{10}}-\frac{27\nu}{16\widetilde{r}^{10}}+\frac{9}{3 2\widetilde{r}^{10}}\right)\right.\] \[\left.+\frac{273\nu^{2}}{16\widetilde{r}^{11}}+\frac{1545\nu}{16 \widetilde{r}^{11}}+\frac{495}{16\widetilde{r}^{11}}\right)+\widetilde{L}^{2} \left(\widetilde{p}_{r}^{4}\left(-\frac{99\nu^{3}}{8\widetilde{r}^{8}}+\frac{ 315\nu^{2}}{8\widetilde{r}^{8}}+\frac{27\nu}{16\widetilde{r}^{8}}-\frac{9}{32 \widetilde{r}^{8}}\right)\right.\] \[\left.+\widetilde{p}_{r}^{2}\left(-\frac{9\nu^{3}}{2\widetilde{r }^{9}}-\frac{831\nu^{2}}{16\widetilde{r}^{9}}+\frac{567\nu}{8\widetilde{r}^{9} }-\frac{99}{8\widetilde{r}^{9}}\right)+\frac{867\nu^{2}}{28\widetilde{r}^{10}} +\frac{3\left(8384+63\pi^{2}\right)\nu}{512\widetilde{r}^{10}}-\frac{1335}{ \widetilde{r}^{10}}\right)\] \[+\widetilde{p}_{r}^{2}\left(\frac{2097\nu^{2}}{16\widetilde{r}^{8 }}-\frac{3\left(20576+63\pi^{2}\right)\nu}{256\widetilde{r}^{8}}+\frac{261}{8 \widetilde{r}^{8}}\right)+\widetilde{p}_{r}^{4}\left(-\frac{45\nu^{3}}{ \widetilde{r}^{7}}-\frac{105\nu^{2}}{8\widetilde{r}^{7}}-\frac{327\nu}{16 \widetilde{r}^{7}}+\frac{99}{16\widetilde{r}^{7}}\right)\] \[+\widetilde{p}_{r}^{6}\left(\frac{99\nu^{3}}{4\widetilde{r}^{6}}- \frac{69\nu^{2}}{8\widetilde{r}^{6}}-\frac{45\nu}{16\widetilde{r}^{6}}+\frac{ 15}{32\widetilde{r}^{6}}\right)-\frac{1599\nu^{2}}{56\widetilde{r}^{9}}+ \frac{\left(6376-945\pi^{2}\right)\nu}{64\widetilde{r}^{9}}+\frac{519}{4 \widetilde{r}^{9}}\right]\Bigg{\}}\] \[+\left(1\leftrightarrow 2\right). \tag{100}\]
This Hamiltonian in a general reference frame is provided in the ancillary file Hamiltonian-AT.m. Following the procedure described in Sec. 5.2 of Ref. [58], we have validated the above result by computing the complete Poincaré algebra [101; 102]. The expression for the center of mass \(\mathbf{G}^{i}\) is given in the ancillary file Poincare_Algebra.m.
### Binding energy
In this section, we compute the binding energy in the COM frame for circular orbits. The gauge-invariant relation between the binding energy and the orbital frequency for circular orbits is obtained by eliminating the dependence on the radial coordinate. For circular orbits we have \(\partial\widetilde{\mathcal{H}}(\widetilde{r},\widetilde{L})/\partial\widetilde{r}=0\). We invert this relation to express \(\widetilde{r}\) as a function of \(\widetilde{L}\). Then, we substitute \(\widetilde{L}\), written as a function of the orbital frequency \(\widetilde{\omega}=\partial\widetilde{\mathcal{H}}(\widetilde{L})/\partial\widetilde{L}\), into the Hamiltonian (101). Following this procedure, we obtain the binding energy \(E\) as
\[E_{\text{AT}}=E_{\text{AT}}^{\text{0PN}}+E_{\text{AT}}^{\text{1PN}}+E_{\text{AT }}^{\text{2PN}}+E_{\text{AT}}^{\text{3PN}}\,, \tag{101}\]
where the energy up to 2PN is presented in [58] and the 3PN result is given as,
\[E_{\text{AT}}^{\text{3PN}}=x^{9}\left\{\left[-\frac{45}{32}\nu^{3} +\frac{74495}{448}\nu^{2}+\left(-\frac{823243}{784}+\frac{8895\pi^{2}}{512} \right)\nu+\frac{894721}{3136}+\frac{321}{7}\left(2\nu-1\right)\log\left(x \widetilde{R}_{\text{orb}}\right)\right]\widetilde{\lambda}_{(+)}\right.\] \[\left.\qquad\qquad+\left.\left[\frac{825}{64}\nu^{2}-\frac{42225}{2 24}\nu+\frac{378751}{3136}-\frac{321}{7}\log\left(x\widetilde{R}_{\text{orb}} \right)\right]\delta\widetilde{\lambda}_{(-)}-45(\widetilde{\lambda}\kappa)_{(+ )}\right\}\,, \tag{102}\]
where \(x=\widetilde{\omega}^{2/3}\), \(\widetilde{R}_{\rm orb}=c^{2}R_{\rm orb}/(G_{N}M)\) and the combined dimensionless Love numbers are denoted by
\[\widetilde{\lambda}_{(\pm)} =\frac{m_{(2)}}{m_{(1)}}\widetilde{\lambda}_{(1)}\,\pm\,\frac{m_{( 1)}}{m_{(2)}}\widetilde{\lambda}_{(2)}\,, \tag{6.7}\] \[(\widetilde{\lambda}\kappa)_{(\pm)} =\frac{m_{(2)}}{m_{(1)}}\widetilde{\lambda}_{(1)}\kappa_{(1)}(R_ {\rm orb})\,\pm\,\frac{m_{(1)}}{m_{(2)}}\widetilde{\lambda}_{(2)}\kappa_{(2)} (R_{\rm orb})\,. \tag{6.8}\]
Here, \(\kappa_{(a)}(R_{\rm orb})\) is evaluated at the orbital length scale \(R_{\rm orb}\), after evolving it using the RG Eq. (4.9) from the matching scale \(R_{\rm NS}\).
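For reference, the 3PN contribution above is straightforward to evaluate numerically; the following is a direct transcription (assuming Python with NumPy; all inputs, in particular the Love numbers, are to be supplied by the user):

```python
import numpy as np

def E_AT_3PN(x, nu, delta, lam_plus, lam_minus, lamkap_plus, R_orb_tilde):
    """3PN adiabatic tidal binding energy, transcribed from the expression
    above, with x = omega~^(2/3) and R_orb_tilde = c^2 R_orb/(G_N M)."""
    lg = np.log(x * R_orb_tilde)
    c_plus = (-45/32*nu**3 + 74495/448*nu**2
              + (-823243/784 + 8895*np.pi**2/512)*nu
              + 894721/3136 + (321/7)*(2*nu - 1)*lg)
    c_minus = 825/64*nu**2 - 42225/224*nu + 378751/3136 - (321/7)*lg
    return x**9 * (c_plus*lam_plus + c_minus*delta*lam_minus - 45*lamkap_plus)
```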
### Scattering angle
Here we present the scattering angle \(\chi\) in the COM frame for the hyperbolic encounter of two stars. We begin by inverting the Hamiltonian \(\mathcal{H}\) (which is a function of \(p_{r}\), \(L\), and \(r\)) to obtain \(p_{r}=p_{r}(\mathcal{H},L,r)\). Then, we use the relation between the Lorentz factor \(\gamma\) and the total energy per total rest mass \(\Gamma=\mathcal{H}/(Mc^{2})\), given by
\[\gamma=\frac{1}{\sqrt{1-v^{2}/c^{2}}}=1+\frac{\Gamma^{2}-1}{2 \nu}\,, \tag{6.9}\]
where \(v\equiv|\dot{\mathbf{r}}|\) is the relative velocity of the compact objects, and the total angular momentum \(L\) and the impact parameter \(b\) are related by \(L=(\mu\gamma vb)/\Gamma\). This allows us to exchange \(\mathcal{H}\) for \(v\) and \(L\) for \(b\). With this, we can then write the scattering angle as
\[\chi(v,b)=-\frac{\Gamma}{\mu\gamma v}\,\int{\rm d}r\,\frac{\partial p_{r}(v,b,r)}{\partial b}-\pi\,. \tag{6.10}\]
Performing this procedure with the Hamiltonian (6.2) yields the scattering angle computed in the COM frame, which we write as
\[\chi_{\rm AT}=\chi_{\rm AT}^{\rm 0PN}+\chi_{\rm AT}^{\rm 1PN}+\chi_{\rm AT}^{\rm 2PN}+\chi_{\rm AT}^{\rm 3PN}\,, \tag{6.11}\]
where the scattering angle up to 2PN is presented in [58] and the 3PN result is given as 9
Footnote 9: For the computation of the logarithmic terms in the scattering angle, see Ref. [103].
\[\frac{\chi_{\rm AT}^{\rm 3PN}}{\Gamma} =\frac{1}{Mb^{4}}\left[\lambda_{(+)}\ \ \delta\lambda_{(-)}\right]\cdot\left\{\pi\left(\frac{G_{N}M}{v^{2}b}\right)^{2} \frac{1575}{256}\begin{bmatrix}1\\ 0\end{bmatrix}\left(\frac{v^{6}}{c^{6}}\right)+\left(\frac{G_{N}M}{v^{2}b} \right)^{3}\frac{1}{70}\begin{bmatrix}18073\\ 2713\end{bmatrix}\left(\frac{v^{6}}{c^{6}}\right)\right.\] \[\quad+\pi\left.\left(\frac{G_{N}M}{v^{2}b}\right)^{4}\left(\frac{ 1}{458752}\begin{bmatrix}304535296-\left(46848512+231525\pi^{2}\right)\nu\\ 51930496\end{bmatrix}+\frac{1605}{64}\log\left(\frac{2b}{R_{\rm sc}}\right) \begin{bmatrix}1-2\nu\\ 1\end{bmatrix}\right)\left(\frac{v^{6}}{c^{6}}\right)\right.\] \[\quad+\left.\left(\frac{G_{N}M}{v^{2}b}\right)^{5}\left\{192 \begin{bmatrix}1\\ 0\end{bmatrix}+48\begin{bmatrix}53-4\nu\\ 5\end{bmatrix}\left(\frac{v^{2}}{c^{2}}\right)+\frac{12}{35}\begin{bmatrix}2875 3-3480\nu\\ 7473-140\nu\end{bmatrix}\left(\frac{v^{4}}{c^{4}}\right)\right.\] \[\quad\qquad\left.+\left(\frac{1}{8575}\begin{bmatrix}\left(584325 \pi^{2}-68190952\right)\nu+128915306\\ 33894506-2486400\nu\end{bmatrix}+\frac{903936}{1225}\log\left(\frac{b}{2R_{ \rm sc}}\right)\begin{bmatrix}1-2\nu\\ 1\end{bmatrix}\right)\left(\frac{v^{6}}{c^{6}}\right)\right\}\Bigg{\}}\] \[\quad+\frac{1}{Mb^{4}}\left[\left(\lambda\kappa\right)_{(+)}\ \ \delta\lambda\kappa_{(-)}\right]\cdot\left\{\pi\left.\left(\frac{G_{N}M}{v^{2}b} \right)^{4}\frac{1575}{64}\begin{bmatrix}-\nu\\ 0\end{bmatrix}\left(\frac{v^{6}}{c^{6}}\right)+\left.\left(\frac{G_{N}M}{v^{2}b} \right)^{5}\frac{25344}{35}\begin{bmatrix}-\nu\\ 0\end{bmatrix}\left(\frac{v^{6}}{c^{6}}\right)\right\}\right.\] \[\quad+\mathcal{O}\left(G_{N}^{6},\frac{v^{8}}{c^{8}}\right)\,, \tag{6.12}\]
where the combined Love numbers are denoted by
\[\lambda_{(\pm)} =\frac{m_{(2)}}{m_{(1)}}\lambda_{(1)}\,\pm\,\frac{m_{(1)}}{m_{(2)}} \lambda_{(2)}\,, \tag{6.13}\] \[(\lambda\kappa)_{(\pm)} =\frac{m_{(2)}}{m_{(1)}}\lambda_{(1)}\kappa_{(1)}(R_{\rm sc})\, \pm\,\frac{m_{(1)}}{m_{(2)}}\lambda_{(2)}\kappa_{(2)}(R_{\rm sc})\,. \tag{6.14}\]
Here, \(\kappa_{(a)}(R_{\rm sc})\) is evaluated at the typical length scale \(R_{\rm sc}\) of the scattering system, after evolving it using the RG Eq. (4.9) from the matching scale \(R_{\rm NS}\).
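As an aside, the relation (6.9) between \(\gamma\) and \(\Gamma\) can be verified symbolically; the sketch below (assuming Python with sympy, and using the standard identification \(\gamma=u_{1}\cdot u_{2}=(s-m_{(1)}^{2}-m_{(2)}^{2})/(2m_{(1)}m_{(2)})\) with \(s=\Gamma^{2}M^{2}\) in units \(c=1\)) confirms it:

```python
import sympy as sp

m1, m2, Gam = sp.symbols('m1 m2 Gamma', positive=True)
M = m1 + m2
nu = m1*m2/M**2
s = (Gam*M)**2                              # squared total COM energy (c = 1)
gamma = (s - m1**2 - m2**2) / (2*m1*m2)     # gamma = u1 . u2

print(sp.simplify(gamma - (1 + (Gam**2 - 1)/(2*nu))))   # -> 0, i.e. Eq. (6.9)
```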
## 7 Conclusions
We calculated the conservative two-body effective Hamiltonian, taking into account the dynamical tidal interactions up to 3PN order. Our approach involves Feynman diagrams up to three loops, which are evaluated using the dimensional regularization scheme. We also studied the adiabatic limit and obtained the adiabatic tidal Hamiltonian up to 3PN order. The fulfillment of the Poincaré algebra constituted an important validation of the Hamiltonian. Additionally, we presented the analytic expressions of two gauge-invariant observables, namely the binding energy for bound orbits and the scattering angle for hyperbolic encounters.
In the considered action, we included the contribution from the non-minimal coupling (the last term of Eq. (1.1)), which turned out to be crucial for ensuring finite observables at 3PN order. Interestingly, we found that not all the divergent pieces can be removed by adding a total derivative to the Lagrangian (canonical transformations on the Hamiltonian). Therefore, we constructed a counterterm to reabsorb those divergent pieces, yielding the renormalization of the post-adiabatic Love number \(\kappa_{(a)}\). Consequently, the latter experiences a renormalization group running, as shown in Sec. 4.2. According to the corresponding beta function, the value of \(\kappa_{(a)}\) increases when the external length scale \(R\) is reduced.
In the adiabatic limit, our results depend on the two Love numbers (Wilson coefficients within the context of point particle EFT), namely, \(\lambda\) (the tidal Love number) and \(\kappa\) (the post-adiabatic Love number). The determination of their specific values in the case of BHs requires a matching computation of an observable between the EFT and BHPT, as described in [104; 105; 106; 107; 108; 109]. Notably, the tidal Love number \(\lambda\) is found to be zero for BHs [111; 112; 32; 33]. However, the specific value of the post-adiabatic Love number \(\kappa\) remains unknown and requires further investigation. In the case of NSs, Love numbers are typically expressed in terms of their EOS. For \(\lambda\), this was accomplished in Refs. [12; 31], and for \(\kappa\) in Ref. [113]. It will be interesting to compute the particular values of \(\kappa\) for different compact objects [114; 115; 116]. The implications of the flow of the post-adiabatic Love number for the EOS of NSs are very intriguing and warrant further investigation.
The adiabatic Love number \(\lambda\) was found to have a non-zero value for Schwarzschild BHs in spacetime dimensions other than four, as shown in Ref. [104]. Consequently, when using dimensional regularization, the Love number \(\lambda\) yields a finite contribution at \(\mathcal{O}(\epsilon=d-3)\), which, in combination with the divergent terms generated by the \(\mathcal{L}_{\rm EQ}\), results in a finite \(\mathcal{O}(\epsilon^{0})\) contribution. However, such contributions to observables could be absorbed into the contributions from the counterterm, so that the number of degrees of freedom to be matched does not increase. Furthermore, matching in arbitrary dimension \(d\) or in \(d=3\) will in general result in different values for \(\lambda\) and \(\kappa\), but observables are the same in the limit \(\epsilon\to 0\). In this context, it is also very interesting to explore the role of evanescent operators
[30], which are non-zero only in spacetime dimensions other than four. It was observed in [117, 118, 119] that the scattering amplitudes from such operators can affect the matching equation determining the Wilson coefficients. Also, the appearance of such operators in counterterms originating from physical operators leads to the mixing of physical and evanescent operators during the RG evolution [117].
In this work, we focused on the dynamical gravitoelectric quadrupolar tides, but our framework and our automated computational techniques are general enough to be extended to higher-order corrections and to other types of tidal effects, such as the dynamical gravitomagnetic tides [64, 48], as well as to incorporate higher-order multipolar tides. The coupling of the oscillation modes of the NS with other degrees of freedom, such as its spin [120, 121, 63, 122] or other oscillation modes [123, 124, 125, 126, 127, 128], could also be incorporated. Finally, following Refs. [24, 25], our 3PN Hamiltonians can be added to time-domain effective-one-body waveform models to improve their agreement with numerical relativity simulations.
## Acknowledgements
We thank Sumanta Chakraborty, Thibault Damour, Quentin Henry, Mikhail Ivanov, Gustav Jakobsen, Jung-Wook Kim, Oliver Long, Elisa Maggio, Saketh MVS and Ira Rothstein for insightful comments and discussions. The work of M.K.M is supported by Fellini - Fellowship for Innovation at INFN funded by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 754496. H.O.S acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG) - Project No. 386119226. R.P.'s research is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Projektnummer 417533893/GRK2575 "Rethinking Quantum Field Theory".
# Fairness in Preference-based Reinforcement Learning

Umer Siddique, Abhinav Sinha, Yongcan Cao. arXiv:2306.09995v2, 2023-06-16.
###### Abstract
In this paper, we address the issue of fairness in preference-based reinforcement learning (PbRL) in the presence of multiple objectives. The main objective is to design control policies that can optimize multiple objectives while treating each objective fairly. Toward this objective, we design a new fairness-induced preference-based reinforcement learning or FPbRL. The main idea of FPbRL is to learn vector reward functions associated with multiple objectives via new _welfare-based_ preferences rather than _reward-based_ preference in PbRL, coupled with policy learning via maximizing a generalized Gini welfare function. Finally, we provide experiment studies on three different environments to show that the proposed FPbRL approach can achieve both efficiency and equity for learning effective and fair policies.
## 1 Introduction
The broad application of reinforcement learning (RL) faces a significant challenge, namely, the design of appropriate reward functions that align with specific mission objectives in given environments. To mitigate this challenge, preference-based RL (PbRL) (see, for example, (Christiano et al., 2017)) has emerged as a promising paradigm, leveraging human feedback to eliminate the need for manual reward function design. However, real-world missions often entail multiple objectives and the consideration of preferences among diverse users, necessitating a balanced approach. Existing PbRL methods primarily focus on maximizing a single performance metric, neglecting the crucial aspect of equity or fairness, e.g., (Stiennon et al., 2020; Wu et al., 2021; Lee et al., 2021). Consequently, the lack of fairness considerations poses a barrier to the widespread deployment of PbRL for systems affecting multiple end-users when it is critical to address fairness among these users.
To address this critical gap, the development of methods enabling fairness in PbRL becomes imperative. While recent advancements have explored fairness in RL, albeit not within the PbRL framework, notable contributions in, e.g., (Weng, 2019; Siddique et al., 2020; Fan et al., 2022), have employed welfare functions to ensure fairness in the single-agent RL setting. Furthermore, the work in (Zimmer et al., 2021) considered fairness in a multi-agent RL setting.
This paper proposes an approach that builds upon existing studies on fairness, focusing on a PbRL setting. In particular, rather than relying on known ground truth rewards, our method involves learning fair policies by incorporating fairness directly into the PbRL paradigm, thereby eliminating the need for hand-crafted reward functions. By doing so, we aim to address fairness in PbRL without compromising on its advantages.
**Contributions.** In this paper, we present a novel approach that addresses fairness in PbRL. Our method introduces a new technique to learn vector rewards associated with multiple objectives by leveraging welfare-based preferences rather than the reward-based preferences of (Christiano et al., 2017). Hence, the proposed approach provides new insights and techniques to address fairness in PbRL. We validate the effectiveness of our approach through comprehensive experiments conducted in three real-world domains. The proposed approach is expected to provide solutions for RL problems where reward functions are absent or too costly to design.
## 2 Related Work
Equity and fairness, especially in real-world missions with multiple objectives and diverse users, are imperative. These concepts have been given careful consideration in many domains, including economics (Moulin, 2004b), political philosophy (Rawls, 2020), applied mathematics (Brams & Taylor, 1996), operations research (Bauerle & Ott, 2011), and theoretical computer science (Ogryczak et al., 2014). Fairness considerations have been incorporated into classic continuous and combinatorial optimization problems in scenarios where the underlying model was assumed to be fully known and learning might not be necessary (Neidhardt et al., 2008; Ogryczak et al., 2013; Nguyen and Weng, 2017; Busa-Fekete et al., 2017; Agarwal et al., 2018). Such methods include linear programming and other model-based algorithms that consider the feedback effects and dynamic impacts in decision-making processes, allowing for the development of fair policies that adapt to changing circumstances. While such methods yielded satisfactory results, they cannot be directly used if the underlying model is unknown or too complex to be modeled.
The study of fairness in RL, especially within a model-free paradigm, has gained significant attention in recent years, with notable contributions shedding light on various aspects of this emerging field. Initial work by Jabbari et al. (2017) laid the foundation by focusing on scalar rewards, paving the way for further advancements. Researchers have pursued diverse directions to incorporate fairness into RL frameworks. Wen et al. (2021) explored fairness constraints as a means to reduce discrimination, while Jiang and Lu (2019), Zimmer et al. (2021), and Ju et al. (2023) delved into achieving fairness among agents. Siddique et al. (2020) introduced a novel fair optimization problem within the context of multi-objective RL, enabling modifications to existing deep RL algorithms to ensure fair solutions. Chen et al. (2021) extended the scope by incorporating fairness into actor-critic RL algorithms, optimizing general fairness utility functions for real-world network optimization problems. The work of Zimmer et al. (2021), on the other hand, focused on fairness in decentralized cooperative multi-agent settings, developing a framework involving self-oriented and team-oriented networks concurrently optimized using a policy gradient algorithm. Notably, Ju et al. (2023) introduced online convex optimization methods as a means to learn fairness with respect to agents.
Despite the significant successes achieved in the field of deep RL, these methods heavily rely on the availability of known reward functions. However, in many real-world problems, the task of defining a reward function is often challenging and sometimes even infeasible. To address this limitation, PbRL has emerged as an active area of research (Christiano et al., 2017). Within PbRL, different settings have been explored, depending on whether the involvement of humans is direct or whether simulated human preferences are derived from ground truth rewards. In the context of PbRL, the standard approach typically revolves around maximizing a single criterion, such as a reward, which is inferred from the preferences (Stiennon et al., 2020; Lee et al., 2021; Wu et al., 2021). However, it is clear that focusing exclusively on maximizing rewards falls short of assuring fairness across various objectives. Our approach, which is consistent with the fundamental concepts of preference-based learning, investigates the learning of fair policies in the context of PbRL.
## 3 Preliminaries
### Preference-based RL (PbRL)
We consider a Markov decision process without reward (MDP\(\backslash\)R) augmented with preferences, which is a tuple of the form \((\mathcal{S},\mathcal{A},T,\rho,\gamma)\), where \(\mathcal{S}\) is the set of states, \(\mathcal{A}\) is the set of possible actions, \(T:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) is a state transition probability function specifying the probability \(p(s^{\prime}\mid s,a)\) of reaching state \(s^{\prime}\in\mathcal{S}\) after taking action \(a\) in state \(s\), \(\gamma\) is a discount factor, and \(\rho:\mathcal{S}\rightarrow[0,1]\) specifies the initial state distribution. The learning agent interacts with the environment through rollout trajectories, where a length-\(k\) trajectory segment takes the form \((s_{1},a_{1},s_{2},a_{2},\ldots,s_{k},a_{k})\). A _policy_ \(\pi\) is a function that maps states to actions, such that \(\pi(a\mid s)\) is the probability of taking action \(a\in\mathcal{A}\) in state \(s\in\mathcal{S}\).
PbRL is an approach to learning policies without rewards in which humans are asked to compare pairs of trajectories and give relative preferences between them (Christiano et al., 2017). More specifically, in PbRL, a human is asked to compare a pair of length-\(k\) trajectory segments \(\sigma^{1}=(s_{1}^{1},a_{1}^{1},s_{2}^{1},a_{2}^{1},\ldots,s_{k}^{1},a_{k}^{1})\) and \(\sigma^{2}=(s_{1}^{2},a_{1}^{2},s_{2}^{2},a_{2}^{2},\ldots,s_{k}^{2},a_{k}^{2})\), where \(\sigma^{1}\succ\sigma^{2}\) indicates that the user preferred \(\sigma^{1}\) over \(\sigma^{2}\). Owing to the unavailability of the reward function, many PbRL algorithms learn an estimated reward function model, \(\hat{r}(\cdot,\cdot):\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\). The reward estimate \(\hat{r}(\cdot,\cdot)\) can be viewed as an underlying latent factor explaining human preferences. In particular, it is often assumed that the human's probability of preferring a segment \(\sigma^{1}\) over \(\sigma^{2}\) is given by the Bradley-Terry model (Christiano et al., 2017),
\[P(\sigma^{1}\succ\sigma^{2}\mid\hat{r})=\frac{e^{\hat{R}(\sigma^{1})}}{e^{ \hat{R}(\sigma^{1})}+e^{\hat{R}(\sigma^{2})}}, \tag{1}\]
where \(\hat{R}(\sigma_{i}):=\sum_{t=1}^{k}\gamma^{t-1}\hat{r}(s_{t}^{i},a_{t}^{i})\) is the estimated total discounted reward of trajectory segment \(\sigma_{i}\), and \((s_{t}^{i},a_{t}^{i})\) is the \(t^{\text{th}}\) state-action pair in \(\sigma_{i}\). One can minimize the cross-entropy loss between the Bradley-Terry preference predictions and true human preferences, given by (Christiano et al., 2017),
\[L(\hat{r})=-\sum_{(\sigma^{1},\sigma^{2},\mu)\in S}\Big(\mu(1)\log P[\sigma^{1}\succ\sigma^{2}]+\mu(2)\log P[\sigma^{2}\succ\sigma^{1}]\Big), \tag{2}\]
where \(\mu(i),\ i\in\{1,2\}\), is an indicator such that \(\mu(i)=1\) when trajectory segment \(\sigma^{i}\) is preferred, and \(S\) is the dataset of labeled human preferences. By optimizing \(L(\hat{r})\), an estimated reward function \(\hat{r}(\cdot,\cdot)\) can be obtained that helps explain human preferences.
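To make Equations (1) and (2) concrete, the following is a minimal NumPy sketch of the Bradley-Terry preference probability and the resulting cross-entropy loss. The callable reward model `r_hat`, the encoding of segments as (state, action) pairs, and the discount value are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def segment_return(r_hat, segment, gamma=0.99):
    """Discounted return R_hat(sigma) = sum_{t=1..k} gamma^(t-1) r_hat(s_t, a_t);
    the enumerate index plays the role of t-1."""
    return sum(gamma**t * r_hat(s, a) for t, (s, a) in enumerate(segment))

def preference_loss(r_hat, preference_data, gamma=0.99):
    """Cross-entropy loss of Eq. (2) with Bradley-Terry probabilities of Eq. (1).
    preference_data holds (segment1, segment2, mu) triples with mu = (mu(1), mu(2))."""
    loss = 0.0
    for seg1, seg2, mu in preference_data:
        R1 = segment_return(r_hat, seg1, gamma)
        R2 = segment_return(r_hat, seg2, gamma)
        log_Z = np.logaddexp(R1, R2)  # log(e^R1 + e^R2), numerically stable
        loss -= mu[0] * (R1 - log_Z) + mu[1] * (R2 - log_Z)
    return loss
```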
### Notion of Fairness
The fairness concept used in previous work such as (Spicher et al., 2018; Weng, 2019; Siddique et al., 2020; Zimmer et al., 2021) enforces three natural properties: _efficiency_, _equity_, and _impartiality_. The concept of efficiency, also referred to as _optimality_, implies that the solution should be optimal and Pareto dominant. Equity is often associated with the concept of distributive justice, as it pertains to the fairness of resource or opportunity distribution. This property ensures that a fair solution follows the Pigou-Dalton principle (Moulin, 2004), which states that by transferring rewards from the more advantaged to the less advantaged users, the overall fairness of the solution can be improved. Impartiality or equality requires that all users be treated equally, without favoritism towards any particular user in terms of the solution's outcomes.
To operationalize this notion of fairness, welfare functions are employed. These functions aggregate the utilities of all users and provide a measure of the overall desirability of a solution for the entire group. While various welfare functions exist, we only consider those that satisfy the three fairness properties discussed earlier. One welfare function that satisfies these properties is the _generalized Gini welfare function_ (Weymark, 1981), which is defined as follows:
\[\phi_{\mathbf{w}}(\mathbf{u})=\sum_{i=1}^{\mathcal{K}}\mathbf{w}_{i}\mathbf{u}_{i}^{\uparrow}\,, \tag{3}\]
where \(\mathbf{u}\in\mathbb{R}^{\mathcal{K}}\) represents the utility vector of size \(\mathcal{K}\), \(\mathbf{w}\in\mathbb{R}^{\mathcal{K}}\) is a fixed weight vector with positive components that strictly decrease (i.e., \(\mathbf{w}_{1}>\ldots>\mathbf{w}_{K}\)), and \(\mathbf{u}^{\uparrow}\) denotes the vector obtained by sorting the components of \(\mathbf{u}\) in increasing order (i.e., \(\mathbf{u}_{1}^{\uparrow}\leq\ldots\leq\mathbf{u}_{K}^{\uparrow}\)). For consistency, bold variables represent vectors/matrices. In essence, this function computes the sum of each weight multiplied by the corresponding sorted utility. The weight vector is fixed, positive, and strictly decreasing. Note that the strict decrease in weights is crucial to ensure a fair, Pareto-optimal, and equitable solution.
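As a concrete illustration of Equation (3), the sketch below evaluates the generalized Gini welfare with the weights \(\mathbf{w}_{i}=1/2^{i}\) that are used later in Section 5; the utility values themselves are made up for the example.

```python
import numpy as np

def ggf_welfare(u, w):
    """Generalized Gini welfare, Eq. (3): decreasing weights w applied to the
    utilities u sorted in increasing order, so the worst-off objective counts most."""
    return float(np.dot(w, np.sort(u)))

K = 3
w = 1.0 / 2.0 ** np.arange(K)           # [1.0, 0.5, 0.25], strictly decreasing
print(ggf_welfare([4.0, 1.0, 3.0], w))  # 1.0*1 + 0.5*3 + 0.25*4 = 3.5
print(ggf_welfare([8.0, 0.1, 1.0], w))  # larger total utility, less equitable: 2.6
```

Note how the second utility vector has a higher sum but a lower welfare, which is exactly the equity pressure that the strictly decreasing weights create.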
## 4 Approach
In order to account for the impact of an agent's actions on multiple objectives, i.e., users in the notion of fairness in Section 3.2, we extend previous RL formulations by redefining the estimated reward function as a vector function, denoted as \(\mathbf{\hat{r}}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}^{\mathcal{K}}\), where \(\mathcal{K}\) denotes the number of objectives. This vector function captures the rewards associated with all objectives, acknowledging the multi-objective nature of the problem at hand. Note that this is different from the scalar reward function \(\hat{r}\) in PbRL (Christiano et al., 2017). To formalize the fair policy optimization problem, we integrate the welfare function \(\phi_{\mathbf{w}}\) into our objective function. Consequently, the goal is to find a policy that generates a fair distribution of rewards over \(\mathcal{K}\) objectives given by
\[\max_{\pi_{\mathbf{\theta}}}\phi_{\mathbf{w}}(\mathbf{J}(\pi_{\mathbf{\theta}})), \tag{4}\]
where \(\pi_{\mathbf{\theta}}\) represents a policy parameterized by \(\mathbf{\theta}\), \(\phi_{\mathbf{w}}\) denotes a welfare function with fixed weights that requires optimization, and \(\mathbf{J}(\pi_{\mathbf{\theta}})\) represents the vectorial objective function that yields the utilities (i.e., \(\mathbf{u}\)) for all users. It is also worth noting that the chosen welfare function, such as the generalized Gini welfare function, is concave. As a result, the optimization problem presented in (4) can be characterized as a convex optimization problem. This convexity property facilitates the exploration of effective solution methods for achieving equitable policies in model-free RL settings.
Note that optimizing the welfare function defined in (3) is an effective way to address fairness because the weights \(\mathbf{w}\) are selected such that a higher weight is assigned to objectives with lower utility values, ensuring that all objectives are treated more fairly than when the weights are assigned without considering the utility values.
Our procedure to optimize the welfare function is an iterative process that integrates the policy update step and reward update step (via the collection of more preferences for reward function estimation). Since the reward function estimation is non-stationary, we focus on policy gradient methods. As a state-of-the-art policy gradient method, we adopt the Proximal Policy Optimization (PPO) algorithm (Schulman et al., 2017) for policy optimization and compute the advantage function via
\[\mathbf{A}_{\pi_{\mathbf{\theta}}}(s_{t},a_{t})=\sum_{l=0}^{\infty}(\gamma\lambda)^{l}\delta_{t+l}\,, \tag{5}\]
where \(\delta_{t}\) is determined by the expression \(\mathbf{\hat{r}}_{t}+\gamma\mathbf{V_{\theta}}(s_{t+1})-\mathbf{V_{\theta}}(s_{t})\), with \(\mathbf{\hat{r}}_{t}\) representing the estimated rewards, and \(\mathbf{V_{\theta}}(s_{t})\) denoting the value function associated with state \(s_{t}\). In PPO, the objective function \(\mathbf{J}(\mathbf{\theta})\) is designed to limit policy changes after an update, that is,
\[\mathbb{E}_{s\sim d_{\pi_{b}},\,a\sim\pi_{b}(\cdot\mid s)}\left[\min(\rho_{\mathbf{\theta}}\mathbf{A}_{\pi_{b}}(s,a),\bar{\rho}_{\mathbf{\theta}}\mathbf{A}_{\pi_{b}}(s,a))\right]\,, \tag{6}\]
where \(\rho_{\mathbf{\theta}}=\frac{\pi_{\mathbf{\theta}}(a\mid s)}{\pi_{b}(a\mid s)}\), \(\bar{\rho}_{\mathbf{\theta}}=\text{clip}(\rho_{\mathbf{\theta}},1-\epsilon,1+\epsilon)\), \(\pi_{b}\) represents the policy generating the transitions, and \(\epsilon\) is a hyperparameter controlling the constraint. To compute the gradient for \(\mathbf{J}(\mathbf{\theta})\), we have
\[\nabla_{\mathbf{\theta}}\phi_{\mathbf{w}}(\mathbf{J}(\pi_{\mathbf{\theta}}))= \nabla_{\mathbf{J}(\pi_{\mathbf{\theta}})}\phi_{\mathbf{w}}(\mathbf{J}(\pi_{\mathbf{ \theta}}))\cdot\nabla_{\mathbf{\theta}}\mathbf{J}(\pi_{\mathbf{\theta}}) \tag{7}\] \[= \mathbf{w}_{\sigma}^{\top}\nabla_{\mathbf{\theta}}\mathbf{J}(\pi_{\mathbf{\theta }}), \tag{8}\]
where \(\nabla_{\mathbf{\theta}}\mathbf{J}(\pi_{\mathbf{\theta}})\) is a \(\mathcal{K}\times\mathcal{N}\) matrix representing the classic policy gradient over the \(\mathcal{K}\) objectives, \(\mathbf{w}_{\sigma}\) is a vector sorted based on the values of \(\mathbf{J}(\pi_{\mathbf{\theta}})\), and \(\mathcal{N}\) denotes the number of policy parameters.
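A compact sketch of the resulting policy-update step is given below: per-objective advantages as in Equation (5), the sorted-weight combination of per-objective gradients from Equation (8), and the clipped surrogate of Equation (6). The array shapes and the finite-horizon truncation of the advantage sum are illustrative assumptions.

```python
import numpy as np

def vector_gae(rewards, values, gamma=0.99, lam=0.95):
    """Per-objective advantage estimate, Eq. (5), over a length-T rollout.
    rewards: (T, K) estimated vector rewards; values: (T+1, K) per-objective values."""
    deltas = rewards + gamma * values[1:] - values[:-1]  # delta_t for each objective
    adv = np.zeros_like(rewards)
    running = np.zeros(rewards.shape[1])
    for t in reversed(range(rewards.shape[0])):
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

def welfare_gradient(grad_J, J, w):
    """Eq. (8): permute the decreasing weights w by the sort order of J, so the
    worst-performing objective receives the largest weight, then mix the gradients.
    grad_J: (K, N) per-objective policy gradients; returns the (N,) combined gradient."""
    w_sigma = np.empty_like(w)
    w_sigma[np.argsort(J)] = w
    return w_sigma @ grad_J

def ppo_clipped_objective(ratio, adv, eps=0.2):
    """Clipped surrogate of Eq. (6) for a batch of ratios rho_theta and
    (already welfare-combined, scalar) advantages."""
    return float(np.minimum(ratio * adv, np.clip(ratio, 1 - eps, 1 + eps) * adv).mean())
```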
For the reward estimation function update, we ask a human (or a similar mechanism such as a synthetic human) to provide preferences for the segments collected by the policy, establishing or expanding the preference dataset. The vector function \(\hat{\mathbf{r}}\) is learned by minimizing the loss function (2) with a modified preference probability given by
\[P(\sigma^{1}\succ\sigma^{2}\mid\hat{\mathbf{r}})=\frac{e^{\hat{R}(\sigma^{1})}}{e^{ \hat{R}(\sigma^{1})}+e^{\hat{R}(\sigma^{2})}}, \tag{9}\]
where \(\hat{R}(\sigma_{i}):=\phi_{\mathbf{w}}(\sum_{t=1}^{k}\gamma^{t-1}\hat{\mathbf{r}}(s_{t} ^{i},a_{t}^{i}))\). This formulation applies the welfare function \(\phi_{\mathbf{w}}\) to the discounted cumulative vector rewards, resulting in a scalarized \(\hat{R}(\sigma_{i})\). This scalarized value is then utilized to compute \(P(\sigma^{1}\succ\sigma^{2}\mid\hat{\mathbf{r}})\). It is important to note that the key distinction between our proposed approach and PbRL in (Christiano et al., 2017) lies in the utilization of the welfare function to determine preferences, as opposed to relying on segment rewards as done in (Christiano et al., 2017).
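In code, the modification of Equation (9) amounts to replacing the scalar return with the welfare of the discounted cumulative vector return. A minimal sketch, assuming a hypothetical vector-valued reward model `r_hat_vec`:

```python
import numpy as np

def welfare_return(r_hat_vec, segment, w, gamma=0.99):
    """R_hat(sigma) of Eq. (9): GGF welfare of the discounted cumulative vector reward."""
    G = sum(gamma**t * np.asarray(r_hat_vec(s, a)) for t, (s, a) in enumerate(segment))
    return float(np.dot(w, np.sort(G)))

def welfare_preference_prob(r_hat_vec, seg1, seg2, w, gamma=0.99):
    """P(sigma1 > sigma2 | r_hat): Bradley-Terry on welfare-scalarized returns."""
    R1 = welfare_return(r_hat_vec, seg1, w, gamma)
    R2 = welfare_return(r_hat_vec, seg2, w, gamma)
    return float(np.exp(R1 - np.logaddexp(R1, R2)))
```

Training then proceeds exactly as with the loss in Equation (2), with these probabilities substituted for the scalar Bradley-Terry ones.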
## 5 Experimental Results
To demonstrate the robustness and practicality of our method, we meticulously design and conduct three experiments. Each experiment showcases a unique scenario where fairness plays a pivotal role in RL outcomes. Moreover, at present, our primary emphasis is directed toward investigating synthetic human preferences owing to their convenient acquisition process and their appropriateness for testing objectives. Nonetheless, it is essential to note that our proposed approach is readily applicable in situations that involve human-in-the-loop interactions. Through rigorous analysis and evaluation, we assess the performance of our approach, both in terms of achieving fairness objectives and maintaining desirable learning outcomes in a model-free setting. We assign weights \(\mathbf{w}_{i}=\frac{1}{2^{i}},i=0,...,\mathcal{K}-1\), and, to ensure the reproducibility of the results, average the results over five or more runs with different seeds to provide reliable evidence of our method's effectiveness. All algorithm hyperparameters were optimized using the open-source Lightweight HyperParameter Optimizer (LHPO) (Zimmer, 2018).
### Species Conservation
Species conservation is a critical domain in the field of ecology, particularly when dealing with the preservation of multiple interacting endangered species. Here, we tackle the challenge of incorporating fairness considerations into the conservation efforts of two specific species: sea otters and their prey, the northern abalone. The sea otter and northern abalone populations face a delicate balance as sea otters consume abalones, both of which are currently endangered. To navigate this complex conservation problem, we adopt the setting proposed in Chades et al. (2012) and tailor it to address the fairness aspects of this ecosystem. In our conservation problem, the state is defined by the current population numbers of both species. To influence the system, we have five distinct actions at our disposal: introducing sea otters, enforcing antipoaching measures, controlling sea otter populations, implementing a combination of half-antipoaching and half-controlled sea otters, or taking no action. Each action has significant implications, as introducing sea otters is necessary for balancing the abalone population, but if not carefully managed, it can inadvertently drive the abalone species to extinction. Similarly, neglecting any of the other managerial actions would result in the extinction of one of the species, highlighting the importance of a comprehensive approach in terms of equity and fairness. The transition function in this conservation problem incorporates population growth models for both species, accounting for factors such as poaching and oil spills. Through this framework, we strive to optimize not just a single objective but the population densities of both species, thereby dealing with a multidimensional problem where two objectives, sea otter and abalone population densities, need to be simultaneously optimized, leading to \(\mathcal{K}=2\).
In this domain, our primary objective is to assess the effectiveness of our proposed method in optimizing the welfare function, denoted as \(\phi_{\mathbf{w}}\). To evaluate this, we conduct a comparative analysis of welfare scores between three approaches: PPO, PbRL, and our proposed FPbRL method within this domain. To compute the welfare scores, we employ trained agents and evaluate their performance across 100 trajectories within the given environment. The empirical average vector returns of these trajectories serve as the basis for deriving the welfare score by applying the function \(\phi_{\mathbf{w}}\). The distribution of welfare scores for PPO, PbRL, and FPbRL is shown in Figure 1(a). Our results reveal that FPbRL achieves the highest welfare score, thereby demonstrating its ability to identify fairer solutions compared to PPO and the standard PbRL method. However, recognizing that the welfare score alone may not provide a comprehensive understanding of the objective balance, we present individual density plots in Figure 1(b) depicting the densities of both species. These plots offer further insights into the distribution of objectives. Consistently, our findings demonstrate that FPbRL yields more balanced solutions in terms of equity, surpassing both PbRL and PPO. In addition, we introduce the Coefficient of Variation (CV) to address scenarios where demonstrating the utility of each objective becomes challenging due to a multitude of objectives. Figure 1(c) showcases the CV, as well as the minimum and maximum densities. Corresponding with our previous findings, our proposed FPbRL method exhibits the lowest CV, indicating reduced variation between different objectives. Moreover, our method prioritizes maximizing the minimum objective to foster a more equitable distribution of utilities.
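The evaluation protocol described above reduces to a few lines; the rollout returns are assumed to be collected beforehand, and only the welfare-score and CV computations follow the text.

```python
import numpy as np

def evaluate(returns, w):
    """returns: (n_traj, K) vector returns from, e.g., 100 test trajectories.
    Yields the welfare score phi_w of the empirical mean return and the
    coefficient of variation (std / mean) across the K objectives."""
    mean_return = np.asarray(returns).mean(axis=0)      # (K,) empirical average
    welfare = float(np.dot(w, np.sort(mean_return)))    # Eq. (3) on mean returns
    cv = float(mean_return.std() / mean_return.mean())  # lower CV = more balanced
    return welfare, cv
```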
### Resource Gathering
We now consider a resource-gathering environment that encompasses a \(5\times 5\) grid world, adapted from the work of (Barrett & Narayanan, 2008). This dynamic environment poses the challenge of resource acquisition, where the agent's objective is to collect three distinct types of resources: gold, gems, and stones, thus \(\mathcal{K}=3\). Within this grid world, the agent is situated at a specific position, while the resources are scattered randomly across various locations. Upon consumption of a resource, it is promptly regenerated at another random location within the grid, ensuring a continuous supply. The state representation in this environment encapsulates the agent's current position within the grid, as well as the cumulative count of each resource type collected throughout the ongoing trajectory. To navigate this complex environment, the agent is equipped with four cardinal direction actions: up, down, left, and right, enabling movement across the grid. However, to introduce an additional layer of intricacy, we assign distinct values to the resources. Gold and gems are endowed with a value of 1, symbolizing their higher significance, while stones, deemed less valuable, are assigned a value of 0.4. This deliberate assignment fosters an unbalanced distribution of resources, with two stones, one gold, and one gem strategically placed within the grid. Amidst this resource-rich environment, the agent's ultimate goal is twofold: to maximize the accumulation of resources while concurrently maintaining a balanced distribution among the different resource types. By striking this delicate equilibrium, the agent strives to optimize its resource-gathering strategy, maximizing its overall utility and adaptability within this domain.
To demonstrate the efficacy of our proposed approach in maintaining a balanced distribution of resources, we conduct an analysis of welfare scores for the resource collection problem. Through this analysis, we aim to assess the fairness of different approaches and determine the extent to which our proposed method promotes equitable solutions. Figure 2(a) presents the welfare scores computed for PPO, PbRL, and the proposed FPbRL. These scores were computed over a hundred trajectories during the testing phase. Encouragingly, our proposed method achieved the highest welfare score, signifying a fairer solution when compared to both PPO and the standard PbRL method. To gain a comprehensive understanding of the balance between objectives in resource collection, we also examine the individual number of resources collected (see Figure 2(b)). Once again, the results reinforce the superiority of FPbRL in producing more balanced solutions. In contrast, PbRL and PPO tend to favor the accumulation of certain resources at the expense of others, highlighting the limitations of a standard approach that solely optimizes the aggregate or weighted sum of objectives. Our proposed method, however, maintains a balanced distribution of different resources, underscoring the significance of fairness considerations in resource collection scenarios. Furthermore, Figure 2(c) provides additional insights into the performances of PPO, PbRL, and FPbRL by examining the CV as well as the minimum and the maximum number of collected resources. Strikingly, FPbRL outperforms the other algorithms, exhibiting the lowest CV, which indicates a more equitable distribution of objectives. Notably, only FPbRL successfully maximizes the minimum objective utility, whereas PPO and the PbRL method yield the lowest minimum objective values, reflecting a prioritization of maximizing cumulative rewards at the expense of fairness considerations.
Figure 1: Performances of PPO, PbRL, FPbRL in the species conservation problem.
Figure 2: Performances of PPO, PbRL, FPbRL in resource gathering.
### Traffic Control at Intersections
To thoroughly validate the effectiveness of our proposed method, we also conduct a series of experiments in the demanding real-world domain of traffic light control. This domain presents unique challenges due to the multitude of objectives involved, making it an ideal testbed for evaluating the efficacy of our approach. To simulate a realistic traffic intersection scenario, we employed the widely-used Simulation of Urban MObility (SUMO) platform (Lopez et al., 2018). Specifically, our focus is on a standard 8-lane intersection, with two lanes designated for turning (left or right, depending on the side of the road) and the remaining lanes facilitating straight driving or additional turns. Traditionally, the objective of traffic control is to optimize traffic flow by minimizing the total waiting time for all vehicles approaching the intersection. However, our approach diverges from this conventional perspective. Instead, we adopted a novel viewpoint, aiming to optimize traffic flow for each of the four distinct sides of the road. Each side of the intersection is treated as a separate objective, and our goal is to learn a controller that effectively reduces the expected waiting times for vehicles on each road segment. This multi-objective setup thus takes \(\mathcal{K}=4\), reflecting the four sides of the road that need to be individually optimized. In this challenging problem, a state is defined by several key factors, including the waiting time of vehicles, the car density in the vicinity, and the current phase of the traffic light. The action space comprises four distinct options, each corresponding to a different phase change that influences the traffic flow on a specific side of the road. The transition function governing the evolution of the system is dependent on factors such as the current traffic light phase, the movement of vehicles through the intersection, and the generation of new traffic.
Similar to the previous assessments, we evaluate the efficacy of the proposed method in optimizing the welfare function. The welfare scores obtained during testing for PPO, PbRL, and FPbRL are presented in Figure 3(a). To improve readability, the y-axis has been scaled by a factor of 1000, with each tick representing 1000 units. It is evident that FPbRL outperforms both PPO and PbRL, achieving the highest welfare score. This noteworthy result underscores the efficacy of FPbRL in optimizing the welfare function, which is crucial for ensuring fair and equitable treatment of the diverse objectives at hand. To establish the correlation between these high welfare scores and fairer solutions, we examine the waiting times for all sides of the roads, as depicted in Figure 3(b). Our proposed method, FPbRL, demonstrates a more balanced distribution of waiting times across all road segments. Although FPbRL exhibits slightly higher total waiting times, it prioritizes lanes with fewer cars, thereby preventing any single vehicle from enduring significantly prolonged waiting periods. In contrast, PPO and PbRL tend to favor lanes with higher car densities in their pursuit of minimizing total waiting times. This observation underscores the importance of fairness considerations, indicating that the attainment of fairness may sometimes come at a cost. However, the cost of fairness is not excessively high, as evidenced in the previous domains (Figures 1(b) and 2(b)). Furthermore, we compare the performances of PPO, PbRL, and FPbRL in terms of the CV, minimum waiting time, and maximum waiting time (Figure 3(c)). Once again, FPbRL emerges as the top-performing algorithm, attaining the lowest CV and achieving a more balanced distribution of objectives. Notably, only FPbRL successfully maximizes the minimum objective and minimizes the maximum objective, thereby promoting equitable outcomes in the context of traffic light control.
## 6 Conclusions and Future Work
By incorporating fairness into PbRL, we developed a new fairness-induced PbRL (FPbRL) approach that can provide more equitable and socially responsible RL systems. Through our multi-experiment validation, we provided compelling evidence of the effectiveness and practicality of our approach toward its potential applications in real-world scenarios where fairness considerations are imperative. Our findings underscore the effectiveness of our proposed method, FPbRL, in optimizing the welfare function and achieving fairness in the presence of multiple objectives. A detailed investigation of other welfare functions and different impartiality properties, along with actual human feedback, could be interesting to explore in the future.
Figure 3: Performances of PPO, PbRL, FPbRL in traffic control.
## Acknowledgements
The authors were supported in part by Army Research Lab under grant W911NF2120232, Army Research Office under grant W911NF2110103, and Office of Naval Research under grant N000142212474.
|
2307.01038 | Multi-messenger Study of Galactic Diffuse Emission with LHAASO and
IceCube Observations | With the breakthrough in PeV gamma-ray astronomy brought by the LHAASO
experiment, the high-energy sky is getting richer than before. Lately, LHAASO
Collaboration reported the observation of a gamma-ray diffuse emission with
energy up to the PeV level from both the inner and outer Galactic plane. In
these spectra, there is one bump that is hard to explain by the conventional
cosmic-ray transport scenarios. Therefore, we introduce two extra components
corresponding to unresolved sources with exponential-cutoff-power-law (ECPL)
spectral shape, one with an index of 2.4 and a 20 TeV cutoff energy, and
another with an index of 2.3 and a 2 PeV cutoff energy. With our constructed
model, we simulate the Galactic diffuse neutrino flux and find our results in
full agreement with the latest IceCube Galactic plane search. We estimate that
Galactic neutrinos contribute $\sim 9\%$ of astrophysical neutrinos at 20
TeV. In the high-energy regime, as expected, most of the neutrinos observed by
IceCube should be from extragalactic environments. | Chengyu Shao, Sujie Lin, Lili Yang | 2023-07-03T14:17:12Z | http://arxiv.org/abs/2307.01038v2 | # Multi-messenger Study of Galactic Diffuse Emission with LHAASO and IceCube Observations
###### Abstract
With the breakthrough in PeV gamma-ray astronomy brought by the LHAASO experiment, the high-energy sky is getting richer than before. Lately, the LHAASO Collaboration reported the observation of gamma-ray diffuse emission with energies up to the PeV level from both the inner and outer Galactic plane. In these spectra, there is one bump that is hard to explain by conventional cosmic-ray transport scenarios. Therefore, we introduce two extra components corresponding to unresolved sources with exponential-cutoff-power-law (ECPL) spectral shapes, one with an index of 2.4 and a 20 TeV cutoff energy, and another with an index of 2.3 and a 2 PeV cutoff energy. With our constructed model, we simulate the Galactic diffuse neutrino flux and find our results in full agreement with the latest IceCube Galactic plane search. We estimate that Galactic neutrinos contribute \(\sim 9\%\) of the astrophysical neutrinos at 20 TeV. In the high-energy regime, as expected, most of the neutrinos observed by IceCube should come from extragalactic environments.
## I Introduction
The origin of cosmic rays (CRs) is one of the key questions in astrophysics. The CR energy spectrum shows the knee and ankle features. It is generally believed that CRs with energies below the spectral knee at \(\sim 10^{15}\) eV mainly come from our Galaxy, the so-called Galactic cosmic rays (GCRs), while those with energies above the spectral ankle at \(\sim 10^{18}\) eV are mostly from extragalactic energetic sources. Most CR particles lose their directional information through deflection by, and interaction with, extragalactic and Galactic magnetic fields and media during their propagation. This uncertainty means that the origin of CRs near the knee remains mysterious.
To resolve these puzzles, alternative methods have been adopted. Collisions between energetic CRs and the ambient and interstellar medium generate neutral pions (\(\pi^{0}\)) and charged pions (\(\pi^{\pm}\)), which decay into gamma rays and neutrinos. These secondary products detected on Earth encode details of both the CR and target populations, and the accurate interpretation of such measurements can provide direct information on the propagation and sources of CRs.
In the last few decades, progress has been made in detecting high-energy gamma-ray and neutrino emission. The continuum diffuse gamma-ray emission has been well measured by the Fermi Large Area Telescope (LAT) up to a few hundred GeV [1; 2]. Later on, in the TeV energy regime, Milagro [3], ARGO-YBJ [4], H.E.S.S. [5; 6; 7], and HAWC [8; 9] have been contributing data on the Galactic plane. These measurements have only recently reached the PeV range thanks to Tibet AS\(\gamma\) and LHAASO [10; 11; 12]. This discovery suggests the existence of PeVatrons [13], sources capable of accelerating particles up to PeV energies, and is a major step towards understanding cosmic-ray physics by exploring the knee region of the CR spectrum.
On the other hand, since the first detection of astrophysical neutrinos in 2012, IceCube has been accumulating neutrino data for more than 10 years [14; 15; 16]. With the development of machine learning techniques and growing statistics, the neutrino emission from the Galactic plane has recently been identified [17].
All these achievements provide hints on the injection, distribution, and propagation of CRs in our Galaxy. However, the analysis of the Galactic diffuse emission (GDE) can be seriously contaminated by unresolved Galactic point sources, which may have a distribution similar to that of the interstellar gas. This makes it challenging to identify the accelerators of CRs. Previously, a few groups have performed studies of the diffuse emission from TeV to PeV, discussing the possibility that the Galactic diffuse gamma-ray and neutrino emission comes from cosmic-ray interactions, known sources, and unresolved sources [18; 19; 20; 21; 22].
In this work, based on the current cosmic-ray and Fermi-LAT data, and the most recent LHAASO and IceCube Galactic plane observations, we apply the popular GALPROP code [23] to model CR transport and generate simulated spectra and maps of the diffuse gamma-ray and neutrino emissions. Specifically, we adopt a Diffusion plus Reacceleration (DR) model, and employ DR-high and DR-low models to take into account the uncertainties of the measurements obtained from the ground-based air-shower experiments IceTop and KASCADE, respectively. However, we find a tension between the gamma-ray flux predicted with our constructed models and the observations. To illustrate the characteristics of the LHAASO Galactic plane spectrum and explain the excess, we introduce two populations of Galactic sources (EXTRA1 and EXTRA2) with exponential-cutoff-power-law (ECPL) spectral shapes. At energies up to \(10^{5}\) GeV, the spectrum of EXTRA1 has an index of 2.40 and a cutoff energy of \(\sim\) 20 TeV. At the higher-energy end, up to \(10^{6}\) GeV, another component, EXTRA2, with an index of 2.3 and a 2 PeV cutoff, is introduced. This can be naturally explained by two types of unresolved sources in our Galaxy with different maximum cosmic-ray energies.
Based on the constructed models, we also estimate the diffuse Galactic neutrino flux, which is consistent with the latest IceCube Galactic plane search. We find that Galactic neutrinos can contribute around 9% of the all-sky neutrino events at 20 TeV. At PeV energies, most of the neutrinos come from outside our Galaxy. However, owing to uncertainties such as the mechanisms, numbers, and distribution of these unresolved sources in our Galaxy, together with the limited observational capability, considerable freedom remains in the modeling. Therefore, to further resolve these puzzles, next-generation Imaging Air Cherenkov Telescopes (IACTs) and neutrino detectors with larger effective areas and better angular and energy resolution, which can provide precise locations and morphologies of sources, are in high demand.
The paper is organized as follows. Section 2 provides the description of the multi-messenger data, including cosmic-ray, gamma-ray, and neutrino observations used in this work. In Section 3, we present the injection and propagation models of cosmic rays, together with the addition of extra source components for fitting the gamma-ray data. Based on the constructed models, we show the calculated Galactic diffuse gamma-ray and neutrino emission in Section 4. In Section 5, we give a discussion about the obtained results and the origins of the two extra components. In Section 6, we give a summary and future outlook for the multi-messenger observations.
## II Multi-messenger observation
Thanks to the development of both satellites and ground-based observatories, diffuse high-energy neutrinos with energies between 10 TeV and PeV [24], ultra-high-energy cosmic rays (UHECRs, \(>10^{18}\) eV) [25], and high-energy gamma rays from MeV to PeV have been measured, or upper limits have been provided [8; 12; 26]. As there is a natural connection between these three messengers (neutrinos and gamma rays are produced during CR propagation and can point directly back to the origin of CRs), their joint detection and analysis is a very efficient way to explore the Universe [27; 28]. Moreover, the energy budgets of UHECRs, PeV neutrinos, and isotropic sub-TeV gamma rays are comparable [29], which supports the unification of high-energy cosmic particles.
Before exploring, understanding, and identifying the mechanisms and physical processes of the astrophysical sources of CRs, the diffuse backgrounds originating from our Galaxy should be seriously studied. One accurate diffuse template can provide great help in analyzing the upcoming data. For this purpose, we attempt to constrain the diffuse emission with current observation. The measurements used in this work are presented below.
### High-energy cosmic rays
The high-energy CR particles are accelerated by energetic astrophysical sources like supernova remnants (SNRs) and propagate inside the Galactic magnetic field around the Galactic disk after escaping. Although only the CR fluxes around the sun could be measured, their distribution throughout the Galaxy can be predicted by the propagation model. Generally, the propagation model is constrained by the secondary-to-primary flux ratio observation, such as B/C [30] and \({}^{10}\)Be/\({}^{9}\)Be [31]. More details regarding the propagation model can be found in Section 3.
Their fluxes near the Earth have been directly measured by space-borne experiments such as AMS-02 [31; 32; 33; 34; 35] and the DArk Matter Particle Explorer (DAMPE) [36; 37; 38; 30], and indirectly measured by ground-based experiments such as IceTop [39] and KASCADE [40].
One has to notice that the measured knee energies disagree between IceTop and KASCADE, as shown in Figure 1. The KASCADE analysis uses the QGSJET-II-02 hadronic-interaction model while IceTop uses Sibyll 2.1 instead, so the discrepancy reflects the large systematic uncertainty of the hadronic model. In our study, we refer to the models derived from the KASCADE and IceTop data as DR-low and DR-high, respectively.
In this work, to estimate the diffuse gamma-ray and neutrino emission, the proton and electron plus positron spectra observed by Voyager, AMS-02, IceTop, and KASCADE are adopted to constrain the Galactic CR distribution as seen in Figure 1 and Figure 2.
### Gamma-ray sky
The diffuse gamma-ray emission below TeV energies has been well measured by several satellites, such as EGRET and, subsequently, Fermi-LAT [41; 26]. Recently, the Galactic plane has been observed up to 1 PeV thanks to the Tibet AS\(\gamma\) and LHAASO [10; 12] experiments. These discoveries provide evidence of a hadronic origin of the sub-PeV diffuse gamma rays, which are generated during the propagation of tens-of-PeV CRs.
The LHAASO experiment recently announced, for the first time, the source-subtracted Galactic diffuse gamma-ray fluxes from the inner Galactic plane (\(15^{\circ}<\)l\(<125^{\circ}\), \(|b|<5^{\circ}\)) and the outer plane (\(125^{\circ}<\)l\(<235^{\circ}\), \(|b|<5^{\circ}\)). A simple power law is adopted to describe the spectra of both regions, with similar spectral indices of \(-2.99\), consistent with the CR spectral index in the knee region.
In Figure 3, the data for the window of \(15^{\circ}<\)l\(<125^{\circ}\), \(|b|<5^{\circ}\) from the LHAASO and Fermi-LAT experiments [42], and for \(25^{\circ}<\)l\(<100^{\circ}\), \(|b|<5^{\circ}\) from Tibet AS\(\gamma\), are presented. Both the LHAASO and Tibet AS\(\gamma\) data are in agreement with the Fermi-LAT data. However, the result from LHAASO is a few times lower than that from AS\(\gamma\), owing to the different analysis methods of the two experiments: LHAASO masks sources included in TeVCat within a radius of five times their Gaussian extension widths, and this cut may remove a large part of the Galactic-plane data, where the diffuse CRs and unresolved sources are located.
### Neutrino sky
Since the first observation of the astrophysical neutrino signal in the TeV - PeV energy range in 2012 [24], IceCube has kept updating the neutrino sky for more than 10 years. The event distribution is consistent with being isotropic, and the origin of these neutrino signals is still unresolved. With larger statistics, IceCube has recently shown that there are more events at lower Galactic latitudes and a deficit of neutrino events at high Galactic latitudes [15]. The IceCube Neutrino Observatory has provided 6-year all-sky total and 10-year Galactic plane data [15; 17]. In this recently updated data sample for the Galactic neutrino search, the analysis was performed on cascade events with lower energy thresholds. The neutrino emission from the Galactic plane is reported at the 4.5\(\sigma\) level of significance [17], with a total of 59,592 events selected over the entire sky in the energy range of 500 GeV to several PeV. As shown in Figure 3, the best-fitting Galactic plane neutrino flux is comparable with the gamma-ray flux.

Figure 1: Best-fitting spectra of protons (top) and helium nuclei (bottom), along with the observation data from IceTop (blue circles), KASCADE (yellow crosses), AMS-02 (red triangles), and DAMPE (purple stars). The solid line represents the DR-low model, while the dashed line represents the DR-high model.

Figure 2: The black solid line shows the fitted electron plus positron spectrum; the measurements from AMS-02 (red dots) and DAMPE (yellow crosses) are also marked.

Figure 3: The gamma-ray data from Fermi-LAT (black crosses) and LHAASO (blue triangles) in the region of \(15^{\circ}<\)l\(<125^{\circ}\), \(|b|<5^{\circ}\), gamma-ray data from Tibet AS\(\gamma\) (red plus), IceCube total \(\nu\) (blue shaded region), and the IceCube result with the \(\pi^{0}\) model in the Galactic plane (red shaded region) are shown.
The total neutrino observation, shown as the blue shaded region in Figure 3, includes events from Galactic and extragalactic diffuse backgrounds and astrophysical sources, and its spectrum follows the simple power-law distribution given below,
\[\Phi_{\nu}=\Phi_{0}(\frac{E}{100\mathrm{TeV}})^{-\gamma}. \tag{1}\]
Here the normalization factor \(\Phi_{0}\) is 1.66\(\times 10^{-18}\) GeV\({}^{-1}\) cm\({}^{-2}\) s\({}^{-1}\) sr\({}^{-1}\), and the common spectral index \(\gamma\) is 2.53. The observed neutrino spectrum is softer than \(E^{-2}\), comparable with the observed diffuse extragalactic gamma-ray background [43].
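For reference, Equation (1) with the quoted best-fitting values can be evaluated directly; the minimal sketch below assumes energies in GeV.

```python
def astro_nu_flux(E_GeV, phi0=1.66e-18, gamma=2.53):
    """Per-flavor astrophysical neutrino flux of Eq. (1), in
    GeV^-1 cm^-2 s^-1 sr^-1, normalized at 100 TeV (1e5 GeV)."""
    return phi0 * (E_GeV / 1e5) ** (-gamma)

print(astro_nu_flux(2e4))  # flux at 20 TeV, where ~9% is estimated to be Galactic
```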
## III Models
Both the diffuse gamma-ray and neutrino flux are generated by CR particles when they propagate in the Galaxy. The hadronic component of CRs can induce gamma-ray and neutrino emission through proton-proton interaction, as well as bremsstrahlung radiation. On the other hand, the leptonic component of CRs contributes to gamma-ray emissions through the inverse Compton (IC) effect. To calculate these processes, along with the propagation effect of CRs, we utilize the well-established GALPROP code [23] in this study. In this section, we provide a detailed description of the CR propagation and emission model that are employed.
### Cosmic-ray propagation
In the propagation model, CRs are assumed to undergo diffusion within the Galactic magnetic field, taking into account effects such as reacceleration, energy loss, fragmentation, and decay. The diffusion coefficient is parameterized as \(D(R)=\beta^{\eta}D_{0}(R/4\,\mathrm{GV})^{\delta}\), where \(D_{0}\) is the normalization factor at a reference rigidity of 4 GV, \(R\) is the particle's rigidity, \(\beta\) is the velocity of the particle in natural units, \(\delta\) is the slope of the rigidity dependence, and \(\eta\) is a phenomenological parameter introduced to fit the low-energy secondary-to-primary ratios. Besides diffusion, a convection or reacceleration effect is also required by the observed \(B/C\) data.
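The parameterized diffusion coefficient transcribes directly into code; the sketch below uses the Table 1 best-fitting values as defaults, with rigidity in GV.

```python
def diffusion_coefficient(R_GV, beta=1.0, D0=7.69e28, delta=0.362, eta=-0.05):
    """D(R) = beta^eta * D0 * (R / 4 GV)^delta in cm^2 s^-1 (Table 1 values);
    beta is the particle velocity in natural units, close to 1 for relativistic CRs."""
    return beta**eta * D0 * (R_GV / 4.0) ** delta
```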
In some recent studies, with more secondary CR species like Li, Be, and B precisely measured by AMS-02, it was found that the reacceleration effect is favored [44]. In this work, we adopt a Diffusion plus Reacceleration (DR) model as a benchmark model. The model parameters that describe the propagation processes are adopted following the work of other groups [44], corresponding to the best-fit values obtained by fitting the Li, Be, B, C, and O measurements from AMS-02. As listed in Table 1, the half height of diffuse zone \(z_{h}\) is 6.3 kpc, and the Alfven speed \(v_{A}\) that describes the strength of reacceleration is 33.76 km s\({}^{-1}\).
### Cosmic-Ray injection
It is widely accepted that SNRs are the most promising Galactic high-energy CR sources, whose shocks provide the ideal environment for first-order Fermi acceleration of relativistic particles. Among the 12 Galactic PeV accelerators discovered by LHAASO, eight are plausibly linked to SNRs [11]. Therefore, we make the simple assumption that CRs are injected into the Galaxy by SNRs. As it is not possible to gather information on all historical SNRs, for an estimate we employ a continuous source distribution for SNRs as follows
\[f(r,z)=\left(\frac{r}{r_{\odot}}\right)^{1.25}\exp\left[-\frac{3.56(r-r_{\odot })}{r_{\odot}}\right]\exp\left(-\frac{|z|}{z_{s}}\right), \tag{2}\]
where \(r_{\odot}=8.3\) kpc is the galactocentric distance of the Sun and \(z_{s}=0.2\) kpc is a scale factor that indicates the thickness of the Galactic disk.
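Equation (2) also transcribes directly; the normalization is arbitrary in this sketch, and r and z are in kpc.

```python
import numpy as np

def snr_density(r, z, r_sun=8.3, z_s=0.2):
    """Unnormalized SNR source distribution f(r, z) of Eq. (2)."""
    radial = (r / r_sun) ** 1.25 * np.exp(-3.56 * (r - r_sun) / r_sun)
    return radial * np.exp(-np.abs(z) / z_s)
```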
Given the many spectral structures revealed by recent direct detection experiments, the injection spectra of CRs may be quite complicated. A multiple-broken-power-law spectrum is employed to describe these features as seen below,
\[f(R)=\begin{cases}R^{-\nu_{0}}e^{-R/R_{c}},&R<R_{1}\\ \left(\prod_{i=1}^{n}R_{i}^{\nu_{i}-\nu_{i-1}}\right)R^{-\nu_{n}}e^{-R/R_{c}},&R_{n}\leq R<R_{n+1}\\ \left(\prod_{i=1}^{4}R_{i}^{\nu_{i}-\nu_{i-1}}\right)R^{-\nu_{4}}e^{-R/R_{c}},&R_{4}\leq R\end{cases} \tag{3}\]
where \(n\) in the second row runs from 1 to 3. The corresponding observations constrain the injection parameters in Equation 3 for the different CR species. As there are discrepancies between the IceTop and KASCADE measurements, we construct two models, differing in their injections, to indicate the upper and lower boundaries of the theoretical estimation, the so-called DR-high and DR-low models, respectively.
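As a sketch of Equation (3), the snippet below evaluates the broken-power-law injection for protons with the DR-low parameters of Table 2, with all rigidities converted to GV; carrying the continuity prefactors with a cumulative product is our implementation choice, not part of the paper.

```python
import numpy as np

def injection_spectrum(R, nus, breaks, R_c):
    """Eq. (3): multiple-broken power law in rigidity R with exponential cutoff R_c.
    nus = [nu_0, ..., nu_4]; breaks = [R_1, ..., R_4] in the same units as R."""
    R = np.asarray(R, dtype=float)
    seg = np.searchsorted(breaks, R, side="right")  # segment index of each rigidity
    # continuity prefactors prod_{i<=n} R_i^(nu_i - nu_{i-1}) for segment n
    pref = np.cumprod([1.0] + [breaks[i] ** (nus[i + 1] - nus[i])
                               for i in range(len(breaks))])
    return pref[seg] * R ** (-np.asarray(nus)[seg]) * np.exp(-R / R_c)

# Proton DR-low values from Table 2 (0.50 TV = 5.0e2 GV, R_c = 4 PV = 4.0e6 GV)
nus = [2.06, 2.43, 2.22, 2.52, 2.32]
breaks = [13.9, 5.0e2, 1.5e4, 1.0e5]
print(injection_spectrum([1e2, 1e5, 1e7], nus, breaks, R_c=4.0e6))
```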
| \(D_{0}\) (\(10^{28}\) cm\({}^{2}\) s\({}^{-1}\)) | \(\delta\) | \(z_{h}\) (kpc) | \(v_{A}\) (km s\({}^{-1}\)) | \(\eta\) |
| --- | --- | --- | --- | --- |
| 7.69 | 0.362 | 6.3 | 33.76 | \(-0.05\) |

Table 1: Propagation parameters.
Following Ref. [42], the spectral structures are assumed to be mainly due to the source injection as in Equation 3, without changing the propagation parameters. The fits were performed for protons, helium, and electrons plus positrons, the three CR species that dominate the contribution to the Galactic gamma-ray and neutrino emission. We build our neutrino and gamma-ray sky maps on these best-fitting parameters, listed in Tables 2 and 3. The comparisons between observations and models are shown in Figures 1 and 2.
As seen in Table 2, for both the DR-high and DR-low models most of the best-fitting parameters \(\nu_{i}\) and \(R_{i}\) are identical, except \(R_{c}\) and \(\nu_{4}\). Here \(R_{c}\) is the characteristic cutoff rigidity of the exponential-cutoff spectrum, describing the knee energy of the particles. The \(R_{c}\) of DR-high is higher than that of DR-low because of the different knee energies from IceTop and KASCADE. Conversely, the \(\nu_{4}\) of DR-high is smaller than that of DR-low, owing to the harder spectrum measured by IceTop.
### Gamma-Ray expectation
With the propagation and injection of CRs fixed, we analyze the gamma-ray sky map. We apply the GALPROP code to calculate the diffuse emission from several processes, including neutral pion decay, bremsstrahlung, and inverse Compton scattering (ICS). The AAfrag package [45] is adopted to estimate the secondary gamma-ray and neutrino production from inelastic hadronic interactions.
We show the diffuse gamma-ray spectra measured by the LHAASO and Fermi-LAT experiments, along with our model predictions for both the inner region (Figure 4a and 4c) and outer region (Figure 4b and 4d). To ensure a self-consistent comparison, we apply the same masks as in the LHAASO analysis [12] for all calculated results and data.
Compared with the gamma-ray data from Fermi-LAT and LHAASO, the flux predicted with the DR-high-only model is consistent with the data both at energies below a few GeV and above 60 TeV. However, between a few GeV and 60 TeV, the DR-high-only model cannot explain the LHAASO data, as can be seen in Figures 4c and 4d.
This excess below 60 TeV was initially identified through the analysis of GeV Fermi-LAT observations [1]. To account for this excess, some studies have proposed a spatially dependent diffusion model [46]. However, this modification of the propagation model is insufficient to explain the data obtained by LHAASO.
In this work, we attribute this TeV excess to unresolved sources along the Galactic plane, which are expected to be numerous and faint within the fields of view of LHAASO and Fermi-LAT. Various physical interpretations have been discussed in the literature [47; 48; 49]; among them, pulsar TeV halos and pulsar wind nebulae (PWNe) have emerged as potential candidates [48; 49]. Therefore, to fit the bump at \(\sim\mathcal{O}(1)\) TeV in the LHAASO spectrum, we employ an ECPL component (named EXTRA1) with an index of 2.4 and a cutoff of 20 TeV to describe these unresolved sources, which are assumed to follow the spatial distribution of pulsars.
The cutoff energy of the introduced EXTRA1 in this work is lower than that of the extra component in Reference [42] (30 TeV), as we have introduced the EXTRA2 to account for the high-energy data.
However, this component is insufficient for the DR-low case, where an additional component is required at PeV energies. Therefore, another ECPL component (EXTRA2), with an index of 2.3 and a cutoff at 2 PeV, is introduced for the DR-low case. The EXTRA1 and EXTRA2 components have similar spectral indices, close to the average spectral index of sources in the H.E.S.S. Galactic Plane Survey, but different cutoff energies. The gamma-ray cutoff energies in different scenarios indicate different maximum energies of the CR particles. For instance, for a leptonic (hadronic) origin, the 20 TeV gamma-ray cutoff energy corresponds to a 700 TeV (100 TeV) electron/positron (proton) cutoff energy. This suggests that EXTRA1 and EXTRA2 likely represent at least two distinct types of unresolved sources in the Galaxy.
Recent studies indicate that this excess, and hence our EXTRA1 component, is preferentially of leptonic origin, being strongly constrained by the hardening of the local cosmic-ray proton spectrum observed by AMS; however, no source class has been identified.
For the EXTRA2 sources contributing at higher energies, there is no constraint from current cosmic-ray observations. If they are of leptonic origin, such as PWNe, the Klein-Nishina regime dominates, so a very high acceleration rate, exceeding the electron radiative losses, is required; this is quite stringent. If the EXTRA2 sources are TeV halos, some studies argue that a slower diffusion of the electrons in the interstellar medium is needed [50], which is still not understood.
|  | Proton DR-high (IceTop) | Proton DR-low (KASCADE) | Helium DR-high (IceTop) | Helium DR-low (KASCADE) |
| --- | --- | --- | --- | --- |
| \(\nu_{0}\) | 2.06 | 2.06 | 1.46 | 1.46 |
| \(\nu_{1}\) | 2.43 | 2.43 | 2.36 | 2.36 |
| \(\nu_{2}\) | 2.22 | 2.22 | 2.12 | 2.12 |
| \(\nu_{3}\) | 2.52 | 2.52 | 2.42 | 2.42 |
| \(\nu_{4}\) | 2.18 | 2.32 | 2.08 | 2.28 |
| \(R_{1}\)/GV | 13.9 | 13.9 | 1.99 | 1.99 |
| \(R_{2}\)/TV | 0.50 | 0.50 | 0.65 | 0.65 |
| \(R_{3}\)/TV | 15.0 | 15.0 | 15.0 | 15.0 |
| \(R_{4}\)/TV | 100.0 | 100.0 | 100.0 | 100.0 |
| \(R_{c}\)/PV | 12.0 | 4.0 | 6.0 | 4.0 |
| \(\Phi\)/GV | 0.700 | 0.700 | 0.700 | 0.700 |

Table 2: Source injection and solar modulation parameters as in Equation 3 for protons and helium nuclei.
Therefore, the hadronic model cannot be excluded. To confirm and further explore the source mechanisms of both EXTRA1 and EXTRA2, neutrino signals, as the smoking gun of hadronic processes, would provide direct evidence.
## IV Results
### Galactic diffuse gamma-ray emission
Based on the constructed model, we generate a diffuse gamma-ray emission map that can be used as a template for future studies. This map consists of four components: ICS, bremsstrahlung, neutral pion decay, and extra-source contributions. Except for bremsstrahlung and neutral pion decay, the spatial distributions of these components all differ from one another. In Figure 5, we show the gamma-ray energy spectrum for the region of \(25^{\circ}<\)l\(<100^{\circ}\), \(|b|<5^{\circ}\) without masking as an example. In general, this spectrum is higher than that of the region \(15^{\circ}<\)l\(<125^{\circ}\), which might be due to the masking in the LHAASO analysis. For any other region of interest, the predicted gamma-ray emission can be selected in the same manner to serve as a background template for point-source analysis.
### Galactic diffuse neutrino
We show the neutrino sky map from 100 TeV to 10 PeV resulting from Section 3 in Figure 6. As one can see in Figures 7a and 7b, our predictions of the Galactic diffuse neutrino emission, for both the all-sky and Galactic-plane regions with the DR-low model, are in agreement with the IceCube best-fitting flux normalizations [17]. However, for the \(\pi^{0}\) template of IceCube, an extra source contribution with a hadronic origin is needed. This appears to contradict the assumption that these sources contribute only to the gamma-ray emission and not to the cosmic rays.
For comparison, IceCube's total neutrino is also shown here. Our calculated Galactic diffuse neutrino flux shows that the contribution of Galactic neutrinos to the total neutrino observation is around 9% at 20 TeV, as seen in Figure 7a.
In Figure 7b, we present a comparison of the single-flavor neutrino surface brightness between the Galactic contribution in the disk region (\(|b|<5^{\circ}\), \(25^{\circ}<l<100^{\circ}\)) and the total contribution averaged over the all-sky region. This shows the distinctiveness of the neutrino Galactic disk compared with the isotropic neutrino background. The neutrino flux of the Galactic disk is prominent in the energy range from 10 TeV to 100 TeV and decreases significantly at higher energies, as constrained by the gamma-ray and cosmic-ray measurements. Our results are in agreement with other groups' studies [17; 51]. The Milky Way is a source of high-energy neutrinos consistent with the gamma-ray observations, as seen in Figures 7a and 5.
In the case of the DR-high-only model, as seen in Figure 8, the calculated neutrino flux is consistent with the two best-fitting results of the KRA\({}_{\gamma}\) models. At a few PeV, the Glashow resonance appears in the spectrum [52]. However, to explain the results for the \(\pi^{0}\) model, EXTRA1 would be necessary.
## V Discussion
In this work, based on the most recent PeV Galactic diffuse gamma-ray observations from LHAASO, and with two sets of CR data from IceTop and KASCADE, we construct the DR-high and DR-low models separately. For both models, we find it hard to explain the LHAASO Galactic plane measurement with conventional CR propagation. After adding extra source contributions, the diffuse gamma-ray emission can be well explained both by DR-high with EXTRA1 (Model 1) and by DR-low with both EXTRA1 and EXTRA2 (Model 2).
For Model 1, one extra source spectrum, EXTRA1, is introduced, with a spectral index of 2.4 and a 20 TeV cutoff. For Model 2, two extra source contributions are introduced: one with an index of 2.4 and a 20 TeV cutoff energy and another with an index of 2.3 and a 2 PeV cutoff energy. This means there could be two populations of sources in our Galaxy whose gamma-ray emission is fainter than the sensitivity of current instruments, which is why they have not been identified. They follow similar CR acceleration mechanisms, with close spectral indices but different maximum CR energies.
Based on the obtained models, we simulated the Galactic diffuse neutrino flux, obtaining the sky map shown in Figure 6. For Model 2, for example, we estimate that the Galactic contribution to the astrophysical flux is around 9% at 20 TeV. It is uncertain whether these Galactic neutrinos come from CR propagation or from point sources because of insufficient statistical power. Therefore, we believe that future Imaging Air Cherenkov Telescopes [53] and upgraded neutrino observatories will resolve the point sources, provide a precise diffuse map, and reveal the origin and propagation of cosmic rays.
| \(\nu_{0}^{-}\) | \(\nu_{1}^{-}\) | \(\nu_{2}^{-}\) | \(\nu_{3}^{-}\) | \(R_{1}^{-}\)/GV | \(R_{2}^{-}\)/GV | \(R_{3}^{-}\)/GV | \(R_{c}^{-}\)/TV | \(\Phi^{-}\)/GV |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2.33 | 0.01 | 2.88 | 2.45 | 0.950 | 4.19 | 55.7 | 6.27 | 1.1 |

| \(c_{e^{+}}\) | \(\nu_{1}^{+}\) | \(\nu_{2}^{+}\) | \(R_{1}^{+}\)/GV | \(R_{c}^{+}\)/TV | \(\Phi^{+}\)/GV |
| --- | --- | --- | --- | --- | --- |
| 1.00 | 3.04 | 2.08 | 31.2 | 3.42 | 1.1 |

Table 3: Source injection and solar modulation parameters of electrons plus positrons.
The best-fitting Galactic neutrino flux from IceCube is model dependent. The \(\pi^{0}\) model is constrained by the Fermi MeV-to-GeV gamma-ray emission and is extrapolated to TeV energies, assuming the same spatial emission profile, while the KRA\({}_{\gamma}\) models take into account the spatial distribution of the spectra, with cutoff energies of 5 and 50 PeV, respectively. Therefore, the \(\pi^{0}\) model gives an even event distribution along the Galactic plane, whereas the KRA\({}_{\gamma}\) models give a higher neutrino flux in the Galactic center region. Consequently, for the region of interest of \(25^{\circ}<\)l\(<100^{\circ}\), \(|b|<5^{\circ}\), the \(\pi^{0}\) model gives a higher flux than the KRA\({}_{\gamma}\) models. On the other hand, the cosmic-ray diffuse modeling with GALPROP for the DR-low and DR-high models in this work is not consistent with the KRA models from the DRAGON analysis. The discrepancy between all these models is due to the low statistics and the uncertainty of the current templates, so further accurate measurements and studies are essential. We summarize the differences in Table 4.
With only the gamma-ray and cosmic-ray observations, the EXTRA1 sources are preferentially of leptonic origin, as has been discussed by several groups. However, in the case of the IceCube best-fitting flux for the \(\pi^{0}\) model, which is the only one consistent with the recent observations of 100 TeV gamma rays by Tibet AS\(\gamma\) [10], a population of EXTRA1 sources with a hadronic scenario would be necessary regardless of the DR-high or DR-low model, as seen in Figures 7 and 8. It would require this kind of source to inject fewer high-energy protons. The identification of neutrinos can thus reveal the origin of CRs, drastically modify the CR propagation and distribution models, and probe the history of our Galaxy. If the EXTRA1 sources are of leptonic origin, no neutrinos are produced, and a tension exists between the predicted diffuse Galactic neutrino flux and the IceCube results for the \(\pi^{0}\) model.

Figure 4: The diffuse gamma-ray emission calculated from the DR model. The physical radiation of ICS (green dot-dashed line), bremsstrahlung (pink dotted line), and pion decay (blue dashed line) is shown. Two extra source components, EXTRA1 (red dotted line) and EXTRA2 (red dot-dashed line), with ECPL spectra are presented. Panels (a) and (b) are the spectra obtained from the DR-low model, and panels (c) and (d) from the DR-high model. Panels (a) and (c) show the results for the inner Galaxy region of \(15^{\circ}<\)l\(<125^{\circ}\), \(|b|<5^{\circ}\), while panels (b) and (d) display the results for the outer Galaxy region of \(125^{\circ}<\)l\(<235^{\circ}\), \(|b|<5^{\circ}\).
In the case of the IceCube best-fitting fluxes for the KRA\({}_{\gamma}\) models, which provide the lower limit for the neutrino emission from the Galactic plane, no other extra hadronic-scenario sources are needed. In other words, the EXTRA1 sources would be of leptonic origin, which requires no extra proton injection and releases the tension between the data and the models.
Regardless of which model template is adopted in the IceCube results, both leptonic and hadronic origins of the EXTRA2 sources are allowed by the data. High-energy neutrino emission is a unique diagnostic of hadronic content, so with future PeV neutrino detection at improved sensitivity the EXTRA2 sources could be identified. If EXTRA2 in Model 2 is of hadronic origin, the Galactic neutrinos will contribute around 1% of the total IceCube neutrinos at PeV energies; otherwise, the contribution is \(\sim\) 0.4%.
Figure 5: The diffuse gamma-ray emission calculated from the DR-low model. The physical radiation of ICS (green dot-dashed line), bremsstrahlung (pink dotted line), and pion decay (blue dashed line) is shown. Two extra source components, EXTRA1 (red dotted line) and EXTRA2 (red dot-dashed line), with ECPL spectra are presented. This figure shows the result for the inner Galaxy region of \(25^{\circ}<\)l\(<100^{\circ}\), \(|b|<5^{\circ}\).
Figure 6: Calculated galactic diffuse neutrino map with energies from 100 TeV to 10 PeV. The morphology follows the gas distribution in our Galaxy.
Figure 7: The predicted neutrino flux per flavor from the DR-low model compared with the IceCube total data (blue shaded region), their \(\pi^{0}\) model (red shaded region), \(KRA_{\gamma}^{5}\) model (region with brown solid edge) and \(KRA_{\gamma}^{50}\) model (region with brown dashed edge). Other components including EXTRA1 (red dotted line), EXTRA2 (red dot-dashed line), neutrino flux with DR-low model (green dashed line), and total \(\nu\) flux (black solid line) are shown. Panel (a) is for the all-sky region, and panel (b) is in the region of \(25^{\circ}<\)l\(<100^{\circ}\), \(|b|<5^{\circ}\).
## VI Summary
In summary, thanks to the recent observations from LHAASO [12] and IceCube [17], the Galactic diffuse sky has become richer, especially in the high-energy regime. The LHAASO measurements show a bump in the gamma-ray spectrum, where contributions from extra unresolved sources are needed. The IceCube Collaboration confirms the high-energy neutrinos from the Galactic plane. Our calculated flux, based on models obtained from the gamma-ray observations, is consistent with the neutrino data. However, for the best-fitting results of the \(\pi^{0}\) model from the IceCube data, EXTRA1 sources with a hadronic scenario are required to fill the gap between the calculated flux and the data, even though such a scenario would be disfavored by CR measurements.
The joint analysis of cosmic rays, gamma rays, and neutrinos has proven powerful for understanding the high-energy sky. For example, the diffuse gamma-ray detection by LHAASO can probe the CR density in our Galaxy and resolve the disagreement between IceTop and KASCADE. In addition, neutrino detection can reveal hidden sources that are opaque to gamma-ray emission.
The current results from all three messengers are in agreement with each other, and growing evidence supports the existence of PeVatrons in our Galaxy. The next step forward should be identifying the mysterious astrophysical origins of high-energy cosmic rays with upgraded neutrino and gamma-ray detectors.
## VII Acknowledgements
We thank the referee for the useful and helpful comments and suggestions. This work is supported by the National Natural Science Foundation of China (NSFC) grants 12005313, 12205388, and 12261141691.
|
2309.00852 | Are there higher electron densities in narrow emission line regions of
Type-1 AGN than Type-2 AGN? | In the manuscript, we check properties of electron densities $n_e$ traced by
flux ratio $R_{sii}$ of [S~{\sc ii}]$\lambda6716$\AA~ to [S~{\sc
ii}]$\lambda6731$\AA~ in narrow emission line regions (NLRs) between Type-1 AGN
and Type-2 AGN in SDSS DR12. Under the framework of Unified Model considering
kpc-scale structures, similar $n_e$ in NLRs should be expected between Type-1
AGN and Type-2 AGN. Based on reliable measurements of [S~{\sc ii}] doublet with
measured parameters at least five times larger than corresponding
uncertainties, there are 6039 Type-1 AGN and 8725 Type-2 AGN (excluding the
Type-2 LINERs and the composite galaxies) collected from SDSS DR12. Then, lower
$R_{sii}$ (higher $n_e$) in NLRs can be well confirmed in Type-1 AGN than in
Type-2 AGN, with confidence level higher than 5$\sigma$, even after considering
necessary effects including effects of electron temperatures traced by [O~{\sc
iii}]$\lambda4364,4959,5007$\AA~ on estimating $n_e$ in NLRs. Two probable
methods are proposed to explain the higher $n_e$ in NLRs in Type-1 AGN. First,
the higher $n_e$ in NLRs of Type-1 AGN could indicate longer time durations of
AGN activities in Type-1 AGN than in Type-2 AGN, if AGN activities triggering
galactic-scale outflows leading to more electrons injecting into NLRs were
accepted to explain the higher $n_e$ in NLRs of Type-2 AGN than HII galaxies.
Second, the lower $n_e$ in NLRs of Type-2 AGN could be explained by stronger
star-forming contributions in Type-2 AGN, considering lower $n_e$ in HII
regions. The results provide interesting challenges to the commonly and widely
accepted Unified Model of AGN. | Zhang XueGuang | 2023-09-02T07:43:23Z | http://arxiv.org/abs/2309.00852v1 | # Are there higher electron densities in narrow emission line regions of Type-1 AGN than Type-2 AGN?
###### Abstract
In the manuscript, we check properties of electron densities \(n_{e}\) traced by flux ratio \(R_{sii}\) of [S ii]\(\lambda\)6716A to [S ii]\(\lambda\)6731A in narrow emission line regions (NLRs) between Type-1 AGN and Type-2 AGN in SDSS DR12. Under the framework of Unified Model considering kpc-scale structures, similar \(n_{e}\) in NLRs should be expected between Type-1 AGN and Type-2 AGN. Based on reliable measurements of [S ii] doublet with measured parameters at least five times larger than corresponding uncertainties, there are 6039 Type-1 AGN and 8725 Type-2 AGN (excluding the Type-2 LINERs and the composite galaxies) collected from SDSS DR12. Then, lower \(R_{sii}\) (higher \(n_{e}\)) in NLRs can be well confirmed in Type-1 AGN than in Type-2 AGN, with confidence level higher than 5\(\sigma\), even after considering necessary effects including effects of electron temperatures traced by [O iii]\(\lambda\)4364, 4959, 5007A on estimating \(n_{e}\) in NLRs. Two probable methods are proposed to explain the higher \(n_{e}\) in NLRs in Type-1 AGN. First, the higher \(n_{e}\) in NLRs of Type-1 AGN could indicate longer time durations of AGN activities in Type-1 AGN than in Type-2 AGN, if AGN activities triggering galactic-scale outflows leading to more electrons injecting into NLRs were accepted to explain the higher \(n_{e}\) in NLRs of Type-2 AGN than HII galaxies. Second, the lower \(n_{e}\) in NLRs of Type-2 AGN could be explained by stronger star-forming contributions in Type-2 AGN, considering lower \(n_{e}\) in HII regions. The results provide interesting challenges to the commonly and widely accepted Unified Model of AGN.
galaxies:active - galaxies:nuclei - galaxies:emission lines - galaxies:Seyfert†
Footnote †: journal: ApJ
XueGuang Zhang (ORCID: 0000-0002-8861-8885)
## 1 Introduction
The different observed phenomena between broad line AGN (Active Galactic Nuclei) (Type-1 AGN) and narrow line AGN (Type-2 AGN) can be well explained by the well-known Unified Model (UM) of AGN, which invokes different orientation angles of the central accretion disk (Antonucci, 1993) combined with different properties of the central activities and the inner dust torus, etc., as discussed in Marinucci et al. (2012); Oh et al. (2015); Mateos et al. (2016); Balokovic et al. (2018); Brown et al. (2019); Kuraszkiewicz et al. (2021); Zhang (2022a). More recent reviews of the UM can be found in Netzer (2015). The elegant UM has been strongly supported by the clearly detected polarized broad emission lines and/or broad infrared emission lines in some Type-2 AGN (Miller & Goodrich, 1990; Heisler, Lumsden & Bailey, 1997; Tran, 2003; Nagao et al., 2004; Onori et al., 2017; Savic et al., 2018; Moran et al., 2020), and by the strong resonance of silicate dust at 10\(\mu\)m seen in absorption towards many Type-2 AGN but in emission in Type-1 AGN, as reported in Siebenmorgen et al. (2005). Under the current framework of the UM, Type-1 AGN are intrinsically the same as Type-2 AGN, whose central regions, including the central accretion power source around the black hole (BH) and the broad line regions (BLRs), are hidden by the central dust torus.
However, even after considering the different properties of the central dust torus and of the central activities related to the central black hole (BH) accreting power source, there are some other challenges to the continually revised UM. Franceschini et al. (2002) have discussed probably different evolutionary patterns in Type-1 and Type-2 AGN. Hiner et al. (2009) have shown that host galaxies of Type-2 AGN have higher average star formation rates than those of Type-1 AGN. Villarroel & Korn (2014) have shown different environment characteristics, with different neighbours around Type-1 AGN and Type-2 AGN. More recently, Zou et al. (2019) have shown lower stellar masses of host galaxies in Type-1 AGN than in Type-2 AGN, through X-ray selected AGN. Bornancini et al. (2020) have shown significantly different UV/optical and mid-infrared colour distributions between the different AGN types. More recently, we (Zhang, 2022) have shown statistically larger stellar velocity dispersions in Type-1 AGN than in Type-2 AGN. As discussed in detail in Netzer (2015), the UM has been successfully applied to explain different observed features between Type-1 and Type-2 AGN in many different ways; however, considering the reported challenges to the UM, the AGN family with its many other features is far from homogeneous.
Under the well-accepted UM, in which Type-1 AGN are intrinsically the same as Type-2 AGN, there should be not only similar properties of the central sub-pc regions, including the central BLRs, but also similar properties of the NLRs (narrow emission line regions) on kpc scales. Therefore, considering NLRs on kpc scales under the framework of the UM, similar properties of the electron densities in the NLRs should be expected between Type-1 AGN and Type-2 AGN, which is the starting point of the manuscript. Moreover, unlike the properties of the central power source and the BLRs, which can be affected by the physical properties of the pc-scale dust torus, there are few structures on kpc scales that can affect the physical properties of the kpc-scale NLRs. In other words, the physical properties of the NLRs are clean, leading to more robust final results without additional contaminations. Furthermore, the flux ratio of the [S ii] doublet is mainly considered in the manuscript, so moving dust clouds and orientation effects have no effects on our final results for the flux ratio of the [S ii] doublet.
The properties of the electron densities \(n_{e}\) in NLRs are mainly considered and compared between the Type-1 AGN and the Type-2 AGN, which will provide further clues either supporting the UM or challenging it. Electron densities \(n_{e}\) in emission line regions can be conveniently determined from narrow forbidden emission line ratios. In the 1950s, Seaton (1954) showed that the electron densities in planetary nebulae can be well estimated from the relative intensities of the forbidden lines, a method then followed and improved by Osterbrock (1955); Osterbrock and Flather (1959); Osterbrock (1955a, 1960); Saraph and Seaton (1970); Aller and Epps (1976). In the 1980s, Canto (1980) showed that the forbidden [S ii]\(\lambda\)6716, 6731Å line ratio can be effectively applied to determine electron densities based on solutions of collision strengths and transition probabilities, followed by Stanghellini and Kale (1989). The classic book 'Astrophysics of Gaseous Nebulae and Active Galactic Nuclei' (Osterbrock, 1989; Osterbrock and Ferland, 2006) gives a detailed review of the theoretical method for determining electron densities in emission line regions from the line ratios of forbidden doublets. More recently, Zhang et al. (2013); Dors et al. (2014); Proxauf et al. (2014); Sanders et al. (2016); Kawasaki et al. (2017); Kakkad et al. (2018); Kewley et al. (2019); Flury and Moran (2020); Kazuma et al. (2021); Riffel et al. (2021); Dors et al. (2022) have presented methods and corresponding discussions for determining electron densities in emission regions from forbidden line ratios. Among the line flux ratios of forbidden doublets, the ratio of the [S ii]\(\lambda\)6716, 6731Å doublet is preferred in the manuscript to trace the properties of \(n_{e}\) in NLRs, because the collected low redshift emission line objects are from SDSS DR12 (Sloan Digital Sky Survey, Data Release 12, Alam et al. (2015)), with apparent [S ii]\(\lambda\)6716, 6731Å doublets in their SDSS spectra.
Based on the parameter \(R_{sii}\), the flux ratio of [S ii]\(\lambda\)6716Å to [S ii]\(\lambda\)6731Å, the properties of \(n_{e}\) in NLRs can be conveniently compared between the Type-1 AGN and the Type-2 AGN collected from Sloan Digital Sky Survey (SDSS) data release 12 (DR12). Section 2 presents the data samples of Type-1 AGN and Type-2 AGN and the methods to measure the [S ii] doublets. Section 3 shows the main results and necessary discussions on the properties of electron densities in the NLRs of the different kinds of AGN. Section 4 gives a further implication. Section 5 gives the final summary and conclusions. Throughout the manuscript, the cosmological parameters \(H_{0}~{}=~{}70\rm km\cdot s^{-1}Mpc^{-1}\), \(\Omega_{\Lambda}=0.7\) and \(\Omega_{m}~{}=~{}0.3\) have been adopted.
## 2 Data Samples
### Parent samples of Type-1 AGN and Type-2 AGN
The work is based on large samples of low redshift Type-1 AGN and Type-2 AGN which have apparent [S ii]\(\lambda\)6716, 6731A doublets. Therefore, low redshift AGN with \(z~{}<~{}0.3\) in SDSS DR12 are mainly considered.
A redshift criterion of \(z~{}<~{}0.3\) is applied to collect 12342 low redshift Type-1 AGN from the SDSS pipeline classified QSOs (Richards et al., 2002; Ross et al., 2012; Peters et al., 2015; Lyke et al., 2020) in DR12, through the SDSS provided SQL (Structured Query Language) Search tool ([http://skyserver.sdss.org/dr12/en/tools/search/sql.aspx](http://skyserver.sdss.org/dr12/en/tools/search/sql.aspx)) by the following query
```
SELECT plate, fiberid, mjd
FROM SpecObjall
WHERE class='QSO' and z<0.30 and zwarning=0
```
where 'SpecObjall' is SDSS pipeline provided database including basic properties of emission line galaxies in SDSS DR12. More detailed information of the database 'SpecObjall' can be found in [http://skyserver.sdss.org/dr12/en/help/docs/tabledesc.aspx](http://skyserver.sdss.org/dr12/en/help/docs/tabledesc.aspx).
The collected information of plate, fiberid and mjd can be conveniently applied to download SDSS spectra of the 12342 Type-1 AGN.
The same criterion \(z~{}<~{}0.3\), combined with the criterion subclass='AGN', is applied to collect all the 16269 low redshift
Type-2 AGN from SDSS pipeline classified main galaxies in DR12, by the following query
**SELECT** plate, fiberid, mjd
**FROM** SpecObjall
**WHERE** class='GALAXY' and zwarning=0
and subclass = 'AGN' and z<0.30
More detailed information of SDSS spectroscopic catalogs (subclass, class, etc.) can be found in [https://www.sdss.org/dr12/spectro/catalogs/](https://www.sdss.org/dr12/spectro/catalogs/).
### Parent samples of HII galaxies
Besides the Type-1 AGN and Type-2 AGN collected from SDSS DR12, HII galaxies are also briefly discussed in the manuscript; comparing the HII galaxies with the Type-2 AGN will provide clues on the contributions of AGN activity to the properties of \(n_{e}\) in NLRs.
The criterion \(z~{}<~{}0.3\), combined with the criterion subclass='starforming', is applied to collect all the 245590 low redshift HII galaxies from the SDSS pipeline classified main galaxies in DR12, by the following query
**SELECT** plate, fiberid, mjd
**FROM** SpecObjall
**WHERE** class='GALAXY' and z<0.30 and zwarning=0
and subclass ='starforming'
### Method to measure the line parameters of [S ii] doublet
In order to reliably measure the line intensities of [S ii]\(\lambda\)6716, 6731Å, the host galaxy contributions included in the SDSS spectra should first be subtracted.
The common SSP (Simple Stellar Population) method (Bruzual & Charlot, 2003; Kauffmann et al., 2003; Cid Fernandes et al., 2005; Cappellari, 2017) has been applied to determine the host galaxy contributions, similar to what we have done in Zhang (2014); Zhang et al. (2016); Rakshit et al. (2017); Zhang et al. (2019, 2021); Zhang (2021a,b, 2022b, 2023). We have exploited the 39 simple stellar population templates in Bruzual & Charlot (2003), which can well describe the characteristics of almost all SDSS galaxies, as discussed in Bruzual & Charlot (2003). Meanwhile, an additional power law component is applied to describe the intrinsic AGN continuum emission, especially in Type-1 AGN. When the SSP method is running, the narrow emission lines in a spectrum are masked out over a full width at zero intensity of about 450 km s\({}^{-1}\), and the wavelength ranges from 4450 to 5600Å and from 6250 to 6750Å are also masked out to exclude the broad H\(\beta\), the broad H\(\alpha\) and the optical Fe ii emission lines. Then, through the Levenberg-Marquardt least-squares minimization technique (the well-known MPFIT package), the best descriptions can be determined for the SDSS spectra with the emission lines masked out. Moreover, when the SSP method is running, only one restriction is accepted: the strengthening factor of each stellar population template is not smaller than zero. The left panels of Fig. 1 show two examples of the SSP method determined host galaxy contributions in one Type-1 AGN and one Type-2 AGN.
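As a rough illustration of this decomposition step, the following Python sketch fits a masked spectrum with a non-negative combination of stellar templates plus a power law. The arrays `wave`, `flux` and `templates` and the fixed power-law slope are hypothetical placeholders; the actual procedure uses the 39 Bruzual & Charlot (2003) templates and the Levenberg-Marquardt MPFIT fit described above.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical inputs: a rest-frame wavelength grid, an observed spectrum,
# and a (39, n_pix) array of SSP template spectra on the same grid.
wave = np.linspace(3800.0, 7000.0, 3000)
flux = np.ones_like(wave)                       # placeholder observed spectrum
templates = np.abs(np.random.default_rng(0).normal(1.0, 0.1, (39, wave.size)))

# Mask the broad Hbeta/Fe II (4450-5600 A) and broad Halpha (6250-6750 A)
# regions, plus narrow lines over a full width at zero intensity of ~450 km/s.
mask = ~(((wave > 4450.0) & (wave < 5600.0)) |
         ((wave > 6250.0) & (wave < 6750.0)))
for line in (3727.0, 3869.0, 3934.0, 3968.0, 4070.0, 4102.0, 4340.0, 6300.0):
    half = 0.5 * 450.0 / 2.998e5 * line         # half of the FWZI in Angstrom
    mask &= np.abs(wave - line) > half

# Non-negative least squares: every template weight is forced to be >= 0,
# matching the single restriction on the strengthening factors.
powerlaw = (wave / 5100.0) ** (-1.5)            # assumed fixed slope for the sketch
A = np.vstack([templates, powerlaw[None, :]]).T
weights, _ = nnls(A[mask], flux[mask])
host = templates.T @ weights[:-1]               # host-galaxy model to subtract
```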
After subtraction of the host galaxy contributions (if present), the emission lines around H\(\alpha\), within the rest wavelength range from 6250 to 6850Å, can be well described by multiple Gaussian functions. Simple descriptions of the measurements of the emission lines are as follows, similar to what we have recently done in Zhang (2021a,b,c). Three broad Gaussian functions plus one narrow Gaussian function are applied to describe the broad and narrow H\(\alpha\), six narrow Gaussian components are applied to describe the [O i], [N ii] and [S ii] doublets, and a power law component is applied to describe the continuum emission underneath the broad H\(\alpha\). Then, through the Levenberg-Marquardt least-squares minimization technique, the emission lines can be well described by the multiple Gaussian functions, and the uncertainties (formal 1\(\sigma\) errors) of the model parameters can be determined from the covariance matrix. When the model functions above are applied, the following restrictions are accepted. First, the components of each forbidden narrow emission line doublet ([S ii], [N ii], [O i]) have the same redshift and the same line width in velocity space. Second, each emission component has an intensity not smaller than zero. Third, each narrow Gaussian component has a line width (second moment) smaller than 500 km/s. Fourth, each broad Gaussian component in the broad H\(\alpha\) has a line width larger than the line width of the narrow H\(\alpha\). Fifth, the flux ratio of the [N ii] doublet is fixed to the theoretical value 3. When the fitting procedure is running, the starting values of the parameters are as follows. For each narrow emission line, the theoretical central wavelength, 2Å and 0 are accepted as the starting values of the central wavelength, second moment and line intensity. For the three broad Gaussian components in the broad H\(\alpha\), the starting values of [central wavelength, second moment, intensity] are [6540, 20, 0], [6564, 25, 0] and [6580, 20, 0], respectively. The right panels of Fig. 1 show two examples of the best descriptions of the emission lines around H\(\alpha\), after subtraction of the host galaxy contributions.
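To make the tied-Gaussian description concrete, here is a minimal Python sketch that fits only the [S ii] doublet with a shared redshift and a shared velocity width, using scipy in place of the IDL MPFIT package. The simulated spectrum, noise level and starting values are illustrative, and the full fit additionally includes the H\(\alpha\), [N ii], [O i] and continuum components simultaneously.

```python
import numpy as np
from scipy.optimize import curve_fit

C = 2.998e5  # speed of light, km/s

def gauss(wave, center, sigma_kms, flux):
    sig = sigma_kms / C * center
    return flux * np.exp(-0.5 * ((wave - center) / sig) ** 2) / (np.sqrt(2.0 * np.pi) * sig)

def sii_doublet(wave, voff, sigma_kms, f6716, f6731):
    # Both [S II] components share one redshift and one width in velocity space.
    return (gauss(wave, 6716.4 * (1.0 + voff / C), sigma_kms, f6716) +
            gauss(wave, 6730.8 * (1.0 + voff / C), sigma_kms, f6731))

# Hypothetical continuum-subtracted spectrum around the [S II] doublet.
rng = np.random.default_rng(0)
wave = np.linspace(6680.0, 6780.0, 400)
flux = sii_doublet(wave, 60.0, 150.0, 1.2, 1.0) + rng.normal(0.0, 0.01, wave.size)

# Intensities >= 0 and second moments < 500 km/s, as in the restrictions above.
bounds = ([-1000.0, 10.0, 0.0, 0.0], [1000.0, 500.0, np.inf, np.inf])
popt, pcov = curve_fit(sii_doublet, wave, flux, p0=[0.0, 100.0, 1.0, 1.0], bounds=bounds)
perr = np.sqrt(np.diag(pcov))    # formal 1-sigma errors from the covariance matrix
R_sii = popt[2] / popt[3]        # flux ratio of [S II]6716 to [S II]6731
```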
Moreover, because the line intensities of [O iii]\(\lambda\)4959, 5007Å and of the narrow H\(\beta\) will be discussed in the following section, simple descriptions follow of the fitting procedure applied to describe the emission lines around H\(\beta\), within the rest wavelength range from 4400 to 5600Å, after subtraction of the host galaxy contributions. Similar to what we have recently done in Zhang (2021a,b,c), three broad Gaussian functions plus one narrow Gaussian function are applied to describe the broad and narrow H\(\beta\), two narrow and two broad Gaussian components are applied to describe the core and extended components of the [O iii]\(\lambda\)4959, 5007Å doublet (Shen et al., 2011; Greene & Ho, 2005), one Gaussian component is applied to describe the He ii line, broadened and scaled Fe ii templates discussed in Kovacevic et al. (2010) are applied to describe the optical Fe ii lines, and a power law component is applied to describe the continuum emission underneath the broad H\(\beta\). The following restrictions are accepted for the model parameters, analogous to the restrictions on the model parameters used to describe the emission lines around H\(\alpha\). First, the core (extended) components of the [O iii] doublet have the same redshift, the same line width and a flux ratio fixed to the theoretical value 3. Second, each emission component has an intensity not smaller than zero. Third, the core components of the [O iii] doublet and the narrow H\(\beta\) have line widths (second moments) smaller than 500 km/s. Fourth, each broad Gaussian component in the broad H\(\beta\) has a line width larger than the line width of the narrow H\(\beta\). Fifth, the extended components of the [O iii] doublet have line widths larger than the line widths of the core components. When the fitting procedure is running, the starting values of the parameters are as follows. For each narrow emission line, the theoretical central wavelength, 2Å and 0 are accepted as the starting values of the central wavelength, second moment and line intensity. For the three broad Gaussian components in the broad H\(\beta\), the starting values of [central wavelength, second moment, intensity] are [4840, 20, 0], [4861, 25, 0] and [4880, 20, 0], respectively. Fig. 2 shows two examples of the best-fitting results for the emission lines around H\(\beta\) in one Type-1 AGN and one Type-2 AGN, through the Levenberg-Marquardt least-squares minimization technique.
### Final main data samples
Figure 1: Left panels show the SSP method determined best descriptions (solid red line) to the SDSS spectra (solid dark green line) of Type-1 AGN 0997-52734-0303 (PLATE-MJD-FIBERID) and Type-2 AGN 0332-52367-0317. In each left panel, from left to right, the vertical purple lines point out the emission lines being masked out when the SSP method is running, including [O ii]\(\lambda\)3727Å, H\(\beta\), H\(\gamma\), [Ne iii]\(\lambda\)3869Å, Ca K, [Ne iii]\(\lambda\)3968Å, Ca H line, [S ii]\(\lambda\)44070Å, H\(\delta\), H\(\gamma\), [O iii]\(\lambda\)4364Å, He ii\(\lambda\)5877Å and [O i]\(\lambda\)6300, 6363Å doublet, and the area filled by purple lines around 5000Å shows the region masked out including the optical Fe ii lines, broad and narrow H\(\beta\) and [O iii] doublet, and the area filled by purple lines around 6550Å shows the region masked out including the broad and narrow H\(\alpha\), [N ii] and [S ii] doublets. In top left panel, solid blue line shows the determined host galaxy contributions, solid cyan line shows the determine AGN continuum emissions. Right panels show the best descriptions (solid red line) to the emission lines around H\(\alpha\) (solid dark green line), especially on the [S ii] doublet, after subtractions of host galaxy contributions. In each right panel, solid blue line shows the determined narrow H\(\alpha\), solid purple lines show the determine [N ii] doublet, solid pink lines show the determined [O i] doublet, solid cyan lines show the determined [S ii] doublet. In top right panel, solid green lines show the determined broad Gaussian components in the broad H\(\alpha\), dashed blue line shows the determined power law continuum emissions underneath the emission lines. In each panel, the \(\chi^{2}\) (the summed squared residuals for the best-fitting results divided by the degree of freedom) is marked in red characters.
Finally, starting from the 12342 Type-1 AGN in the parent sample collected from the SDSS pipeline classified quasars, the 16269 Type-2 AGN in the parent sample collected from the SDSS pipeline classified main galaxies, and the 245590 HII galaxies in the parent sample collected from the SDSS pipeline classified main galaxies, applying the following criteria,
* The measured line width and line flux of [S ii] doublet described by Gaussian functions are at least 5 times larger than their corresponding uncertainties, indicating reliable [S ii] doublet.
* For the Type-1 AGN, there are not only reliable [S ii] doublets but also reliable broad H\(\alpha\) emission lines, with at least one broad Gaussian component having
Figure 3: Properties of the collected 12999 Type-2 AGN shown in contours filled by bluish colors and the 8725 Type-2 AGN (excluding the Type-2 LINERs and the composite galaxies) shown as red pluses in the BPT diagrams of S2HA versus O3HB (left panel) and of N2HA versus O3HB (right panel). In left panel, solid red line and solid green line show the dividing lines as discussed in Kewley et al. (2006) between HII galaxies and AGN and between Seyfert 2 galaxies and Type-2 LINERs, leading Type-2 LINERs to lie into the region above the solid red line but below the solid green line. In right panel, solid and dashed green lines show the dividing lines between HII galaxies and composite galaxies and AGN, as discussed in Kauffmann et al. (2003).
Figure 2: Left panel shows the best fitting results (solid red line) to emission lines around H\(\beta\) (solid dark green line) including apparent optical Fe ii emission features in the Type-1 AGN 0856-52339-0050. Double-dot-dashed red line shows the determined power law continuum emissions, solid green line shows the determined broad H\(\beta\), solid purple lines show the determined optical Fe ii lines, dashed green line shows the determined broad He ii line, solid pink lines show the determined core [O iii] components, and thick blue solid lines show the determined blue-shifted extended [O iii] components. Right panel shows the best fitting results (solid red line) to the emission lines around H\(\beta\) (solid dark green line) in the Type-2 AGN 0332-52367-0317, after subtractions of host galaxy contributions. Solid green line shows the determined narrow H\(\beta\), solid lines in pink and in blue show the determined core and extended components of [O iii] doublet. And the calculated \(\chi^{2}\) values are marked in the top-left corners in the panels. In order to show clearer emission features in the left panel, the Y-axis is in logarithmic coordinate.
the measured line flux and line width at least 5 times larger than the corresponding uncertainties and a second moment larger than 600 km \(\cdot\) s\({}^{-1}\).
* For the Type-2 AGN, there are not only reliable [S ii] doublets but also no broad H\(\alpha\) emission lines, i.e., the three broad Gaussian components determined for the broad H\(\alpha\) have measured line fluxes and line widths smaller than two times the corresponding uncertainties.
* For the HII galaxies, there are not only reliable [S ii] doublets but also no broad H\(\alpha\) emission lines, i.e., the three broad Gaussian components determined for the broad H\(\alpha\) have measured line fluxes and line widths smaller than two times the corresponding uncertainties.
leads to main samples including 6039 Type-1 AGN with both apparent [S ii] doublets and apparent broad H\(\alpha\) emission lines, 12999 Type-2 AGN with apparent [S ii] doublets but no broad H\(\alpha\) emission lines, and 199700 HII galaxies with apparent [S ii] doublets but no broad H\(\alpha\) emission lines. Here, the word "reliable" means that the Gaussian described emission component has its measured line parameters (central wavelength, second moment and line intensity) at least 5 times larger than the corresponding uncertainties.
Furthermore, as described in subsection 2.1, both Seyfert 2 galaxies and Type-2 LINERs (Low Ionization Nuclear Emission Line Regions without apparent broad emission lines) are collected into the main sample of Type-2 AGN. However, unlike Seyfert 2 galaxies, which are totally powered by the central BH accretion process, different mechanisms have been applied to Type-2 LINERs, such as shock heating (Heckman, 1980; Dopita & Sutherland, 1996), photoionization by young stars (Terlevich & Melnick, 1985; Filippenko & Terlevich, 1992), and photoionization by post-asymptotic giant branch (post-AGB) stars (Eracleous et al., 2010; Cid Fernandes et al., 2011). A more recent review of LINERs can be found in Marquez et al. (2017), which has shown that 60% to 90% of LINERs could be well considered as genuine AGN. Considering the controversial conclusions on the physical nature of Type-2 LINERs (at least part of the Type-2 LINERs are without AGN nature), Type-2 LINERs are not considered in the manuscript, in order to avoid the effects of the different physical nature of part of the Type-2 LINERs on our final results. Unlike Type-2 LINERs, the Type-1 LINERs (LINERs with apparent broad emission lines) included in the parent sample of Type-1 AGN are well considered as AGN, due to their broad emission lines.
Based on the dividing lines between Seyfert 2 galaxies and Type-2 LINERs in the BPT diagram of O3HB (flux ratio of
Figure 4: Left panels show properties of mean spectra (in dark green) of the 1251 Type-1 AGN (top-left panel) and the 1198 Type-2 AGN (bottom left panel) with high quality spectra in the main samples. In bottom left panel, solid red line shows the SSP method determined host galaxy contributions. Right panels show the best fitting results to the emission lines around H\(\alpha\) in the mean spectrum of Type-1 AGN (top-right panel) and of Type-2 AGN after subtractions of the host galaxy contributions (bottom-right panel). In right panels, the symbols and line styles are the same as those in right panels of Fig. 1. In each right panel, top right corner lists the measured line parameters [central wavelength \(\lambda_{0}\), second moment \(\sigma\), relative flux \(rF\)] of the [S ii] doublet.
[O iii]\(\lambda\)5007Å to narrow H\(\beta\)) versus S2HA (flux ratio of total [S ii]\(\lambda\)6716, 6731Å to narrow H\(\alpha\)), as shown in Kewley et al. (2006)
\[\begin{split}\log(O3HB)\;&>\;\frac{0.72}{\log(S2HA)-0.32}+1.30\\ \log(O3HB)\;&>\;1.89\log(S2HA)+0.76\end{split} \tag{1}\]
there are 8793 Type-2 AGN after excluding the Type-2 LINERs and the classified HII galaxies in the BPT diagram of O3HB versus S2HA. Meanwhile, based on the dividing line between AGN and composite galaxies as discussed in Kauffmann et al. (2003) in the BPT diagram of O3HB versus N2HA (flux ratio of [N ii]\(\lambda\)6583Å to narrow H\(\alpha\))
\[\log(O3HB)~{}>~{}\frac{0.61}{\log(N2HA)-0.47}+1.19 \tag{2}\]
among the 8793 Type-2 AGN, there are 68 classified composite galaxies excluded from the collected Type-2 AGN, in order to avoid probable strong effects of star formation. Therefore, there are 8725 (8793-68) Type-2 AGN included in the final main sample of Type-2 AGN. Fig. 3 shows the properties of the collected Type-2 AGN in the BPT diagrams of S2HA versus O3HB (left panel) and of N2HA versus O3HB (right panel). The results in the left panel of Fig. 3 show the clear classification of the Type-2 LINERs. And the results in the right panel of Fig. 3 provide clear evidence that the collected Type-2 AGN, including neither Type-2 LINERs nor composite galaxies, are reliable AGN with central AGN activities.
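For reference, a minimal Python sketch of these two selection cuts is given below, assuming logarithmic flux ratios as inputs; the handling of points to the right of each curve's asymptote is a simplification made here for the sketch, not a statement from the cited papers.

```python
import numpy as np

def is_seyfert2(log_s2ha, log_o3hb):
    """Eq. (1) (Kewley et al. 2006): Seyfert 2 galaxies lie above both the
    AGN/HII curve and the Seyfert/LINER line in the O3HB vs S2HA diagram.
    The hyperbolic curve is evaluated only left of its asymptote at 0.32."""
    above_curve = (log_s2ha < 0.32) & (log_o3hb > 0.72 / (log_s2ha - 0.32) + 1.30)
    above_liner = log_o3hb > 1.89 * log_s2ha + 0.76
    return above_curve & above_liner

def above_kauffmann(log_n2ha, log_o3hb):
    """Eq. (2) (Kauffmann et al. 2003): being above this curve excludes the
    composite galaxies in the O3HB vs N2HA diagram (asymptote at 0.47)."""
    return (log_n2ha < 0.47) & (log_o3hb > 0.61 / (log_n2ha - 0.47) + 1.19)

log_s2ha, log_n2ha, log_o3hb = np.array([-0.4]), np.array([-0.1]), np.array([0.8])
keep = is_seyfert2(log_s2ha, log_o3hb) & above_kauffmann(log_n2ha, log_o3hb)
print(keep)   # [ True]: kept in the final Type-2 AGN sample
```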
### Spectroscopic properties of mean spectra of Type-1 AGN and Type-2 AGN
In this subsection, mean spectra of the Type-1 AGN and the Type-2 AGN are discussed, not only to provide further evidence that the emission line fitting procedure is appropriate, but also to provide further clues to answer the question of whether asymmetric line profiles should be considered for the [S ii] doublet.
The commonly accepted PCA (Principal Component Analysis) technique is applied to create the mean spectra of the Type-1 AGN and the Type-2 AGN. The PCA technique uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of uncorrelated variables called principal components. Commonly, mean subtraction (or mean centering) is necessary when performing PCA, to ensure that the first principal component describes the direction of maximum variance. However, if mean subtraction is not performed, the first eigencomponent determined by the PCA technique represents the mean spectrum of the input set of spectra. Here, we apply the convenient and public IDL PCA program pca_solve.pro written by D. Schlegel, which is included in the SDSS software package IDLSPEC2D ([http://spectro.princeton.edu/](http://spectro.princeton.edu/)).
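The following Python sketch, with numpy's SVD standing in for the IDL pca_solve.pro routine and a toy set of spectra as input, illustrates why the first eigencomponent of the uncentered data tracks the mean spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stack: 100 spectra of 500 pixels sharing one common spectral shape.
common = 1.0 + np.sin(np.linspace(0.0, 6.0, 500))
spectra = (common[None, :] * rng.uniform(0.5, 2.0, size=(100, 1))
           + rng.normal(0.0, 0.02, size=(100, 500)))

# SVD of the *uncentered* data matrix: without mean subtraction, the first
# right-singular vector is dominated by the common, mean-like component.
U, s, Vt = np.linalg.svd(spectra, full_matrices=False)
first = Vt[0] * np.sign(Vt[0].sum())        # fix the arbitrary overall sign
print(np.corrcoef(first, spectra.mean(axis=0))[0, 1])   # very close to 1
```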
Figure 5: On the correlations between measured \(R_{sii}\) in the manuscript and \(R_{sii}(SDSS)\) determined from the SDSS pipeline determined line parameters of the Type-2 AGN (top panel), and between the measured \(R_{sii}\) in the manuscript and \(R_{sii}(SH11)\) determined from the reported line parameters of the Type-1 AGN in Shen et al. (2011) (bottom panel). In each panel, solid red line shows \(X~{}=~{}Y\).
Figure 6: Distributions of \(\log(R_{sii})\) of the 6039 Type-1 AGN (histogram filled by red lines), the 8725 Type-2 AGN (histogram filled by blue lines), and the 199700 HII galaxies (histogram filled by dark green lines) in the final main samples, respectively. Thick dashed lines in red, in blue and in dark represent the corresponding best Gaussian profiles for the \(\log(R_{sii})\) distributions of the Type-1 AGN, the Type-2 AGN and the HII galaxies, respectively.
Figure 8: Distributions of redshift, O3HB, N2HA, \(L_{O3}\) and SN of the 548 Type-1 AGN (histogram filled with red lines) and the 548 Type-2 AGN (histogram filled with blue lines) in the subsamples. In each panel, the Kolmogorov-Smirnov statistic technique provided significance level is marked in red characters.
Figure 7: Distributions of redshift, O3HB, N2HA, \(L_{O3}\) and SN of the Type-1 AGN (histogram filled with red lines) and Type-2 AGN (histogram filled with blue lines) in the main samples. In each panel, vertical dashed line in red and in blue mark position of mean value of each distribution of Type-1 AGN and Type-2 AGN, respectively.
Here, in order to check for probable asymmetric profiles of the [S ii] doublet, the 1251 Type-1 AGN with spectral signal-to-noise larger than 20 and the 1198 Type-2 AGN with spectral signal-to-noise larger than 25 are mainly considered. The PCA technique determined mean spectra are shown in the left panels of Fig. 4. The same SSP method is applied to determine the host galaxy contributions in the mean spectrum of the Type-2 AGN. Then, the same emission line fitting procedure discussed in subsection 2.2 is applied to measure the emission lines around H\(\alpha\) in the mean spectra of the Type-1 AGN and of the Type-2 AGN after subtraction of the host galaxy contributions. The best fitting results for the emission lines around H\(\alpha\) are shown in the right panels of Fig. 4, with the determined line parameters of the [S ii] doublet marked in the top right corner of each right panel.
It is clear that two Gaussian components can be well applied to describe the [S ii] doublet in the mean spectra of the high quality Type-1 AGN and the high quality Type-2 AGN, indicating few contributions of asymmetric kinematic components to the [S ii] doublets. Therefore, the results in Fig. 4 not only support that the emission line fitting procedure can be well accepted, but also support that there are few effects of asymmetric kinematic components in the [S ii] doublets on our final results.
## 3 Main Results and Discussions
### To confirm the reliability of the measured line parameters of [S ii] doublet
Comparing line parameters from different methodologies/techniques can provide further and necessary information to confirm the reliability of the measured line parameters. For the [S ii] doublet, whose features suffer few effects from host galaxy absorption features in Type-2 AGN, it is necessary and interesting to confirm the reliability of our measured parameters by comparing our measured values with the values calculated from the SDSS pipeline provided parameters for the Type-2 AGN. Due to the apparent effects of the broad H\(\alpha\) on the measured line parameters of the [S ii] doublet in Type-1 AGN, the parameters reported in Shen et al. (2011), rather than the parameters reported by the SDSS pipeline, are considered for the Type-1 AGN.
In this subsection, the properties of the line flux ratio \(R_{sii}\) of [S ii]\(\lambda\)6716Å to [S ii]\(\lambda\)6731Å are discussed, based on the SDSS pipeline produced line parameters for the Type-2 AGN, and based on the line parameters reported in Shen et al. (2011) for the Type-1 AGN in SDSS DR7 (Data Release 7).
For the 8725 Type-2 AGN in SDSS DR12 (excluding the Type-2 LINERs and composite galaxies), the SDSS pipeline measured line parameters of [S ii] doublets are stored in the database of 'galSpecLine'2. Top panel of Fig. 5 shows the correlation between the measured \(R_{sii}\) in the manuscript and \(R_{sii}(SDSS)\) determined from the SDSS reported line parameters. There is a strong positive linear correlation with Spearman Rank correlation coefficient about 0.935 with \(P_{null}~{}<~{}10^{-20}\). The linear correlation can be described by
Footnote 2: Detailed information of ‘galSpecLine’ can be found in [http://skyserver.sdss.org/dr12/en/help/docs/tabledesc.aspx](http://skyserver.sdss.org/dr12/en/help/docs/tabledesc.aspx)
\[\log(R_{sii}(SDSS))\;=\;(-0.053\pm 0.008)\;+\;\ldots\;\times\;\log(R_{sii}) \tag{3}\]

Similarly, the bottom panel of Fig. 5 shows a strong positive linear correlation between the measured \(R_{sii}\) in the manuscript and \(R_{sii}(SH11)\) determined from the line parameters reported in Shen et al. (2011) for the Type-1 AGN. The consistency between the independent measurements well confirms the reliability of the measured line parameters of the [S ii] doublets in the manuscript.

### Lower \(R_{sii}\) in Type-1 AGN than in Type-2 AGN

Fig. 6 shows the \(\log(R_{sii})\) distributions of the Type-1 AGN, the Type-2 AGN (excluding the Type-2 LINERs and composite galaxies) and the HII galaxies in the main samples, with the mean \(\log(R_{sii})\) of the HII galaxies about 0.147\(\pm\)0.001, apparently larger than the mean values of the two AGN samples. The uncertainty of each mean value is determined by the bootstrap method with 1000 loops applied. Meanwhile, based on the measured [S ii] doublets in the mean spectra of the high quality Type-1 AGN and the high quality Type-2 AGN in Fig. 4, \(\log(R_{sii})\) is about 0.047 in the high quality Type-1 AGN and about 0.084 in the high quality Type-2 AGN, a bit different from the mean values of the AGN in the main samples, indicating some effects of the spectral signal-to-noise (SN) on our final results, besides showing the different \(\log(R_{sii})\) between the high quality Type-2 AGN and the high quality Type-1 AGN.
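As a minimal sketch of the bootstrap estimate used here for the uncertainty of a mean value (with a toy sample standing in for the real \(\log(R_{sii})\) measurements):

```python
import numpy as np

def bootstrap_mean_err(values, n_loops=1000, seed=0):
    """1-sigma uncertainty of the sample mean from bootstrap resampling."""
    rng = np.random.default_rng(seed)
    means = [rng.choice(values, size=values.size, replace=True).mean()
             for _ in range(n_loops)]
    return np.std(means)

log_rsii = np.random.default_rng(1).normal(0.07, 0.1, 8000)   # toy sample
print(log_rsii.mean(), bootstrap_mean_err(log_rsii))
```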
Based on the theoretical dependence of \(n_{e}\) on \(R_{sii}\) more recently discussed in Sanders et al. (2016); Kewley et al. (2019),
\[\frac{n_{e}}{\rm cm^{-3}}\ =\ \frac{627.1\ \times\ R_{sii}\ -\ 909.17}{0.4315\ -\ R_{sii}} \tag{5}\]
the mean electron densities \(n_{e}\) can be roughly estimated as 291\(\pm\)18, 198\(\pm\)4 and 30\(\pm\)3 in units of cm\({}^{-3}\) for the 6039 Type-1 AGN, the 8725 Type-2 AGN and the HII galaxies, respectively, with uncertainties determined from the corresponding uncertainties of \(R_{sii}\). Here, as discussed in Sanders et al. (2016); Kewley et al. (2019), the effects of electron temperature on \(n_{e}\) can lead to about 15% uncertainties in \(n_{e}\), which cannot explain the apparent dif
Figure 9: Within a narrow range of one parameter, distributions of the other four parameters of the collected Type-1 AGN and Type-2 AGN from the subsamples. Symbols and line styles have the same meanings as those in Fig. 8. The numbers N1 and N2 of the collected Type-1 AGN and Type-2 AGN are marked in red characters in the top region of each panel. From top to bottom, the Type-1 and Type-2 AGN are collected from the subsamples through the criteria \(|z-\overline{z}|<0.0134\), \(|\log(O3HB)-\overline{\log(O3HB)}|<0.051\), \(|\log(N2HA)-\overline{\log(N2HA)}|<0.036\), \(|\log(L_{O3})-\overline{\log(L_{O3})}|<0.092\), and \(|\log(SN)-\overline{\log(SN)}|<0.05\), where \(\overline{p}\) is the mean value of parameter \(p\).
ference in \(n_{e}\) among the different kinds of emission line objects. Detailed discussions of the effects of electron temperatures on estimating the electron densities in NLRs can be found in the following subsection 3.6.
Moreover, as discussed in Kawasaki et al. (2017), \(R_{sii}\) should be effectively limited to the range from 0.4 to 1.5 when \(R_{sii}\) is applied to calculate the electron density \(n_{e}\). Then, with \(R_{sii}\) larger than 0.4 and smaller than 1.5, the mean values of \(\log(R_{sii})\) are 0.044\(\pm\)0.003, 0.071\(\pm\)0.003 and 0.116\(\pm\)0.001, and the corresponding mean \(n_{e}\) in units of cm\({}^{-3}\) can be estimated as 319\(\pm\)11, 229\(\pm\)4 and 102\(\pm\)3 for the 5467 Type-1 AGN, the 8389 Type-2 AGN and the 144210 HII galaxies among the objects in the main samples, respectively. These results also roughly lead to apparently lower \(\log(R_{sii})\) (higher \(n_{e}\)) in the NLRs of Type-1 AGN, before considering the necessary effects on the \(R_{sii}\) comparisons between the Type-1 AGN and the Type-2 AGN.
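For convenience, a small Python helper implementing Eq. (5) with the Kawasaki et al. (2017) validity range is sketched below; the two example ratios correspond roughly to the mean Type-1 and Type-2 values quoted above.

```python
import numpy as np

def ne_from_rsii(rsii):
    """Electron density (cm^-3) from the [S II]6716/6731 flux ratio,
    following Eq. (5) (Sanders et al. 2016; Kewley et al. 2019); values
    outside the density-sensitive range 0.4 < R_sii < 1.5 are flagged."""
    rsii = np.asarray(rsii, dtype=float)
    ne = (627.1 * rsii - 909.17) / (0.4315 - rsii)
    return np.where((rsii > 0.4) & (rsii < 1.5), ne, np.nan)

# Roughly the mean Type-1 and Type-2 ratios quoted above: ~318 and ~226 cm^-3.
print(ne_from_rsii([10 ** 0.044, 10 ** 0.072]))
```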
Considering the effective range of \(R_{sii}\) to estimate \(n_{e}\) in NLRs, the following discussed main samples of AGN include the 5467 Type-1 AGN with 0.4 \(<~{}R_{sii}~{}<~{}1.5\) and the 8389 Type-2 AGN with 0.4 \(<~{}R_{sii}~{}<~{}1.5\).
### Effects of different distributions of redshift, O3HB, N2HA or [O iii] line luminosity?
In order to explain the determined apparently higher \(n_{e}\) (related only to the lower \(R_{sii}\)) in the NLRs of Type-1 AGN than of Type-2 AGN, which is against the results expected from the Unified Model of AGN, different effects are considered as follows, especially based on the different distributions of redshift, O3HB, N2HA and [O iii] line luminosity \(L_{O3}\) between the 5467 Type-1 AGN and the 8389 Type-2 AGN in the main samples with 0.4 \(<~{}R_{sii}~{}<~{}1.5\). The distributions of the parameters redshift, O3HB, N2HA, \(L_{O3}\) and SN are shown in Fig. 7. Here, the redshift can be considered as tracing the evolutionary histories of the AGN. And O3HB and N2HA can be well applied in the BPT diagram (Baldwin et al., 1981; Kewley et al., 2001; Kauffmann et al., 2003; Kewley et al., 2006, 2019; Zhang et al., 2020) to identify AGN and to trace the central AGN activities. Then, considering the mean value of each distribution shown in Fig. 7, the properties of \(R_{sii}\) are checked in the AGN with each parameter larger than and smaller than its mean value. Here, the shown \(L_{O3}\) are reddening corrected values, obtained through the measured Balmer decrements (flux ratio of narrow H\(\alpha\) to narrow H\(\beta\)) after accepting an intrinsic Balmer decrement of 3.1. In the following subsections, there are no further discussions of reddening effects on our final results.
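A minimal sketch of such a Balmer-decrement correction is given below, assuming an intrinsic decrement of 3.1 and CCM-like extinction coefficients \(k(\mathrm{H}\alpha)\approx 2.53\), \(k(\mathrm{H}\beta)\approx 3.61\) and \(k(5007)\approx 3.47\); these coefficients are assumptions made here for illustration, not values quoted in the manuscript.

```python
import numpy as np

K_HA, K_HB, K_O3 = 2.53, 3.61, 3.47   # assumed CCM-like extinction coefficients

def dereddened_o3(f_o3, balmer_obs, balmer_int=3.1):
    """Correct an observed [O III]5007 flux with the Balmer decrement."""
    ebv = 2.5 / (K_HB - K_HA) * np.log10(balmer_obs / balmer_int)
    ebv = max(ebv, 0.0)                # no correction for unphysical decrements
    return f_o3 * 10.0 ** (0.4 * K_O3 * ebv)

print(dereddened_o3(1.0, 4.5))         # ~3.3x correction for an observed BD of 4.5
```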
Considering the distributions of redshift, the estimated mean values of \(\log(R_{sii})\) are about 0.041\(\pm\)0.003 and 0.048\(\pm\)0.004 for the 2752 low redshift Type-1 AGN with \(z<0.16\) and the 2715 high redshift Type-1 AGN with \(z>0.16\) in the main sample of the 5467 Type-1 AGN with 0.4 \(<~{}R_{sii}~{}<~{}1.5\), respectively. The estimated mean values of \(\log(R_{sii})\) are about 0.073\(\pm\)0.003 and 0.068\(\pm\)0.003 for the 443 low redshift Type-2 AGN with \(z<0.105\) and the 3955 high redshift Type-2 AGN with \(z>0.105\) in the main sample of the 8389 Type-2 AGN with 0.4 \(<~{}R_{sii}~{}<~{}1.5\), respectively. Therefore, considering the different mean \(\log(R_{sii})\) in the different redshift ranges, there are apparent effects of the different redshift distributions on the properties of the calculated \(R_{sii}\) distributions in Type-1 AGN and in Type-2 AGN.
Considering the distributions of O3HB, the estimated mean values of \(\log(R_{sii})\) are about 0.059\(\pm\)0.003 and 0.032\(\pm\)0.003 for the 2544 Type-1 AGN with \(\log(O3HB)\) smaller than 0.98 and the 2923 Type-1 AGN with \(\log(O3HB)\) larger than 0.98 in the main sample of the 5467 Type-1 AGN with \(0.4~{}<~{}R_{sii}~{}<~{}1.5\), respectively. The estimated mean values of \(\log(R_{sii})\) are about 0.079\(\pm\)0.004 and 0.062\(\pm\)0.003 for the 4228 Type-2 AGN with \(\log(O3HB)\) smaller than 0.74 and the 4161 Type-2 AGN with \(\log(O3HB)\) larger than 0.74 in the main sample of the 8389 Type-2 AGN with \(0.4~{}<~{}R_{sii}~{}<~{}1.5\), respectively. Therefore, considering the different mean \(\log(R_{sii})\) in the different O3HB ranges, there are also apparent effects of the different O3HB distributions on the properties of the calculated \(R_{sii}\) distributions.
Considering the distributions of N2HA, the estimated mean values of \(\log(R_{sii})\) are about 0.055\(\pm\)0.003 and 0.035\(\pm\)0.003 for the 2584 Type-1 AGN with \(\log(N2HA)\) smaller than -0.17 and the 2883 Type-1 AGN with \(\log(N2HA)\) larger than -0.17 in the main sample of the 5467 Type-1 AGN with \(0.4~{}<~{}R_{sii}~{}<~{}1.5\), respectively. The estimated mean values of \(\log(R_{sii})\) are about 0.079\(\pm\)0.003 and 0.062\(\pm\)0.003 for the 4529 Type-2 AGN with \(\log(N2HA)\) smaller than -0.072 and the 3860 Type-2 AGN with \(\log(N2HA)\) larger than -0.072 in the main sample of the 8389 Type-2 AGN with \(0.4~{}<~{}R_{sii}~{}<~{}1.5\), respectively. Therefore, considering the different mean \(\log(R_{sii})\) in the different N2HA (and, similarly, \(L_{O3}\)) ranges, especially in Type-2 AGN, there are also apparent effects of the different distributions on the properties of the calculated \(R_{sii}\) distributions.
Considering the distributions of SN, the estimated mean values of \(\log(R_{sii})\) are about 0.044\(\pm\)0.002 and 0.045\(\pm\)0.002 for the 2650 Type-1 AGN with \(\log(SN)\) smaller than 1.15 and the 2817 Type-1 AGN with \(\log(SN)\) larger than 1.15 in the main sample of the 5467 Type-1 AGN with \(0.4~{}<~{}R_{sii}~{}<~{}1.5\), respectively. The estimated mean values of \(\log(R_{sii})\) are about 0.069\(\pm\)0.003 and 0.073\(\pm\)0.003 for the 4380 Type-2 AGN with \(\log(SN)\) smaller than 1.21 and the 4009 Type-2 AGN with \(\log(SN)\) larger than 1.21 in the main sample of the 8389 Type-2 AGN with \(0.4~{}<~{}R_{sii}~{}<~{}1.5\), respectively. Meanwhile, as shown in the subsection above, the calculated mean \(R_{sii}\) of the AGN in the main samples are different from the \(R_{sii}\) calculated from the emission line properties in the mean spectra of the collected high quality AGN. Therefore, considering the different mean \(\log(R_{sii})\) in the different SN ranges, there are also possible effects of the different SN distributions on the properties of the calculated \(R_{sii}\) distributions.
Before proceeding further, one point is noted. Unlike the physical quantities \(z\), O3HB, N2HA and \(L_{O3}\), SN is a parameter related to spectral quality. Why, then, are the effects of different SN considered? Actually, there is a negative dependence of SN on redshift in the AGN. The Spearman rank correlation coefficients are about -0.57 (\(P_{null}<10^{-15}\)) and -0.65 (\(P_{null}<10^{-15}\)) for the collected 5467 Type-1 AGN with \(0.4~{}<~{}R_{sii}~{}<~{}1.5\) and for the 8389 Type-2 AGN with \(0.4~{}<~{}R_{sii}~{}<~{}1.5\), respectively. Here, we do not show the dependence of SN on redshift in plots. However, considering the effects of different redshifts on the \(R_{sii}\) comparisons between Type-1 AGN and Type-2 AGN, it is natural to also consider the effects of the different SN distributions.
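A minimal sketch of such a rank-correlation check, with toy anti-correlated data standing in for the actual measurements:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
z = rng.uniform(0.0, 0.3, 5000)                         # toy redshifts
log_sn = 1.4 - 2.0 * z + rng.normal(0.0, 0.15, z.size)  # toy anti-correlated S/N
rho, p_null = spearmanr(z, log_sn)
print(f"rho = {rho:.2f}, P_null = {p_null:.1e}")
```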
Due to the discussions above, it is necessary and interesting to check the effects of the different distributions of redshift, O3HB, N2HA, \(L_{O3}\) and SN on the results in Fig. 6. A convenient way is to create one subsample of Type-1 AGN that has the same distributions of redshift, O3HB, N2HA, \(L_{O3}\) and SN as those of a subsample of Type-2 AGN. Based on the distributions of \(z\), O3HB, N2HA, \(L_{O3}\) and SN of the AGN in the main samples shown in Fig. 7 (5467 Type-1 AGN with \(0.4~{}<~{}R_{sii}~{}<~{}1.5\) and 8389 Type-2 AGN with \(0.4~{}<~{}R_{sii}~{}<~{}1.5\)), it is easy to create a subsample of Type-2 AGN having the same distributions of \(z\), O3HB, N2HA, \(L_{O3}\) and \(SN\) as those of the Type-1 AGN in the subsample, by finding the minimum parameter distance \(D_{p}~{}<~{}D_{cri}\) calculated as
\[\begin{split}D_{p,i}\;=\;&D_{z,i}+D_{O3HB,i}+D_{N2HA,i}+D_{L_{O3},i}+D_{SN,i}\\ =\;&\left(\frac{z_{1,i}-z_{2}}{sca_{z}}\right)^{2}+\left(\frac{\log(O3HB_{1,i})-\log(O3HB_{2})}{sca_{O3HB}}\right)^{2}\\ &+\left(\frac{\log(N2HA_{1,i})-\log(N2HA_{2})}{sca_{N2HA}}\right)^{2}+\left(\frac{\log(L_{O3,1,i})-\log(L_{O3,2})}{sca_{L_{O3}}}\right)^{2}\\ &+\left(\frac{\log(SN_{1,i})-\log(SN_{2})}{sca_{SN}}\right)^{2}\qquad for\;i=1,\ldots,N_{1}\end{split} \tag{6}\]
where \(z_{1,i}\), \(O3HB_{1,i}\), \(N2HA_{1,i}\), \(L_{O3,1,i}\) and \(SN_{1,i}\) are the parameters of the \(i\)th Type-1 AGN in the main sample with \(0.4~{}<~{}R_{sii}~{}<~{}1.5\) (\(N_{1}~{}=~{}5467\)); \(z_{2}\), \(O3HB_{2}\), \(N2HA_{2}\), \(L_{O3,2}\) and \(SN_{2}\) are the parameters of the \(N_{2}~{}=~{}8389\) (\(N_{2}~{}>~{}N_{1}\)) Type-2 AGN in the main sample with \(0.4~{}<~{}R_{sii}~{}<~{}1.5\); \(sca_{z}\), \(sca_{O3HB}\), \(sca_{N2HA}\), \(sca_{L_{O3}}\) and \(sca_{SN}\) are scale factors that make \(D_{z}\), \(D_{O3HB}\), \(D_{N2HA}\), \(D_{L_{O3}}\) and \(D_{SN}\) comparable in magnitude; and \(D_{cri}\) is a critical value that prevents large \(D_{p}\) from producing much different distributions of \(z\), O3HB, N2HA, \(L_{O3}\) and SN between the two final subsamples. Then, based on \(sca_{z}~{}\sim~{}0.002\), \(sca_{O3HB}~{}\sim~{}0.01\), \(sca_{N2HA}~{}\sim~{}0.007\), \(sca_{L_{O3}}~{}\sim~{}0.02\), \(sca_{SN}~{}\sim~{}0.0065\) and \(D_{cri}~{}\sim~{}60\), one subsample of 548 Type-1 AGN and one subsample of 548 Type-2 AGN are created, which have the same distributions of \(z\), O3HB, N2HA, \(L_{O3}\) and \(SN\) with significance levels higher than 99% according to the two-sided Kolmogorov-Smirnov statistic technique. Certainly, each object in the main samples is selected only once into the two subsamples. The distributions of \(z\), O3HB, N2HA, \(L_{O3}\) and SN of the AGN in the subsamples are shown in Fig. 8.
Simple descriptions are given as follows of the three-step determination of the scale factors \(sca_{z}\), \(sca_{O3HB}\), \(sca_{N2HA}\), \(sca_{L_{O3}}\), \(sca_{SN}\) and the critical \(D_{cri}\). First, the starting values of the scale factors are set to the differences between the mean redshift, the mean \(\log(O3HB)\), the mean \(\log(N2HA)\), the mean \(\log(L_{O3})\) and the mean \(\log(SN)\) of the 5467 Type-1 AGN and of the 8389 Type-2 AGN in the main samples with \(0.4~{}<~{}R_{sii}~{}<~{}1.5\): \(sca_{z}\) = 0.05, \(sca_{O3HB}\) = 0.25, \(sca_{N2HA}\) = 0.1, \(sca_{L_{O3}}\) = 0.65, \(sca_{SN}\) = 0.06. The starting value \(D_{cri}\) = 260 is the mean value of the \(D_{p,0}\) determined from the starting values of the scale factors. Then, based on Equation (6), two subsamples are created. Second, for the two created subsamples, the two-sided Kolmogorov-Smirnov statistic technique is applied to check whether they have the same distributions of \(z\), O3HB, N2HA, \(L_{O3}\) and \(SN\) with significance levels higher than 99%. If the two created subsamples have different distributions of \(z\) and/or O3HB and/or N2HA and/or \(L_{O3}\) and/or \(SN\) (statistical significance level smaller than 99%), smaller values are re-assigned to the corresponding scale factors and to \(D_{cri}\). Based on the re-assigned scale factors and \(D_{cri}\), two new subsamples are created, and it is again checked whether the two new subsamples have the same distributions of \(z\), O3HB, N2HA, \(L_{O3}\) and \(SN\) with significance levels higher than 99%. Third, the second step is repeated until the two created subsamples have the same distributions of \(z\), O3HB, N2HA, \(L_{O3}\) and \(SN\) with significance levels higher than 99%. The two subsamples of 548 Type-1 AGN and 548 Type-2 AGN in the manuscript are created after 15 attempts. The basic parameters of redshift, O3HB, N2HA, SN, \(L_{O3}\) and \(R_{sii}\) are listed in Table 1 and Table 2.
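Since the text does not spell out the exact pairing algorithm, the following Python sketch gives one plausible greedy reading of the Eq. (6) matching, with toy parameter arrays in place of the real catalogs; the scale factors are those quoted above.

```python
import numpy as np

def match_samples(params1, params2, scales, d_cri):
    """Greedy one-to-one matching: for every Type-1 AGN, take the nearest
    still-unused Type-2 AGN in the scaled distance of Eq. (6), and keep
    the pair only if D_p < D_cri, so each object enters at most once."""
    used = np.zeros(len(params2), dtype=bool)
    pairs = []
    for i, p1 in enumerate(params1):
        d = np.sum(((params2 - p1) / scales) ** 2, axis=1)
        d[used] = np.inf
        j = int(np.argmin(d))
        if d[j] < d_cri:
            pairs.append((i, j))
            used[j] = True
    return pairs

# Columns: z, log(O3HB), log(N2HA), log(L_O3), log(SN); scales from the text.
scales = np.array([0.002, 0.01, 0.007, 0.02, 0.0065])
rng = np.random.default_rng(3)
p1 = rng.normal(size=(100, 5)) * scales * 2.0   # toy Type-1 parameters
p2 = rng.normal(size=(160, 5)) * scales * 2.0   # toy Type-2 parameters
print(len(match_samples(p1, p2, scales, d_cri=60.0)))
```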
Moreover, in order to further confirm that the two created subsamples have intrinsically the same physical properties of \(z\), O3HB, N2HA, \(L_{O3}\) and \(SN\) between Type-1 AGN and Type-2 AGN, it is necessary to check whether the collected Type-1 AGN and Type-2 AGN with one fixed parameter have the same distributions of the other four parameters. Here, Type-2 AGN and Type-1 AGN are collected with the absolute value of one of the five parameters minus its mean value\({}^{4}\) smaller than 5%\({}^{5}\) of the total range of the parameter. Then, the two-sided Kolmogorov-Smirnov statistic technique is applied to check whether the Type-1 AGN and Type-2 AGN collected from the subsamples have the same distributions of the other four parameters. The results are shown in Fig. 9. It is apparent that the collected Type-1 AGN and Type-2 AGN with one given parameter have the same distributions of the other four parameters with significance levels higher than 75% (most of the cases actually have significance levels higher than 90%). Therefore, the collected 548 Type-1 AGN and 548 Type-2 AGN in the subsamples can be well and efficiently applied to check the different \(R_{sii}\) properties (simply tracing the properties of \(n_{e}\)) between Type-1 AGN and Type-2 AGN, after consideration of the necessary effects.
Footnote 4: To select different point from the mean value can lead to the same results
Footnote 5: The critical value of 5% leads to about 100 Type-1 AGN and about 100 Type-2 AGN being collected, leading to much clearer histogram distributions of the parameters.
Based on the subsamples of the 548 Type-1 AGN and the 548 Type-2 AGN, the \(R_{sii}\) distributions are shown in Fig. 10, with mean \(\log(R_{sii})\) of about 0.042\(\pm\)0.005 and 0.072\(\pm\)0.005 for the Type-1 AGN and the Type-2 AGN, respectively, with uncertainties determined by the bootstrap method with 1000 loops. The new mean \(R_{sii}\) lead to corresponding mean \(n_{e}\), in units of cm\({}^{-3}\), of 326\(\pm\)7 and 225\(\pm\)8 for the 548 Type-1 AGN and the 548 Type-2 AGN in the subsamples. Therefore, after considering the necessary effects of the different distributions of redshift, O3HB, N2HA, \(L_{O3}\) and SN, Type-1 AGN have higher \(n_{e}\) in NLRs than Type-2 AGN.
Furthermore, the well-known Student's t-statistic technique is applied to confirm that the mean values of \(\log(R_{sii})\) of the 548 Type-1 AGN and the 548 Type-2 AGN in the subsamples shown in Fig. 10 are significantly different, with a null-hypothesis probability of about \(2.4\times 10^{-10}\) (significance higher than 5\(\sigma\)). And the two-sided Kolmogorov-Smirnov statistic technique indicates that the Type-1 AGN and the Type-2 AGN obey the same distribution of \(\log(R_{sii})\) with a probability of only about \(8.4\times 10^{-11}\) (i.e., the distributions differ at more than 5\(\sigma\)). Therefore, before considering the effects of electron temperature on the measurements of electron densities (which will be discussed in subsection 3.6), Type-1 AGN have apparently higher electron densities \(n_{e}\) (related only to the smaller \(R_{sii}\)) in NLRs than the Type-2 AGN, with a confidence level higher than 5\(\sigma\), against the results expected from the Unified Model of AGN.
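A minimal sketch of these two significance tests with scipy, using toy Gaussian samples centered on the quoted mean values (the dispersion is an assumption made here for illustration):

```python
import numpy as np
from scipy.stats import ttest_ind, ks_2samp

rng = np.random.default_rng(4)
log_rsii_t1 = rng.normal(0.042, 0.08, 548)   # toy Type-1 log(R_sii) values
log_rsii_t2 = rng.normal(0.072, 0.08, 548)   # toy Type-2 log(R_sii) values

t_stat, p_t = ttest_ind(log_rsii_t1, log_rsii_t2, equal_var=False)
ks_stat, p_ks = ks_2samp(log_rsii_t1, log_rsii_t2)
print(f"t-test p = {p_t:.1e}, KS p = {p_ks:.1e}")   # small p: different samples
```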
### Stronger AGN activities in Type-1 AGN?
Based on the higher \(n_{e}\) in the NLRs of Type-2 AGN than of the HII galaxies, as shown in Fig. 6, AGN activities can be well applied to explain the higher \(n_{e}\) in the Type-2 AGN than in the HII galaxies, due to electrons probably being injected into the NLRs through the galactic-scale outflows expected from AGN feedback, which plays a key role in galaxy evolution and leads to tight connections between AGN and their host galaxies, as discussed in McNamara et al. (2007); Fabian (2012); Kormendy & Ho (2013); Heckman & Best (2014); King & Pounds (2015); Tombesi et al. (2015); Muller-Sanchez et al. (2018). If the outflows expected from AGN feedback can lead to the higher \(n_{e}\) in Type-2 AGN than in HII galaxies, stronger outflows could also be applied to explain the higher \(n_{e}\) in the NLRs of Type-1 AGN than of Type-2 AGN. More recently, Kakkad et al. (2018) have shown that there are statistically higher electron densities in the NLRs of outflowing Seyfert galaxies than of non-outflowing Seyfert galaxies. Therefore, it is interesting to consider the effects of outflows on our results.
As discussed in Cicone et al. (2014); Fiore et al. (2017), the kinetic powers of outflows are tightly scaled with AGN
Figure 10: Similar to Fig. 6, but for the 548 Type-1 AGN and the 548 Type-2 AGN in the subsamples, which have the same distributions of redshift, O3HB, N2HA, \(L_{O3}\) and SN. The symbols and line styles have the same meanings as those in Fig. 6.
bolometric luminosity, indicating stronger outflows in AGN with strong [O iii] line luminosity. Similar results on the dependence of the shifted velocities of the [O iii] lines on continuum luminosity can be found in Zhang (2021). However, even though the Type-2 AGN and the Type-1 AGN have the same [O iii] line luminosity properties, the higher \(n_{e}\) is still confirmed in Type-1 AGN, as shown in Fig. 10. Meanwhile, considering the fitting results for the emission lines around H\(\alpha\) in the mean spectra of high-quality Type-1 AGN and high-quality Type-2 AGN in Fig. 4, the [S ii] doublets have symmetric line profiles in Type-1 AGN and in Type-2 AGN, because the [S ii] doublets can be well described by two Gaussian components. If there were apparent effects of the expected strong outflows on the [S ii] doublets, there should be double-peaked features and/or asymmetric line profiles, as shown in Kakkad et al. (2018). Therefore, the symmetric line profiles of the [S ii] doublets support that there are no apparently different outflow properties at the current stage between the Type-1 AGN and the Type-2 AGN, and it is not necessary to consider the effects of asymmetric wings in the [S ii] doublets on our final results. Therefore, rather than ongoing injection of electrons into NLRs through galactic-scale outflows, longer durations of the AGN activity triggering outflows in Type-1 AGN could naturally be applied to explain the higher \(n_{e}\) in NLRs in Type-1 AGN.
Either the higher \(n_{e}\) in NLRs in Type-1 AGN or the expected longer durations of the AGN activity triggering outflows in Type-1 AGN is against the results expected from the commonly and widely accepted Unified model of AGN.
### Stronger star-forming contributions to NLRs in Type-2 AGN?
The main objective of the manuscript is to check the Unified Model of AGN through comparisons of electron densities in NLRs between Type-1 AGN and Type-2 AGN. Under the framework of the Unified Model of AGN, and based on the same redshift distributions of the 548 Type-1 AGN and the 548 Type-2 AGN in the subsamples, the same evolutionary histories are expected for the Type-1 AGN and the Type-2 AGN in the subsamples, indicating the same host galaxy properties (including similar expected star-forming contributions) for the 548 Type-1 AGN and the 548 Type-2 AGN in the subsamples.
However, if there were more contributions from HII regions in Type-2 AGN than in Type-1 AGN, lower electron densities in NLRs would be expected in Type-2 AGN, due to the lower electron densities in HII regions, as shown by the results for HII galaxies in Fig. 6. Yet the assumption of more star-forming contributions in Type-2 AGN than in Type-1 AGN is against what is expected from the Unified Model of AGN, supporting our main final conclusion that the manuscript provides interesting clues to challenge the Unified Model of AGN.
### Further Discussions
In the results discussed above, the effects of aperture sizes on the measured \(R_{sii}\) are not considered. Actually, Type-2 AGN with redshift lower than 0.1 should have their emission regions of the [S ii] doublet only partly covered in the SDSS fiber spectra. However, considering the 140 Type-1 AGN and the 139 Type-2 AGN with redshift larger than 0.15 (corresponding to a fiber-covered distance of about 5200 pc, large enough to totally cover the NLRs of AGN with \(L_{O3}\sim 10^{41}\)erg \(\cdot\) s\({}^{-1}\)) in the subsamples, the mean \(\log(R_{sii})\) are about 0.042\(\pm\)0.003 and 0.072\(\pm\)0.004 in the Type-1 AGN and in the Type-2 AGN, also leading to higher \(n_{e}\) in the NLRs of Type-1 AGN than of Type-2 AGN. Here, the distances \(R_{NLRs}\) of the NLRs to the central BHs in AGN are simply determined by the empirical relation between \(R_{NLRs}\) and the [O iii] line luminosity, as discussed in Liu et al. (2013); Hainline et al. (2013, 2014); Fischer et al. (2018); Dempsey & Zakamska (2018). Therefore, aperture sizes have little effect on the final results.
Moreover, as described in Section 2, we can fully confirm that no Type-1 AGN are mis-collected into the HII galaxy sample or into the Type-2 AGN sample, because of the apparent broad H\(\alpha\) in Type-1 AGN but the absence of broad H\(\alpha\) in HII galaxies and in Type-2 AGN; however, we cannot fully rule out that some Type-2 AGN are mis-collected into the HII galaxy sample. Therefore, the effects on our final results are simply discussed for the cases that some Type-2 AGN were mis-collected into the HII galaxies (or some HII galaxies mis-collected into the Type-2 AGN). For the 8725 Type-2 AGN in the main sample, Fig. 11 shows the dependence of the mean \(\log(R_{sii})\)
Figure 11: On the dependence of mean \(\log(R_{sii})\) on mean \(\log(O3HB)\) for the main sample of the 8725 Type-2 AGN (excluding the Type-2 LINERs and composite galaxies) divided into 55 bins (at least 50 objects included in each bin) with equal width of \(\log(O3HB)\). Horizontal red line marks the position \(\log(R_{sii})=0.052\).
on the mean \(\log(O3HB)\) for the Type-2 AGN divided into 55 bins (at least 50 objects included in each bin) with equal width of \(\log(O3HB)\). In Fig. 11, the uncertainty of each mean \(\log(R_{sii})\) is calculated by the bootstrap method within 1000 loops. It is clear that, in order for the mean \(\log(R_{sii})\) in Type-2 AGN to be about 0.042 (the mean \(\log(R_{sii})\) for the Type-1 AGN in the main sample), the Type-2 AGN with \(\log(O3HB)\) less than 1 would have to be objects actually identified as HII galaxies, leading to the totally unreasonable result that about 95% of the Type-2 AGN in the main sample were HII galaxies. Therefore, HII galaxies mis-collected into the Type-2 AGN sample cannot be applied to explain the different \(n_{e}\) in NLRs between Type-1 AGN and Type-2 AGN. Meanwhile, considering the case that some Type-2 AGN were mis-collected into the HII galaxy sample, in order for the mean \(\log(R_{sii})\) in Type-2 AGN to be about 0.042, the HII galaxies with \(\log(R_{sii})\) smaller than 0.042 would have to be objects actually identified as Type-2 AGN. Among the HII galaxies in the main sample, there are 1269 HII galaxies with \(\log(R_{sii})\) smaller than 0.052; even considering all the 1269 HII galaxies as Type-2 AGN, the mean \(\log(R_{sii})\) is about 0.078\(\pm\)0.005 in Type-2 AGN, re-confirming the higher electron density \(n_{e}\) in NLRs in Type-1 AGN than in Type-2 AGN.
Furthermore, as discussed in detail in Osterbrock (1989); Osterbrock & Ferland (2006); Kewley et al. (2019); Flury & Moran (2020), etc., there are apparent effects of the electron temperature \(T_{e}\) on estimating the electron density \(n_{e}\) through the parameter \(\log(R_{sii})\), and the improved formula to estimate the electron density can be described as (see Fig. 5.8 and the corresponding discussions in Osterbrock & Ferland (2006)),
\[\frac{n_{e}}{\mathrm{cm^{-3}}}\times\left(\frac{10^{4}\,\mathrm{K}}{T_{e}}\right)^{0.5}\ \cong\ \frac{627.1\times R_{sii}\ -\ 909.17}{0.4315\ -\ R_{sii}} \tag{7}\]
after considering the effects of the electron temperature \(T_{e}\). Therefore, it is necessary to consider the effects of \(T_{e}\) on the reported results of larger \(n_{e}\) (smaller \(\log(R_{sii})\)) in Type-1 AGN. Electron temperatures \(T_{e}\) can be well traced by the flux ratio \(O_{32}\) of the [O iii] lines,
\[\begin{split} O_{32}=&\frac{f_{\lambda 4959}\ +\ f_{\lambda 5007}}{f_{\lambda 4364}}=\frac{7.9\times\exp(\frac{3.29\times 10^{4}\,\mathrm{K}}{T_{e}})}{1+4.5\times 10^{-4}\,n_{e}/T_{e}^{0.5}}\\ &\sim 7.9\times\exp(\frac{3.29\times 10^{4}\,\mathrm{K}}{T_{e}})\end{split} \tag{8}\]
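A minimal sketch of equations (7) and (8), applied per object, could look as follows; the approximate low-density form of equation (8) is inverted to trace \(T_{e}\), which then enters the temperature correction factor of equation (7).

```python
# A minimal sketch: T_e from the [O iii] flux ratio O_32 (approximate form
# of eq. 8), then n_e from R_sii with the (T_e / 10^4 K)^0.5 factor (eq. 7).
import numpy as np

def te_from_o32(f4959, f5007, f4364):
    """Electron temperature in K from the approximate form of eq. (8)."""
    return 3.29e4 / np.log((f4959 + f5007) / (7.9 * f4364))

def ne_from_rsii_te(rsii, te):
    """Electron density in cm^-3 from eq. (7) for given R_sii and T_e."""
    return (te / 1e4) ** 0.5 * (627.1 * rsii - 909.17) / (0.4315 - rsii)
```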
For the 548 Type-1 AGN and the 548 Type-2 AGN in the subsamples, the emission lines around [O iii]\(\lambda\)4364Å within the rest wavelength range from 4250Å to 4450Å are well measured by multiple Gaussian functions: one narrow Gaussian function to describe narrow H\(\gamma\), one narrow Gaussian function to describe narrow [O iii]\(\lambda\)4364Å, and two broad Gaussian functions to describe broad H\(\gamma\) in Type-1 AGN only, after subtraction of the host galaxy contributions (if present) which have been determined above through the SSP method. Fig. 12 shows one Type-1 AGN and one Type-2 AGN for which the apparent [O iii]\(\lambda\)4364Å emission is well described by multiple Gaussian functions. Then, through the criterion that the measured flux and second moment of [O iii]\(\lambda\)4364Å are at least 3 times larger than their corresponding uncertainties, there are 133 Type-1 AGN and 101 Type-2 AGN with apparent [O iii]\(\lambda\)4364Å. The results indicate that only a small fraction of AGN have apparent [O iii]\(\lambda\)4364Å. That is the main reason why we created our main samples (discussed in Section 2) of AGN without considering the properties of [O iii]\(\lambda\)4364Å. Unlike the [O iii]\(\lambda\)4959, 5007Å doublet, which is commonly clear and strong in AGN, the [O iii]\(\lambda\)4364Å emission is commonly weak in AGN, leading to a smaller number of AGN with apparent [O iii]\(\lambda\)4364Å emission features. If not only apparent H\(\alpha\), H\(\beta\), [O iii]\(\lambda\)4959, 5007Å, [N ii] and [S ii] but also apparent [O iii]\(\lambda\)4364Å (line parameters at least 5 times larger than their corresponding uncertainties) were required to create new main samples, only about one seventh of the AGN in the main samples created in Section 2 would be retained in the newly created main samples. Then, there would be only tens of AGN in the expected new subsamples with the same distributions of \(z\), O3HB, N2HA, \(L_{O3}\) and \(SN\), leading to unreliable discussions of the results based on the newly created subsamples.
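A minimal sketch (hypothetical array names `wave`, `flux`, `flux_err`) of such a multiple-Gaussian fit is shown below; for brevity, only one broad H\(\gamma\) component is included, and the narrow-line rest wavelengths are approximate.

```python
# A minimal sketch of the multiple-Gaussian fit to the host-subtracted
# line spectrum around H-gamma (rest frame 4250-4450 Angstrom).
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, cen, sig):
    return amp * np.exp(-0.5 * ((x - cen) / sig) ** 2)

def model(x, a_hg, s_hg, a_o3, s_o3, a_b, c_b, s_b):
    return (gauss(x, a_hg, 4340.5, s_hg)       # narrow H-gamma
            + gauss(x, a_o3, 4363.2, s_o3)     # narrow [O iii]4364
            + gauss(x, a_b, c_b, s_b))         # broad H-gamma (Type-1)

# popt, pcov = curve_fit(model, wave, flux, sigma=flux_err, p0=p0_guess)
```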
Then, the distributions of \(O_{32}\) and \(\log(R_{sii})\) are shown in the top panels of Fig. 13, with mean values [\(O_{32}\), \(\log(R_{sii})\)] of [61.49\(\pm\)6.42, 0.044\(\pm\)0.006] in the 133 Type-1 AGN and of [92.27\(\pm\)4.12, 0.062\(\pm\)0.005] in the 101 Type-2 AGN, respectively. Based on the properties of \(O_{32}\), the bottom left panel of Fig. 13 shows the distributions of \(T_{e}\), which are also listed in Table 1 and Table 2, with mean values of \((1.95\pm 0.14)\times 10^{4}\) K in the 133 Type-1 AGN and of \((1.41\pm 0.07)\times 10^{4}\) K in the 101 Type-2 AGN, respectively. Then, based on the calculated \(T_{e}\) and \(\log(R_{sii})\), the bottom right panel of Fig. 13 shows the distributions of \(n_{e}/\mathrm{cm^{-3}}\) after correction for the effects of \(T_{e}\). The improved mean electron densities \(n_{e}/\mathrm{cm^{-3}}\) are about \(394\pm 36\) and \(283\pm 23\) in the 133 Type-1 AGN and in the 101 Type-2 AGN, respectively, again leading to apparently larger \(n_{e}\) in NLRs in Type-1 AGN than in Type-2 AGN. The uncertainties of the mean values above are determined by the bootstrap method within 1000 loops. Furthermore, the two-sided Kolmogorov-Smirnov statistic technique determines that the 133 Type-1 AGN and the 101 Type-2 AGN obey the same distributions of \(\log(R_{sii})\) with a probability of only about \(6\times 10^{-5}\) (significance higher than \(4\sigma\)). And the Student's t-statistic technique confirms that the mean values of \(n_{e}\) of the 133 Type-1 AGN and the 101 Type-2 AGN in the subsamples are significantly different, with a null-hypothesis probability of about \(6.9\times 10^{-7}\) (significance higher than \(5\sigma\)). Moreover, as shown in the bottom right panel of Fig. 13, there appears to be a cut value \(\log(n_{e})\sim 2.87\) for the Type-2 AGN; it is therefore also necessary to roughly check whether this cut value leads to
different results on \(n_{e}\). Here, even accepting \(\log(n_{e})\sim 2.87\) (the maximum value for Type-2 AGN) as a cut value, the mean values of \(\log(n_{e})\) are \(2.51\pm 0.03\) (\(323\pm 21\) cm\({}^{-3}\)) and \(2.45\pm 0.02\) (\(282\pm 14\) cm\({}^{-3}\)) for the 107 Type-1 AGN with \(\log(n_{e})<2.87\) and the 101 Type-2 AGN with \(\log(n_{e})<2.87\), leading to higher \(n_{e}\) in Type-1 AGN than in Type-2 AGN. Also, the two-sided Kolmogorov-Smirnov statistic technique leads to a probability of only about \(1.9\times 10^{-2}\) of similar \(\log(n_{e})\) distributions of the 107 Type-1 AGN with \(\log(n_{e})<2.87\) and the 101 Type-2 AGN with \(\log(n_{e})<2.87\), and the Student's t-statistic technique leads to a probability of only about \(4.5\times 10^{-3}\) of similar mean values of \(n_{e}\) of the 107 Type-1 AGN with \(\log(n_{e})<2.87\) and the 101 Type-2 AGN with \(\log(n_{e})<2.87\). Therefore, the effects of electron temperatures can be applied to re-confirm the larger electron densities in NLRs of Type-1 AGN than in Type-2 AGN.
Moreover, based on the \(T_{e}\) distributions shown in Fig. 13, the effects of different \(T_{e}\) distributions can be checked by the following method, as was done in subsection 3.3. Through equation (6), applied to only the one parameter \(\log(T_{e})\) of the 133 Type-1 AGN and the 101 Type-2 AGN shown in Fig. 13, one subsample of 62 Type-1 AGN and another subsample of 62 Type-2 AGN can be created with the same \(\log(T_{e})\) distributions. The two-sided Kolmogorov-Smirnov statistic technique leads to a probability of 98.4% that the AGN in the two subsamples have the same \(\log(T_{e})\) distributions. The left panel of Fig. 14 shows the \(\log(T_{e})\) distributions of the AGN in the two subsamples. Then, the right panel of Fig. 14 shows the \(n_{e}\) distributions of the AGN in the two subsamples. The mean values of \(\log(n_{e})\) are about 2.56\(\pm 0.04\) (\(363\pm 33\) cm\({}^{-3}\)) and 2.45\(\pm 0.03\) (\(282\pm 19\) cm\({}^{-3}\)) for the 62 Type-1 AGN and the 62 Type-2 AGN in the subsamples having the same \(\log(T_{e})\) distributions, leading to higher \(n_{e}\) in the 62 Type-1 AGN than in the 62 Type-2 AGN in the newly created subsamples. The uncertainties of the mean values above are determined by the bootstrap method within 1000 loops. Also, the two-sided Kolmogorov-Smirnov statistic technique leads to a probability of only about \(2.6\times 10^{-2}\) of similar \(\log(n_{e})\) distributions of the 62 Type-1 AGN and the 62 Type-2 AGN in the newly created subsamples, and the Student's t-statistic technique leads to a probability of only about \(3.1\times 10^{-2}\) of similar mean values of \(\log(n_{e})\) of the 62 Type-1 AGN and the 62 Type-2 AGN in the newly created subsamples. Therefore, even totally ignoring the effects of the different \(T_{e}\) distributions shown in Fig. 13, the higher electron density in NLRs in Type-1 AGN can be well
Figure 12: Left panels show the SDSS spectra (in dark green) of the Type-1 AGN 1436-53054-0495 (top panel) and the Type-2 AGN 2219-53816-0496 (bottom panel) in the subsamples, and the best descriptions (in red) determined by the SSP method. In the top left panel, the solid blue line shows the determined host galaxy contributions and the dashed red line shows the determined AGN continuum emission. Right panels show the best descriptions (in red) of the emission lines around H\(\gamma\) in the line spectrum (in dark green). In the bottom region of each right panel, the solid purple line and the solid green line show the determined narrow H\(\gamma\) and [O iii]\(\lambda\)4364Å. And in the bottom region of the top right panel, the solid blue line shows the determined broad H\(\gamma\). The title of each left panel gives the PLATE-MJD-FIBERID information of the SDSS spectrum.
Figure 14: The same \(\log(T_{e})\) distributions (left panel) and the \(\log(n_{e})\) distributions (right panel) of the 62 Type-1 AGN and the 62 Type-2 AGN in the newly created subsamples. In each panel, the histogram filled by red lines shows the results for the 62 Type-1 AGN, and the histogram filled by blue lines shows the results for the 62 Type-2 AGN. In the right panel, the vertical dashed red line and dashed blue line mark the positions of the mean values of \(\log(n_{e})\) of the 62 Type-1 AGN and the 62 Type-2 AGN, respectively.
Figure 13: Distributions of \(O_{32}\) (top left panel), \(\log(R_{sii})\) (top right panel), \(\log(T_{e}/K)\) (bottom left panel) and the improved electron density \(\log(n_{e}/\mathrm{cm^{-3}})\) (bottom right panel) of the 133 Type-1 AGN (in red) and the 101 Type-2 AGN (in blue) which have apparent [O iii]\(\lambda\)4364Å. In the top region of each panel, the vertical dashed lines in red and in blue mark the positions corresponding to the mean values of the Type-1 AGN and of the Type-2 AGN, respectively.
Figure 15: On the correlations between \(n_{e}\) and the parameters \(z\), O3HB, N2HA, \(L_{O3}\) and SN for the Type-1 AGN (panels in the first two rows) and the Type-2 AGN (panels in the last two rows) shown in the bottom right panel of Fig. 13. In each panel in the first two rows, open circles in red and in blue show the results for the Type-1 AGN with \(\log(n_{e})<2.87\) and the Type-1 AGN with \(\log(n_{e})>2.87\), respectively. The horizontal solid red line and horizontal dashed red lines show the mean value of the parameter shown on the Y-axis and the corresponding 1RMS scatter bands for all the Type-1 AGN. In each panel in the last two rows, open circles in red and in blue show the results for the Type-2 AGN with \(\log(n_{e})<2.45\) and the Type-2 AGN with \(\log(n_{e})>2.45\), respectively. The horizontal solid red line and horizontal dashed red lines show the mean value of the parameter shown on the Y-axis and the corresponding 1RMS scatter bands for all the Type-2 AGN. In the top region of each panel of the Figure, three number ratios are marked: the ratio of AGN outside the 1RMS scatter bands to all the AGN, the ratio of AGN shown in red outside the 1RMS scatter bands to all the AGN shown in red, and the ratio of AGN shown in blue outside the 1RMS scatter bands to all the AGN shown in blue.
confirmed, even though there are fewer AGN in the newly created subsamples.
Furthermore, given the results shown in Fig. 13, it is interesting to consider whether the Type-1 AGN with higher electron densities \(\log(n_{e})>2.87\) have different physical properties from the other Type-1 AGN with \(\log(n_{e})<2.87\). The panels in the first two rows of Fig. 15 show the correlations between \(n_{e}\) and the parameters \(z\), SN, O3HB, N2HA and \(L_{O3}\) of the Type-1 AGN shown in Fig. 13. The correlations have Spearman rank correlation coefficients smaller than 0.2; thus, rather than determining linear fits (whose parameters, determined by the FITEXY code, are smaller than the corresponding uncertainties), the mean values of \(z\), SN, O3HB, N2HA and \(L_{O3}\) and the corresponding 1RMS scatter bands are shown in each panel of Fig. 15. It is clear that there are similar number ratios (marked in each panel) of Type-1 AGN outside the 1RMS scatter bands to all Type-1 AGN, of the Type-1 AGN with \(\log(n_{e})<2.87\) outside the 1RMS scatter bands to all the Type-1 AGN with \(\log(n_{e})<2.87\), and of the Type-1 AGN with \(\log(n_{e})>2.87\) outside the 1RMS scatter bands to all the Type-1 AGN with \(\log(n_{e})>2.87\). The similar number ratios strongly indicate that the Type-1 AGN with \(\log(n_{e})>2.87\) are not outliers among the reported Type-1 AGN. Meanwhile, the panels in the last two rows show similar results for the Type-2 AGN shown in Fig. 13 with the accepted cut value \(\log(n_{e})\sim 2.45\) (the mean value of \(\log(n_{e})\) of the Type-2 AGN). Similar results can be found: there are similar number ratios (marked in each panel) of Type-2 AGN outside the 1RMS scatter bands to all Type-2 AGN, of the Type-2 AGN with \(\log(n_{e})<2.45\) outside the 1RMS scatter bands to all the Type-2 AGN with \(\log(n_{e})<2.45\), and of the Type-2 AGN with \(\log(n_{e})>2.45\) outside the 1RMS scatter bands to all the Type-2 AGN with \(\log(n_{e})>2.45\). Therefore, the results in Fig. 15 strongly support that the selected AGN with different \(n_{e}\) have physical properties similar to the other AGN reported in the manuscript.
Furthermore, as discussed in Filippenko & Halpern (1984) for two zones with different electron densities in NGC 7213, if higher electron density regions closer to the central BHs were visible in Type-1 AGN but probably seriously obscured in Type-2 AGN, higher electron densities (smaller \(R_{sii}\)) could be well expected in Type-1 AGN. To put it simply, if the two-zone model is accepted, the line fluxes (\(f_{6716}\), \(f_{6731}\)) of each [S ii] line include two components: one component (\(f_{6716,H}\), \(f_{6731,H}\)) from the higher electron density regions and the other component (\(f_{6716,L}\), \(f_{6731,L}\)) from the normal (or lower) electron density regions. Meanwhile, because \(f_{6716,H}\) and \(f_{6731,H}\) come from higher electron density regions, we have
\[f_{6716,H}/f_{6731,H}\ <\ f_{6716,L}/f_{6731,L} \tag{9}\]
Then, based on the strong dependence of the [S ii] flux ratio on the electron density, the mean electron density \(n_{e}\) can be simply determined from the flux ratio of the combined components as follows,
\[\begin{split}& R_{sii}=\frac{f_{6716,H}+f_{6716,L}}{f_{6731,H}+f_{6731,L}}\\ & n_{e}\times\left(\frac{10^{4}\,\mathrm{K}}{T_{e}}\right)^{0.5}\sim\frac{627.1\ \times\ R_{sii}\ -\ 909.17}{0.4315\ -\ R_{sii}}\end{split} \tag{10}\]
For Type-1 AGN, with no obscuration, all the parameters derived from the observed line fluxes can be accepted as intrinsic values. However, for Type-2 AGN, with orientation effects leading to the higher electron density zones being seriously obscured and thus without the contributions of \(f_{6716,H}\) and \(f_{6731,H}\), the \(R_{sii}\) in Type-2 AGN should be \(R_{sii,T2}=\frac{f_{6716,L}}{f_{6731,L}}\). Considering \(f_{6716,H}/f_{6731,H}<f_{6716,L}/f_{6731,L}\), we will clearly have
\[R_{sii,T2}\ >\ R_{sii,T1}\ =\ \frac{f_{6716,H}+f_{6716,L}}{f_{6731,H}+f_{6731,L}} \tag{11}\]
Figure 16: Distributions of \(f_{6716}\) (left panel) and \(f_{6731}\) (right panel) in the 548 Type-1 AGN (in red color) and in the 548 Type-2 AGN (in blue color) in the subsamples. Thick dashed lines in red and in blue represent the corresponding best Gaussian profiles for the distributions of the 548 Type-1 AGN and the 548 Type-2 AGN in the subsamples, respectively.
with \(R_{sii,T1}\) as the measured \(R_{sii}\) in Type-1 AGN. That is why the two-zone model could be applied to explain higher electron densities in Type-1 AGN based on the flux ratio \(R_{sii}\) of the [S ii] emission lines. Meanwhile, after considering the obscuration of the higher electron density regions expected from orientation effects, the flux intensities of the [S ii] emission lines in Type-2 AGN (\(f_{6716,L}\) and \(f_{6731,L}\)) should be smaller than the flux intensities (\(f_{6716,L}+f_{6716,H}\) and \(f_{6731,L}+f_{6731,H}\)) in Type-1 AGN. Therefore, it is interesting and necessary to check the effects of the probable higher electron density regions closer to the central BHs on our final results. However, the mean [S ii]\(\lambda\)6716Å ([S ii]\(\lambda\)6731Å) line intensities log(\(f_{6716}\)/10\({}^{-17}\)erg/s/cm\({}^{2}\)) (log(\(f_{6731}\)/10\({}^{-17}\)erg/s/cm\({}^{2}\))) are about 1.812\(\pm\)0.013 (1.767\(\pm\)0.010) and 1.832\(\pm\)0.013 (1.765\(\pm\)0.010) in the 548 Type-1 AGN and the 548 Type-2 AGN in the subsamples, respectively. The uncertainties are determined by the bootstrap method within 1000 loops. The distributions of \(f_{6716}\) and \(f_{6731}\) are shown in Fig. 16. Slightly higher [S ii]\(\lambda\)6716Å line intensities are found in Type-2 AGN than in Type-1 AGN, and similar [S ii]\(\lambda\)6731Å line intensities are found in Type-1 AGN and Type-2 AGN. Because the Type-1 AGN and the Type-2 AGN in the subsamples have the same redshift distributions, the [S ii] line luminosity distributions are not shown and discussed again. These results are against the expected higher [S ii] line intensities in Type-1 AGN; therefore, higher electron density regions visible in Type-1 AGN (or higher electron density regions partly obscured in Type-2 AGN) cannot well explain the higher electron densities in Type-1 AGN than in Type-2 AGN.
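The two-zone algebra of equations (9)-(11) can be illustrated numerically with hypothetical component fluxes:

```python
# A minimal numerical illustration of eqs. (9)-(11): obscuring a
# high-density component (smaller line ratio) raises the observed R_sii.
f6716_L, f6731_L = 1.40, 1.00      # low-density zone:  ratio 1.40
f6716_H, f6731_H = 0.50, 1.00      # high-density zone: ratio 0.50

r_t2 = f6716_L / f6731_L                          # high-density zone obscured
r_t1 = (f6716_L + f6716_H) / (f6731_L + f6731_H)  # both zones visible
assert r_t2 > r_t1                                # 1.40 > 0.95
```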
Finally, simple discussions are given on probable asymmetric components in the [S ii] doublet, although there are no clear clues supporting apparent asymmetric components in the [S ii] doublet in the mean spectra of the high-quality AGN. As is known, asymmetric components in the [S ii] doublet could be tightly related to radial outflows, and galactic outflows could be tightly related to radio emission, as in the more recent results in Jarvis et al. (2019). Therefore, for the AGN in the main samples, the radio properties are checked through the FIRST (Faint Images of the Radio Sky at Twenty-Centimeters) database (Becker et al., 1995; Helfand et al., 2015), to show whether there are quite different \(\log(R_{sii})\) between AGN without radio emission and AGN with radio emission. Among the 6039 Type-1 AGN collected from SDSS DR12, there are 4432 Type-1 AGN (no-radio Type-1 AGN) covered by FIRST but with zero radio emission intensity, and 1607 Type-1 AGN (radio Type-1 AGN) covered by FIRST with radio emission intensity larger than zero. The mean \(\log(R_{sii})\) are about 0.052\(\pm\)0.003 for the 4432 no-radio Type-1 AGN and 0.049\(\pm\)0.003 for the 1607 radio Type-1 AGN, respectively. Meanwhile, among the 12999 Type-2 AGN collected from SDSS DR12, there are 10431 Type-2 AGN (no-radio Type-2 AGN) covered by FIRST but with zero radio emission intensity, and 2568 Type-2 AGN (radio Type-2 AGN) covered by FIRST with radio emission intensity larger than zero. The mean \(\log(R_{sii})\) are about 0.088\(\pm\)0.005 for the 10431 no-radio Type-2 AGN and 0.082\(\pm\)0.005 for the 2568 radio Type-2 AGN. Therefore, even considering the effects of asymmetric components related to radio emission, it can be re-confirmed that Type-1 AGN have higher \(n_{e}\) in NLRs than Type-2 AGN.
## 4 Further Implications
If the higher electron densities \(n_{e}\) in NLRs in Type-1 AGN are intrinsically true, there could be some special Type-1 AGN whose NLRs have electron densities high enough to approach the critical densities of the forbidden emission lines, once strong electron injection lasts long enough, leading to quite weak narrow forbidden emission lines. Therefore, detecting such special Type-1 AGN without narrow forbidden emission lines in the near future is the main objective of one of our manuscripts in preparation.
## 5 Summary and Conclusions
Finally, the main summary and conclusions are as follows.
* All the low-redshift (\(z<0.3\)) Type-1 AGN and Type-2 AGN are collected from SDSS DR12, and the [S ii]\(\lambda\)6716, 6731Å doublets are well measured. Based on the reliable [S ii]\(\lambda\)6716, 6731Å doublets, there are 6039 Type-1 AGN with reliable [S ii] doublets and apparent broad H\(\alpha\) emission lines, and 12999 Type-2 AGN with reliable [S ii] doublets but no broad H\(\alpha\) emission lines.
* After considering the controversial conclusions on the physical nature of Type-2 LINERs (at least part of which are Type-2 AGN without AGN nature) and the strong star-forming contributions in composite galaxies, both Type-2 LINERs and composite galaxies are excluded from the final main sample of Type-2 AGN, leaving 8725 Type-2 AGN in the final main sample.
* Based on the reliable line flux ratio \(R_{sii}\) of [S ii]\(\lambda\)6716Å to [S ii]\(\lambda\)6731Å, higher electron densities \(n_{e}\) in NLRs are found in Type-1 AGN than in Type-2 AGN, and in Type-2 AGN than in HII galaxies.
* After considering the necessary effects of redshift and central AGN activity on the distributions of \(n_{e}\), two subsamples of 548 Type-1 AGN and 548 Type-2 AGN are created to have the same distributions of \(z\), O3HB, N2HA, \(L_{O3}\) and SN, still leading to higher \(n_{e}\) in the NLRs of Type-1 AGN than of Type-2 AGN, with a confidence level higher than 5\(\sigma\).
* Comparing the \(n_{e}\) in NLRs between HII galaxies and Type-2 AGN, AGN activity related to the central BH accretion power should play a key role in the higher electron densities in NLRs, due to electrons being injected by the galactic-scale outflows expected from AGN feedback.
* Even though the Type-1 AGN and Type-2 AGN in the subsamples have the same present-time properties of O3HB, N2HA and \(L_{O3}\), the higher \(n_{e}\) in NLRs in Type-1 AGN is still confirmed. Therefore, longer time durations of AGN activity in Type-1 AGN should be preferred.
* Considering the lower electron densities in HII galaxies, stronger star-forming contributions to NLRs could be applied to explain the lower electron densities in NLRs in Type-2 AGN than in Type-1 AGN, if the similar host galaxy evolutionary histories expected from the Unified Model for the AGN in the subsamples with the same redshift distributions are not considered.
* After considering the probable effects of asymmetric components in the [S ii] doublets related to radio emission, it can be re-confirmed that Type-1 AGN without (with) radio emission have higher \(n_{e}\) in NLRs than Type-2 AGN without (with) radio emission.
* After considering the effects of the electron temperatures, traced by the flux ratio of the [O iii]\(\lambda 4364,4959,5007\)Å emission lines, on estimating the electron densities in NLRs, the apparently larger \(n_{e}\) in NLRs in Type-1 AGN than in Type-2 AGN is re-confirmed.
* Either the higher \(n_{e}\) in NLRs in Type-1 AGN than in Type-2 AGN, or the expected longer time durations of AGN activity triggering outflows in Type-1 AGN than in Type-2 AGN, or the stronger star-forming contributions in Type-2 AGN than in Type-1 AGN could provide interesting challenges to the currently accepted Unified model of AGN.
## Acknowledgements
Zhang gratefully acknowledges the anonymous referee for constructive and valuable comments and suggestions that greatly improved the paper. Zhang gratefully acknowledges the kind financial support from NSFC-12173020. This manuscript has made use of the data from the SDSS projects, [http://www.sdss3.org/](http://www.sdss3.org/), managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaborations. This manuscript has made use of the data from the FIRST database [http://sundog.stsci.edu/](http://sundog.stsci.edu/). This paper has made use of the MPFIT package [https://pages.physics.wisc.edu/](https://pages.physics.wisc.edu/)\(\sim\)craigm/idl/cmpfit.html and the FITEXY procedure [https://idlastro.gsfc.nasa.gov/ftp/pro/math/fitexy.pdf](https://idlastro.gsfc.nasa.gov/ftp/pro/math/fitexy.pdf).
|
2305.08103 | A Unifying Formal Approach to Importance Values in Boolean Functions | Boolean functions and their representation through logics, circuits, machine
learning classifiers, or binary decision diagrams (BDDs) play a central role in
the design and analysis of computing systems. Quantifying the relative impact
of variables on the truth value by means of importance values can provide
useful insights to steer system design and debugging. In this paper, we
introduce a uniform framework for reasoning about such values, relying on a
generic notion of importance value functions (IVFs). The class of IVFs is
defined by axioms motivated from several notions of importance values
introduced in the literature, including Ben-Or and Linial's influence and
Chockler, Halpern, and Kupferman's notion of responsibility and blame. We
establish a connection between IVFs and game-theoretic concepts such as Shapley
and Banzhaf values, both of which measure the impact of players on outcomes in
cooperative games. Exploiting BDD-based symbolic methods and projected model
counting, we devise and evaluate practical computation schemes for IVFs. | Hans Harder, Simon Jantsch, Christel Baier, Clemens Dubslaff | 2023-05-14T08:36:20Z | http://arxiv.org/abs/2305.08103v1 | # A Unifying Formal Approach to Importance Values in Boolean Functions
###### Abstract
Boolean functions and their representation through logics, circuits, machine learning classifiers, or binary decision diagrams (BDDs) play a central role in the design and analysis of computing systems. Quantifying the relative impact of variables on the truth value by means of _importance values_ can provide useful insights to steer system design and debugging. In this paper, we introduce a uniform framework for reasoning about such values, relying on a generic notion of _importance value functions (IVFs)_. The class of IVFs is defined by axioms motivated from several notions of importance values introduced in the literature, including Ben-Or and Linial's _influence_ and Chockler, Halpern, and Kupferman's notion of _responsibility_ and _blame_. We establish a connection between IVFs and game-theoretic concepts such as _Shapley_ and _Banzhaf_ values, both of which measure the impact of players on outcomes in cooperative games. Exploiting BDD-based symbolic methods and projected model counting, we devise and evaluate practical computation schemes for IVFs.
## 1 Introduction
Boolean functions arise in many areas of computer science and mathematics, e.g., in circuit design, formal logics, coding theory, artificial intelligence, machine learning, and system analysis [1, 14]. When modeling and analyzing systems through Boolean functions, many design decisions are affected by the relevance of variables for the outcome of the function. Examples include noise-reduction components for important input variables to increase reliability of circuits, prioritizing important variables in decision-making of protocols, or the order of variables in BDDs [1, 1]. Many ideas to quantify such notions of _importance_ of variables in Boolean functions have since been considered in the literature. To mention a few, _influence_[1] is used to determine power of actors in voting schemes, [1] devised measures based on how constant a function becomes depending on variable assignments, _blame_[2] quantifies the average _responsibility_[1] of input variables on the outcome of circuits or on causal reasoning, and the _Jeroslow-Wang value_[1] quantifies importance of variables in CNFs to derive splitting rules for SAT-solvers [1]. Closely related are notions of impact in cooperative games, e.g., through the _Shapley value_[2] or the _Banzhaf value_[1].
Although some of the aforementioned concepts are of quite different nature and serve different purposes, they share some common ideas. This raises the question of what characteristics importance values have and how the notions of the literature relate. The motivation of this paper is to advance the understanding of importance values, independent of concrete applications. For this purpose, we introduce a generic axiomatic framework that constitutes the class of _importance value functions (IVFs)_. Our axioms are motivated by properties one would intuitively expect from IVFs, e.g., that independent variables have no importance or that permutations do not change importance values. We show basic relationships within and between IVFs and provide new insights for existing and new importance measures. By connecting Boolean functions and cooperative games through _cooperative game mappings (CGMs)_ and using Shapley and Banzhaf values, we show how to generically derive new IVFs. All aforementioned notions of importance values from the literature satisfy our IVF axioms, showing that we provide a _unifying framework_ for all these notions, including CGM-derived ones.
Most notions of importance are known to be computationally hard, e.g., computing influence or the Shapley value is #P-complete [15, 16, 17]. We address computational aspects by devising practical computation schemes for IVFs using projected model counting [1] and BDDs.
Contributions and outline. In summary, our main contribution is an axiomatic definition of IVFs for variables in Boolean functions (Section 3), covering notions of importance from the literature (Sections 4.1 and 4.2). Moreover, we derive novel IVFs by linking Boolean functions with cooperative games and related values (Section 4.3). Finally, we provide practical computation schemes for IVFs (Section 5).
Supplemental material. This is a preprint of the paper accepted at the 32nd International Joint Conference on Artificial Intelligence (IJCAI'23). It includes proofs and other additional material in the appendix. An implementation of the computing schemes for IVFs can be found at [https://github.com/graps1/impmeas](https://github.com/graps1/impmeas).
## 2 Preliminaries
Let \(X=\{x,y,z,\dots\}\) be a finite set of \(n=|X|\) variables, which we assume to be fixed throughout the paper.
Assignments. An _assignment over \(U\subseteq X\)_ is a function \(\mathbf{u}\colon U\to\{0,1\}\), written in the form \(\mathbf{u}=x/0;y/1;\dots\). We denote assignments by bold lower-case letters and their domains by corresponding upper-case letters. If \(\mathbf{u}\) and \(\mathbf{v}\) have disjoint domains, we write their _concatenation_ as \(\mathbf{w}=\mathbf{u};\mathbf{v}\) with \(W=V\cup U\) and \(\mathbf{w}(x)=\mathbf{u}(x)\) if \(x\in U\) and \(\mathbf{w}(x)=\mathbf{v}(x)\) if \(x\in V\). The _restriction_ of \(\mathbf{u}\) to a domain \(S\subseteq U\) is denoted by \(\mathbf{u}_{S}\). For a permutation \(\sigma\) of \(X\), we define \(\sigma\mathbf{u}\) as the assignment over \(\sigma(U)\) with \((\sigma\mathbf{u})(x)=\mathbf{u}(\sigma^{-1}(x))\).
Boolean functions. We call \(f,g,h,\dots\colon\{0,1\}^{X}\to\{0,1\}\) _Boolean functions_, collected in a set \(\mathbb{B}(X)\). We write \(g=x\) if \(g\) is the indicator function of \(x\), and we write \(\overline{g}\) for negation, \(f\lor g\) for disjunction, \(fg\) for conjunction and \(f\oplus g\) for exclusive disjunction. The _cofactor of \(f\) w.r.t. an assignment \(\mathbf{v}\)_ is the function \(f_{\mathbf{v}}\) that always sets variables in \(V\) to the value given by \(\mathbf{v}\), and is defined as \(f_{\mathbf{v}}(\mathbf{u})=f(\mathbf{v};\mathbf{u}_{U\setminus V})\). The _Shannon decomposition of \(f\) w.r.t. variable \(x\)_ is a decomposition rule stating that \(f=xf_{x/1}\lor\overline{x}f_{x/0}\) holds, where \(f_{x/1}\) and \(f_{x/0}\) are the _positive_ and _negative_ cofactor of \(f\) w.r.t. \(x\). For a Boolean function \(f\), variable \(x\), and Boolean function or variable \(s\), let \(f[x/s]=sf_{x/1}\lor\overline{s}f_{x/0}\) be the function that replaces \(x\) by \(s\). For example, if \(f=y\lor xz\), then \(f_{x/1}=y\lor z\) and \(f_{x/0}=y\). Moreover, for \(s=x_{1}x_{2}\), we have
\[f[x/s]=s(y\lor z)\lor\overline{s}y=y\lor sz=y\lor x_{1}x_{2}z.\]
For \(\sim\in\{\leq,\geq,=\}\), we write \(f\sim g\) if \(f(\mathbf{u})\sim g(\mathbf{u})\) is true for all assignments. We collect the variables that \(f\) depends on in the set \(\mathsf{dep}(f)=\{x\in X:f_{x/1}\neq f_{x/0}\}\). If \(\mathbf{v}\) is an assignment with \(\mathsf{dep}(f)\subseteq V\), then \(f(\mathbf{v})\) denotes the only possible value that \(f_{\mathbf{v}}\) can take.
We say that \(f\) is _monotone in \(x\)_ if \(f_{x/1}\geq f_{x/0}\), and call \(f\)_monotone_ if \(f\) is monotone in all of its variables. Furthermore, \(f\) is the _dual_ of \(g\) if \(f(\mathbf{u})=\overline{g}(\mathbf{u})\), where \(\overline{\mathbf{u}}\) is the variable-wise negation of \(\mathbf{u}\). We call \(f\)_symmetric_ if \(f=\sigma f\) for all permutations \(\sigma\) of \(X\), where \(\sigma f(\mathbf{u})=f(\sigma^{-1}\mathbf{u})\).
Expectations. We denote the expectation of \(f\) w.r.t. the uniform distribution over \(D\) by \(\mathbb{E}_{d\in D}[f(d)]\) for \(f:D\to\mathbb{R}\). We only consider cases where \(D\) is finite, so
\[\mathbb{E}_{d\in D}[f(d)]=\frac{1}{|D|}\sum_{d\in D}f(d).\]
If the domain of \(f\) is clear, we simply write \(\mathbb{E}[f]\). For \(f\in\mathbb{B}(X)\), \(\mathbb{E}[f]\) is the fraction of satisfying assignments of \(f\).
Modular decompositions. We introduce a notion of _modularity_ to capture the independence of subfunctions, as is common in the theory of Boolean functions and related fields [1, 1, 10]. Intuitively, \(f\) is modular in \(g\) if \(f\) treats \(g\) like a subfunction and otherwise ignores all variables that \(g\) depends on. We define modularity in terms of a _template function_ \(\ell\) in which \(g\) is represented by a variable \(x\):
**Definition 1**.: Let \(f,g\in\mathbb{B}(X)\). We call \(f\)_modular in \(g\)_ if \(g\) is not constant and there is \(\ell\in\mathbb{B}(X)\) and \(x\in X\) such that \(\mathsf{dep}(\ell)\cap\mathsf{dep}(g)=\varnothing\) and \(f=\ell[x/g]\). If \(\ell\) is monotone in \(x\), then \(f\) is _monotonically modular in \(g\)_.
If \(f\) is modular in \(g\) with \(\ell\) and \(x\) as above, then \(f(\mathbf{u})=\ell(\mathbf{w})\), where \(\mathbf{w}\) is defined for \(y\in X\) as
\[\mathbf{w}(y)=\begin{cases}g(\mathbf{u})&\text{if $y=x$, and}\\ \mathbf{u}(y)&\text{otherwise.}\end{cases}\]
Thus, the value computed by \(g\) is assigned to \(x\) and then used by \(\ell\), which otherwise is not influenced by the variables that \(g\) depends on. For example, \(f=x_{1}\lor z_{1}z_{2}x_{2}\) is modular in \(g=z_{1}z_{2}\) since \(f\) can be obtained by replacing \(x\) in \(\ell=x_{1}\lor xx_{2}\) by \(g\). Note that \(\mathsf{dep}(\ell)=\{x,x_{1},x_{2}\}\) and \(\mathsf{dep}(g)=\{z_{1},z_{2}\}\) are disjoint. This property is crucial, since it ensures \(f\) and \(g\) are coupled through variable \(x\) only.
If \(f\) is modular in \(g\), then the cofactors \(\ell_{x/1}\) and \(\ell_{x/0}\) must be unique since \(g\) is not constant. (See Proposition 5 in the appendix.) Hence, we can define the _cofactors of \(f\) w.r.t. \(g\)_ as \(f_{g/1}=\ell_{x/1}\) and \(f_{g/0}=\ell_{x/0}\). The instantiation is reversed by setting \(f[g/x]=xf_{g/1}\lor\overline{x}f_{g/0}\).
Boolean derivatives. We frequently rely on the _derivative of a Boolean function \(f\) w.r.t. variable \(x\)_,
\[\mathrm{D}_{x}f=f_{x/1}\oplus f_{x/0},\]
which encodes the undirected change of \(f\) w.r.t. \(x\). For example, \(f=x\lor y\) has the derivative \(\mathrm{D}_{x}f=\overline{y}\), with the intuition that \(x\) can only have an impact if \(y\) is set to zero. Furthermore, if \(f\) is modular in \(g\), we define the _derivative of \(f\) w.r.t. \(g\)_ as \(\mathrm{D}_{g}f=f_{g/1}\oplus f_{g/0}\). Given this, we obtain the following lemma corresponding to the chain rule known in calculus:
**Lemma 1**.: _Let \(f\) be modular in \(g\) and \(x\in\mathsf{dep}(g)\). Then_
\[\mathrm{D}_{x}f=(\mathrm{D}_{x}g)(\mathrm{D}_{g}f).\]
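Lemma 1 can be checked by brute force on the earlier example \(f=x_{1}\lor z_{1}z_{2}x_{2}\), which is modular in \(g=z_{1}z_{2}\); the sketch below reuses `cofactor` from the sketch above.

```python
# A minimal brute-force check of the chain rule D_x f = (D_x g)(D_g f)
# for f = x1 or (z1 z2) x2, g = z1 z2, and x = z1.
from itertools import product

def derivative(f, x):
    return lambda u: cofactor(f, {x: 1})(u) ^ cofactor(f, {x: 0})(u)

g = lambda u: u["z1"] and u["z2"]
f = lambda u: u["x1"] or (g(u) and u["x2"])
# D_g f = ell_{x/1} xor ell_{x/0} with ell = x1 or x x2:
dgf = lambda u: (u["x1"] or u["x2"]) ^ u["x1"]

for vals in product([0, 1], repeat=4):
    u = dict(zip(["x1", "x2", "z1", "z2"], vals))
    assert derivative(f, "z1")(u) == (derivative(g, "z1")(u) & dgf(u))
```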
## 3 Importance Value Functions
In this section, we devise axiomatic properties that should be fulfilled by every _reasonable_ importance attribution scheme.
For a Boolean function \(f\) and a variable \(x\), we quantify the importance of \(x\) in \(f\) by a number \(\mathfrak{I}_{x}(f)\in\mathbb{R}\), computed by some _value function_\(\mathfrak{I}\). Not every value makes intuitive sense when interpreted as the "importance" of \(x\), so we need to pose certain restrictions on \(\mathfrak{I}\).
We argue that \(\mathfrak{I}\) should be bounded, with \(1\) marking the highest and \(0\) the lowest importance; that functions which are independent of a variable should rate these variables the lowest importance (e.g., \(\mathfrak{I}_{x}(f)=0\) if \(f=y\lor z\)); that functions which depend on one variable only should rate these variables the highest importance (e.g., \(\mathfrak{I}_{x}(f)=1\) for \(f=x\)); and that neither variable names nor polarities should play a role in determining their importance (e.g., \(\mathfrak{I}_{x}(x\overline{z})=\mathfrak{I}_{z}(x\overline{z})\), cf. [1, 10]).
**Definition 2** (IVF).: A _value function_ is a mapping of the form \(\mathcal{I}\colon X\times\mathbb{B}(X)\to\mathbb{R}\) with \((x,f)\mapsto\mathcal{I}_{x}(f)\). An _importance value function (IVF)_ is a value function \(\mathcal{I}\) where for all \(x,y\in X\), permutations \(\sigma\colon X\to X\), and \(f,g,h\in\mathbb{B}(X)\):
* (Bound) \(0\leq\mathcal{I}_{x}(f)\leq 1\).
* (Dum) \(\mathcal{I}_{x}(f)=0\) if \(x\not\in\mathtt{dep}(f)\).
* (Dic) \(\mathcal{I}_{x}(f)=1\) if \(\mathtt{dep}(f)=\{x\}\).
* (Type) (i) \(\mathcal{I}_{x}(f)=\mathcal{I}_{\sigma(x)}(\sigma f)\) and (ii) \(\mathcal{I}_{x}(f)=\mathcal{I}_{x}(f[y/\overline{y}])\).
* (ModEC) \(\mathcal{I}_{x}(f)\geq\mathcal{I}_{x}(h)\) if (i) \(f\) and \(h\) are monotonically modular in \(g\), (ii) \(f_{g/1}\geq h_{g/1}\) and \(h_{g/0}\geq f_{g/0}\), and (iii) \(x\in\mathtt{dep}(g)\).
Bound, Dum for "dummy", Dic for "dictator" and Type for "type invariance" were discussed above. ModEC (for "modular encapsulation consistency") is the only property that allows the inference of non-trivial importance inequalities in different functions. Let us explain its intuition. We say that \(f\)_encapsulates_\(h\)_on_\(g\) if these functions satisfy (i) and (ii) from ModEC. Intuitively, together with (i), condition (ii) states that _if one can control the output of \(g\), it is both easier to satisfy \(f\) than \(h\)_ (using \(f_{g/1}\geq h_{g/1}\)) _and to falsify_\(f\) _than_\(h\) (using \(h_{g/0}\geq f_{g/0}\)). We argue in ModEC that if \(f\) encapsulates \(h\) on \(g\), then \(g\)'s impact on \(f\) is higher than on \(h\), and thus, the importance of variables in \(\mathtt{dep}(g)\) (cf. (iii)) should be also higher w.r.t. \(f\) than w.r.t. \(h\).
_Example._ Let \(f=x_{1}x_{2}\lor x_{3}x_{4}x_{5}\), \(h=x_{3}x_{4}\lor x_{1}x_{2}x_{5}\), and \(\mathcal{I}\) be an IVF. Then \(f\) encapsulates \(h\) on \(g=x_{1}x_{2}\), since
\[\underbrace{1}_{f_{g/1}}\;\geq\;\underbrace{x_{3}x_{4}\lor x_{5}}_{h_{g/1}}\; \geq\;\underbrace{x_{3}x_{4}}_{h_{g/0}}\;\geq\;\underbrace{x_{3}x_{4}x_{5}}_{f _{g/0}}.\]
We then get \(\mathcal{I}_{x_{1}}(f)\geq\mathcal{I}_{x_{1}}(h)\) by application of ModEC. Swapping \(x_{1}\) with \(x_{3}\) and \(x_{2}\) with \(x_{4}\), we obtain a permutation \(\sigma\) such that \(h=\sigma f\). By Type, we derive \(\mathcal{I}_{x_{1}}(h)=\mathcal{I}_{x_{3}}(f)\). Using Type on the other variables yields
\[\mathcal{I}_{x_{1}}(f)=\mathcal{I}_{x_{2}}(f)\geq\mathcal{I}_{x_{3}}(f)= \mathcal{I}_{x_{4}}(f)=\mathcal{I}_{x_{5}}(f).\]
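A minimal brute-force check of these inequalities, using the influence (formally introduced in Section 4.2) as a concrete IVF:

```python
# A minimal sketch: the influence as a concrete IVF, checked on the
# example f = x1 x2 or x3 x4 x5 and h = sigma f above.
from itertools import product

def influence(f, x, variables):
    us = [dict(zip(variables, vals))
          for vals in product([0, 1], repeat=len(variables))]
    return sum(f({**u, x: 1}) ^ f({**u, x: 0}) for u in us) / len(us)

V = ["x1", "x2", "x3", "x4", "x5"]
f = lambda u: (u["x1"] and u["x2"]) or (u["x3"] and u["x4"] and u["x5"])
h = lambda u: (u["x3"] and u["x4"]) or (u["x1"] and u["x2"] and u["x5"])
assert influence(f, "x1", V) >= influence(h, "x1", V)   # ModEC: 7/16 >= 3/16
assert influence(h, "x1", V) == influence(f, "x3", V)   # Type
```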
Together with Type, ModEC implies the _Winder preorder_, which is similar in spirit (see [1]). However, ModEC generalizes to modular decompositions and allows inferring importance inequalities w.r.t. different functions. (See Proposition 6 in the appendix.)
Biased and unbiased. We say that an IVF is _unbiased_ if \(\mathcal{I}_{x}(g)=\mathcal{I}_{x}(\overline{g})\) holds for all Boolean functions \(g\) and variables \(x\). That is, unbiased IVFs measure the impact of variables without any preference for one particular function outcome, while biased ones quantify the impact to enforce a function to return one or zero. Biased IVFs can, e.g., be useful when the task is to assign responsibility values for the violation of a specification.
### Further Properties
We defined IVFs following a conservative approach, collecting minimal requirements on IVFs. Further additional properties can improve on the predictability and robustness of IVFs.
**Definition 3**.: A value function \(\mathcal{I}\) is called
* _rank preserving_, if for all \(f,g\in\mathbb{B}(X)\) such that \(f\) is modular in \(g\) and \(x,y\in\mathtt{dep}(g)\): \[\mathcal{I}_{x}(g)\geq\mathcal{I}_{y}(g)\;\implies\;\mathcal{I}_{x}(f)\geq \mathcal{I}_{y}(f),\]
* _chain-rule decomposable_, if for all \(f,g\in\mathbb{B}(X)\) such that \(f\) is modular in \(g\) and \(x\in\mathtt{dep}(g)\): \[\mathcal{I}_{x}(f)\;=\;\mathcal{I}_{x}(g)\mathcal{I}_{g}(f),\] where \(\mathcal{I}_{g}(f)=\mathcal{I}_{x_{g}}(f[g/x_{g}])\) for some \(x_{g}\not\in\mathtt{dep}(f)\),
* and _derivative dependent_, if for all \(f,g\in\mathbb{B}(X)\), \(x\in X\): \[\mathrm{D}_{x}f\geq\mathrm{D}_{x}g\;\implies\;\mathcal{I}_{x}(f)\geq \mathcal{I}_{x}(g).\]
We also consider _weak_ variants of _rank preserving_ and _chain-rule decomposable_ where \(f\) ranges only over functions that are _monotonically_ modular in \(g\).
Rank preservation. Rank preservation states that the relation between two variables should not change if the function is embedded somewhere else. This can be desired, e.g., during a modeling process in which distinct Boolean functions are composed or fresh variables are added, where rank-preserving IVFs maintain the relative importance order of variables. We see this as a useful but optional property of IVFs, since an embedding could change some parameters of a function that might be relevant for the relationship of both variables. For example, if \(f=gz\) with \(z\not\in\mathtt{dep}(g)\), then the relative number of satisfying assignments is halved compared to \(g\). If \(x\) is more important than \(y\) in \(g\) but highly relies on \(g\) taking value one, it might be that this relationship is reversed for \(f\) (cf. example given in Section 4.1).
Chain-rule decomposability. If an IVF is chain-rule decomposable, then the importance of a variable in a module is the product of (i) its importance w.r.t. the module and (ii) the importance of the module w.r.t. the function. Many values studied in this paper satisfy this property (Section 4).
Example. Let \(f=x_{1}\oplus\cdots\oplus x_{m}\), and let \(\mathcal{I}\) be a chain-rule decomposable IVF with \(\mathcal{I}_{x}(x\oplus y)=\alpha\). Since \(f\) is modular in \(g=x_{1}\oplus\cdots\oplus x_{m-1}\), and \(g\) is modular in \(x_{1}\oplus\cdots\oplus x_{m-2}\), etc., we can apply the chain-rule property iteratively to get
\[\mathcal{I}_{x_{1}}(f)=\mathcal{I}_{x_{1}}(g)\mathcal{I}_{g}(f)=\mathcal{I}_{x_{ 1}}(g)\alpha=\cdots=\alpha^{m-1},\]
where we use Type to derive \(\mathcal{I}_{g}(f)=\mathcal{I}_{x_{g}}(x_{g}\oplus x_{m})=\alpha\).
Derivative dependence. Derivative dependence states that an IVF should quantify the _change_ a variable induces on a Boolean function. It can be used to derive, e.g., the inequality \(\mathcal{I}_{x_{1}}(x_{1}\oplus x_{2}x_{3})\geq\mathcal{I}_{x_{1}}(x_{2}\oplus x_{1}x_{3})\), which is not possible solely using ModEC since \(x_{1}\oplus x_{2}x_{3}\) is neither monotone in \(x_{1}\) nor in \(x_{2}\). If a value function \(\mathcal{I}\) (that is not necessarily an IVF) is derivative dependent, then this has some interesting implications. First, \(\mathcal{I}\) is unbiased and satisfies ModEC. Second, if \(\mathcal{I}\) is weakly chain-rule decomposable (weakly rank preserving), then it is also chain-rule decomposable (rank preserving). Finally, if \(\mathcal{I}\) satisfies Dic and Dum, then it is also bounded by zero and one. As a consequence, if \(\mathcal{I}\) is derivative dependent and satisfies Dic, Dum, and Type, then \(\mathcal{I}\) is an IVF. (See Proposition 7 in the appendix.)
### Induced Relations
In this section, we will establish foundational relations between IVFs. Recall that \(f\) is a _threshold function_ if
\[f(\mathbf{u})=1\quad\text{iff}\quad\sum_{x\in X}w_{x}\mathbf{u}(x)\geq\delta\quad\forall \mathbf{u}\in\{0,1\}^{X},\]
where \(\{w_{x}\}_{x\in X}\subseteq\mathbb{R}\) is a set of weights and \(\delta{\in}\mathbb{R}\) a threshold.
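By item (3) of Theorem 1 below, every IVF ranks the variables of a threshold function by their absolute weights; a minimal sketch of such a function:

```python
# A minimal sketch of a threshold function with weights w and threshold
# delta; e.g., the 3-variable majority function.
def threshold(w, delta):
    return lambda u: int(sum(w[x] * u[x] for x in w) >= delta)

maj3 = threshold({"x": 1, "y": 1, "z": 1}, 2)
assert maj3({"x": 1, "y": 1, "z": 0}) == 1
```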
**Theorem 1**.: _Let \(\mathfrak{I}\) be an IVF, \(f,g,h{\in}\mathbb{B}(X)\), \(x,y{\in}X\). Then:_
1. _If \(f\) is symmetric, then \(\mathfrak{I}_{x}(f)=\mathfrak{I}_{y}(f)\)._
2. _If \(\mathfrak{I}\) is unbiased and \(f\) is dual to \(g\), then \(\mathfrak{I}_{x}(f)=\mathfrak{I}_{x}(g)\)._
3. _If \(f\) is a threshold function with weights \(\{w_{x}\}_{x\in X}\subseteq\mathbb{R}\), then \(|w_{x}|\geq|w_{y}|\) implies \(\mathfrak{I}_{x}(f)\geq\mathfrak{I}_{y}(f)\)._
4. _If \(f\) is monotonically modular in \(g\) and \(x\in\mathsf{dep}(g)\), then \(\mathfrak{I}_{x}(g)\geq\mathfrak{I}_{x}(f)\)._
5. _If \(\mathfrak{I}\) is derivative dependent, \(f=g\oplus h\), and \(x\not\in\mathsf{dep}(h)\), then \(\mathfrak{I}_{x}(f)=\mathfrak{I}_{x}(g)\)._
6. _If \(\mathfrak{I}\) is (weakly) chain-rule decomposable, then it is (weakly) rank preserving._
For the case of threshold functions, Theorem 1 shows in (3) that any IVF will rank variables according to their absolute weights. In (4), it is stated that if a function is monotonically embedded somewhere, the importance of the variables in that function can only decrease, e.g., \(\mathfrak{I}_{x}(xy)\geq\mathfrak{I}_{x}(xyz)\). Moreover, in (5), if derivative dependence is satisfied, \(\oplus\)-parts without the variable can be dropped. As a consequence, \(\mathfrak{I}_{x}(f)=1\) whenever \(f\) is a parity function and \(x\in\mathsf{dep}(f)\).
## 4 Instances of Importance Value Functions
In this section, we show that IVFs can be instantiated with several notions for importance values from the literature and thus provide a unifying framework.
### Blame
Chockler, Halpern, and Kupferman's (CHK) notions of _responsibility_[10] and _blame_[10] measure the importance of \(x\) in \(f\) through the number of variables that have to be flipped in an assignment \(\mathbf{u}\) until \(x\) becomes _critical_, i.e., "flipping" \(x\) changes the outcome of \(f\) to its complement. Towards a formalization, let
\[\operatorname{flip}_{S}(\mathbf{u})(x)=\begin{cases}\overline{\mathbf{u}}(x)&\text{ if }x\in S\\ \mathbf{u}(x)&\text{otherwise}\end{cases}\]
denote the assignment that flips variables in \(S\). We now rely on the following notion of critical set:
**Definition 4** (Critical sets).: A _critical set_ of \(x\in X\) in \(f\in\mathbb{B}(X)\) under assignment \(\mathbf{u}\) over \(X\) is a set \(S\subseteq X\backslash\{x\}\) where
\[f(\mathbf{u})=f(\operatorname{flip}_{S}(\mathbf{u}))\text{ and }f(\mathbf{u})\neq f( \operatorname{flip}_{S\cup\{x\}}(\mathbf{u})).\]
We define \(\operatorname{scs}^{\mathbf{u}}_{x}(f)\) as the size of the smallest critical set, and set \(\operatorname{scs}^{\mathbf{u}}_{x}(f)=\infty\) if there is no such critical set.
_Example._ The set \(S=\{y\}\) is critical for \(x\) in \(f=x\lor y\) under \(\mathbf{u}=x/1;y/1\). It is also the smallest critical set. On the other hand, there is no critical set if \(\mathbf{u}=x/0;y/1\).
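A minimal brute-force sketch of \(\operatorname{scs}\) (exponential in the number of variables, so only suitable for small examples):

```python
# A minimal sketch of scs: the size of the smallest critical set of x in
# f under u (Definition 4), or infinity if no critical set exists.
from itertools import combinations
from math import inf

def flip(u, S):
    return {y: (1 - v if y in S else v) for y, v in u.items()}

def scs(f, x, u):
    rest = [y for y in u if y != x]
    for k in range(len(rest) + 1):
        for S in combinations(rest, k):
            if f(u) == f(flip(u, set(S))) != f(flip(u, set(S) | {x})):
                return k
    return inf

f = lambda u: u["x"] or u["y"]
assert scs(f, "x", {"x": 1, "y": 1}) == 1      # S = {y}
assert scs(f, "x", {"x": 0, "y": 1}) == inf
```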
The responsibility of \(x\) for \(f\) under \(\mathbf{u}\) is inversely related to \(\operatorname{scs}^{\mathbf{u}}_{x}(f)\). Using the following notion of a _share function_, we generalize the original notion of responsibility [10]:
**Definition 5** (Share function).: Call \(\rho\colon\mathbb{N}\cup\{\infty\}\to\mathbb{R}\) a _share function_ if (i) \(\rho\) is monotonically decreasing, (ii) \(\rho(\infty)=\lim_{n\to\infty}\rho(n)=0\), and (iii) \(\rho(0)=1\).
In particular, we consider three instances of share functions:
* \(\rho_{\exp}(k)=\nicefrac{1}{2^{k}}\),
* \(\rho_{\operatorname{frac}}(k)=\nicefrac{{1}}{{(k+1)}}\),
* \(\rho_{\operatorname{step}}(k)=1\) for \(k=0\) and \(\rho_{\operatorname{step}}(k)=0\) otherwise.
Given a share function \(\rho\), the _responsibility_ of \(x\) for \(f\) under \(\mathbf{u}\) is defined as \(\rho(\operatorname{scs}^{\mathbf{u}}_{x}(f))\). Note that \(\rho_{\operatorname{frac}}(\operatorname{scs}^{\mathbf{u}}_{x}(f))\) implements the classical notion of responsibility [10]. While responsibility corresponds to the size of the smallest critical set in a fixed assignment, CHK's _blame_ [10] takes a global perspective and fits our notion of value function. It is the expected value of the responsibility (we restrict ourselves to uniform distributions):
**Definition 6** (Blame).: For a share function \(\rho\), we define the _\(\rho\)-blame_ as value function \(\mathbf{B}^{\rho}\) where for any \(x\in X\), \(f\in\mathbb{B}(X)\):
\[\mathbf{B}^{\rho}_{x}(f)=\mathbb{E}_{\mathbf{u}\in\{0,1\}^{X}}[\rho(\operatorname{ scs}^{\mathbf{u}}_{x}(f))].\]
_Example._ Let \(f=x\lor y\). To compute the importance of \(x\), we can count the number of times \(\operatorname{scs}^{\mathbf{u}}_{x}(f)=0,1,2,\ldots,\infty\) occurs if \(\mathbf{u}\) ranges over the assignments for \(\{x,y\}\): \(\operatorname{scs}^{\mathbf{u}}_{x}(f)=\infty\) happens once, \(\operatorname{scs}^{\mathbf{u}}_{x}(f)=0\) happens twice, and \(\operatorname{scs}^{\mathbf{u}}_{x}(f)=1\) occurs once. Therefore,
\[\mathbf{B}^{\rho}_{x}(f)=\nicefrac{{1}}{{4}}\cdot\rho(\infty)+\nicefrac{{1}}{{2} }\cdot\rho(0)+\nicefrac{{1}}{{4}}\cdot\rho(1),\]
which is \(\nicefrac{{5}}{{8}}\) for \(\rho=\rho_{\exp}\).
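This computation can be reproduced by averaging the responsibilities over all assignments, reusing `scs` from the sketch above:

```python
# A minimal sketch of the rho-blame (Definition 6) by exhaustive
# enumeration; rho_exp(k) = 2**-k with rho_exp(inf) = 0.
from itertools import product
from math import inf

def blame(f, x, variables, rho):
    us = [dict(zip(variables, vals))
          for vals in product([0, 1], repeat=len(variables))]
    return sum(rho(scs(f, x, u)) for u in us) / len(us)

rho_exp = lambda k: 0.0 if k == inf else 0.5 ** k
f = lambda u: u["x"] or u["y"]
assert abs(blame(f, "x", ["x", "y"], rho_exp) - 5 / 8) < 1e-12
```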
Independent of \(\rho\), the blame is always an IVF:
**Theorem 2**.: \(\mathbf{B}^{\rho}\) _is an unbiased IVF for any share function \(\rho\)._
In full generality, the blame violates the optional properties for IVFs (see Section 3.1). For example, if \(\rho\neq\rho_{\operatorname{step}}\), then the \(\rho\)-blame is neither chain-rule decomposable nor derivative dependent, and one can find counterexamples for the rank-preservation property for \(\rho_{\operatorname{frac}}\) and \(\rho_{\exp}\):
**Proposition 1**.: _Let \(\rho\) be a share function. Then the following statements are equivalent:_
1. \(\mathbf{B}^{\rho}\) _is weakly chain-rule decomposable,_
2. \(\mathbf{B}^{\rho}\) _is derivative dependent, and_
3. \(\rho=\rho_{\operatorname{step}}\)_._
_Further, neither \(\mathbf{B}^{\rho_{\operatorname{frac}}}\) nor \(\mathbf{B}^{\rho_{\exp}}\) are weakly rank preserving._
To give an example of why the \(\rho_{\operatorname{frac}}\)-blame is not weakly rank preserving, consider \(g=x_{1}\overline{x}_{0}\overline{x}_{2}\lor\overline{x}_{1}x_{0}\lor x_{3}\) and \(f=g\lor z\). Note that \(f\) is clearly monotonically modular in \(g\); only \(z\) is added as a fresh variable. Nevertheless, the order of \(x_{0}\) and \(x_{3}\) changes:
\[\mathbf{B}^{\rho_{\operatorname{frac}}}_{x_{0}}(g)=0.6302<0.7188= \mathbf{B}^{\rho_{\operatorname{frac}}}_{x_{3}}(g)\] \[\mathbf{B}^{\rho_{\operatorname{frac}}}_{x_{0}}(f)=0.4802>0.4688= \mathbf{B}^{\rho_{\operatorname{frac}}}_{x_{3}}(f).\]
Intuitively, this is a consequence of CHK's definition of critical sets: for all Boolean functions \(h\), variables \(x\) and assignments \(\mathbf{u}\),
\[h(\mathbf{u})=1,h_{x/1}\geq h_{x/0},\mathbf{u}(x)=0\implies\operatorname{scs}^{\mathbf{u}} _{x}(h)=\infty.\]
Hence, whenever an assignment \(\mathbf{u}\) satisfies the premise for \(x\) in \(h\), the responsibility of \(x\) for \(h\) under \(\mathbf{u}\) will be zero.
For \(x_{3}\), this is less frequently the case in \(g\) than in \(f\) (\(19\%\) vs. \(34\%\) of all assignments). On the other hand, there is always a critical set for \(x_{0}\) in both \(f\) and \(g\). Partly for this reason, the importance of \(x_{3}\) decreases more than that of \(x_{0}\) when switching from \(g\) to \(f\).
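These percentages are easy to verify by enumeration. The following check (ours) counts the assignments satisfying the premise above for \(x_{3}\); since both \(g\) and \(f\) are monotone in \(x_{3}\), the premise reduces to \(h(\mathbf{u})=1\) and \(\mathbf{u}(x_{3})=0\).

```python
from itertools import product

g = lambda u: (u['x1'] and not u['x0'] and not u['x2']) \
    or (not u['x1'] and u['x0']) or u['x3']
f = lambda u: g(u) or u['z']

def premise_rate(h, xs, x):
    # fraction of assignments u with h(u) = 1 and u(x) = 0
    hits = 0
    for bits in product([0, 1], repeat=len(xs)):
        u = dict(zip(xs, bits))
        hits += bool(h(u)) and u[x] == 0
    return hits / 2 ** len(xs)

print(premise_rate(g, ['x0', 'x1', 'x2', 'x3'], 'x3'))       # 0.1875  (~19%)
print(premise_rate(f, ['x0', 'x1', 'x2', 'x3', 'z'], 'x3'))  # 0.34375 (~34%)
```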
**Modified Blame**
We modify the definition of critical sets in order to derive a _modified blame_ that satisfies more optional properties for a wider class of share functions.
For a Boolean function \(f\), an assignment \(\mathbf{u}\) over \(X\) and a variable \(x\), the _modified_\(\operatorname{scs}\) is defined as the size \(\operatorname{mscs}_{x}^{\mathbf{u}}(f)\) of the smallest set \(S\subseteq X\setminus\{x\}\) that satisfies
\[f\big{(}\mathrm{flip}_{S}(\mathbf{u})\big{)}\neq f\big{(}\mathrm{flip}_{S\cup\{x\}}(\mathbf{u})\big{)}.\]
If there is no such set, we set \(\operatorname{mscs}_{x}^{\mathbf{u}}(f)=\infty\).
_Example_.: The condition for critical sets is relaxed, hence \(\operatorname{mscs}_{x}^{\mathbf{u}}(f)\) provides a lower bound for \(\operatorname{scs}_{x}^{\mathbf{u}}(f)\). Let for example \(f=x\lor y\) and \(\mathbf{u}=x/0;y/1\). Then
\[\operatorname{mscs}_{x}^{\mathbf{u}}(f)=1<\infty=\operatorname{scs}_{x}^{\mathbf{u}}(f).\]
The definitions for responsibility and blame are analogous for the modified version, replacing \(\operatorname{scs}\) by \(\operatorname{mscs}\). We denote by \(\operatorname{\mathbf{MB}}^{\rho}\) the _modified \(\rho\)-blame_, which is (in contrast to \(\operatorname{\mathbf{B}}^{\rho}\)) always derivative dependent and even chain-rule decomposable if \(\rho\) is an exponential- or stepping-function:
**Theorem 3**.: \(\operatorname{\mathbf{MB}}^{\rho}\) _is an unbiased, derivative-dependent IVF for any share function \(\rho\). If there is \(0\leq\lambda<1\) so that \(\rho(k)=\lambda^{k}\) for all \(k\geq 1\), then \(\operatorname{\mathbf{MB}}^{\rho}\) is chain-rule decomposable._
### Influence
The influence [1, 10, 11] is a popular importance measure, defined as the probability that flipping the variable changes the function's outcome for uniformly distributed assignments:
**Definition 7**.: The _influence_ is the value function \(\mathbf{I}\) defined by \(\mathbf{I}_{x}(f)=\mathbb{E}[\mathrm{D}_{x}f]\) for all \(f\in\mathbb{B}(X)\) and variables \(x\in X\).
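As a quick illustration (ours), the influence can be computed in one pass over all assignments; for \(f=x\lor y\), flipping \(x\) matters exactly when \(y=0\).

```python
from itertools import product

def influence(f, xs, x):
    # I_x(f) = E[D_x f]: probability that flipping x changes the outcome
    cnt = 0
    for bits in product([0, 1], repeat=len(xs)):
        u = dict(zip(xs, bits))
        v = dict(u); v[x] = 1 - v[x]
        cnt += (f(u) != f(v))
    return cnt / 2 ** len(xs)

print(influence(lambda u: u['x'] or u['y'], ['x', 'y'], 'x'))   # 0.5
```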
It turns out that the influence is a special case of blame:
**Proposition 2**.: \(\mathbf{I}=\operatorname{\mathbf{MB}}^{\rho_{\operatorname{step}}}=\operatorname {\mathbf{B}}^{\rho_{\operatorname{step}}}\)_._
Since \(\rho_{\operatorname{step}}(k)=0^{k}\) for \(k\geq 1\), Proposition 2 and Theorem 3 show that the influence is a derivative-dependent, rank-preserving, and chain-rule decomposable IVF.
**Characterizing the Influence**
Call a value function \(\mathfrak{I}\)_cofactor-additive_ if for all Boolean functions \(f\) and variables \(x\neq z\):
\[\mathfrak{I}_{x}(f)=\nicefrac{{1}}{{2}}\cdot\mathfrak{I}_{x}(f_{z/0})+ \nicefrac{{1}}{{2}}\cdot\mathfrak{I}_{x}(f_{z/1}).\]
Using this notion, we _axiomatically characterize_ the influence as follows.
**Theorem 4**.: _A value function \(\mathfrak{I}\) satisfies Dic, Dum, and cofactor-additivity if and only if \(\mathfrak{I}=\mathbf{I}\)._
_Remark_.: A relaxed version of _cofactor-additivity_ assumes the existence of \(\alpha_{z},\beta_{z}\in\mathbb{R}\) for \(z\in X\) such that for all \(x\neq z\):
\[\mathfrak{I}_{x}(f)=\alpha_{z}\mathfrak{I}_{x}(f_{z/0})+\beta_{z}\mathfrak{I} _{x}(f_{z/1}).\]
This, together with the assumption that \(\mathfrak{I}\) satisfies Type, Dum and Dic, implies \(\alpha_{z}=\beta_{z}=\nicefrac{{1}}{{2}}\). Hence, another characterization of the influence consists of Type, Dum, Dic, and _relaxed cofactor-additivity_. (See Corollary 1 in the appendix.)
Moreover, we give a _syntactic characterization_ of the influence by a comparison to the two-sided Jeroslow-Wang heuristic used for SAT-solving [1, 10, 11]. This value is defined for families of sets of literals, which are sets of subsets of \(X\cup\{\overline{x}:x\in X\}\), and it weights subsets that contain \(x\) or \(\overline{x}\) by their respective lengths:
**Definition 8** ([1]).: Let \(\mathcal{D}\) be a family of sets of literals. The _two-sided Jeroslow-Wang value_ for a variable \(x\) is defined as
\[\operatorname{\mathbf{JW}}_{x}(\mathcal{D})=\sum_{C\in\mathcal{D}\ \text{s.t.}\ x\in C\ \text{or}\ \overline{x}\in C}2^{-|C|}\]
We call a set \(C\) of literals _trivial_ if there is a variable \(x\) such that \(x\in C\) and \(\overline{x}\in C\). For a variable \(x\), say that \(\mathcal{D}\) is _\(x\)-orthogonal_ if for all \(C,C^{\prime}\in\mathcal{D}\), \(C\neq C^{\prime}\), there is a literal \(\eta\not\in\{x,\overline{x}\}\) such that \(\eta\in C\) and \(\overline{\eta}\in C^{\prime}\). Orthogonality is well-studied for DNFs [1]. The two-sided Jeroslow-Wang value and the influence agree up to a factor of two for some families of sets of literals when interpreting them as DNFs:
**Theorem 5**.: _Let \(\mathcal{D}\) be a family of sets of literals such that all of its elements are non-trivial, and let \(x\) be a variable such that \(\mathcal{D}\) is \(x\)-orthogonal. Then:_
\[\mathbf{I}_{x}(\bigvee_{C\in\mathcal{D}}\bigwedge_{\eta\in C}\eta)\ =\ 2\cdot \operatorname{\mathbf{JW}}_{x}(\mathcal{D}).\]
A simple example that illustrates Theorem 5 would be \(\mathcal{D}=\{\{x,y,z\},\{y,\overline{z}\}\}\). Note that we can interpret \(\mathcal{D}\) as a CNF as well, since the influence does not distinguish between a function and its dual (Theorem 1). Note also that every Boolean function can be expressed by a family \(\mathcal{D}\) that satisfies the conditions of Theorem 5: for this, we construct the canonical DNF corresponding to \(f\) and resolve all monomials that differ only in \(x\). (See Proposition 8 in the appendix.)
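For this example, the factor-of-two relation of Theorem 5 can be checked directly; the snippet below (ours) reads \(\mathcal{D}\) as a DNF. Both clauses are non-trivial, and \(\mathcal{D}\) is \(x\)-orthogonal via the literal \(z\).

```python
from itertools import product

# D = {{x, y, z}, {y, not z}} interpreted as a DNF
f = lambda u: (u['x'] and u['y'] and u['z']) or (u['y'] and not u['z'])

def influence(f, xs, x):
    cnt = 0
    for bits in product([0, 1], repeat=len(xs)):
        u = dict(zip(xs, bits))
        v = dict(u); v[x] = 1 - v[x]
        cnt += (f(u) != f(v))
    return cnt / 2 ** len(xs)

jw_x = 2 ** -3   # only the clause {x, y, z} mentions x or its negation
print(influence(f, ['x', 'y', 'z'], 'x'), 2 * jw_x)   # 0.25 0.25
```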
### Cooperative Game Mappings
Attribution schemes analogous to what we call _value functions_ were already studied in the context of game theory, most often with emphasis on Shapley- and Banzhaf values [1, 11]. They are studied w.r.t. _cooperative games_, which are a popular way of modeling collaborative behavior. Instead of Boolean assignments, their domains are subsets (coalitions) of \(X\). Specifically, cooperative games are of the form \(v\colon 2^{X}\to\mathbb{R}\), in which the value \(v(S)\) is associated with the payoff that variables (players) in \(S\) receive when collaborating. Since more cooperation generally means higher payoffs, they are often assumed to be monotonically increasing w.r.t. set inclusion. In their unconstrained form, they are essentially pseudo-Boolean functions.
We denote by \(\mathbb{G}(X)\) the set of all cooperative games. If \(\mathtt{image}(v)\subseteq\{0,1\}\), then we call \(v\)_simple_. For a cooperative game \(v\), we denote by \(\partial_{x}v\) the cooperative game that computes the "derivative" of \(v\) w.r.t. \(x\), which is \(\partial_{x}v(S)=v(S\cup\{x\})-v(S\setminus\{x\})\). We compose cooperative games using operations such as \(\cdot,+,-,\wedge,\vee\) etc., where
\((v\circ w)(S)=v(S)\circ w(S)\). For \(\sim\in\{\geq,\leq,=\}\), we also write \(v\sim w\) if \(v(S)\sim w(S)\) for all \(S\subseteq X\). The set of variables \(v\) depends on is defined as \(\mathsf{dep}(v)=\{x\in X:\partial_{x}v\neq 0\}\).
Cooperative game mappings map Boolean functions to cooperative games. Specific _instances_ of such mappings have previously been investigated by [1, 10]. We provide a _general definition_ of this concept to show how it can be used to construct IVFs.
**Definition 9** (Cgm).: A _cooperative game mapping (CGM)_ is a function \(\tau\colon\mathbb{B}(X)\to\mathbb{G}(X)\) with \(f\mapsto\tau_{f}\). We call \(\tau\)_importance inducing_ if for all \(x,y\in X\), permutations \(\sigma\colon X\to X\), and \(f,g,h\in\mathbb{B}(X)\):
* \((\textsc{Bound}_{\textsc{CG}})\) \(0\leq\partial_{x}\tau_{f}\leq 1\).
* \((\textsc{Dum}_{\textsc{CG}})\) \(\partial_{x}\tau_{f}=0\) if \(x\not\in\mathsf{dep}(f)\).
* \((\textsc{Dic}_{\textsc{CG}})\) \(\partial_{x}\tau_{x}=\partial_{x}\tau_{\overline{x}}=1\).
* \((\textsc{Type}_{\textsc{CG}})\) (i) \(\tau_{f}(S)=\tau_{\sigma f}(\sigma(S))\) and (ii) \(\tau_{f}(S)=\tau_{f[y/\overline{y}]}(S)\) for all \(S\subseteq X\).
* \((\textsc{ModEC}_{\textsc{CG}})\) \(\partial_{x}\tau_{f}\geq\partial_{x}\tau_{h}\) if (i) \(f\) and \(h\) are monotonically modular in \(g\), (ii) \(f_{g/1}\geq h_{g/1}\) and \(h_{g/0}\geq f_{g/0}\) and (iii) \(x\in\mathsf{dep}(g)\).
We call \(\tau\)_unbiased_ if \(\tau_{g}=\tau_{\overline{g}}\) for all \(g\in\mathbb{B}(X)\).
An example is the _characteristic_ CGM \(\zeta\) given by \(\zeta_{f}(S)=f(\mathbf{1}_{S})\), where \(\mathbf{1}_{S}(x)=1\) iff \(x\in S\). We study various importance-inducing CGMs in the following sections. Note that \(\zeta\) is not importance inducing: for example, it violates \(\textsc{Bound}_{\textsc{CG}}\) since \(\partial_{x}\zeta_{f}(\varnothing)=-1\) for \(f=\overline{x}\).
The restriction to _importance-inducing_ CGMs ensures that compositions with the Banzhaf or Shapley value are valid IVFs (Lemma 2). These CGMs satisfy properties that are related to Definition 2: \(\tau_{f}\) should be monotone (\(0\leq\partial_{x}\tau_{f}\)), irrelevant variables of \(f\) are also irrelevant for \(\tau_{f}\) (\(\textsc{Dum}_{\textsc{CG}}\)), etc. In an analogous fashion, we can think of properties related to Definition 3:
**Definition 10**.: A CGM \(\tau\) is called
* _chain-rule decomposable_, if for all \(f,g\in\mathbb{B}(X)\) such that \(f\) is modular in \(g\) and \(x\in\mathsf{dep}(g)\): \[\partial_{x}\tau_{f}=(\partial_{x}\tau_{g})(\partial_{g}\tau_{f}),\] where \(\partial_{g}\tau_{f}=\partial_{x_{g}}\tau_{f[g/x_{g}]}\) for some \(x_{g}\not\in\mathsf{dep}(f)\). We call \(\tau\)_weakly chain-rule decomposable_ if this holds for all cases where \(f\) is monotonically modular in \(g\).
* _derivative dependent_, if for all \(f,g\in\mathbb{B}(X)\), \(x\in X\) \[\mathrm{D}_{x}f\geq\mathrm{D}_{x}g\implies\partial_{x}\tau_{f}\geq\partial_{x }\tau_{g}.\]
Since (weak) rank-preservation for value functions uses an IVF in its premise, it cannot be stated naturally at the level of CGMs. Let us now define the following abstraction, which captures Shapley and Banzhaf values:
**Definition 11**.: Call \(\mathfrak{E}\colon X\times\mathbb{G}(X)\to\mathbb{R},(x,v)\mapsto\mathfrak{E} _{x}(v)\) a _value function for cooperative games_. Call \(\mathfrak{E}\) an _expectation of contributions_ if there are weights \(c(0),\ldots,c(n{-}1)\in\mathbb{R}\) such that for all \(v\in\mathbb{G}(X)\) and \(x\in X\):
\[\sum_{S\subseteq X\setminus\{x\}}c(|S|)=1\quad\text{and}\quad\mathfrak{E}_{x}(v)=\sum_{S\subseteq X\setminus\{x\}}c(|S|)\cdot\partial_{x}v(S).\]
If \(\mathfrak{E}\) is an expectation of contributions, then \(\mathfrak{E}_{x}(v)\) is indeed the expected value of \(\partial_{x}v(S)\) in which every \(S\subseteq X\setminus\{x\}\) has probability \(c(|S|)\). The _Banzhaf_ and _Shapley values_ are defined as the expectations of contributions with weights:
\[c_{\textsc{Bz}}(k)=\tfrac{1}{2^{n-1}}\left(\mathbf{Bz}\right)\quad\text{ and}\quad c_{\textsc{Sh}}(k)=\tfrac{1}{n}{n-1\choose k}^{-1}\left(\mathbf{Sh}\right).\]
Observe that there are \({n-1\choose k}\) sets of size \(k\in\{0,\ldots,n-1\}\), so the weights of the Shapley value indeed sum up to one.
If \(\tau\) is a CGM, then its composition with \(\mathfrak{E}\) yields \((\mathfrak{E}\circ\tau)_{x}(f)=\mathfrak{E}_{x}(\tau_{f})\), which is a value function for Boolean functions. Then every composition with an expectation of contributions is an IVF if the CGM is importance inducing:
**Lemma 2**.: _If \(\tau\) is an importance-inducing CGM and \(\mathfrak{E}\) an expectation of contributions, then \(\mathfrak{E}\circ\tau\) is an IVF. If \(\tau\) is unbiased/derivative dependent, then so is \(\mathfrak{E}\circ\tau\). Finally, if \(\tau\) is (weakly) chain-rule decomposable, then so is \(\mathbf{Bz}\circ\tau\)._
In the following sections, we study two novel CGMs and the already-known CGM of [10]. By Lemma 2 we can focus on their properties as CGMs, knowing that any composition with the Shapley value or other expectations of contributions will induce IVFs.
**Simple Satisfiability-Biased Cooperative Game Mappings**
The first CGM interprets the "power" of a coalition as its ability to force a function's outcome to one: If there is an assignment for a set of variables that yields outcome one _no matter_ the values of other variables, we assign this set a value of one, and zero otherwise.
**Definition 12**.: The _dominating CGM_\(\omega\) is defined as
\[\omega_{f}(S)=\begin{cases}1&\text{if }\exists\mathbf{u}\in\{0,1\}^{S}.\;\forall\mathbf{w} \in\{0,1\}^{X\setminus S}.\;f(\mathbf{u};\mathbf{w}).\\ 0&\text{otherwise}.\end{cases}\]
_Example._ Let \(f=x\vee(y\oplus z)\). We have \(\omega_{f}(\{y,z\})=1\) since \(f_{\mathbf{u}}=1\) for \(\mathbf{u}=y/1;z/0\). On the other hand, \(\omega_{f}(\{y\})=0\), since \(x/0;z/1\) resp. \(x/0;z/0\) falsify \(f_{y/1}\) and \(f_{y/0}\).
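A direct implementation of Definition 12 (ours, for illustration only) reproduces these two values; it simply searches for an assignment of \(S\) that dominates all completions of the remaining variables.

```python
from itertools import product

def omega(f, xs, S):
    """Dominating CGM: 1 iff some assignment of S forces f to one."""
    rest = [y for y in xs if y not in S]
    for b in product([0, 1], repeat=len(S)):
        u = dict(zip(S, b))
        if all(f({**u, **dict(zip(rest, w))})
               for w in product([0, 1], repeat=len(rest))):
            return 1
    return 0

f = lambda u: u['x'] or (u['y'] != u['z'])     # x or (y xor z)
print(omega(f, ['x', 'y', 'z'], ['y', 'z']))   # 1 (take y/1; z/0)
print(omega(f, ['x', 'y', 'z'], ['y']))        # 0
```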
**Theorem 6**.: _The dominating CGM is weakly chain-rule decomposable and importance inducing._
_Example._ Let \(\mathbf{Z}\) be the expectation of contributions with \(c(0)=1\), i.e., \(\mathbf{Z}_{x}(v)=v(\{x\})-v(\varnothing)\). By Lemma 2 and Theorem 6, the mapping
\[(\mathbf{Z}\circ\omega)_{x}(f)=\begin{cases}1&\text{if }f\neq 1\text{ and }(f_{x/0}=1\text{ or }f_{x/1}=1)\\ 0&\text{otherwise}\end{cases}\]
is an IVF. Intuitively, \(x\) has the highest importance if the function is falsifiable and there is a setting for \(x\) that forces the function to one. Otherwise, \(x\) has an importance of zero.
**Biasedness and rank preservation.** The dominating CGM is biased: Consider \(g=x\vee(y\oplus z)\) with \(\overline{g}=\overline{x}\wedge(\overline{y}\oplus z)\). Note that \(\omega_{g}(S)=1\) for \(S=\{x\}\) while \(\omega_{\overline{g}}(S)=0\), which shows biasedness. Composing \(\omega\) with the Banzhaf value yields
\[\begin{split}&(\mathbf{Bz}\circ\omega)_{(\cdot)}(g):\quad z:0.25 \;=\;y:0.25\;<\;x:0.75,\\ &(\mathbf{Bz}\circ\omega)_{(\cdot)}(\overline{g}):\quad z:0.25\;=\;y:0.25\;=\;x: 0.25,\end{split}\]
One can force \(g\) to one by controlling either \(x\) or _both_\(y\) and \(z\), so \(x\) is rated higher than the others. But to force \(\overline{g}\) to one, control over all variables is required, so all variables in \(\overline{g}\) have the same importance.
Since \(g\) is modular in \(\overline{g}\), we also obtain a counterexample for rank preservation:
\[(\mathbf{B}\mathbf{z}\circ\omega)_{y}(\overline{g}) \geq(\mathbf{B}\mathbf{z}\circ\omega)_{x}(\overline{g})\] \[\text{does not imply} (\mathbf{B}\mathbf{z}\circ\omega)_{y}(g) \geq(\mathbf{B}\mathbf{z}\circ\omega)_{x}(g).\]
However, _weak_ rank preservation is fulfilled by \(\mathbf{B}\mathbf{z}\circ\omega\) since it is weakly chain-rule decomposable by Theorem 6 and Lemma 2. Then the claim follows with Theorem 1.
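The numbers in this counterexample can be reproduced by composing a brute-force version of \(\omega\) with the Banzhaf weights \(c_{\textsc{Bz}}(k)=\nicefrac{1}{2^{n-1}}\); the script below (ours) prints the importance triples for \(g\) and \(\overline{g}\).

```python
from itertools import product

def omega(f, xs, S):
    rest = [y for y in xs if y not in S]
    return int(any(all(f({**dict(zip(S, b)), **dict(zip(rest, w))})
                       for w in product([0, 1], repeat=len(rest)))
                   for b in product([0, 1], repeat=len(S))))

def banzhaf_omega(f, xs, x):
    # average marginal contribution of x over all S not containing x
    others = [y for y in xs if y != x]
    total = 0
    for bits in product([0, 1], repeat=len(others)):
        S = [y for y, b in zip(others, bits) if b]
        total += omega(f, xs, S + [x]) - omega(f, xs, S)
    return total / 2 ** len(others)

xs = ['x', 'y', 'z']
g = lambda u: u['x'] or (u['y'] != u['z'])
ng = lambda u: not g(u)
print([banzhaf_omega(g, xs, v) for v in xs])    # [0.75, 0.25, 0.25]
print([banzhaf_omega(ng, xs, v) for v in xs])   # [0.25, 0.25, 0.25]
```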
**A dual to the dominating CGM.** One can think of a dual notion of the CGM \(\omega\) that reverses the order of both quantifiers. Intuitively, we are now allowed to choose an assignment _depending_ on the values of the remaining variables:
**Definition 13**.: The _rectifying CGM_\(\nu\) is defined as
\[\nu_{f}(S)=\begin{cases}1&\text{if }\forall\mathbf{w}\in\{0,1\}^{X\setminus S }.\ \exists\mathbf{u}\in\{0,1\}^{S}.\ f(\mathbf{u};\mathbf{w}).\\ 0&\text{otherwise}.\end{cases}\]
If we compose \(\nu\) with an expectation of contributions that satisfies \(c(k)=c(n{-}1{-}k)\) for all \(k\in\{0,\ldots,n{-}1\}\), which is a condition satisfied both by the Shapley and Banzhaf values, the induced importance of a variable equals its importance w.r.t. \(\omega\) and the negated function:
**Proposition 3**.: _Let \(\mathfrak{E}\) be an expectation of contributions with \(c(k)=c(n{-}1{-}k)\) for all \(k\in\{0,\ldots,n{-}1\}\). Then for all \(g\in\mathbb{B}(X)\) and \(x\in X\):_
\[(\mathfrak{E}\circ\omega)_{x}(g)=(\mathfrak{E}\circ\nu)_{x}(\overline{g})\]
We now discuss connections to the influence. If a Boolean function is monotone and we "control" a set of variables \(S\), the best strategy towards satisfaction (resp. falsification) is to set all variables in \(S\) to one (resp. to zero). This can be used to show that both \(\mathbf{B}\mathbf{z}\circ\omega\) and \(\mathbf{B}\mathbf{z}\circ\nu\) agree with the influence:
**Proposition 4**.: _Let \(f\) be a monotone Boolean function and \(x\) a variable. Then \((\mathbf{B}\mathbf{z}\circ\omega)_{x}(f)=(\mathbf{B}\mathbf{z}\circ\nu)_{x}(f)= \mathbf{I}_{x}(f).\)_
**A Constancy-Based Cooperative Game Mapping**

Hammer, Kogan and Rothblum [Hammer _et al._, 2000] (HKR) defined a CGM that measures the power of variables by how constant they make a function if assigned random values. It depends on the following notion of a _constancy measure_:
**Definition 14**.: We call a mapping \(\kappa\colon[0,1]\to[0,1]\) a _constancy measure_ if (i) \(\kappa\) is convex, (ii) \(\kappa(0)=1\), (iii) \(\kappa(x)=\kappa(1{-}x)\), and (iv) \(\kappa(\nicefrac{{1}}{{2}})=0\).
The following functions are instances of constancy measures:
* \(\kappa_{\mathrm{quad}}(a)=4(a-\nicefrac{{1}}{{2}})^{2}\),
* \(\kappa_{\mathrm{log}}(a)=1+a\mathrm{lb}(a)+(1{-}a)\mathrm{lb}(1{-}a)\) with \(0\mathrm{lb}(0)=0\),
* \(\kappa_{\mathrm{abs}}(a)=2|a-\nicefrac{{1}}{{2}}|\).
For a constancy measure \(\kappa\) and a Boolean function \(f\), the _\(\kappa\)-constancy of \(f\)_ is the value \(\kappa(\mathbb{E}[f])\), which measures how balanced the share of ones and zeros is. It is close to one if \(f\) is very unbalanced and close to zero if the share of zeros and ones in \(f\) is (almost) the same. The power of a set of variables \(S\) is now measured in terms of the expected \(\kappa\)-constancy of \(f\) if variables in \(S\) are fixed to random values:
**Definition 15** ([Hammer _et al._, 2000]).: Given a constancy measure \(\kappa\), we define the CGM \(\mathrm{H}^{\kappa}\) by
\[\mathrm{H}^{\kappa}_{f}(S)=\mathbb{E}_{\mathbf{a}\in\{0,1\}^{S}}[\kappa( \mathbb{E}[f_{\mathbf{a}}])].\]
_Example_.: Let \(f=x\lor y\lor z\) and \(S=\{x\}\). We obtain \(\mathrm{H}^{\kappa}_{f}(S)=\nicefrac{{1}}{{2}}\cdot\kappa(\nicefrac{{3}}{{4}} )+\nicefrac{{1}}{{2}}\cdot\kappa(1)\), since
\[\mathbb{E}[f_{x/0}]=3/4\quad\text{and}\quad\mathbb{E}[f_{x/1}]=1.\]
Setting \(x\) to zero does not determine \(f\) completely, while setting it to one also sets \(f\) to one, i.e., makes it constant. The measure then gives a lower value to the less-constant cofactor, a higher value to the more-constant cofactor and computes the average. For this example and \(\kappa=\kappa_{\mathrm{abs}}\), we obtain \(\mathrm{H}^{\kappa}_{f}(S)=\nicefrac{{3}}{{4}}\) due to \(\kappa(\nicefrac{{3}}{{4}})=\nicefrac{{1}}{{2}}\) and \(\kappa(1)=1\).
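The following sketch (ours) implements Definition 15 by explicit enumeration and reproduces the value \(\nicefrac{3}{4}\); the last line anticipates the \(\kappa_{\mathrm{quad}}\) identity discussed next.

```python
from itertools import product

def expect(f, xs):
    vals = [f(dict(zip(xs, b))) for b in product([0, 1], repeat=len(xs))]
    return sum(vals) / len(vals)

def H(f, xs, S, kappa):
    """HKR CGM: expected kappa-constancy of f after fixing S at random."""
    rest = [y for y in xs if y not in S]
    total = 0.0
    for bits in product([0, 1], repeat=len(S)):
        a = dict(zip(S, bits))
        total += kappa(expect(lambda u: f({**u, **a}), rest))
    return total / 2 ** len(S)

xs = ['x', 'y', 'z']
f = lambda u: u['x'] or u['y'] or u['z']
kappa_abs = lambda a: 2 * abs(a - 0.5)
kappa_quad = lambda a: 4 * (a - 0.5) ** 2
print(H(f, xs, ['x'], kappa_abs))                              # 0.75
print(H(f, xs, ['x'], kappa_quad) - H(f, xs, [], kappa_quad))  # 0.0625
# the difference equals (E[f_{x/1}] - E[f_{x/0}])**2 = (1 - 3/4)**2
```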
Theorem 7 shows that \(\mathrm{H}^{\kappa_{\mathrm{quad}}}\) is a chain-rule decomposable and importance-inducing CGM. It is open whether other constancy measures are importance inducing too.
**Theorem 7**.: _Suppose \(\kappa\) is a constancy measure. Then \(\mathrm{H}^{\kappa}\) is an unbiased CGM that satisfies \(\textsc{Bound}_{\textsc{CG}}\), \(\textsc{Dic}_{\textsc{CG}}\), \(\textsc{Dum}_{\textsc{CG}}\), and \(\textsc{Type}_{\textsc{CG}}\). Further, \(\mathrm{H}^{\kappa_{\mathrm{quad}}}\) is chain-rule decomposable and satisfies \(\textsc{Mod}\textsc{EC}_{\textsc{CG}}\)._
_Example_.: For the special case where \(\kappa=\kappa_{\mathrm{quad}}\), note that
\[\nicefrac{{1}}{{2}}\cdot\kappa(a)+\nicefrac{{1}}{{2}}\cdot\kappa(b)-\kappa( \nicefrac{{1}}{{2}}\cdot a+\nicefrac{{1}}{{2}}\cdot b)=(a-b)^{2}.\]
Using \(\mathbb{E}[f]=\nicefrac{{1}}{{2}}\cdot\mathbb{E}[f_{x/1}]+\nicefrac{{1}}{{2}} \cdot\mathbb{E}[f_{x/0}]\), this implies
\[(\mathbf{Z}\circ\mathrm{H}^{\kappa})_{x}(f)=(\mathbb{E}[f_{x/1}]-\mathbb{E}[f_{ x/0}])^{2},\]
where \(\mathbf{Z}\) is again the expectation of contributions with
\[\mathbf{Z}_{x}(v)=v(\{x\})-v(\varnothing).\]
The value \(\mathbf{Z}\circ\mathrm{H}^{\kappa}\) is an IVF according to Lemma 2 and Theorem 7. In contrast to derivative-dependent IVFs, \(\mathbf{Z}\circ\mathrm{H}^{\kappa}\) assigns low values to variables in parity functions: for \(f=x\oplus y\), we have \(\mathbb{E}[f_{x/1}]=\mathbb{E}[f_{x/0}]\), and thus \((\mathbf{Z}\circ\mathrm{H}^{\kappa})_{x}(f)=0\).
**Derivative dependence.** This property cannot be achieved, as witnessed by \(f=x\oplus y\) and \(g=x\). Due to \(\mathrm{D}_{x}f=\mathrm{D}_{x}g\), it suffices to show that \(\partial_{x}\mathrm{H}^{\kappa}_{f}\neq\partial_{x}\mathrm{H}^{\kappa}_{g}\) holds for all \(\kappa\). Note that
\[\mathbb{E}[f_{x/0}]=\nicefrac{{1}}{{2}},\ \mathbb{E}[f_{x/1}]=\nicefrac{{1}}{{2}},\ \mathbb{E}[g_{x/0}]=0,\ \mathbb{E}[g_{x/1}]=1,\]
and \(\mathbb{E}[f]=\mathbb{E}[g]=\nicefrac{{1}}{{2}}\). Thus, for all constancy measures \(\kappa\),
\[\partial_{x}\mathrm{H}^{\kappa}_{f}(\varnothing) =\nicefrac{{1}}{{2}}\cdot\kappa(\nicefrac{{1}}{{2}})+\nicefrac{{1}}{{2}} \cdot\kappa(\nicefrac{{1}}{{2}})-\kappa(\nicefrac{{1}}{{2}})=0,\] \[\partial_{x}\mathrm{H}^{\kappa}_{g}(\varnothing) =\nicefrac{{1}}{{2}}\cdot\kappa(1)+\nicefrac{{1}}{{2}}\cdot\kappa(0)- \kappa(\nicefrac{{1}}{{2}})=1,\]
which shows \(\partial_{x}\mathrm{H}^{\kappa}_{f}\neq\partial_{x}\mathrm{H}^{\kappa}_{g}\).
## 5 Computing Importance Values
In this section, we present and evaluate computation schemes for blame, influence, and CGMs. While there exists a practical approach based on model counting for the influence in CNFs (Traxler, 2009), we are only aware of naive computations of CHK's blame (Dubslaff _et al._, 2022). Details are given in the appendix.
**Blame.** We focus on the modified blame. CHK's blame can be computed in a very similar fashion. Observe that for a Boolean function \(f\) and \(x\in X\),
\[\mathbf{MB}^{\rho}_{x}(f)=\mathbb{E}[\gamma_{0}]+\sum_{k=1}^{n-1}\rho(k)(\mathbb{E}[\gamma_{k}]-\mathbb{E}[\gamma_{k-1}]),\]
where \(\gamma_{k}\) is the Boolean function for which \(\gamma_{k}(\mathbf{u})=1\) iff \(\mathrm{mscs}^{\mathbf{u}}_{x}(f)\leq k\). We devise two approaches for computing \(\mathbb{E}[\gamma_{k}]\). The first represents \(\gamma_{k}\) through BDDs using the following recursion scheme: \(\mathrm{mscs}^{\mathbf{u}}_{x}(f)\leq k\) if and only if either
* \(k=0\) and \(f(\mathbf{u})\neq f(\mathrm{flip}_{\{x\}}(\mathbf{u}))\), or
* \(k>0\) and
* \(\mathrm{mscs}^{\mathbf{u}}_{x}(f)\leq k-1\) or
* there is \(y\neq x\) such that \(\mathrm{mscs}^{\mathbf{u}}_{x}(f[y/\overline{y}])\leq k-1\).
This allows us to construct BDDs for \(\gamma_{k}\) from \(\gamma_{k-1}\), which lends itself to BDD-based approaches since \(\gamma_{k}\) does not necessarily increase in size as \(k\) grows. The second approach introduces new existentially quantified variables in the input formula of \(f\) to model occurrences of variables in critical sets of \(\mathrm{mscs}^{\mathbf{u}}_{x}(f)\). With an additional cardinality constraint restricting the number of variables in critical sets to at most \(k\), we can use projected model counting to compute \(\mathbb{E}[\gamma_{k}]\).
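As an executable stand-in for the BDD construction (ours; it manipulates explicit sets of assignments instead of BDD nodes), note that the recursion says \(\gamma_{k}\) is \(\gamma_{k-1}\) closed under flipping one variable \(y\neq x\), since evaluating \(f[y/\overline{y}]\) at \(\mathbf{u}\) equals evaluating \(f\) at \(\mathrm{flip}_{\{y\}}(\mathbf{u})\).

```python
from itertools import product

def modified_blame(f, xs, x, rho):
    """Computes MB via the gamma_k recursion, with gamma_k as explicit sets."""
    n = len(xs)
    space = list(product([0, 1], repeat=n))
    i = xs.index(x)
    flip = lambda u, j: u[:j] + (1 - u[j],) + u[j + 1:]
    # gamma_0: assignments where flipping x alone changes the outcome
    gamma = {u for u in space if f(u) != f(flip(u, i))}
    val = len(gamma) / len(space)                  # rho(0) = 1
    for k in range(1, n):
        bigger = set(gamma)                        # gamma_k from gamma_{k-1}
        for u in gamma:
            bigger.update(flip(u, j) for j in range(n) if j != i)
        val += rho(k) * (len(bigger) - len(gamma)) / len(space)
        gamma = bigger
    return val

f = lambda u: u[0] or u[1]                         # f = x or y
print(modified_blame(f, ['x', 'y'], 'x', lambda k: 0.5 ** k))   # 0.75
```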
**Influence.** In case \(f\) is given as a CNF formula, we use Traxler's method to compute the influence [10]. For all other formulas, note that standard satisfiability-preserving transformations do not preserve influence values: For example, applying the Tseytin transformation to \(x\lor xy\) results in a CNF where \(x\) has a higher influence than \(y\).
However, the influence is proportional to the number of models of \(\mathrm{D}_{x}f\). If \(f\) is given by a BDD, computing a representation of \(\mathrm{D}_{x}f\) means squaring \(f\)'s size in the worst case, while the formula-based representation only doubles it. For the latter case, we can count the models of \(\mathrm{D}_{x}f\) using a Tseytin transformation and a standard model counter.
**BDD representations of satisfiability-biased CGMs.** The dominating CGM computes a simple game, which is essentially a Boolean function, and therefore permits a representation by BDDs. Moreover, using a BDD representation of \(f\), we compute \(\omega_{f}\) using a recursion on cofactors of variables \(z\),
\[(\omega_{f})_{z/1}=\omega_{f_{z/1}}\lor\omega_{f_{z/0}}\quad\text{ and }\quad(\omega_{f})_{z/0}=\omega_{f_{z/0}\wedge f_{z/1}}.\]
The Banzhaf value of \(x\) in \(\omega_{f}\) is then just
\[\mathbb{E}[(\omega_{f})_{x/1}]-\mathbb{E}[(\omega_{f})_{x/0}],\]
which poses no effort once the BDD of \(\omega_{f}\) is constructed. The rectifying CGM can be computed analogously.
**Implementation and evaluation.** We have implemented Traxler's method and our new computation schemes in Python, using BuDDy [11] as BDD backend with automatic reordering and GPMC [23, 24] for (projected) model counting. To evaluate our approaches, we conducted experiments on Boolean functions given as CNFs that were either randomly generated or generated from the ISCAS'99 dataset [1, 23]. We always computed importance values w.r.t. the first variable in the input CNF and averaged the timings over 20 runs each. Our experiments were carried out on a Linux system with an i5-10400F CPU at 2.90GHz and 16GB of RAM. To compare our BDD-based and model counting approaches, Figure 1 shows timings for blame computations on random CNFs. Here, the BDD-based approach clearly outperforms the one based on projected model counting. This is also reflected in real-world benchmarks from ISCAS'99 shown in Table 1, where the approach based on model counting runs into timeouts even for small instances. Table 1 shows that computations for influence values based on model counting scale better than the BDD-based approach, mainly due to an expensive initial BDD construction. Computing the BDD of the dominating CGM is done without much overhead once the BDD for the CNF is given.
## 6 Conclusion
This paper introduced IVFs as a way to formally reason about importance of variables in Boolean functions. We established general statements about IVFs, also providing insights on notions of importance from the literature by showing that they all belong to the class of IVFs. Apart from revealing several relations between known IVFs, we have shown how to generate new ones inspired by cooperative game theory.
For future work, we will study properties with strict importance inequalities, IVFs for sets of variables, IVFs for pseudo Boolean functions, and global values similar to the _total influence_[13]. On the empirical side, the generation of splitting rules for SAT-solvers and variable-order heuristics for BDDs based on different instances of IVFs are promising avenues to pursue.
**Acknowledgments.** The authors were partly supported by the DFG through the DFG grant 389792660 as part of TRR 248 and the Cluster of Excellence EXC 2050/1 (CeTI, project ID 390696704, as part of Germany's Excellence Strategy) and "SAIL: SustAInable Life-cycle of Intelligent Socio-Technical Systems" (Grant ID NW21-059D), funded by the program "Netzwerke 2021" of the Ministry of Culture and Science of the State of North Rhine-Westphalia, Germany.

Table 1: Computation time for instances of the ISCAS'99 dataset, timeout set to one hour. Columns 4-6 use (projected) model counting approaches; columns 7-10 are BDD-based. BDD columns _Influence_, _DCGM_ (construction of the BDD for the dominating CGM), and _Blame_ are _without_ the BDD construction time for the initial CNF (cf. column _Construction_).

| Instance | #Variables | #Clauses | Influence (CNF) | Influence (formula) | Blame | Construction | Influence | DCGM | Blame |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| b02 | 26 | 66 | 5 ms | 49 ms | timeout | 1 ms | \(<\)1 ms | 2 ms | 3'649 ms |
| b06 | 44 | 122 | 7 ms | 99 ms | timeout | 3 ms | \(<\)1 ms | 6 ms | 697'573 ms |
| b01 | 45 | 120 | 7 ms | 110 ms | timeout | 4 ms | \(<\)1 ms | 8 ms | 3'068'667 ms |
| b03 | 156 | 376 | 11 ms | 442 ms | timeout | 53'934 ms | 24 ms | 1'776 ms | timeout |
| b13 | 352 | 847 | 34 ms | 1'088 ms | timeout | timeout | timeout | timeout | timeout |
| b12 | 1'072 | 2'911 | 230 ms | 8'555 ms | timeout | timeout | timeout | timeout | timeout |

Figure 1: Computation of blame values on random \((n,3n,7)\)-CNFs (number of variables, number of clauses, clause width). BDD times _include_ construction time of the BDD for the initial CNF.
|
2307.06757 | PBHs and GWs from $\mathbb{T}^2$-inflation and NANOGrav 15-year data | In this paper, we propose a novel mechanism in $\mathbb{T}^2$-inflation to
enhance the power spectrum large enough to seed primordial black holes (PBHs)
formation. To accomplish this, we consider the coupling function between the
inflaton field and $\mathbb{T}^2= T_{\mu \nu}T^{\mu \nu}$ term. PBHs formed
within this scenario can contribute partially or entirely to dark matter (DM)
abundance. Furthermore, the amplification in the scalar power spectrum will
concurrently produce significant scalar-induced gravitational waves (SIGWs) as
a second-order effect. In addition, the energy spectrum associated with SIGWs
can be compatible with the recent NANOGrav 15-year stochastic gravitational
wave detection and fall into the sensitivity range of other forthcoming GW
observatories. | Seyed Ali Hosseini Mansoori, Fereshteh Felegray, Alireza Talebian, Mohammad Sami | 2023-07-13T13:53:15Z | http://arxiv.org/abs/2307.06757v2 | # PBHs and GWs from \(\mathbb{T}^{2}\)-inflation and NANOGrav 15-year data
###### Abstract
In this paper, we propose a novel mechanism in \(\mathbb{T}^{2}\)-inflation to enhance the power spectrum large enough to seed primordial black holes (PBHs) formation. To accomplish this, we consider the coupling function between the inflaton field and \(\mathbb{T}^{2}=T_{\mu\nu}T^{\mu\nu}\) term. PBHs formed within this scenario can contribute partially or entirely to dark matter (DM) abundance. Furthermore, the amplification in the scalar power spectrum will concurrently produce significant scalar-induced gravitational waves (SIGWs) as a second-order effect. In addition, the energy spectrum associated with SIGWs can be compatible with the recent NANOGrav 15-year stochastic gravitational wave detection and fall into the sensitivity range of other forthcoming GW observatories.
## I Introduction
Recently, various collaborative efforts of Pulsar Timing Arrays (PTAs), such as NANOGrav [1], Parkes PTA [2], European PTA [3], and the Chinese PTA [4], have collectively presented compelling evidence that firmly supports the existence of a stochastic gravitational wave background (SGWB) within the nHz frequency range. While the observed signal is predominantly attributed to standard astrophysical sources such as supermassive black hole binary mergers [5; 6], it is worth considering the possibility that, in addition to the astrophysical background, the data might also have a cosmological origin.
Among the potential cosmological interpretations of the SGWB are scalar-induced GWs (SIGWs) [7; 8; 9; 10; 11; 12; 13; 14], first-order cosmological phase transitions [15; 16; 17; 18; 19], as well as topological defects such as cosmic strings and domain walls [20; 21; 22; 23; 24]. For recent investigations on PTA results, please refer to Refs. [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39] as well.
In recent years, there has been a growing research interest in SIGWs, which are generated as a second-order effect from the first-order scalar perturbations [40; 41; 42; 43; 44; 45; 46]. Importantly, if these scalar perturbations reach significant amplitudes on small scales, they can give rise to a substantial population of PBHs [47; 48; 49; 50; 51], while the SIGWs are simultaneously enhanced and can become comparable to, or even larger than, the first-order GWs.
On the other hand, we recently examined chaotic inflation within the context of the Energy-Momentum-Squared Gravity (EMSG) theory [52]. The EMSG theory incorporates terms proportional to \(\mathbb{T}^{2}\equiv T_{\mu\nu}T^{\mu\nu}\), where \(T_{\mu\nu}\) is the energy-momentum tensor of the canonical scalar field Lagrangian [53]. In this respect, EMSG is a subset of the K-essence models [54]. Essentially, to avoid ghost and gradient instabilities, specific restrictions must also be imposed on the model coupling parameter.
Despite recent observational bounds from Planck, WMAP, and BICEP/Keck during the 2018 observing season [55] ruling out chaotic inflation [56; 57] with a potential of \(\phi^{n}\) even for \(n=2/3\) at approximately 95% confidence level (CL), the presence of EMSG terms allows inflationary parameters, such as the spectral index \(n_{s}\) and the tensor-to-scalar ratio \(r\), to satisfy current observational constraints.
Over the past three decades, several mechanisms have been proposed to enhance the scalar power spectrum on small scales. In the context of single-field inflation models, for instance, such an enhanced power spectrum can be achieved through imposing specific features on the inflaton potential such as a break in its first derivative [58], an inflection point [59; 60], and tiny bumps or dips in it [61].
In this work, we introduce a novel mechanism aimed at significantly amplifying the power spectrum, resulting in the abundant production of PBHs. To be more precise, we focus on a model where a scalar field (inflaton) is coupled to the EMSG term. In fact, as the scalar-\(\mathbb{T}^{2}\) coupling exhibits a rapid change during inflation, the curvature perturbations can be enhanced enough to seed the formation of primordial black holes. It is interesting to note that in our model, these significant curvature perturbations not only result in the generation of primordial black holes, but also act as a source for second-order gravitational waves. Furthermore, we try to show that the recent PTA data can be interpreted via the induced GWs sourced by the EMSG term during inflation. Several other studies, for example [62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74], have been carried out on SIGWs as an explanation of the PTA data.
This paper is organized as follows. Sec. II begins by introducing our setup within the EMSG framework. In Sec. III, we discuss inflationary solutions in our scenario
and show that the value of the spectral index \(n_{s}\) and tensor-to-scalar ratio \(r\) are compatible with the recent BICEP/Keck bound [55]. In the presence of the scalar-\(\mathbb{T}^{2}\) coupling, we select some benchmark parameter sets and discuss enhancements in the primordial curvature power spectrum. Furthermore, we attempt to determine the fraction of PBH abundance in dark matter density at the present epoch. They will be further discussed in Sec. IV. In Sec. V we investigate the possibility of detecting the energy spectrum of SIGWs from the recent NANOGrav signal and future GW experiments. Our conclusions are drawn in Sec. VI.
## II Model
Let us consider the EMSG gravity action, which is given by [52; 53]
\[S=\frac{1}{2}\int d^{4}x\sqrt{-g}\Big{(}M_{\rm p}^{2}R-M_{\rm p}^{-4}f(\phi) \mathbb{T}^{2}+2\mathcal{L}_{\rm m}\Big{)} \tag{1}\]
where \(M_{\rm p}\) is the reduced Planck mass, \(R\) is the Ricci scalar associated with the spacetime metric \(g_{\mu\nu}\), and \(\mathcal{L}_{m}\) is the Lagrangian density corresponding to the matter source described by the energy-momentum tensor \(T_{\mu\nu}\). Moreover, \(\mathbb{T}\) is defined as \(\mathbb{T}^{2}\equiv T_{\mu\nu}T^{\mu\nu}\) and \(f(\phi)\) is a coupling function of inflaton \(\phi\)[53]. It's important to note that in order to avoid ghost and gradient instabilities at the level of perturbations, it is crucial to ensure that \(f(\phi)<0\)[53]. For convenience, we set \(M_{\rm P}^{2}=1\) throughout this paper.
Furthermore, in this model, \(T_{\mu\nu}\) is derived by varying the canonical scalar field Lagrangian \(\mathcal{L}_{\rm m}=X-V(\phi)\) where \(X=-(\partial_{\mu}\phi\partial^{\mu}\phi)/2\) with respect to the metric, i.e.,
\[T_{\mu\nu}\equiv-\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\mathcal{L}_{m})}{ \delta g^{\mu\nu}}=\partial_{\mu}\phi\partial_{\nu}\phi+g_{\mu\nu}(X-V) \tag{2}\]
Hence, one obtains
\[\mathbb{T}^{2}=T_{\mu\nu}T^{\mu\nu}=4(X^{2}-XV+V^{2}). \tag{3}\]
By substituting the aforementioned result into the action (1), the action can be rewritten as the K-essence [54] model with a general function \(P(X,\phi)=\mathcal{L}_{\rm m}-f(\phi)\mathbb{T}^{2}/2\). Here, we also consider the step-like coupling given by
\[f(\phi)=\alpha+\mu_{1}\Big{[}\cosh\Big{(}\frac{\phi-\phi_{c}}{\mu_{2}}\Big{)} \Big{]}^{-2} \tag{4}\]
where \(\mu_{1}\), \(\mu_{2}\), and \(\phi_{c}\) are constants. Away from the field value \(\phi_{c}\), this coupling rapidly approaches the asymptotic constant \(f(\phi)=\alpha\). Hence, the value of \(\phi_{c}\) can be determined by examining the evolution of \(\phi\) in the \(f(\phi)=\alpha\) model during slow-roll inflation [53]. In the following section, we first review the cosmological predictions mentioned in Ref. [53]. Then, our focus will shift to estimating the quantity \(\phi_{c}\) at any given number of e-folds (\(N\)).
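Before moving on, the algebraic identity (3) can be verified symbolically. The following check (ours) works at a single spacetime point with the Minkowski metric, which suffices because the identity is pointwise and tensorial; the gradient components of \(\phi\) are kept arbitrary.

```python
import sympy as sp

V = sp.Symbol('V')
p = sp.symbols('p0:4')                 # arbitrary components of d(phi)
eta = sp.diag(-1, 1, 1, 1)             # Minkowski metric, signature (-,+,+,+)
inv = eta.inv()

# X = -(1/2) g^{mu nu} p_mu p_nu
X = -sp.Rational(1, 2) * sum(inv[m, n] * p[m] * p[n]
                             for m in range(4) for n in range(4))
# T_{mu nu} = p_mu p_nu + g_{mu nu} (X - V), cf. Eq. (2)
T = sp.Matrix(4, 4, lambda m, n: p[m] * p[n] + eta[m, n] * (X - V))
Tup = inv * T * inv                    # raise both indices
T2 = sum(T[m, n] * Tup[m, n] for m in range(4) for n in range(4))

print(sp.simplify(T2 - 4 * (X**2 - X * V + V**2)))   # prints 0
```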
## III Chaotic slow-roll inflation with \(f(\phi)=\alpha\)
When we consider the slow-roll scheme, where \(\dot{\phi}\ll V\) (or \(X\ll V\)) and \(\ddot{\phi}\ll H\dot{\phi}\) (or \(\dot{X}\ll HX\)), we can derive the dynamical equation for the scale factor of the universe and the scalar field as follows [53]:
\[3H^{2} \simeq V\left(1+2\alpha V\right) \tag{5}\] \[\dot{\phi}V^{\prime}\left(1+4\alpha V\right) \simeq -6XH\left(1+2\alpha V\right). \tag{6}\]
Clearly, Eq. (5) reveals the presence of an upper bound on the potential, namely
\[V<\frac{1}{2|\alpha|} \tag{7}\]
Note that in the above equation, we have considered the absolute value for \(\alpha\). This choice is motivated by the findings of [53], which demonstrate that in order to address the issues of ghost and gradient instabilities, the coupling constant \(\alpha\) must be negative, specifically \(\alpha<0\).
By differentiating both sides of Eq. (5) with respect to time and combining it with Eq. (5), we can derive the Hubble slow-roll parameter.
\[\varepsilon_{H}\simeq-\frac{1}{2}\Big{(}\frac{V^{\prime}}{V}\Big{)}\Big{(} \frac{\dot{\phi}}{H}\Big{)}\Big{(}\frac{1+4\alpha V}{1+2\alpha V}\Big{)}. \tag{8}\]
By making use of Eq. (6), we can obtain
\[\frac{\dot{\phi}}{H}=-\Big{(}\frac{V^{\prime}}{V}\Big{)}\Bigg{[}\frac{1+4 \alpha V}{\Big{(}1+2\alpha V\Big{)}^{2}}\Bigg{]} \tag{9}\]
As a result of the above relation, the slow roll parameter (8) converts to
\[\varepsilon_{H}=\frac{1}{2}\Big{(}\frac{V^{\prime}}{V}\Big{)}^{2}\Bigg{[} \frac{\Big{(}1+4\alpha V\Big{)}^{2}}{\Big{(}1+2\alpha V\Big{)}^{3}}\Bigg{]}. \tag{10}\]
Moreover, the Hubble slow-roll parameter \(\eta_{H}\) is related to \(\epsilon_{H}\) as
\[\eta_{H}=\frac{\dot{\varepsilon}_{H}}{H\varepsilon_{H}}=\frac{\varepsilon_{H}^{\prime}}{\varepsilon_{H}}\frac{\dot{\phi}}{H} \tag{11}\]
Notice that both slow-roll parameters reduce to the standard form [75] as \(\alpha\to 0\). Additionally, these parameters must be much smaller than one, namely \(\varepsilon_{H},\ \eta_{H}\ll 1\), during the inflationary era, which lasts at least 50-60 e-folds in order to solve the flatness and horizon problems. As a final remark, inflation ends when either of the slow-roll parameters tends to unity.
Furthermore the sound speed, in the slow-roll limit, can be expressed as
\[c_{s}^{2}=\frac{P_{,X}}{P_{,X}+2XP_{,XX}}\simeq 1+\frac{4\alpha V}{3}\Big{(} \frac{\dot{\phi}}{H}\Big{)}^{2} \tag{12}\]
By taking advantage of Eq. (9), we can write down the above relation as a function of the potential and its derivatives. The scalar and tensor power spectra in the slow-roll regime are also given by [76; 77]
\[\mathcal{P}_{\mathcal{R}}\simeq\frac{1}{8\pi^{2}}\frac{H^{2}}{\varepsilon_{H}c_ {s}}|_{c_{s}k=aH},\hskip 28.452756pt\mathcal{P}_{h}=\frac{2}{\pi^{2}}H^{2}|_{k=aH} \tag{13}\]
Then, one can calculate the spectral index \(n_{s}\) and the tensor-to-scalar ratio \(r\) as
\[n_{s}-1 \equiv \frac{d\ln\mathcal{P}_{\mathcal{R}}}{d\ln k}\simeq-2\varepsilon_ {H}-\eta_{H}-s \tag{14}\] \[r \equiv \frac{\mathcal{P}_{h}}{\mathcal{P}_{\mathcal{R}}}=16\varepsilon_ {H}c_{s} \tag{15}\]
where \(s\equiv\dot{c}_{s}/(Hc_{s})\). In recent years, several observational constraints have been obtained on the \(r\) and \(n_{s}\) quantities from various data sources, including the _Planck_ 2018 data [78; 79], as well as BICEP/_Keck_ (BK15 [80] and BK18 [55]) data and BAO data. These limitations put serious restrictions on the free parameters of the model.
Now, let us select a simple potential function like the chaotic potential \(V(\phi)=(A/M_{\rm P}^{n})\phi^{n}\), where \(A\) is a dimensionless coefficient and \(n\) is a rational number. Note that \(A\) stands for the normalisation parameter given by the amplitude of the scalar power spectrum at the CMB pivot scale (\(k_{\rm CMB}=0.05\,{\rm Mpc}^{-1}\)), i.e. \(\mathcal{P}_{CMB}\sim 2.1\times 10^{-9}\).
Using Eq. (9), the number of e-folds is also defined as
\[N=-\int_{t_{e}}^{t}Hdt=\int_{V_{e}}^{V}\frac{1}{2\varepsilon_{H}}\frac{dV}{V} \Big{[}\frac{1+4\alpha V}{1+2\alpha V}\Big{]} \tag{16}\]
where the subscript "\(e\)" stands for the value of the quantities at the end of inflation. Now, by putting \(V=A\phi^{n}\) into the above relation, we have
\[N=\frac{1}{2n}\Big{(}\frac{V}{A}\Big{)}^{\frac{2}{n}}\Big{[}1+\frac{2\alpha V} {2+n}\Big{(}1-_{2}F_{1}(1,1+\frac{2}{n},2+\frac{2}{n},-4\alpha V)\Big{)}\Big{]} \tag{17}\]
It should be noted that the potential \(V\) in the above equation is evaluated at the beginning of inflation. Clearly, it is difficult to invert this relation to express the potential \(V\) as a function of \(N\). However, it can be accomplished if one chooses small values of the potential such that \(|\alpha|V\leq\mathcal{O}(\sqrt{\varepsilon_{H}^{SC}})<1/2\) during inflation (the symbol SC represents standard chaotic inflation). By assuming this and defining the expansion parameter as \(\epsilon=\alpha A\), one can derive [53]
\[\frac{V}{A}\simeq\Big{(}2nN\Big{)}^{\frac{n}{2}}\Big{[}1-\Big{(}\frac{(2n)^{n +1}N^{n}}{(1+n)}\Big{)}\epsilon^{2}+\mathcal{O}(\epsilon^{3})\Big{]} \tag{18}\]
While we have considered the above relation up to the second order, when \(|\alpha|V\) approaches the bound \(\sqrt{\varepsilon_{H}^{SC}}\sim\mathcal{O}(0.01)\)[81], it becomes necessary to consider higher orders, such as the fourth order, to achieve a strong agreement between analytical and numerical results.
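As a cross-check, the closed form (17) can be compared numerically against the integral (16). In the sketch below (ours), the integrand follows from inserting Eq. (10) and \(V=A\phi^{n}\) into Eq. (16), the lower limit \(V_{e}\) is taken to zero, and the values \(n=2/3\), \(A=1\), \(\alpha=-0.04\), \(V=2\) are purely illustrative (they respect \(|\alpha|V<1/2\)).

```python
import mpmath as mp

n, A, alpha = mp.mpf(2) / 3, mp.mpf(1), mp.mpf('-0.04')
c = 2 / n                                   # c = 3 for n = 2/3

def N_closed(V):                            # Eq. (17)
    return (V / A)**c / (2 * n) * (1 + 2 * alpha * V / (2 + n)
            * (1 - mp.hyp2f1(1, 1 + c, 2 + c, -4 * alpha * V)))

def N_integral(V):                          # Eq. (16) with V_e -> 0
    integrand = lambda v: v**(c - 1) * (1 + 2 * alpha * v)**2 \
        / (n**2 * A**c * (1 + 4 * alpha * v))
    return mp.quad(integrand, [0, V])

print(N_closed(mp.mpf(2)), N_integral(mp.mpf(2)))   # the two values agree
```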
By considering Eqs. (10), (11) and Eq. (18) together, we now derive the relation between the slow-roll parameters and \(N\) as
\[\varepsilon_{H}\simeq\frac{n}{4N}\Big{[}1+2(2nN)^{\frac{n}{2}}\epsilon-\frac{4 (2nN)^{n}(1+2n)}{1+n}\epsilon^{2}+\mathcal{O}(\epsilon^{3})\Big{]} \tag{19}\]
\[\eta_{H}\simeq\frac{1}{N}\Big{[}1-n(2nN)^{\frac{n}{2}}\epsilon+\frac{2n(2nN)^ {n}(3+5n)}{1+n}\epsilon^{2}+\mathcal{O}(\epsilon^{3})\Big{]} \tag{20}\]
The sound speed \(c_{s}\) is also obtained to be
\[c_{s}\simeq 1+\frac{n}{3N}(2nN)^{\frac{n}{2}}\epsilon+\mathcal{O}(\epsilon^{2}) \tag{21}\]
All above relations provide us with a formal solution for the spectral index (14) as [53]
\[n_{s}-1\simeq-\frac{1}{N}\Big{[}\frac{2+n}{2}-\frac{n(2nN)^{\frac{n}{2}}(n-2)}{6N}\epsilon+\frac{n(2nN)^{n}}{18(1+n)N^{2}}\Big{(}n(1+n)(n-2)+36(2+3n)N^{2}\Big{)}\epsilon^{2}+\mathcal{O}(\epsilon^{3})\Big{]} \tag{22}\]
In comparison with Refs. [75; 82], the spectral index is modified by the orders of \(\alpha A\). It is also trivial to derive \(r\) as a function of \(N\) when one combines Eqs. (15), (19), and (21) together.
Fig. 1 presents the tensor-to-scalar ratio as a function of the spectral index \(n_{s}\) for \(V=A\phi^{2/3}\), along with the observational constraints from the _Planck_ 2018 data, as well as BICEP/_Keck_ (BK15 [80] and BK18 [55]) data and BAO data. As seen, EMSG corrections result in improving the predicted values of \(\{r,n_{s}\}\) in the standard chaotic inflation (orange shapes) such that they (red shapes) fall entirely within the region determined by the BK18 results [55].
For instance, the numerical values of \(\{n_{s},r\}\) on CMB scales are \(\{0.971802,0.0176263\}\), which are in close agreement with the analytic result (22) when one takes \(\alpha A=-0.043\) and \(N=60\). In addition, by substituting \(V=A\phi^{2/3}\) in Eq. (18), \(\phi_{c}\) can be obtained as
\[\phi_{c}^{2/3}\simeq\Big{(}\frac{4}{3}N\Big{)}^{\frac{1}{3}}\Big{[}1-\frac{3}{5}\Big{(}\frac{4}{3}\Big{)}^{\frac{5}{3}}N^{\frac{2}{3}}(\alpha A)^{2}+\mathcal{O}\Big{(}(\alpha A)^{3}\Big{)}\Big{]} \tag{23}\]
In Fig. 2, we compared the aforementioned result with numerical values represented by the red points. As shown, there is a satisfactory agreement between numerics and analytic results. In the presence of the coupling (4), the background dynamics and perturbation spectra are subject to modifications. In the next section, similar to Refs. [83; 84; 85], we expect that this choice of the coupling can lead inflation into the ultra slow-roll (USR) stage and can thus significantly enhance the power spectrum \(\mathcal{P}_{\mathcal{R}}\) of the primordial curvature perturbation on
small scales, \(k>k_{\rm CMB}\). As previously stated, the enhancement in the power spectrum of the scalar perturbations results in PBH formation with desirable masses and abundances.
## IV Power spectrum and PBH abundance
In this section, we begin by calculating the power spectrum \(\mathcal{P}_{\mathcal{R}}\) of the primordial scalar curvature perturbation using numerical methods. Afterward, we proceed to estimate the abundance of PBHs. Using the benchmark parameter sets listed in Tab. 1, we numerically generate the curvature perturbation power spectrum, as shown in Fig. 3. Adjusting the model parameters leads to a significant enhancement in the curvature power spectrum on smaller scales. As can be seen, the location of the peak is altered by the initial condition \(\phi_{c}\). In addition, the amplitude of \(\mathcal{P}_{\mathcal{R}}(k)\) is controlled by \(\mu_{1}\), while \(\mu_{2}\) controls the width of the peak.
Figure 2: The evolution of the scalar field during the inflationary phase. The red points indicate numerical results, whereas the blue solid curve is plotted by using (23) under the slow roll approximation.
Figure 1: Tensor-to-scalar ratio vs spectral index for EMSG model with the power law potential \(V=A\phi^{2/3}\), compared to the data of Ref. [55].
In addition, the enhanced power spectrum can lead to a significant contribution of PBHs to the DM density today. The fraction of PBHs in the total DM density at present is given by [92]
\[f(M_{\rm PBH})=2.7\times 10^{8}\Big{(}\frac{0.2}{\gamma}\sqrt{\frac{g_{*}}{10.75}} \frac{M_{\rm PBH}}{M_{\odot}}\Big{)}^{-1/2}\beta(M_{\rm PBH}) \tag{24}\]
where the constant \(\gamma\) measures the fraction of mass that is transformed into PBHs, \(g_{*}\) is the number of relativistic degrees of freedom at formation, \(M_{\odot}\) is the solar mass, and \(\beta\) is the mass fraction of PBHs at formation time. The mass of the formed PBH as a function of a comoving scale \(k\) is also given by [93]
\[\frac{M_{\rm PBH}(k)}{M_{\odot}}=30\Big{(}\frac{\gamma}{0.2}\Big{)}\Big{(}\frac{g_{*}}{10.75}\Big{)}^{-1/6}\Big{(}\frac{k}{2.9\times 10^{5}{\rm Mpc}^{-1}}\Big{)}^{-2} \tag{25}\]
According to the Press-Schechter formalism [94], the mass fraction \(\beta\) for a given mass is defined as the probability that the Gaussian comoving curvature perturbation \(\mathcal{R}\) (or the density contrast \(\delta\)) is larger than a certain threshold value \(\mathcal{R}_{c}\) (or \(\delta_{c}\)) for PBH formation [94; 95; 96; 97]. In this respect, by taking the Gaussian probability distribution function (PDF) for the curvature fluctuation spectrum, the fraction of collapsing regions at formation can be estimated as
\[\beta(k)\simeq\frac{1}{2}{\rm Erfc}\Big{(}\frac{\mathcal{R}_{c}}{\sqrt{2 \mathcal{P}_{\mathcal{R}}(k)}}\Big{)} \tag{26}\]
Recent numerical and theoretical investigations indicate that \(\mathcal{R}_{c}\sim\mathcal{O}(1)\) [98; 99; 100; 101]. In addition, the proper value of the threshold depends on the shape of the power spectrum of the curvature perturbation. In this paper, we take \(\mathcal{R}_{c}\sim 1.75\), corresponding to the density threshold \(\delta_{c}\sim 0.55\) quoted in [102], via the linear relation \(\mathcal{R}_{c}=9/(2\sqrt{2})\,\delta_{c}\) between curvature and density thresholds [103; 104; 105].
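Equations (24)-(26) form a short pipeline from the peak of \(\mathcal{P}_{\mathcal{R}}\) to \(f_{\rm PBH}\). The sketch below (ours) implements it; the peak amplitude and scale are illustrative stand-ins, not the actual model parameters of Tab. 1.

```python
import math

def beta(PR, Rc=1.75):
    # Eq. (26): Press-Schechter mass fraction at formation
    return 0.5 * math.erfc(Rc / math.sqrt(2 * PR))

def M_pbh(k, gamma=0.2, gstar=10.75):
    # Eq. (25): PBH mass in solar masses for a comoving scale k [Mpc^-1]
    return 30 * (gamma / 0.2) * (gstar / 10.75)**(-1 / 6) * (k / 2.9e5)**(-2)

def f_pbh(M, b, gamma=0.2, gstar=10.75):
    # Eq. (24): present-day fraction of PBHs in the dark matter
    return 2.7e8 * ((0.2 / gamma) * math.sqrt(gstar / 10.75) * M)**(-0.5) * b

PR_peak = 0.048          # illustrative peak amplitude (assumption)
k_peak = 1.6e13          # Mpc^-1, roughly the 1e-14 solar-mass scale
M = M_pbh(k_peak)
print(M, f_pbh(M, beta(PR_peak)))   # f_PBH of order unity for these inputs
```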
In Fig. 4, we have depicted \(f_{\rm PBH}\) for the model parameters in Table 1. As illustrated, the formed PBHs can furnish a large fraction of total DM abundance. In particular, for model III, we obtain \(f_{\rm PBH}\simeq 1\) corresponding to \(M_{\rm PBH}\sim 10^{-14}M_{\odot}\).
## V Detectability of induced gravitational waves by new results of nanograv
As mentioned, large curvature perturbations can act as a source for the second-order tensor perturbations, thus generating SIGWs in the radiation-dominated (RD) era. In this part, we therefore concentrate on the possibility that the enhanced scalar perturbation power spectrum is the source of the GW signal in the new NANOGrav results. The energy density of the induced GWs is given by [109; 41]
\[\Omega_{\rm GW}=\frac{\Omega_{r,0}}{36}\int_{0}^{1/\sqrt{3}}{\rm d}u\int_{1/\sqrt{3}}^{\infty}{\rm d}v\,\Big{[}\frac{(u^{2}-1/3)(v^{2}-1/3)}{v^{2}-u^{2}}\Big{]}^{2}\mathcal{P}_{\mathcal{R}}\Big{(}\frac{k\sqrt{3}}{2}(v+u)\Big{)}\mathcal{P}_{\mathcal{R}}\Big{(}\frac{k\sqrt{3}}{2}(v-u)\Big{)}\Big{(}I_{c_{1}}^{2}(v,u)+I_{c_{2}}^{2}(v,u)\Big{)}\]
where \(\Omega_{r,0}\simeq 8.6\times 10^{-5}\) is the radiation density at present and the kernel functions \(I_{c_{1}}\) and \(I_{c_{2}}\) are defined in Appendix D of Ref. [109]. We also refer interested readers to Appendix D of Ref. [110] for more detail. In Fig. 5, we have plotted the quantity \(\Omega_{GW}h^{2}\) in terms of the frequency \(f=k/2\pi=1.55\times 10^{-15}(k/1{\rm Mpc}^{-1})\,{\rm Hz}\) with \(h^{2}=0.49\), together with the sensitivities of various forthcoming GW experiments, _e.g._ the Laser Interferometer Space Antenna (LISA) [111] and the Big Bang Observatory (BBO) [112; 113; 114]. Clearly, for model III, \(\Omega_{\rm GW}h^{2}\) falls within the sensitivity of both BBO and LISA, while the GWs for model II only peak well inside the range of detectability of LISA.
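The double integral above can be evaluated by standard quadrature once the kernels are supplied. The skeleton below (ours) truncates the infinite \(v\)-range at a finite `vmax`; the arguments `Ic1` and `Ic2` are placeholders that must be taken from Appendix D of Ref. [109], and no specific kernel is assumed here.

```python
import numpy as np
from scipy.integrate import dblquad

Omega_r0 = 8.6e-5

def omega_gw(k, PR, Ic1, Ic2, vmax=20.0):
    """Quadrature sketch of Omega_GW at wavenumber k for a spectrum PR."""
    def integrand(u, v):
        pref = ((u**2 - 1/3) * (v**2 - 1/3) / (v**2 - u**2))**2
        return pref * PR(k * np.sqrt(3) / 2 * (v + u)) \
                    * PR(k * np.sqrt(3) / 2 * (v - u)) \
                    * (Ic1(v, u)**2 + Ic2(v, u)**2)
    # outer variable v in [1/sqrt(3), vmax], inner variable u in [0, 1/sqrt(3)]
    val, _ = dblquad(integrand, 1 / np.sqrt(3), vmax,
                     lambda v: 0.0, lambda v: 1 / np.sqrt(3))
    return Omega_r0 / 36 * val

# smoke test with a flat spectrum and trivial kernels (physically meaningless):
print(omega_gw(1.0, lambda q: 1e-9, lambda v, u: 1.0, lambda v, u: 1.0))
```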
In Fig. 6, we have depicted the spectrum of SIGWs for the Model I sets and compared them to the NANOGrav results. As seen, the energy density for all Model I sets follows the NANOGrav 15-year results on the stochastic gravitational wave background. However, fully explaining the PTA signal with SIGWs proves to be a challenging task due to the PBH bound in our model.
Taken together, these results suggest that PBHs much lighter than the Sun can be explained by the new PTA data analyses [66]. Additionally, PBHs that are produced in small abundances are more compatible with PTA observations [117]. To summarize, the NANOGrav 15-year results on gravitational waves (GWs) [1], in combination with the _Planck_ 2018 data and PBH bounds, can put serious restrictions on the parameters of our model.
Figure 4: Fraction \(f_{\rm PBH}\) as a function of the mass of the formed PBHs in units of solar mass for the models in Table 1. The observational bounds are taken from Refs. [106; 107; 108].
## VI Conclusion and discussion
In this study, we investigated a mechanism for producing the seeds of PBHs in \(\mathbb{T}^{2}\)-inflation by examining the coupling between the inflaton field and the \(\mathbb{T}^{2}\) term. Compared to standard chaotic inflation, the EMSG term can modify the predictions of the scalar spectral index and the tensor-to-scalar ratio on CMB scales. This modification makes them compatible with the recent BICEP/Keck observational bounds.
Furthermore, we examined the possibility of enhancing curvature perturbations at specific scales to generate the seed for PBHs, while ensuring that the model remains consistent with CMB observations. As previously discussed, such an enhanced power spectrum leads to PBHs, contributing a large fraction of DM abundance, and simultaneously generating sizable SIGWs.
The recently published PTA measurements provide evidence of a SGWB. While it aligns with the possibility of a background originating from binary mergers of supermassive black holes, it is intriguing to consider the signal's potential association with the early universe. By tuning the model parameters, we can observe an enhanced power spectrum in \(\mathbb{T}^{2}\)-inflation model at different scales, which enables us to generate primordial black holes (PBHs) with a wide range of masses. Furthermore, it provides us with an opportunity to explain the PTA data via the corresponding SIGWs in our model.
Last but not least, we should comment on the ongoing discussion of quantum loop corrections to the power spectrum in \(P(X,\phi)\) theories. It has been noticed that loop corrections put stringent constraints on PBH formation in single-field inflation during a _sharp_ slow-roll to ultra-slow-roll transition; namely, the enhancement in the fluctuations is shifted towards large frequencies, thereby creating PBHs with small masses [86; 87; 88; 89; 90; 91; 118; 119]. This is likely to shift the GW predictions towards the right, to the high-frequency regime, which might fall within the proposed LIGO-LISA sensitivities.
###### Acknowledgements.
We gratefully acknowledge Hassan Firouzjahi for useful comments and discussions. We thank the "Saramadan" Federation of Iran for partial support. MS is partially supported by the Ministry of Education and Science of the Republic of Kazakhstan, Grant No. AP14870191, and the CAS President's International Fellowship Initiative (PIFI). A. T. would like to thank the University of Rwanda, EAIFR, and ICTP for their kind hospitality during the 17th international workshop on the "Dark Side of the Universe", when some parts of this project were in progress.
|
2309.01594 | Lepage Equivalents and the Variational Bicomplex | We show how to construct, for a Lagrangian of arbitrary order, a Lepage
equivalent satisfying the closure property: that the Lepage equivalent vanishes
precisely when the Lagrangian is null. The construction uses a homotopy
operator for the horizontal differential of the variational bicomplex. A choice
of symmetric linear connection on the manifold of independent variables, and a
global homotopy operator constructed using that connection, may then be used to
extend any global Lepage equivalent to one satisfying the closure property. In
the second part of the paper we investigate the r\^ole of vertical
endomorphisms in constructing such Lepage equivalents. These endomorphisms may
be used directly to construct local homotopy operators. Together with a
symmetric linear connection they may also be used to construct global vertical
tensors, and these define infinitesimal nonholonomic projections which in turn
may be used to construct Lepage equivalents. We conjecture that these global
vertical tensors may also be used to define global homotopy operators. | David Saunders | 2023-09-04T13:26:08Z | http://arxiv.org/abs/2309.01594v3 | # Lepage equivalents and the Variational Bicomplex
###### Abstract
We show how to construct, for a Lagrangian of arbitrary order, a Lepage equivalent satisfying the closure property: that the Lepage equivalent vanishes precisely when the Lagrangian is null. The construction uses a homotopy operator for the horizontal differential of the variational bicomplex. A choice of symmetric linear connection on the manifold of independent variables, and a global homotopy operator constructed using that connection, may then be used to extend any global Lepage equivalent to one satisfying the closure property.
In the second part of the paper we investigate the role of vertical endomorphisms in constructing such Lepage equivalents. These endomorphisms may be used directly to construct local homotopy operators. Together with a symmetric linear connection they may also be used to construct global vertical tensors, and these define infinitesimal nonholonomic projections which in turn may be used to construct Lepage equivalents. We conjecture that these global vertical tensors may also be used to define global homotopy operators.
**MSC: 58A10, 58A20, 83D05**
**Keywords:** Jet bundle, Poincare-Cartan form, Lepage equivalent of a Lagrangian, variational bicomplex
## Dedication
In the Notes to Chapter 5 of [15], Peter Olver wrote about the variational complex and the variational bicomplex 'It is hoped that these methods will inspire further research in the geometric theory of the calculus of variations'. A few years later [16] he wrote 'In the geometric theory of the calculus of variations in mechanics, the Cartan form, which first arose as the integrand in Hilbert's invariant integral, plays a ubiquitous role'. Lepage equivalents are generalizations of Cartan forms, and I hope that this paper will be a small contribution to Peter's project.
## 1 Introduction
In recent years there has been a revival of interest in the 'fundamental Lepage equivalent' of a Lagrangian, a differential form on a jet bundle which (as with any such Lepage
equivalent) provides a geometrical construction leading to the Euler-Lagrange equations of the corresponding variational problem, but which has the additional property that it is closed precisely when the Lagrangian is null [17, 23]. The original formulation of the fundamental Lepage equivalent was given for first order Lagrangians (in [12], and then independently in [2]). Although expressed in local coordinates, the form is in fact invariant under changes of coordinates, and so is a global geometric object. There had, however, been no similar construction for higher order Lagrangians.
A construction for Lagrangians of arbitrary order has now been proposed in [25], giving a Lepage form of order no greater than \(4k-2\) for a Lagrangian of order \(k\). The construction is again given in local coordinates, but now there is no guarantee that it will be defined globally. In addition, if the original Lagrangian happens to be first order, the new Lepage form will in general be of second order and will differ from the original, first order, fundamental Lepage equivalent.
In the first part of this paper, after giving some background on the different types of Lepage equivalent, we propose a new method of constructing a fundamental Lepage equivalent for a Lagrangian of arbitrary order by using homotopy operators for the horizontal differential in the variational bicomplex on the infinite jet manifold. This has the disadvantage that any bound on the order of the resulting Lepage form, although necessarily finite, will depend on the number of independent variables. On the other hand, the choice of a symmetric linear connection will allow the construction of a global form, and in the case of a first order Lagrangian the result will be independent of any connection and we recover the classic fundamental Lepage equivalent.
In the second part of the paper we explore the potential for clarifying this construction by using'vertical endomorphisms' on jet bundles, tensorial objects depending on a closed differential form, which are related to the canonical isomorphism between the tangent space at any point of an affine space, and the vector space on which the affine space is modelled [18]. We recall how local homotopy operators for the horizontal differential can be constructed from these vertical endomorphisms, and then we show how a symmetric linear connection can be used to remove the dependence on the differential form to produce a globally-defined, fully tensorial object. (A related but technically different approach has been described in [3].) Such a'vertical tensor' can be used to give an infinitesimal rigidity to nonholonomic jet bundles, allowing the construction of a global Lepage equivalent for a Lagrangian of arbitrary order. Finally we offer a conjecture regarding how these vertical tensors, together with a covariant version of the horizontal differential, might be used to construct a global homotopy operator for the ordinary horizontal differential.
## 2 Notation
We adopt a modified version of the notation used in [19]. We let \(\pi:E\to M\) be a fibred manifold with \(\dim M=m\) and \(\dim E=m+n\). The \(k\)-jet manifold of \(\pi\) will be denoted \(J^{k}\pi\) with projections \(\pi_{k}:J^{k}\pi\to M\), \(\pi_{k,0}:J^{k}\pi\to E\) and \(\pi_{k,l}:J^{k}\pi\to J^{l}\pi\) where \(l<k\). A typical element of \(J^{k}\pi\) will be denoted \(j^{k}_{p}\phi\). We use similar notation for jets of the cotangent bundle \(\tau:T^{*}M\to M\).
We let \(\mathfrak{X}(J^{k}\pi)\) denote the module of vector fields on \(J^{k}\pi\), and \(\Omega^{r}(J^{k}\pi)\) the module of \(r\)-forms.
Regarding the jet bundle \(\pi_{k-1}:J^{k-1}\pi\to M\) as the starting bundle, we shall let \((\pi_{k-1})_{1}:J^{1}\pi_{k-1}\to M\) denote its first jet bundle, and we let \(\mathrm{i}_{1,k-1}:J^{k}\pi\to J^{1}\pi_{k-1}\) be the canonical inclusion. There is also an intermediate submanifold \(\widehat{J}^{k}\pi\subset J^{1}\pi_{k-1}\), the semi-holonomic manifold [14], with a canonical symmetrization projection \(\mathrm{p}_{k}:\widehat{J}^{k}\pi\to J^{k}\pi\).
We also use the infinite jet bundle \(\pi_{\infty}:J^{\infty}\pi\to M\) where \(J^{\infty}\pi\), although infinite dimensional, is a Frechet manifold and so is reasonably well behaved. We let \(\Omega^{r}\) (without specifying a manifold) denote the module of \(r\)-forms on \(J^{\infty}\pi\) of globally finite order, so that each such form is projectable to some \(J^{k}\pi\).
Any differential form \(\omega\in\Omega^{r}\) can be decomposed uniquely into its contact components
\[\omega=\omega^{(0)}+\omega^{(1)}+\cdots+\omega^{(p)}+\cdots+\omega^{(r)}\]
where if \(r>m\) then \(\omega^{(p)}=0\) for \(p<r-m\). We let \(\Omega^{p,q}\subset\Omega^{r}\) with \(p+q=r\) denote the submodule of \(p\)-contact \(r\)-forms. In a similar way, a differential form \(\omega\in\Omega^{r}(J^{k}\pi)\) on a finite order jet manifold may be decomposed into contact components, but these will normally be defined on \(J^{k+1}\pi\) rather than on \(J^{k}\pi\).
When using coordinates, we take fibred coordinates \((x^{i},u^{\alpha})\) on \(E\) over base coordinates \((x^{i})\) on \(M\). Jet coordinates will be denoted \((u^{\alpha}_{i})\) on \(J^{1}\pi\) and \((u^{\alpha}_{i},u^{\alpha}_{(ij)})\) on \(J^{2}\pi\) with parentheses denoting symmetrization because \(u^{\alpha}_{(ji)}\) is the same coordinate as \(u^{\alpha}_{(ij)}\). For this reason we use the symbol \(\#(ij)\) to equal \(1\) when \(i=j\) and to equal \(2\) when \(i\neq j\), in order to avoid double counting during summation. (We use the standard summation convention for repeated indices.)
On higher order jet manifolds this notation becomes unwieldy and we write \((u^{\alpha}_{I})\) instead, where \(I\in\mathbb{N}^{m}\) is a multi-index with \(I(i)\) giving the number of copies of the index \(i\), so that this notation automatically takes care of symmetrization. We let \(1_{i}\) denote the multi-index with a single \(1\) in the \(i\)-th position; \(|I|=\sum_{i=1}^{m}I(i)\) is the length of \(I\), and \(I!=I(1)!I(2)!\cdots I(m)!\) is its factorial. Any summation involving multi-indices will be indicated explicitly, including the zero multi-index where appropriate.
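For concreteness (our own illustration, with \(m=2\)): if \(I=(2,1)\) then \(|I|=3\), \(I!=2!\,1!=2\) and \(|I|!/I!=3\), while the coordinate \(u^{\alpha}_{I}\) evaluated along a prolonged section \(j^{3}\phi\) is the derivative \(\partial^{3}\phi^{\alpha}/\partial(x^{1})^{2}\partial x^{2}\).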
Sometimes we need to use a mixed notation, and converting to or from multi-index notation requires coefficients to be adjusted. If \(F(J)\) is some object depending on the
multi-index \(J\) then
\[\sum_{|J|=r+1}\frac{|J|!}{J!}F(J)=\sum_{i=1}^{m}\sum_{|I|=r}\frac{|I|!}{I!}F(I+1_{ i})\]
where the quotient \(|J|!/J!\) is the 'weight' of the multi-index \(J\).
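As a quick check of this identity (our own verification), take \(m=2\) and \(r=1\): the left-hand side is \(F(2,0)+2F(1,1)+F(0,2)\), while the right-hand side gives \(F(2,0)+F(1,1)\) from \(i=1\) and \(F(1,1)+F(0,2)\) from \(i=2\), in agreement.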
We use notation
\[\theta^{\alpha}=du^{\alpha}-u^{\alpha}_{j}dx^{j}\,,\qquad\theta^{\alpha}_{i}= du^{\alpha}_{i}-u^{\alpha}_{(ij)}dx^{j}\,,\qquad\theta^{\alpha}_{I}=du^{\alpha}_{I}- u^{\alpha}_{I+1_{j}}dx^{j}\]
for local contact 1-forms, and
\[\omega_{0}=dx^{1}\wedge\cdots\wedge dx^{m}\,,\qquad\omega_{i}=i_{\partial/ \partial x^{i}}\,\omega_{0}=(-1)^{i-1}dx^{1}\wedge\cdots\wedge\widehat{dx^{i} }\wedge\cdots\wedge dx^{m}\]
(where the circumflex indicates an omitted factor) for local forms horizontal over \(M\). Local total derivatives, dual to the local contact forms, will be denoted \(d_{i}\) and are given explicitly as
\[\frac{\partial}{\partial x^{i}}+u^{\alpha}_{i}\frac{\partial}{\partial u^{\alpha}}\,,\qquad\frac{\partial}{\partial x^{i}}+u^{\alpha}_{i}\frac{\partial}{\partial u^{\alpha}}+u^{\alpha}_{(ij)}\frac{\partial}{\partial u^{\alpha}_{j}}\,,\qquad\frac{\partial}{\partial x^{i}}+\sum_{I}u^{\alpha}_{I+1_{i}}\frac{\partial}{\partial u^{\alpha}_{I}}\,.\]
In the finite order case they are vector fields along the map \(\pi_{k,k-1}\) rather than on a single jet manifold. We also use the symbol \(\partial_{i}\) to indicate \(\partial/\partial x^{i}\) as a vector field along the map \(\pi_{k}\).
On a nonholonomic jet manifold we need to distinguish between the two levels of jet coordinates, and we use juxtaposition, with a dot to indicate when a particular index is missing. So on \(J^{1}\pi_{1}\) the coordinates are \((x^{i},u^{\alpha}_{..},u^{\alpha}_{i.},u^{\alpha}_{.j},u^{\alpha}_{ij})\) and on \(J^{1}\pi_{k-1}\) they are \((x^{i},u^{\alpha}_{I.},u^{\alpha}_{Ij})\).
Finally we note that \(\pi_{1,0}:J^{1}\pi\to E\) is an affine bundle, modelled on the vector bundle \(\pi^{*}T^{*}M\otimes V\pi\), so that the vertical bundle \(V\pi_{1,0}\) is canonically isomorphic to \(\pi^{*}_{1}T^{*}M\otimes\pi^{*}_{1,0}V\pi\); the inverse of this isomorphism may be regarded as a tensor field
\[S=\partial_{i}\otimes\theta^{\alpha}\otimes\frac{\partial}{\partial u^{\alpha }_{i}}\,,\]
a section of the bundle \(\pi^{*}_{1}TM\otimes T^{*}J^{1}\pi\otimes TJ^{1}\pi\) over \(J^{1}\pi\). We shall call this the _first order vertical tensor_.
## 3 Background
Many of the results in the geometrical calculus of variations can be described in terms of _source forms_ and _Lepage equivalents_ (see [13] for a useful summary and historical references).
A source form is a form \(\varepsilon\in\Omega^{m+1}(J^{l}\pi)\) with the properties that it is horizontal over \(E\), and maximally horizontal over \(M\), so that in coordinates it appears as \(\varepsilon=\varepsilon_{\alpha}\theta^{\alpha}\wedge\omega_{0}\). The zero set of a source form is a submanifold of \(J^{l}\pi\) representing a family of partial differential equations; if \(\lambda=L\,\omega_{0}\in\Omega^{m}(J^{k}\pi)\) is a horizontal \(m\)-form, a Lagrangian, then it gives rise to a source form \(\varepsilon_{\lambda}\in\Omega^{m+1}(J^{2k}\pi)\), the _Euler-Lagrange form_ of \(\lambda\), incorporating the Euler-Lagrange equations of the variational problem defined by \(\lambda\):
\[\varepsilon_{\lambda}=\sum_{|I|=0}^{k}(-1)^{|I|}d_{I}\bigg{(}\frac{\partial L} {\partial u_{I}^{\alpha}}\bigg{)}\theta^{\alpha}\wedge\omega_{0}\,.\]
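For instance, when \(k=1\) this reduces to the familiar expression
\[\varepsilon_{\lambda}=\biggl{(}\frac{\partial L}{\partial u^{\alpha}}-d_{i}\biggl{(}\frac{\partial L}{\partial u^{\alpha}_{i}}\biggr{)}\biggr{)}\theta^{\alpha}\wedge\omega_{0}\,.\]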
A Lepage form is a form \(\vartheta\in\Omega^{m}(J^{l}\pi)\) with the property that \((d\vartheta)^{(1)}\), the 1-contact component of its exterior derivative, is a source form. A Lepage equivalent of a Lagrangian \(\lambda\in\Omega^{m}(J^{k}\pi)\) is a Lepage form \(\vartheta_{\lambda}\in\Omega^{m}(J^{l}\pi)\) with \(l\geq k\) such that the difference \(\vartheta_{\lambda}-\pi_{l,k}^{*}\lambda\) is a contact form; the corresponding source form \((d\vartheta_{\lambda})^{(1)}\) is then just the Euler-Lagrange form \(\varepsilon_{\lambda}\). Different Lepage equivalents of the same Lagrangian give the same Euler-Lagrange form.
If \(m=1\) then each Lagrangian \(\lambda\) gives rise to a unique globally-defined Lepage equivalent \(\vartheta_{\lambda}\in\Omega^{1}(J^{2k-1}\pi)\), the Cartan form of the Lagrangian. However, complications arise when \(m\geq 2\), and these concern both existence and uniqueness. Clearly if \(\vartheta_{\lambda}\) is a Lepage equivalent of \(\lambda\) then so is \(\vartheta_{\lambda}+d\psi+\omega\) where \(\omega\) is at least 2-contact, and in fact the converse is true: if \(\vartheta_{\lambda}\), \(\vartheta_{\lambda}^{\prime}\) are both Lepage equivalents of \(\lambda\) then \(\vartheta_{\lambda}^{\prime}-\vartheta_{\lambda}=d\psi+\omega\).
As far as existence is concerned, if we initially consider forms which are at most 1-contact then locally
\[\vartheta_{\lambda}=L\,\omega_{0}+\sum_{|J|=0}^{k-1}\sum_{|K|=0}^{k-|J|-1} \frac{(-1)^{|J|}(J+K+1_{j})!|J|!|K|!}{(|J|+|K|+1)!J!K!}d_{J}\bigg{(}\frac{ \partial L}{\partial u_{J+K+1_{j}}^{\alpha}}\bigg{)}\theta_{K}^{\alpha}\wedge \omega_{j}\]
is a Lepage equivalent, known as the _principal Lepage equivalent_ of \(\lambda\). When \(k=1\) this is just the Poincare-Cartan form of \(\lambda\),
\[\vartheta_{\lambda}=L\,\omega_{0}+\frac{\partial L}{\partial u_{j}^{\alpha}} \theta^{\alpha}\wedge\omega_{j}\]
and is defined globally; it is the unique Lepage equivalent of \(\lambda\) which is both at most 1-contact and also horizontal over \(E\). When \(k=2\) we obtain
\[\vartheta_{\lambda}=L\,\omega_{0}+\bigg{(}\bigg{(}\frac{\partial L}{\partial u ^{\alpha}}-\frac{1}{\#(ij)}d_{i}\bigg{(}\frac{\partial L}{\partial u_{(ij)}^ {\alpha}}\bigg{)}\bigg{)}\theta^{\alpha}+\frac{1}{\#(ij)}\frac{\partial L}{ \partial u_{(ij)}^{\alpha}}\theta_{i}^{\alpha}\bigg{)}\wedge\omega_{j}\]
which again, perhaps surprisingly, is invariant under a fibred change of coordinates \(\tilde{x}=\tilde{x}^{j}(x^{i})\), \(\tilde{u}^{\beta}=\tilde{u}^{\beta}(x^{i},u^{\alpha})\) and is therefore also defined globally. For \(k\geq 3\), however, there is no such Lepage equivalent invariant under coordinate changes [11]; choices need to be made in order to obtain a globally defined form. Several authors (see, for instance, [9, 10]) have used connections of various kinds for this purpose.
Another approach [18] has been to 'pretend' that the \(k\)-th order Lagrangian is really first order by using a tubular neighbourhood of \(J^{k}\pi\) in \(J^{1}\pi_{k-1}\) and 'spreading out' the Lagrangian using the neighbourhood's projection. By repeating this process, a global Lepage equivalent may be constructed. In fact only infinitesimal projections are needed, mapping \(T_{J^{k}\pi}J^{1}\pi_{k-1}\) to \(TJ^{k}\pi\), and we shall see in Section 7 that a symmetric linear connection on \(M\) determines a suitable family of projections. In the second order case, it may be seen that only the restriction of the projection to the semiholonomic manifold \(\widehat{J}^{2}\pi\) is needed, explaining why the symmetrization projection \(\mathrm{p}_{2}\) may be used to give a global Lepage equivalent in this case.
The Lepage equivalents described so far have all been at most 1-contact. There have, however, been important examples of Lepage equivalents involving higher contact terms. One such, defined for a nonvanishing first order Lagrangian, is the _Caratheodory form_[4]
\[\vartheta_{\lambda}=\frac{1}{L^{m-1}}\bigwedge_{j=1}^{m}\biggl{(}L\,dx^{j}+ \frac{\partial L}{\partial u_{j}^{\alpha}}\theta^{\alpha}\biggr{)}\,;\]
this decomposable form is again defined globally and indeed is invariant, not just under a fibred change of coordinates, but under a general change \(\tilde{x}=\tilde{x}^{j}(x^{i},u^{\alpha})\), \(\tilde{u}^{\beta}=\tilde{u}^{\beta}(x^{i},u^{\alpha})\). A similar form for a nonvanishing second order Lagrangian,
\[\vartheta_{\lambda}=\frac{1}{L^{m-1}}\bigwedge_{j=1}^{m}\biggl{(}L\,dx^{j}+ \biggl{(}\frac{\partial L}{\partial u_{j}^{\alpha}}-\frac{1}{\#(ij)}d_{i} \biggl{(}\frac{\partial L}{\partial u_{(ij)}^{\alpha}}\biggr{)}\biggr{)} \theta^{\alpha}+\frac{1}{\#(ij)}\frac{\partial L}{\partial u_{(ij)}^{\alpha}} \theta_{i}^{\alpha}\biggr{)}\]
was described in [16] (see also [6]).
The Lepage equivalent of particular interest in the present paper, again involving higher contact terms, is the _fundamental Lepage equivalent_ of a first order Lagrangian [2, 12]
\[\vartheta_{\lambda}=\sum_{p=0}^{\min\{m,n\}}\frac{1}{(p!)^{2}}\,\frac{ \partial^{p}L}{\partial u_{j_{1}}^{\alpha_{1}}\cdots\partial u_{j_{p}}^{ \alpha_{p}}}\,\theta^{\alpha_{1}}\wedge\cdots\wedge\theta^{\alpha_{p}}\wedge \omega_{j_{1}\cdots j_{p}}\,.\]
This satisfies the _closure property_, that \(d\vartheta_{\lambda}=0\) precisely when the Lagrangian is null: that is, when the Euler-Lagrange form \(\varepsilon_{\lambda}\) is zero. (Of course any individual form \(\vartheta_{\lambda}\) is either closed or not closed; the closure property applies to the procedure mapping \(\lambda\) to \(\vartheta_{\lambda}\).) Once again this form is defined globally, and indeed is invariant under a general change of coordinates \(\tilde{x}=\tilde{x}^{j}(x^{i},u^{\alpha})\), \(\tilde{u}^{\beta}=\tilde{u}^{\beta}(x^{i},u^{\alpha})\)[7]. The content of the closure property lies in the requirement that \(d\vartheta_{\lambda}=0\) when \(\lambda\) is null; for any Lepage equivalent \(\vartheta_{\lambda}\) it is obvious that the converse holds, that \(\lambda\) is null when \(d\vartheta_{\lambda}=0\).
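Written out for \(m=2\) (our own expansion of the fundamental Lepage equivalent above, assuming \(n\geq 2\) so that the \(p=2\) term is present), the formula reads
\[\vartheta_{\lambda}=L\,\omega_{0}+\frac{\partial L}{\partial u^{\alpha}_{j}}\,\theta^{\alpha}\wedge\omega_{j}+\frac{1}{4}\,\frac{\partial^{2}L}{\partial u^{\alpha}_{j_{1}}\partial u^{\beta}_{j_{2}}}\,\theta^{\alpha}\wedge\theta^{\beta}\wedge\omega_{j_{1}j_{2}}\,.\]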
In the next Section we consider how the construction of the fundamental Lepage equivalent might be generalised for higher order Lagrangians.
## 4 The closure property
The question of whether it is possible to find a procedure for constructing a Lepage equivalent which satisfies the closure property, although solved in 1977 for first order Lagrangians, has been an open problem for higher order Lagrangians (see [17, 23] and the references therein). An original solution to this problem was given in [25], using the Vainberg-Tonti Lagrangian of a source form \(\varepsilon\), the horizontal \(m\)-form \(\lambda_{\varepsilon}\) obtained locally in coordinates from \(\varepsilon=\varepsilon_{\alpha}\theta^{\alpha}\wedge\omega_{0}\) by the fibred homotopy operator
\[\lambda_{\varepsilon}=\biggl{(}u^{\alpha}\int_{0}^{1}\varepsilon_{\alpha}(x^{i},tu_{I}^{\beta})\,dt\biggr{)}\omega_{0}\,.\]
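A small illustrative example of this operator (our own): take \(m=n=1\) and \(\varepsilon=-u_{11}\,\theta\wedge\omega_{0}\), the Euler-Lagrange form of \(\lambda=\tfrac{1}{2}(u_{1})^{2}\,\omega_{0}\). Then
\[\lambda_{\varepsilon}=u\biggl{(}\int_{0}^{1}(-tu_{11})\,dt\biggr{)}\omega_{0}=-\tfrac{1}{2}uu_{11}\,\omega_{0}=\lambda+h(d\alpha)\,,\qquad\alpha=-\tfrac{1}{2}uu_{1}\,,\]
illustrating the relationship described below.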
Typically \(\lambda_{\varepsilon}\) has the same order as \(\varepsilon\). If in fact \(\varepsilon=\varepsilon_{\lambda}\), so that the source form is the Euler-Lagrange form of a given Lagrangian \(\lambda\), then the Vainberg-Tonti Lagrangian \(\lambda_{\varepsilon_{\lambda}}\) and the pullback of \(\lambda\) have the same Euler-Lagrange equations so that they differ by \(h(d\alpha)\) for some horizontal \((m-1)\)-form \(\alpha\). Then, taking \(\vartheta_{\lambda_{\varepsilon_{\lambda}}}\) to be the principal Lepage equivalent of the Vainberg-Tonti Lagrangian in the given coordinates and writing \(\vartheta^{\rm F}=\vartheta_{\lambda_{\varepsilon_{\lambda}}}+d\alpha\) we find that, to within pullbacks,
\[(d\vartheta^{\rm F})^{(1)}=d\vartheta_{\lambda_{\varepsilon_{\lambda}}}^{(1) }=\varepsilon_{\lambda_{\varepsilon_{\lambda}}}=\varepsilon_{\lambda}\]
so that \(\vartheta^{\rm F}\) is a Lepage form, and that
\[h(\vartheta^{\rm F})=h(\vartheta_{\lambda_{\varepsilon_{\lambda}}})+h(d \alpha)=\lambda_{\varepsilon_{\lambda}}+h(d\alpha)=\lambda\]
so that \(\vartheta^{\rm F}\) is a Lepage equivalent of \(\lambda\), and finally that
\[d\vartheta^{\rm F}=d\vartheta_{\lambda_{\varepsilon_{\lambda}}}\]
so that if \(\lambda\) is a null Lagrangian then \(\varepsilon_{\lambda}=0\) and therefore \(d\vartheta^{\rm F}=0\).
This procedure therefore satisfies the closure property. It is not, though, a generalisation of the fundamental Lepage equivalent for first order Lagrangians, because it is always at most 1-contact, whereas the fundamental Lepage equivalent is obtained from the Poincare-Cartan form by adding higher contact terms.
We shall, instead, define an alternative procedure which uses homotopy operators for the horizontal differential of the variational bicomplex to add the higher contact terms. Recall that this bicomplex is defined for forms of globally finite order on the infinite jet manifold \(J^{\infty}\pi\), as shown in the diagram below (note that the squares with vertical arrows labelled \(\pi^{*}_{\infty}\) commute, whereas those with vertical arrows labelled \(d_{\rm v}\) anticommute.) The rows and columns are all locally exact, and indeed all the \(d_{\rm h}\) rows apart from the first are globally exact [1, 20, 21, 22, 24]. Any Lagrangian \(\lambda\in\Omega^{m}(J^{k}\pi)\) will have a pullback \(\pi^{*}_{\infty,k}\lambda\in\Omega^{0,m}\) on \(J^{\infty}\pi\) which for simplicity we shall continue to denote by \(\lambda\) without the pullback map.
Let \(\vartheta_{\lambda}\) denote the pullback to \(J^{\infty}\pi\) of any local Lepage equivalent of \(\lambda\) which is at most \(1\)-contact, so that \(\vartheta_{\lambda}^{(1)}=\vartheta_{\lambda}-\lambda\in\Omega^{1,m-1}\), and let \(P\) denote any local homotopy operator for the \(d_{\mathrm{h}}\) rows (apart from the first) of the variational bicomplex. Define the _extension of \(\vartheta_{\lambda}\) by \(P\)_ to be the \(m\)-form defined locally by
\[\vartheta^{\mathrm{F}} =\vartheta_{\lambda}+\sum_{p=1}^{m-1}(-Pd_{\mathrm{v}})^{p} \vartheta_{\lambda}^{(1)}\] \[=\lambda+\vartheta_{\lambda}^{(1)}-(Pd_{\mathrm{v}})\vartheta_{ \lambda}^{(1)}+(Pd_{\mathrm{v}})^{2}\vartheta_{\lambda}^{(1)}-\cdots+(-Pd_{ \mathrm{v}})^{m-1}\vartheta_{\lambda}^{(1)}\] \[\in\Omega^{0,m}\oplus\Omega^{1,m-1}\oplus\Omega^{2,m-2}\oplus \Omega^{3,m-3}\oplus\cdots\oplus\Omega^{m,0}\,,\]
so that \(\vartheta^{\mathrm{F}}\) is another Lepage equivalent of \(\lambda\). We shall show that this method of constructing \(\vartheta^{\mathrm{F}}\) satisfies the closure property, by diagram chasing.
Suppose that \(\lambda\) is a null Lagrangian, so that
\[0=\varepsilon_{\lambda}=(d\vartheta^{\mathrm{F}})^{(1)}=d_{\mathrm{v}} \vartheta_{\lambda}^{(0)}+d_{\mathrm{h}}\vartheta_{\lambda}^{(1)}=d_{\mathrm{ v}}\lambda+d_{\mathrm{h}}\vartheta_{\lambda}^{(1)}\,.\]
Then
\[(d\vartheta^{\mathrm{F}})^{(2)} =d_{\mathrm{v}}(\vartheta^{\mathrm{F}(1)})+d_{\mathrm{h}}( \vartheta^{\mathrm{F}(2)})\] \[=d_{\mathrm{v}}\vartheta_{\lambda}^{(1)}-d_{\mathrm{h}}Pd_{ \mathrm{v}}\vartheta_{\lambda}^{(1)}\] \[=Pd_{\mathrm{h}}d_{\mathrm{v}}\vartheta_{\lambda}^{(1)}=-Pd_{ \mathrm{v}}d_{\mathrm{h}}\vartheta_{\lambda}^{(1)}=Pd_{\mathrm{v}}d_{\mathrm{ v}}\lambda=0\]
using the homotopy property \(d_{\rm h}\circ P+P\circ d_{\rm h}={\rm id}\), and in a similar way
\[(d\vartheta^{\rm F})^{(p+1)} =d_{\rm v}(\vartheta^{\rm F(p)})+d_{\rm h}(\vartheta^{\rm F(p+1)})\] \[=d_{\rm v}(\vartheta^{\rm F(p)})-d_{\rm h}Pd_{\rm v}(\vartheta^{ \rm F(p)})\] \[=Pd_{\rm h}d_{\rm v}(\vartheta^{\rm F(p)})=-Pd_{\rm v}d_{\rm h}( \vartheta^{\rm F(p)})=Pd_{\rm v}d_{\rm v}(\vartheta^{\rm F(p-1)})=0\]
for \(2\leq p\leq m-1\), where the penultimate equality arises recursively from
\[d_{\rm h}(\vartheta^{\rm F(p)})+d_{\rm v}(\vartheta^{\rm F(p-1)})=(d\vartheta ^{\rm F})^{(p)}=0\,.\]
Thus \(d\vartheta^{\rm F}=(d\vartheta^{\rm F})^{(m+1)}\), and we may see that this maximal contact component also vanishes by traversing the diagram in the opposite direction. For \(2\leq p\leq m\) we have
\[d_{\rm h}d_{\rm v}(\vartheta^{\rm F(p)}) =-d_{\rm h}d_{\rm v}Pd_{\rm v}(\vartheta^{\rm F(p-1)})\] \[=d_{\rm v}d_{\rm h}Pd_{\rm v}(\vartheta^{\rm F(p-1)})\] \[=d_{\rm v}d_{\rm v}(\vartheta^{\rm F(p-1)})-(d_{\rm v}P)d_{\rm h }d_{\rm v}(\vartheta^{\rm F(p-1)})\] \[=-(d_{\rm v}P)d_{\rm h}d_{\rm v}(\vartheta^{\rm F(p-1)})\,;\]
but
\[d_{\rm h}d_{\rm v}(\vartheta^{\rm F(1)})=d_{\rm h}d_{\rm v}\vartheta^{(1)}_{ \lambda}=-d_{\rm v}d_{\rm h}\vartheta^{(1)}_{\lambda}=d_{\rm v}d_{\rm v}\lambda=0\]
so that \(d_{\rm h}d_{\rm v}(\vartheta^{\rm F(m)})=0\). As \(d_{\rm h}:\Omega^{m+1,0}\to\Omega^{m+1,1}\) is injective by exactness, we see finally that \((d\vartheta^{\rm F})^{(m+1)}=d_{\rm v}(\vartheta^{\rm F(m)})=0\). We shall describe suitable local homotopy operators for \(d_{\rm h}\), constructed using vertical endomorphisms, in the next Section; by using them we obtain the following result.
**Theorem 1**.: _Let \(\lambda\) be the pullback to \(J^{\infty}\pi\) of a Lagrangian of any order, and let \(\vartheta_{\lambda}\) be the pullback to \(J^{\infty}\pi\) of any local Lepage equivalent of \(\lambda\) which is at most \(1\)-contact. A local homotopy operator \(P\) then defines a local Lepage equivalent \(\vartheta^{\rm F}\) which is an extension of \(\vartheta_{\lambda}\) and which satisfies the closure property, that \(d\vartheta^{\rm F}=0\) precisely when \(\lambda\) is null._
We can also consider a global version of this result, noting that the diagram chasing above would apply equally to global operators as it does to local ones. We have seen that additional structures, such as connections or nonholonomic projections, are needed to construct a global Lepage equivalent when the order of the Lagrangian is greater than two. A global homotopy operator for the horizontal differential on \(J^{\infty}\pi\) has also been found [1, Theorem 5.56] and again this uses a symmetric linear connection on the base manifold \(M\).
**Theorem 2**.: _Let \(\lambda\) be the pullback to \(J^{\infty}\pi\) of a Lagrangian of any order, and let \(\vartheta_{\lambda}\) be the pullback to \(J^{\infty}\pi\) of any global Lepage equivalent of \(\lambda\) which is at most \(1\)-contact, constructed using additional data as appropriate. A global homotopy operator \(P\), such as the one described in [1] using a symmetric linear connection, then defines a global Lepage equivalent \(\vartheta^{\rm F}\) which is an extension of \(\vartheta_{\lambda}\) and which satisfies the closure property, that \(d\vartheta^{\rm F}=0\) precisely when \(\lambda\) is null._
We remark that in fact there is no requirement for the homotopy operators in each term to be the same, and we could generalise the formula to
\[\vartheta^{\mathrm{F}}=\vartheta_{\lambda}+\sum_{p=1}^{m-1}(-1)^{p}(P_{p+1}d_{ \mathrm{v}}P_{p}d_{\mathrm{v}}\cdots P_{2}d_{\mathrm{v}})\vartheta_{\lambda}^{( 1)}\]
where \(P_{p}\) is a homotopy operator for the \(p\)-contact row of the variational bicomplex.
## 5 Vertical endomorphisms
The most basic example of a'vertical endomorphism' is the almost tangent structure on a tangent manifold \(TM\). This is simply a tensorial expression of the isomorphism between a vector space and its tangent space at any point, applied to the tangent spaces to a manifold, and may be regarded as a 1-form taking values in the sub-bundle of \(TTM\to TM\) containing the vertical vectors. A similar object may be defined using a more complicated procedure on a higher order tangent manifold \(T^{k}M\)[5], now giving a 1-form taking its values in the sub-bundle of vertical vectors in \(TT^{k}M\to T^{k}M\).
Vertical endomorphisms \(S^{\eta}\) on jet manifolds \(J^{k}\pi\), where \(\eta\in\Omega^{1}(M)\) is a closed 1-form, were defined in [18]. The construction started with a point \(j_{p}^{k}\phi\in J^{k}\pi\) and a tangent vector \(\xi\) at \(j_{p}^{k-1}\phi\in J^{k-1}\pi\) vertical over \(M\). Any such vector may be represented by a 1-parameter family of local sections \(\phi_{t}\) where \(\phi_{0}=\phi\) and \(\xi\) is the tangent vector at \(t=0\) to the curve \(t\mapsto j_{p}^{k-1}\phi_{t}\). Given a function \(f\) on \(M\) defined in a neighbourhood of \(p\), the _vertical lift_ of \(\xi\) to \(j_{p}^{k}\phi\) in the direction specified by \(df\) then used the 1-parameter family of local sections \(\psi_{t}:q\mapsto\phi_{tf(q)}(q)\) to define a curve \(j_{p}^{k}\psi_{t}\) in \(J^{k}\pi\) and therefore a tangent vector at \(j_{p}^{k}\phi\). The vertical endomorphism \(S^{\eta}\) at any point \(j_{p}^{k}\phi\in J^{k}\pi\) was then defined by starting with any tangent vector in \(T_{j_{p}^{k}\phi}J^{k}\pi\), projecting it to \(T_{j_{p}^{k-1}\phi}J^{k-1}\pi\), taking the vertical representative using the contact structure, and then applying the vertical lift (using any function \(f\) satisfying \(f(p)=0\) and \(df=\eta\) in a neighbourhood of \(p\)) to give a new tangent vector in \(T_{j_{p}^{k}\phi}J^{k}\pi\). It may be shown that this construction is well defined, and so independent of the choices of \(\phi_{t}\) and \(f\), and that it gives a tensor field \(S^{\eta}\in\Omega^{1}(J^{k}\pi)\otimes\mathfrak{X}(J^{k}\pi)\) expressed in coordinates1 as
Footnote 1: The numerical coefficient given in [18, eqn 3.4] and repeated in [19] after definition 6.5.6 is incorrect as it does not take account of the use of an individual index \(i\) in a multi-index formula.
\[S^{\eta}=\sum_{|J|+|K|\leq k-1}\frac{(J+K+1_{i})!}{J!\,K!\,(|K|+1)}\frac{ \partial^{|K|}\eta_{i}}{\partial x^{K}}\,\theta_{J}^{\alpha}\otimes\frac{ \partial}{\partial u_{J+K+1_{i}}^{\alpha}}\,.\]
It is evident from this formula that, when acting on forms, the operators \(S^{\eta}\) on \(J^{k}\pi\) and on \(J^{l}\pi\) with \(l>k\) are related by the pullback map \(\pi_{l,k}^{*}\), so that we may define a similar operator acting on forms on \(J^{\infty}\pi\) without ambiguity.
Given local coordinates \((x^{i})\) on \(U\subset M\), we write \(S^{i}\) rather than \(S^{dx^{i}}\) for the operators on \(U^{\infty}=\pi_{\infty}^{-1}(U)\). These local operators have the rather simpler coordinate description
\[S^{i}=\sum_{|I|=0}^{\infty}\bigl{(}I(i)+1\bigr{)}\theta_{I}^{\alpha}\otimes \frac{\partial}{\partial u_{I+1_{i}}^{\alpha}}\]
and may be used to construct local homotopy operators for the horizontal differential \(d_{\mathrm{h}}\) on \(J^{\infty}\pi\). One such homotopy operator, involving an ordering of the coordinates \(x^{i}\), was given in [22]. Other homotopy operators, not using such an ordering, may be constructed from two different repeated actions of \(S^{i}\) on forms: these are
\[\tilde{S}^{J}=i_{S^{j_{1}}\circ S^{j_{2}}\circ\cdots\circ S^{j_{r}}}\,,\qquad \hat{S}^{J}=i_{S^{j_{1}}}\circ i_{S^{j_{2}}}\circ\cdots\circ i_{S^{j_{r}}}\]
where \(|J|=r\) and \(J=1_{j_{1}}+1_{j_{2}}+\cdots+1_{j_{r}}\). Note that the first action is a derivation, whereas the second is not if \(r>1\); the multi-index notation is justified because operators \(S^{i}\) and \(S^{j}\) commute. The following result was obtained in [8, Theorem 1].
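Before stating the resulting homotopy operators, it may help to record how contraction with \(S^{i}\) acts on the basic local forms (our own direct computation from the coordinate formula for \(S^{i}\)):
\[i_{S^{i}}\,dx^{j}=0\,,\qquad i_{S^{i}}\,\theta^{\alpha}_{K}=K(i)\,\theta^{\alpha}_{K-1_{i}}\,,\]
where the second expression is interpreted as zero when \(K(i)=0\); in particular \(i_{S^{i}}\theta^{\alpha}=0\).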
**Proposition 3**.: _Define the differential operators \(\tilde{P},\hat{P}:\Omega^{p,q}(U^{\infty})\to\Omega^{p,q-1}(U^{\infty})\), with \(p\geq 1\) and \(1\leq q\leq m\), by \(\tilde{P}(\omega)=i_{d/dx^{i}}\bigl{(}\tilde{P}^{i}(\omega)\bigr{)}\), \(\hat{P}(\omega)=i_{d/dx^{i}}\bigl{(}\hat{P}^{i}(\omega)\bigr{)}\) where_
\[\tilde{P}^{i}(\omega) =\sum_{|I|=0}^{\infty}\frac{(-1)^{|I|}(m-q)!|I|!}{p(m-q+|I|+1)!I!}d _{I}\tilde{S}^{I+1_{i}}\omega\,,\] \[\hat{P}^{i}(\omega) =\sum_{|I|=0}^{\infty}\frac{(-1)^{|I|}(m-q)!|I|!}{p^{|I|+1}(m-q+|I| +1)!I!}d_{I}\hat{S}^{I+1_{i}}\omega\,.\]
_Then both \(\tilde{P}\) and \(\hat{P}\) are homotopy operators for \(d_{\mathrm{h}}\)._
In general the operators \(\tilde{P}\) and \(\hat{P}\) are different, although they are equal when acting on forms projectable to \(J^{1}\pi\), and also when acting on forms in \(\Omega^{1,q}(U^{\infty})\). In the latter case, writing the operator as \(P\), [8, Theorem 2] gives
\[\omega-d_{\mathrm{h}}P\omega=\theta^{\alpha}\wedge\sum_{|I|=0}^{\infty}(-1)^{ |I|}d_{I}\bigl{(}i_{\partial/\partial u_{I}^{\alpha}}\omega\bigr{)}\]
for any \(\omega\in\Omega^{1,m}\), so that \(\omega-d_{\mathrm{h}}P\omega\) is a source form. If we write \(\vartheta_{\lambda}=\lambda-Pd_{\mathrm{v}}\lambda\) then
\[(d\vartheta_{\lambda})^{(1)}=d_{\mathrm{v}}(\vartheta_{\lambda}^{(0)})+d_{ \mathrm{h}}(\vartheta_{\lambda}^{(1)})=(d_{\mathrm{v}}\lambda)-d_{\mathrm{h}} P(d_{\mathrm{v}}\lambda)\]
so that in particular \((d\vartheta_{\lambda})^{(1)}\) is a source form. Thus \(\vartheta_{\lambda}\) is a Lepage form, and it is clearly a local Lepage equivalent of \(\lambda\). In coordinates with \(\lambda=L\,\omega_{0}\)
\[\vartheta_{\lambda}=L\,\omega_{0}+\sum_{|J|,|K|=0}^{\infty}\frac{(-1)^{|J|}(J +K+1_{j})!\,|J|!\,|K|!}{(|J|+|K|+1)!J!\,K!}d_{J}\biggl{(}\frac{\partial L}{ \partial u_{J+K+1_{j}}^{\alpha}}\biggr{)}\theta_{K}^{\alpha}\wedge\omega_{j}\,,\]
so that it is the pullback to \(J^{\infty}\pi\) of the principal Lepage equivalent of \(\lambda\). We obtain the above formula from those for \(S\) and \(P\) by using the multi-index Leibniz' rule and the identity for weighted sums of binomial coefficients
\[\sum_{0\leq K\leq I}\frac{(-1)^{|K|}I!}{(|K|+p+1)K!(I-K)!}=\frac{p!|I|!}{(|I|+p+ 1)!}\]
obtained by first evaluating the integral \(\int_{0}^{1}x^{p}(x-1)^{r}dx\) in two different ways, and then using the Vandermonde identity for the convolution of scalar binomial coefficients.
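A quick check of this identity (ours): for \(|I|=1\) the left-hand side is \(\frac{1}{p+1}-\frac{1}{p+2}=\frac{1}{(p+1)(p+2)}\), which agrees with the right-hand side \(p!\,1!/(p+2)!\).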
We can also use the homotopy operators \(P\) to give a simple proof of the result mentioned earlier, that the difference between two Lepage equivalents for the same Lagrangian is the sum of a closed form and a form which is at least 2-contact. Let \(\vartheta_{\lambda}\) and \(\vartheta^{\prime}_{\lambda}\) be two Lepage equivalents for the Lagrangian \(\lambda\), and put \(\vartheta=\vartheta_{\lambda}-\vartheta^{\prime}_{\lambda}\), so that \(\vartheta^{(0)}=0\). As \(\vartheta_{\lambda}\) and \(\vartheta^{\prime}_{\lambda}\) give rise to the same Euler-Lagrange form, we see that
\[d_{\rm h}(\vartheta^{(1)})+d_{\rm v}(\vartheta^{(0)})=(d\vartheta)^{(1)}=0\]
so that \(d_{\rm h}(\vartheta^{(1)})=0\) and therefore locally \(\vartheta^{(1)}=d_{\rm h}P(\vartheta^{(1)})\). Then
\[\vartheta^{(1)}=dP(\vartheta^{(1)})-d_{\rm v}P(\vartheta^{(1)})\]
where \(d_{\rm v}P(\vartheta^{(1)})\in\Omega^{2,m-2}\), so that
\[\vartheta =dP(\vartheta^{(1)})+\big{(}\vartheta^{(2)}-d_{\rm v}P( \vartheta^{(1)})\big{)}+\cdots+\vartheta^{(m)}\] \[\in d\Omega^{1,m-2}\oplus\Omega^{2,m-2}\oplus\cdots\oplus\Omega^ {m,0}\,.\]
Finally in this Section we apply these operators to a first order Lagrangian \(\lambda=L\,\omega_{0}\). As \(d_{\rm v}\lambda=d\lambda\) is also first order, we see that
\[\lambda-Pd_{\rm v}\lambda=L\,\omega_{0}+S^{i}\bigg{(}\frac{\partial L}{ \partial u^{\alpha}}\theta^{\alpha}+\frac{\partial L}{\partial u^{\alpha}_{j} }\theta^{\alpha}_{j}\bigg{)}\omega_{i}=L\,\omega_{0}+\frac{\partial L}{ \partial u^{\alpha}_{i}}\theta^{\alpha}\wedge\omega_{i}\,,\]
the local expression of the Poincare-Cartan form. This is also first order, and we then see that each successive operator \(P^{i}\) is simply a multiple of \(S^{i}\). We obtain
\[(-Pd_{\rm v})^{p}\lambda=\frac{1}{(p!)^{2}}\frac{\partial^{p}L}{\partial u^{ \alpha_{1}}_{i_{1}}\cdots\partial u^{\alpha_{p}}_{i_{p}}}\theta^{\alpha_{1}} \wedge\cdots\wedge\theta^{\alpha_{p}}\wedge\omega_{i_{1}\cdots i_{p}}\,,\]
showing that \(\sum_{p=0}^{m}(-Pd_{\rm v})^{p}\lambda\) gives the local expression of the standard fundamental Lepage equivalent of a first order Lagrangian.
## 6 Connections and vertical tensors
We have seen that, for a first order Lagrangian, the fundamental Lepage equivalent may be constructed locally using the homotopy operators \(S^{i}\), and also that it is a global
object which may be constructed using the first order vertical tensor \(S\). These are two different facets of the same construction and they arise because, in the first order case, the formulation of a vertical endomorphism \(S^{\eta}\) does not in fact require the 1-form \(\eta\) to be closed. From the coordinate description
\[S^{\eta}=\eta_{i}\theta^{\alpha}\otimes\frac{\partial}{\partial u_{i}^{\alpha}}\]
it is clear that at any point \(j_{p}^{1}\phi\in J^{1}\pi\) the value of \(S^{\eta}\) depends only on the cotangent vector \(\eta|_{p}\) and not on the values of \(\eta\) at any other points. Thus, given any cotangent vector \(\eta|_{p}\in T_{p}^{*}M\), we may choose a closed 1-form \(\zeta\) in a neighbourhood of \(p\) satisfying \(\zeta|_{p}=\eta|_{p}\), for example the form given in coordinates centred on \(p\) by \(\zeta=d(\eta_{i}(p)x^{i})\), and put
\[S^{\eta}|_{j_{p}^{1}\phi}=S^{\zeta}|_{j_{p}^{1}\phi}\in(T^{*}J^{1}\pi\otimes TJ ^{1}\pi)_{j_{p}^{1}\phi}\,.\]
Doing this at each point of \(J^{1}\pi\) gives a well defined vertical endomorphism \(S^{\eta}\) for an arbitrary 1-form \(\eta\in\Omega^{1}(M)\), and it is clear that the mapping \(\eta\mapsto S^{\eta}\) is just that given by the vertical tensor \(S\).
The same approach will not work directly for higher order vertical endomorphisms. For example, the coordinate description of \(S^{\eta}\) on \(J^{2}\pi\) is
\[S^{\eta}=\eta_{i}\theta^{\alpha}\otimes\frac{\partial}{\partial u_{i}^{\alpha }}+\frac{1}{\#(ij)}\frac{\partial\eta_{i}}{\partial x^{j}}\theta^{\alpha} \otimes\frac{\partial}{\partial u_{(ij)}^{\alpha}}+\frac{2}{\#(ij)}\eta_{i} \theta_{j}^{\alpha}\otimes\frac{\partial}{\partial u_{(ij)}^{\alpha}}\]
and at any point \(j_{p}^{2}\phi\in J^{2}\pi\) the value of \(S^{\eta}\) depends, not just on the cotangent vector \(\eta_{p}\), but also on the derivative of the 1-form \(\eta\) at \(p\).
We can, however, deal with this problem by supposing that we are given a symmetric linear connection \(\nabla\) on \(M\); the infinitesimal parallel translation defined by \(\nabla\) will then provide enough information to specify the derivative of \(\eta\). The vertical endomorphism defined by the 1-form \(\eta\) (not necessarily closed) and the connection \(\nabla\) will be given in coordinates as
\[S^{\eta}_{\nabla}=\eta_{i}\theta^{\alpha}\otimes\frac{\partial}{\partial u_{i }^{\alpha}}+\frac{1}{\#(hj)}\eta_{i}\Gamma^{i}_{hj}\theta^{\alpha}\otimes \frac{\partial}{\partial u_{(hj)}^{\alpha}}+\frac{2}{\#(ij)}\eta_{i}\theta_{j }^{\alpha}\otimes\frac{\partial}{\partial u_{(ij)}^{\alpha}}\]
where \(\Gamma^{i}_{hj}\) are the connection coefficients of \(\nabla\), so that the mapping \(\eta\to S^{\eta}_{\nabla}\) will define a second order vertical tensor \(S_{\nabla}\), a section of the bundle \(\pi_{2}^{*}TM\otimes T^{*}J^{2}\pi\otimes TJ^{2}\pi\) over \(J^{2}\pi\).
Formally, as the 1-form \(\eta\) is a section of the cotangent bundle \(\tau:T^{*}M\to M\), we regard the connection \(\nabla\) as a linear Ehresmann connection \(\Gamma:T^{*}M\to J^{1}\tau\), a section of the jet bundle \(\tau_{1,0}:J^{1}\tau\to T^{*}M\), so that the connection coefficients \(\Gamma^{i}_{hj}\) are just the jet coordinates of \(\Gamma\). (Of course the connection \(\nabla\) also defines a linear Ehresmann connection on the tangent bundle, but there the jet coordinates are \(-\Gamma^{i}_{hj}\).) For each \(j_{p}^{2}\phi\in J^{2}\pi\) we may choose a closed 1-form \(\zeta\) in a neighbourhood of \(p\) satisfying \(j_{p}^{1}\zeta=\Gamma(\eta|_{p})\), for example
the form given in coordinates centred on \(p\) by \(\zeta=d\big{(}\eta_{i}(p)x^{i}+\frac{1}{2}\eta_{i}(p)\Gamma^{i}_{hj}(p)x^{h}x^{j} \big{)}\), and put
\[S^{\eta}_{\nabla}|_{j^{2}_{p}\phi}=S^{\zeta}|_{j^{2}_{p}\phi}\in(T^{*}J^{2}\pi \otimes TJ^{2}\pi)_{j^{2}_{p}\phi}\,.\]
Doing this at each point of \(J^{2}\pi\) now gives a well defined vertical endomorphism \(S^{\eta}_{\nabla}\) for an arbitrary \(1\)-form \(\eta\in\Omega^{1}(M)\), and so we can construct a second order vertical tensor with coordinate expression
\[S_{\nabla}=\partial_{i}\otimes\left(\theta^{\alpha}\otimes\frac{\partial}{ \partial u^{\alpha}_{i}}+\frac{1}{\#(hj)}\Gamma^{i}_{hj}\theta^{\alpha}\otimes \frac{\partial}{\partial u^{\alpha}_{(hj)}}+\frac{2}{\#(ij)}\theta^{\alpha}_{j }\otimes\frac{\partial}{\partial u^{\alpha}_{(ij)}}\right). \tag{6.1}\]
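In particular (our own remark), in coordinates for which the connection coefficients vanish, \(\Gamma^{i}_{hj}=0\), the expression (6.1) reduces to \(S_{\nabla}=\partial_{i}\otimes S^{dx^{i}}\), the vertical endomorphisms of the closed coordinate 1-forms \(dx^{i}\).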
A similar procedure may be carried out for higher order vertical endomorphisms, but requires the use of semiholonomic jets to allow for symmetrization. For example, in the third order case we use the connection map \(\Gamma:T^{*}M\to J^{1}\tau\), regarded as a bundle morphism \(\tau\to\tau_{1}\) over the identity on \(M\), and its prolongation \(j^{1}\Gamma:J^{1}\tau\to J^{1}\tau_{1}\). The composition \(j^{1}\Gamma\circ\Gamma\) then takes its values in the semiholonomic manifold \(\widehat{J}^{2}\tau\subset J^{1}\tau_{1}\)[19, Section 5.3], so that if \(\mathrm{p}_{2}:\widehat{J}^{2}\tau\to J^{2}\tau\) is the symmetrization projection then we may use
\[\Gamma_{2}=\mathrm{p}_{2}\circ j^{1}\Gamma\circ\Gamma:T^{*}M\to J^{2}\tau\]
as the map which allows us to specify the first and second derivatives at \(p\) of the closed local \(1\)-form \(\zeta\) by setting \(j^{2}_{p}\zeta=\Gamma_{2}(\eta|_{p})\).
More generally, we construct the maps \(\Gamma_{l}\) recursively. Suppose we have the map \(\Gamma_{l-1}:T^{*}M\to J^{l-1}\tau\), and that it is a section of \(\tau_{l-1,0}\) with the property that \(j^{1}\Gamma_{l-1}\circ\Gamma\) takes its values in the semiholonomic manifold \(\widehat{J}^{l}\tau\subset J^{1}\tau_{l-1}\), so that we may set
\[\Gamma_{l}=\mathrm{p}_{l}\circ j^{1}\Gamma_{l-1}\circ\Gamma:T^{*}M\to J^{l}\tau\,.\]
We note first that
\[\tau_{l,l-1}\circ\Gamma_{l} =(\tau_{l-1})_{1,0}\circ i_{1,l-1}\circ\Gamma_{l}\] \[=(\tau_{l-1})_{1,0}\circ i_{1,l-1}\circ\mathrm{p}_{l}\circ j^{1} \Gamma_{l-1}\circ\Gamma\] \[=(\tau_{l-1})_{1,0}\circ j^{1}\Gamma_{l-1}\circ\Gamma\] \[=\Gamma_{l-1}\circ\tau_{1,0}\circ\Gamma\] \[=\Gamma_{l-1}\,,\]
so that
\[\tau_{l,0}\circ\Gamma_{l}=\tau_{l-1,0}\circ\tau_{l,l-1}\circ\Gamma_{l}=\tau_{ l-1,0}\circ\Gamma_{l-1}=\mathrm{id}_{T^{*}M}\]
and therefore that \(\Gamma_{l}\) is a section of \(\tau_{l,0}:J^{l}\tau\to T^{*}M\). We must also check that \(j^{1}\Gamma_{l}\circ\Gamma\) takes its values in the semiholonomic manifold \(\widehat{J}^{l+1}\tau\), the submanifold of \(J^{1}\tau_{l}\) given by
equality of the two maps \(j^{1}\tau_{l,l-1}\) and \(i_{1,l-1}\circ(\tau_{l})_{1,0}\) to \(J^{1}\tau_{l-1}\)[19, Section 6.2]; but at any point \(j^{1}_{p}\omega\in J^{1}\tau\) we know that
\[\big{(}j^{1}\tau_{l,l-1}\circ j^{1}\Gamma_{l}\big{)}(j^{1}_{p}\omega) =j^{1}(\tau_{l,l-1}\circ\Gamma_{l})(j^{1}_{p}\omega)\] \[=j^{1}\Gamma_{l-1}(j^{1}_{p}\omega)\]
and
\[\big{(}i_{1,l-1}\circ(\tau_{l})_{1,0}\circ j^{1}\Gamma_{l}\big{)} (j^{1}_{p}\omega) =\big{(}i_{1,l-1}\circ(\tau_{l})_{1,0}\big{)}\big{(}j^{1}_{p}( \Gamma_{l}\circ\omega)\big{)}\] \[=i_{1,l-1}\big{(}\Gamma_{l}(\omega(p))\big{)}\] \[=j^{1}\Gamma_{l-1}\big{(}\Gamma(\omega(p))\big{)}\,,\]
so that if \(j^{1}_{p}\omega\) is in the image of \(\Gamma\) then \(j^{1}_{p}\omega=\Gamma(\omega(p))\) and
\[\big{(}j^{1}\tau_{l,l-1}\circ j^{1}\Gamma_{l}\big{)}\big{(}\Gamma(\omega(p)) \big{)}=j^{1}\Gamma_{l-1}\big{(}\Gamma(\omega(p))\big{)}=\big{(}i_{1,l-1}\circ (\tau_{l})_{1,0}\circ j^{1}\Gamma_{l}\big{)}\big{(}\Gamma(\omega(p))\big{)}\]
as required. We may therefore define \(\Gamma_{l+1}=\mathrm{p}_{l+1}\circ j^{1}\Gamma_{l}\circ\Gamma\) and continue the process. The recursion starts with \(l=2\) and \(\Gamma_{1}=\Gamma:T^{*}M\to J^{1}\tau\), or even degenerately with \(l=1\) and \(\Gamma_{0}=\mathrm{id}_{T^{*}M}:T^{*}M\to J^{0}\tau=T^{*}M\).
To find a coordinate expression for these maps, let \((x^{i},y_{j})\) be the coordinates on \(T^{*}M\), so that the jet coordinates on \(J^{1}\tau\) are \(y_{ij}\) and on \(J^{l}\tau\) are \(y_{iJ}\). As the connection is linear and symmetric, we see that \(y_{ij}\circ\Gamma=y_{h}\Gamma^{h}_{ij}\) with \(y_{ji}\circ\Gamma=y_{ij}\circ\Gamma\), and in general if \(y_{iJ}\circ\Gamma_{l}=y_{h}\Gamma^{h}_{J+1_{i}}\) then
\[y_{iJj}\circ j^{1}\Gamma_{l}=y_{h}\frac{\partial\Gamma^{h}_{J+1_{i}}}{ \partial x^{j}}+y_{hj}\Gamma^{h}_{J+1_{i}}\]
so that
\[y_{iJj}\circ j^{1}\Gamma_{l}\circ\Gamma=y_{g}\bigg{(}\frac{\partial\Gamma^{g} _{J+1_{i}}}{\partial x^{j}}+\Gamma^{g}_{hj}\Gamma^{h}_{J+1_{i}}\bigg{)}\,;\]
the coordinates \(y_{iJ+1_{j}}\circ\Gamma_{l+1}\) may then be obtained by symmetrization. In the degenerate case, the coordinates of \(\Gamma_{0}\) are of course \(\Gamma^{h}_{i}=\delta^{h}_{i}\).
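For instance (our own unwinding of the first step, \(l=2\), taking the symmetrization to be over the two derivative indices \(j\) and \(k\)):
\[y_{ijk}\circ\Gamma_{2}=y_{h}\Gamma^{h}_{ijk}\,,\qquad\Gamma^{h}_{ijk}=\frac{1}{2}\biggl{(}\frac{\partial\Gamma^{h}_{ij}}{\partial x^{k}}+\Gamma^{h}_{gk}\Gamma^{g}_{ij}+\frac{\partial\Gamma^{h}_{ik}}{\partial x^{j}}+\Gamma^{h}_{gj}\Gamma^{g}_{ik}\biggr{)}\,.\]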
We have, therefore, obtained the following result.
**Theorem 4**.: _Let \(\pi:E\to M\) be a fibred manifold, and let \(\nabla\) be a symmetric linear connection on \(M\). On any jet manifold \(J^{k}\pi\) there is a canonical vertical tensor \(S_{\nabla}\) defined in the following way. If \(\eta\in\Omega^{1}(M)\) and \(j^{k}_{p}\phi\in J^{k}\pi\), let \(\zeta\) be any local closed \(1\)-form defined in a neighbourhood of \(p\) satisfying \(j^{k-1}_{p}\zeta=\Gamma_{k-1}(\eta|_{p})\) (for example, a \(1\)-form defined using a polynomial in coordinates \(x^{i}\) centred on \(p\)) and put \(S^{\eta}_{\nabla}|_{j^{k}_{p}\phi}=S^{\zeta}|_{j^{k}_{p}\phi}\). Then \(S^{\eta}_{\nabla}|_{j^{k}_{p}\phi}\) is independent of the choice of \(\zeta\). The resulting map \(j^{k}_{p}\phi\mapsto S^{\eta}_{\nabla}|_{j^{k}_{p}\phi}\) is a vertical endomorphism depending at each point \(j^{k}_{p}\phi\) only on the cotangent vector \(\eta|_{p}\), and so defines a vertical tensor \(\eta\mapsto S^{\eta}_{\nabla}\)._
The coordinate expression of \(S_{\nabla}\) is
\[S_{\nabla}=\partial_{h}\otimes\sum_{|J|+|K|\leq k-1}\frac{(J+K+1_{i})!}{J!\,K! \,(|K|+1)}\Gamma^{h}_{K+1_{i}}\theta^{\alpha}_{J}\otimes\frac{\partial}{ \partial u^{\alpha}_{J+K+1_{i}}}\,,\]
and combining the sums over the index \(i\) and the multi-index \(K\) in the usual way then gives
\[S_{\nabla}=\partial_{h}\otimes\sum_{\begin{subarray}{c}|J|+|K|\leq k\\ |K|>0\end{subarray}}\frac{(J+K)!}{J!\,K!}\Gamma^{h}_{K}\theta^{\alpha}_{J} \otimes\frac{\partial}{\partial u^{\alpha}_{J+K}}\,.\]
A similar formula, without an upper bound on the length of the multi-indices, may be used on \(J^{\infty}\pi\) for the map \(\pi^{*}_{\infty}T^{*}M\otimes T^{*}J^{\infty}\pi\to TJ^{\infty}\pi\).
## 7 Infinitesimal nonholonomic projections
As mentioned earlier, two possible approaches to defining global Lepage equivalents for higher order Lagrangians involve using either connections, or tubular neighbourhoods of holonomic jet manifolds inside nonholonomic jet manifolds. We have remarked that the latter approach really involves only the infinitesimal projection defined by the tubular neighbourhood at points of the holonomic submanifold, and we can now see that the existence of vertical tensors allows the two approaches to be related: a symmetric linear connection on the base manifold will define an infinitesimal nonholonomic projection \(TJ^{1}\pi_{k-1}\to TJ^{k}\pi\) for \(k\geq 2\).
The simplest example is in the second order case, where \(\mathrm{i}_{1,1}:J^{2}\pi\to J^{1}\pi_{1}\) is the canonical inclusion. We start with a point \(j^{2}_{p}\phi\in J^{2}\pi\) and a tangent vector \(\xi\in T_{j^{2}_{p}\phi}J^{1}\pi_{1}\) which is vertical over \(J^{1}\pi\), so that \(\xi\in V_{j^{2}_{p}\phi}(\pi_{1})_{1,0}\). We then apply the isomorphism
\[\mathsf{A}:V(\pi_{1})_{1,0}\to(\pi_{1})^{*}_{1}T^{*}M\otimes(\pi_{1})^{*}_{1, 0}V\pi_{1}\]
arising from the affine structure of \((\pi_{1})_{1,0}:J^{1}\pi_{1}\to J^{1}\pi\) (restricted to points of \(J^{2}\pi\)) and follow this by \(S_{\nabla}\), giving a map
\[p_{\nabla}=S_{\nabla}\circ\mathsf{A}:V_{J^{2}\pi}(\pi_{1})_{1,0}\to V\pi_{2,0}\]
so that \(p_{\nabla}(\xi)\in V_{j^{2}_{p}\phi}\pi_{2,0}\).
There are, of course, many possible extensions of \(p_{\nabla}\) to a map \(T_{J^{2}\pi}J^{1}\pi_{1}\to TJ^{2}\pi\); but there is precisely one such extension satisfying the requirement that \(p_{\nabla}\circ T\mathrm{i}_{1,1}=\mathrm{id}_{TJ^{2}\pi}\). We may see this by looking at coordinate representations. At any point \(\mathrm{i}_{1,1}(j^{2}_{p}\phi)\in J^{1}\pi_{1}\)
\[\mathsf{A}\!\left(\frac{\partial}{\partial u^{\alpha}_{\cdot j}}\right)=dx^{j} \otimes\frac{\partial}{\partial u^{\alpha}}\,,\qquad\mathsf{A}\! \left(\frac{\partial}{\partial u^{\alpha}_{ij}}\right)=dx^{j}\otimes \frac{\partial}{\partial u^{\alpha}_{i}}\,,\]
and composing with \(S_{\nabla}\) as presented in formula (6.1) in the previous Section gives
\[p_{\nabla}\!\left(\frac{\partial}{\partial u^{\alpha}_{\cdot j}}\right)=\frac{\partial} {\partial u^{\alpha}_{j}}+\frac{1}{\#(ik)}\Gamma^{j}_{ik}\frac{\partial}{ \partial u^{\alpha}_{(ik)}}\,,\qquad p_{\nabla}\!\left(\frac{\partial}{ \partial u^{\alpha}_{ij}}\right)=\frac{1}{\#(ij)}\frac{\partial}{\partial u^ {\alpha}_{(ij)}}\,.\]
(Nominally the image of, say,
\[\left.\frac{\partial}{\partial u^{\alpha}_{ij}}\right|_{\mathrm{i}_{1,1}(j^{2}_{p}\phi)}\mapsto dx^{j}|_{p}\otimes\left.\frac{\partial}{\partial u^{\alpha}_{i}}\right|_{j^{1}_{p}\phi}\in T^{*}_{p}M\otimes T_{j^{1}_{p}\phi}J^{1}\pi\]
is not directly in the domain of \(S_{\nabla}\); but as \(S_{\nabla}\) incorporates the projection \(T\pi_{2,1}:TJ^{2}\pi\to TJ^{1}\pi\) we may represent that image by an element of \(T^{*}_{p}M\otimes T_{j^{2}_{p}\phi}J^{2}\pi\) without ambiguity.) Noting now that
\[T\mathrm{i}_{1,1}\!\left(\frac{\partial}{\partial x^{i}}\right)=\frac{\partial}{\partial x^{i}}\,,\qquad T\mathrm{i}_{1,1}\!\left(\frac{\partial}{\partial u^{\alpha}}\right)=\frac{\partial}{\partial u^{\alpha}_{..}}\,,\qquad T\mathrm{i}_{1,1}\!\left(\frac{\partial}{\partial u^{\alpha}_{i}}\right)=\frac{\partial}{\partial u^{\alpha}_{i.}}+\frac{\partial}{\partial u^{\alpha}_{.i}}\,,\]
\[T\mathrm{i}_{1,1}\!\left(\frac{\partial}{\partial u^{\alpha}_{(ij)}}\right)=\frac{\partial}{\partial u^{\alpha}_{ij}}+\frac{\partial}{\partial u^{\alpha}_{ji}}\ (i\neq j)\,,\qquad T\mathrm{i}_{1,1}\!\left(\frac{\partial}{\partial u^{\alpha}_{(ii)}}\right)=\frac{\partial}{\partial u^{\alpha}_{ii}}\,,\]
we see that these requirements determine the extension uniquely.
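As a quick consistency check (our own), for \(i\neq j\) we have
\[p_{\nabla}\Bigl{(}T\mathrm{i}_{1,1}\Bigl{(}\frac{\partial}{\partial u^{\alpha}_{(ij)}}\Bigr{)}\Bigr{)}=p_{\nabla}\Bigl{(}\frac{\partial}{\partial u^{\alpha}_{ij}}+\frac{\partial}{\partial u^{\alpha}_{ji}}\Bigr{)}=\frac{2}{\#(ij)}\,\frac{\partial}{\partial u^{\alpha}_{(ij)}}=\frac{\partial}{\partial u^{\alpha}_{(ij)}}\,,\]
as required by \(p_{\nabla}\circ T\mathrm{i}_{1,1}=\mathrm{id}\).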
This gives us the following result.
**Theorem 5**.: _Let \(\pi:E\to M\) be a fibred manifold, and let \(\nabla\) be a symmetric linear connection on \(M\). For each nonholonomic jet manifold \(J^{1}\pi_{k-1}\) there is a unique infinitesimal projection \(p_{\nabla}\) satisfying \(p_{\nabla}\circ T\mathrm{i}_{1,k-1}=\mathrm{id}_{TJ^{k}\pi}\) and \(p_{\nabla}|_{T\widehat{J}^{k}\pi}=T\mathrm{p}_{k}\), constructed by composing the isomorphism_
\[\mathsf{A}:V(\pi_{k-1})_{1,0}\to(\pi_{k-1})_{1}^{*}T^{*}M\otimes(\pi_{k-1})_{1, 0}^{*}V\pi_{k-1}\]
_arising from the affine structure of \((\pi_{k-1})_{1,0}:J^{1}\pi_{k-1}\to J^{k-1}\pi\) (restricted to points of \(J^{k}\pi\)) with the vertical tensor \(S_{\nabla}\) on \(J^{k}\pi\)._
## 8 Homotopy operators for the horizontal differential
We have seen that homotopy operators for the horizontal differential play an important part in the construction of Lepage equivalents satisfying the closure condition (and, indeed, of at most 1-contact Lepage equivalents in general), and that locally such homotopy operators can be constructed using vertical endomorphisms. We have also noted that global homotopy operators (depending on a choice of a symmetric linear connection) have been shown to exist, but the construction in [1] uses a quite different method, relating the horizontal differential on forms to an operator acting on evolutionary vector fields. It is therefore of some interest to see whether a global homotopy operator can be constructed directly for differential forms by using vertical tensors. I conjecture that this can be done, and offer a possible method of doing so. The proposed formula has been checked for small values of the parameters \(p\), \(q\) and \(r\) (see the Appendix for an example calculation); although the general result might be amenable to a direct calculation, there may well be a more geometric method of approaching it.
There are three ingredients in the proposed formula, which mimics the local formula described above. The vertical tensor \(S_{\nabla}\), regarded as a map \(\Omega^{p,q}\to\mathfrak{X}(M)\otimes\Omega^{p,q}\), has already been specified, and this can be iterated to give a map \(S_{\nabla}^{r}:\Omega^{p,q}\to\odot^{r}\mathfrak{X}(M)\otimes\Omega^{p,q}\), where \(\odot^{r}\mathfrak{X}(M)\) denotes the symmetric multivector fields on \(M\). We shall also need a covariant version of the horizontal differential, which we shall denote \(d_{\mathrm{h}\nabla}\); this will be a map \(\odot^{r}\mathfrak{X}(M)\otimes\Omega^{p,q}\to\odot^{r}\mathfrak{X}(M)\otimes \Omega^{p,q+1}\), given on basis tensors by
\[d_{\mathrm{h}\nabla}(X\otimes\omega)=\nabla X\wedge\omega+X\otimes d_{\mathrm{ h}}\omega\]
and extended by multilinearity, symmetry and skewsymmetry. The final ingredient will be an operator \(\mathsf{C}:\odot^{r}\mathfrak{X}(M)\otimes\Omega^{p,q}\to\odot^{r-1} \mathfrak{X}(M)\otimes\Omega^{p,q-1}\) contracting a vector component with a form component, again taking advantage of symmetry and skewsymmetry. The proposed homotopy operator is then \(P_{\nabla}:\Omega^{p,q}\to\Omega^{p,q-1}\) where
\[P_{\nabla}\omega=\sum_{r=0}^{\infty}\frac{(-1)^{r}(m-q)!}{p(m-q+r+1)r!}\big{(} \mathsf{C}\circ d_{\mathrm{h}\nabla}\big{)}^{r}\mathsf{C}\big{(}S_{\nabla}^{r +1}\omega\big{)}\,.\]
## 9 Discussion
One of the features of the approach taken in this paper is that it combines the use of finite and infinite jets. Variational problems are by their nature of finite order, and the various differential forms involved in their analysis are normally defined on a finite order jet manifold. Indeed, as we have seen, the properties of Lepage equivalents of first order and second order Lagrangians are rather different from those of higher order Lagrangians.
On the other hand, the variational bicomplex is best considered on the infinite jet manifold. In [1] a subcomplex called the Jacobian subcomplex which is projectable to a finite order jet manifold is shown after lengthy calculations to be locally exact; but no mention is made of a homotopy operator acting on forms which are not \(d_{\rm h}\)-closed. The operators \(\hat{P}\) and \(\tilde{P}\) described earlier, although acting on all the forms on each finite order jet manifold, generally increase their order. It seems to be the case that the complexity of ascertaining a bound on the order of the forms obscures the homotopy structure of the problem, and indeed the potential for a global solution. The alternative approach in [25], which involves a single homotopy operator for the variational derivative (and thus, essentially, for the vertical differential) avoids this problem, but then cannot reduce to the classical fundamental Lepage equivalent for first order Lagrangians; in addition, global versions are likely to be constrained by topological considerations.
The investigations in the second half of the paper suggest that vertical endomorphisms, when glued together as a vertical tensor using a symmetric linear connection, could be a significant part of the geometry of the jet bundle structure on a fibred manifold. If the conjecture that they define a global homotopy operator for \(d_{\rm h}\) is correct, then the simple formula
\[\vartheta_{\lambda,\nabla}=\sum_{p=0}^{m}(-P_{\nabla}d_{\rm v})^{p}\lambda\]
will give a Lepage equivalent of the Lagrangian \(\lambda\) satisfying the closure property without the need for a separate choice of \(\vartheta_{\lambda}^{(1)}\) to start the recursion. There will, though, be the question of whether the truncated form \(\lambda-(P_{\nabla}d_{\rm v})\lambda\) is the same as the Poincare-Cartan form constructed using the infinitesimal projections \(p_{\nabla}\).
A final observation is that we have not explicitly addressed the question of whether it is possible to find, for second order Lagrangians, a geometrical construction of a Lepage equivalent satisfying the closure condition independently of any connection, as can be done for the Poincare-Cartan form and the Caratheodory form. I suspect that this will not be the case.
## Acknowledgements
I should like to acknowledge correspondence with Nicoleta Voicu which encouraged me to return to this topic after a number of years. Some results from this paper were presented at a meeting in Torino in honour of Marco Ferraris in June 2023, and at the International Summer School on Global Analysis and Applications in Presov in August 2023.
## Appendix: An example calculation
We consider the form \(\omega=f^{i}_{\alpha m}\,dx^{m}\wedge\theta^{\alpha}_{i}\) where \(p=q=1\) and the form is projectable to \(J^{2}\pi\), so that the formula is
\[P_{\nabla}\omega=\sum_{r=0}^{\infty}\frac{(-1)^{r}(m-1)!}{(m+r)r!}\big{(} \mathsf{C}\circ d_{\mathrm{h}\nabla}\big{)}^{r}\mathsf{C}\big{(}S_{\nabla}^{r+ 1}\omega\big{)}\,.\]
We obtain
\[d_{\mathrm{h}}\omega=(d_{l}f^{i}_{\alpha m})dx^{l}\wedge dx^{m}\wedge\theta^{ \alpha}_{i}+f^{i}_{\alpha m}dx^{l}\wedge dx^{m}\wedge\theta^{\alpha}_{(il)}\]
so that
\[S_{\nabla}(d_{\mathrm{h}}\omega) =(d_{l}f^{i}_{\alpha m})\partial_{i}\otimes dx^{l}\wedge dx^{m} \wedge\theta^{\alpha}+f^{i}_{\alpha m}\Gamma^{k}_{il}\partial_{k}\otimes dx^{ l}\wedge dx^{m}\wedge\theta^{\alpha}\] \[\qquad+f^{h}_{\alpha m}\partial_{k}\otimes dx^{k}\wedge dx^{m} \wedge\theta^{\alpha}_{h}+f^{k}_{\alpha m}\partial_{k}\otimes dx^{h}\wedge dx ^{m}\wedge\theta^{\alpha}_{h}\,,\] \[\mathsf{C}S_{\nabla}(d_{\mathrm{h}}\omega) =(d_{i}f^{i}_{\alpha j})dx^{j}\wedge\theta^{\alpha}-(d_{j}f^{i}_{ \alpha i})dx^{j}\wedge\theta^{\alpha}+f^{i}_{\alpha j}\Gamma^{k}_{ik}dx^{j} \wedge\theta^{\alpha}-f^{i}_{\alpha k}\Gamma^{k}_{ij}dx^{j}\wedge\theta^{\alpha}\] \[\qquad+m\,\omega-f^{i}_{\alpha i}dx^{j}\wedge\theta^{\alpha}_{j}\]
and
\[S_{\nabla}^{2}(d_{\mathrm{h}}\omega) =2\partial_{i}\otimes\partial_{l}\otimes\left(f^{i}_{\alpha m}dx ^{l}\wedge dx^{m}\wedge\theta^{\alpha}\right),\] \[\mathsf{C}S_{\nabla}^{2}(d_{\mathrm{h}}\omega) =2m\partial_{i}\otimes\left(f^{i}_{\alpha j}dx^{j}\wedge\theta^{ \alpha}\right)-2\partial_{j}\otimes\left(f^{i}_{\alpha i}dx^{j}\wedge\theta^{ \alpha}\right),\] \[\tfrac{1}{2}d_{\mathrm{h}\nabla}\mathsf{C}S_{\nabla}^{2}d_{ \mathrm{h}}\omega =m\Gamma^{k}_{ih}\partial_{k}\otimes dx^{h}\wedge\left(f^{i}_{ \alpha j}dx^{j}\wedge\theta^{\alpha}\right)+m\partial_{i}\otimes\left((d_{k}f ^{i}_{\alpha j})dx^{k}\wedge dx^{j}\wedge\theta^{\alpha}\right)\] \[\qquad+m\partial_{i}\otimes\left(f^{i}_{\alpha j}dx^{k}\wedge dx^ {j}\wedge\theta^{\alpha}_{k}\right)-\Gamma^{k}_{jh}\partial_{k}\otimes dx^{h} \wedge\left(f^{i}_{\alpha i}dx^{j}\wedge\theta^{\alpha}\right)\] \[\qquad-\partial_{j}\otimes\left((d_{k}f^{i}_{\alpha i})dx^{k} \wedge dx^{j}\wedge\theta^{\alpha}\right)-\partial_{j}\otimes\left(f^{i}_{ \alpha i}dx^{k}\wedge dx^{j}\wedge\theta^{\alpha}_{k}\right),\] \[\tfrac{1}{2}\mathsf{C}d_{\mathrm{h}\nabla}\mathsf{C}S_{\nabla}^{2} (d_{\mathrm{h}}\omega) =m\Gamma^{k}_{ik}f^{i}_{\alpha j}dx^{j}\wedge\theta^{\alpha}-m \Gamma^{k}_{ij}dx^{j}\wedge f^{i}_{\alpha k}\theta^{\alpha}\] \[\qquad+m(d_{i}f^{i}_{\alpha j})dx^{j}\wedge\theta^{\alpha}+m \omega-(d_{j}f^{i}_{\alpha i})dx^{j}\wedge\theta^{\alpha}-f^{i}_{\alpha i}dx^{j }\wedge\theta^{\alpha}_{j}\,.\]
On the other hand,
\[S_{\nabla}\omega =\partial_{i}\otimes\left(f^{i}_{\alpha m}dx^{m}\wedge\theta^{ \alpha}\right),\] \[\mathsf{C}S_{\nabla}\omega =f^{i}_{\alpha i}\theta^{\alpha}\,,\] \[d_{\mathrm{h}}(\mathsf{C}S_{\nabla}\omega) =(d_{j}f^{i}_{\alpha i})dx^{j}\wedge\theta^{\alpha}+f^{i}_{\alpha i }dx^{j}\wedge\theta^{\alpha}_{j}\]
so that
\[\tfrac{1}{2}\mathsf{C}d_{\mathrm{h}\nabla}\mathsf{C}S^{2}_{\nabla}(d_{ \mathrm{h}}\omega)+d_{\mathrm{h}}(\mathsf{C}S_{\nabla}\omega)=m\Gamma^{k}_{ik}f^ {i}_{\alpha j}dx^{j}\wedge\theta^{\alpha}-m\Gamma^{k}_{ij}dx^{j}\wedge f^{i}_{ \alpha k}\theta^{\alpha}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+m(d_{i}f^ {i}_{\alpha j})dx^{j}\wedge\theta^{\alpha}+m\omega\,.\]
But from
\[\mathsf{C}S_{\nabla}(d_{\mathrm{h}}\omega)=(d_{i}f^{i}_{\alpha j})dx^{j}\wedge \theta^{\alpha}+f^{i}_{\alpha j}\Gamma^{k}_{ik}dx^{j}\wedge\theta^{\alpha}-f^{ i}_{\alpha k}\Gamma^{k}_{ij}dx^{j}\wedge\theta^{\alpha}+m\,\omega-d_{\mathrm{h}}( \mathsf{C}S_{\nabla}\omega)\]
we see that
\[\tfrac{1}{2}\mathsf{C}d_{\mathrm{h}\nabla}\mathsf{C}S^{2}_{\nabla}(d_{ \mathrm{h}}\omega)=m\,\mathsf{C}S_{\nabla}d_{\mathrm{h}}\omega-m(m-1)\omega+( m-1)d_{\mathrm{h}}\mathsf{C}S_{\nabla}\omega\]
so that
\[\omega=\bigg{(}\frac{1}{m-1}\mathsf{C}S_{\nabla}-\frac{1}{2m(m-1)}\mathsf{C}d _{\mathrm{h}\nabla}\mathsf{C}S^{2}_{\nabla}\bigg{)}d_{\mathrm{h}}\omega+d_{ \mathrm{h}}\bigg{(}\frac{1}{m}\mathsf{C}S_{\nabla}\omega\bigg{)}\,.\]
|
2305.13245 | GQA: Training Generalized Multi-Query Transformer Models from Multi-Head
Checkpoints | Multi-query attention (MQA), which only uses a single key-value head,
drastically speeds up decoder inference. However, MQA can lead to quality
degradation, and moreover it may not be desirable to train a separate model
just for faster inference. We (1) propose a recipe for uptraining existing
multi-head language model checkpoints into models with MQA using 5% of original
pre-training compute, and (2) introduce grouped-query attention (GQA), a
generalization of multi-query attention which uses an intermediate (more than
one, less than number of query heads) number of key-value heads. We show that
uptrained GQA achieves quality close to multi-head attention with comparable
speed to MQA. | Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, Sumit Sanghai | 2023-05-22T17:16:38Z | http://arxiv.org/abs/2305.13245v3 | # GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints
###### Abstract
Multi-query attention (MQA), which only uses a single key-value head, drastically speeds up decoder inference. However, MQA can lead to quality degradation, and moreover it may not be desirable to train a separate model just for faster inference. We (1) propose a recipe for uptraining existing multi-head language model checkpoints into models with MQA using 5% of original pre-training compute, and (2) introduce grouped-query attention (GQA), a generalization of multi-query attention which uses an intermediate (more than one, less than number of query heads) number of key-value heads. We show that uptrained GQA achieves quality close to multi-head attention with comparable speed to MQA.
## 1 Introduction
Autoregressive decoder inference is a severe bottleneck for Transformer models due to the memory bandwidth overhead from loading decoder weights and all attention keys and values at every decoding step (Shazeer, 2019; Pope et al., 2022; de Jong et al., 2022). The memory bandwidth from loading keys and values can be sharply reduced through _multi-query attention_ (Shazeer, 2019), which uses multiple query heads but single key and value heads.
However, multi-query attention (MQA) can lead to quality degradation and training instability, and it may not be feasible to train separate models optimized for quality and inference. Moreover, while some language models already use multi-query attention, such as PaLM (Chowdhery et al., 2022), many do not, including publicly available language models such as T5 (Raffel et al., 2020) and LLaMA (Touvron et al., 2023).
This work contains two contributions for faster inference with large language models. First, we show that language model checkpoints with multi-head attention (MHA) can be _uptrained_ (Komatsuzaki et al., 2022) to use MQA with a small fraction of original training compute. This presents a cost-effective method to obtain fast multi-query as well as high-quality MHA checkpoints.
Second, we propose grouped-query attention (GQA), an interpolation between multi-head and multi-query attention with single key and value heads _per subgroup of query heads_. We show that uptrained GQA achieves quality close to multi-head attention while being almost as fast as multi-query attention.
## 2 Method
### Uptraining
Generating a multi-query model from a multi-head model takes place in two steps: first, converting the checkpoint, and second, additional pre-training to allow the model to adapt to its new structure. Figure 1 shows the process for converting a multi-head checkpoint into a multi-query checkpoint. The projection matrices for key and value heads are mean pooled into single projection matrices, which we find works better than selecting a single key and value head or randomly initializing new key and value heads from scratch.
Figure 1: Overview of conversion from multi-head to multi-query attention. Key and value projection matrices from all heads are mean pooled into a single head.

The converted checkpoint is then pre-trained for a small proportion \(\alpha\) of its original training steps on the same pre-training recipe.
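The conversion step itself is only a few lines of array manipulation. Below is a minimal NumPy sketch of mean-pooling per-head key and value projections into grouped heads; the array names and the (num_heads, d_model, d_head) layout are our own assumptions for illustration, not Flaxformer's actual checkpoint format.

```python
import numpy as np

def mean_pool_kv_heads(w_k, w_v, num_groups):
    """Mean-pool per-head K/V projection matrices into num_groups heads.

    w_k, w_v: arrays of shape (num_heads, d_model, d_head).
    num_groups = 1 yields MQA; num_groups = num_heads leaves MHA unchanged.
    """
    num_heads = w_k.shape[0]
    assert num_heads % num_groups == 0
    per_group = num_heads // num_groups
    # Average the projection matrices of the heads within each group.
    w_k_g = w_k.reshape(num_groups, per_group, *w_k.shape[1:]).mean(axis=1)
    w_v_g = w_v.reshape(num_groups, per_group, *w_v.shape[1:]).mean(axis=1)
    return w_k_g, w_v_g
```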
### Grouped-query attention
Grouped-query attention divides query heads into \(G\)_groups_, each of which shares a single key head and value head. GQA-\(G\) refers to grouped-query attention with \(G\) groups. GQA-1, with a single group and therefore a single key and value head, is equivalent to MQA, while GQA-\(H\), with groups equal to the number of query heads, is equivalent to MHA. Figure 2 shows a comparison of grouped-query attention and multi-head/multi-query attention. When converting a multi-head checkpoint to a GQA checkpoint, we construct each group key and value head by mean-pooling all the original heads within that group.
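To make the grouping concrete, here is a minimal NumPy sketch of a grouped-query attention forward pass for a single example (no masking or batching); the shapes and function name are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def gqa_attention(q, k, v, num_groups):
    """Grouped-query attention: query heads share one K/V head per group.

    q: (num_heads, seq_q, d_head); k, v: (num_groups, seq_k, d_head).
    """
    num_heads, _, d_head = q.shape
    per_group = num_heads // num_groups
    # Broadcast each shared K/V head to all query heads in its group.
    k = np.repeat(k, per_group, axis=0)  # -> (num_heads, seq_k, d_head)
    v = np.repeat(v, per_group, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v  # (num_heads, seq_q, d_head)
```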
An intermediate number of groups leads to an interpolated model that is higher quality than MQA but faster than MHA, and, as we will show, represents a favorable trade-off. Going from MHA to MQA reduces \(H\) key and value heads to a single key and value head, reducing the size of the key-value cache and therefore amount of data that needs to be loaded by a factor of \(H\). However, larger models generally scale the number of heads, such that multi-query attention represents a more aggressive cut in both memory bandwidth and capacity. GQA lets us keep the same proportional decrease in bandwidth and capacity as model size increases.
Moreover, larger models suffer relatively less from memory bandwidth overhead from attention, as the KV-cache scales with model dimension while model FLOPs and parameters scale with the square of model dimension. Finally, standard sharing for large models replicates the single key and value head by the number of model partitions (Pope et al., 2022); GQA removes the waste from such partitioning. Therefore, we expect GQA to present a particularly good trade-off for larger models.
## 3 Experiments
### Experimental setup
**Configurations.** All models are based on the T5.1.1 architecture (Raffel et al., 2020), implemented with JAX (Bradbury et al., 2018), Flax (Heek et al., 2020), and Flaxformer1. For our main experiments we consider T5 Large and XXL with multi-head attention, as well as uptrained versions of T5 XXL with multi-query and grouped-query attention. We apply MQA and GQA to decoder self-attention and cross-attention, but not encoder self-attention.
Footnote 1: [https://github.com/google/flaxformer](https://github.com/google/flaxformer)
**Uptraining.** Uptrained models are initialized from public T5.1.1 checkpoints. The key and value heads are mean-pooled to the appropriate MQA or GQA structure, and then pre-trained for a further \(\alpha\) proportion of original pre-training steps with the original pre-training setup (Raffel et al., 2020).
**Data.** We evaluate on the summarization datasets CNN/Daily Mail (Nallapati et al., 2016), arXiv and PubMed (Cohan et al., 2018), MediaSum (Zhu et al., 2021), and Multi-News (Fabbri et al., 2019); the translation dataset WMT 2014 English-to-German; and the question-answering dataset TriviaQA (Joshi et al., 2017). We do not evaluate on popular classification benchmarks such as GLUE (Wang et al., 2019), as autoregressive inference is less applicable for those tasks.
Figure 2: Overview of grouped-query method. Multi-head attention has H query, key, and value heads. Multi-query attention shares single key and value heads across all query heads. Grouped-query attention instead shares single key and value heads for each _group_ of query heads, interpolating between multi-head and multi-query attention.
**Fine-tuning.** For fine-tuning, we use a constant learning rate of 0.001, batch size 128, and dropout rate 0.1 for all tasks. CNN/Daily Mail and WMT use input length 512 and output length 256. Other summarization datasets use input length 2048 and output length 512. Finally, TriviaQA uses input length 2048 and output length 32. We train until convergence and select the checkpoint with the highest dev performance. We use greedy decoding for inference.
**Timing.** We report time per sample per TPUv4 chip, as measured by xprof (Google, 2020). For timing experiments we use 8 TPUs with the largest batch size that fits up to 32 per TPU, and parallelization optimized separately for each model.
### Main results
Figure 3 shows average performance over all datasets as a function of average inference time for MHA T5-Large and T5-XXL, and uptrained MQA and GQA-\(8\) XXL models with uptraining proportion \(\alpha=0.05\). We see that a larger uptrained MQA model provides a favorable trade-off relative to MHA models, with higher quality and faster inference than MHA-Large. Moreover, GQA achieves significant additional quality gains, reaching performance close to MHA-XXL with speed close to MQA. Table 1 contains full results for all datasets.
### Ablations
This section presents experiments to investigate the effect of different modeling choices. We evaluate performance on a representative subsample of tasks: CNN/Daily Mail (short-form summarization), MultiNews (long-form summarization), and TriviaQA (question answering).
**Checkpoint conversion.** Figure 4 compares the performance of different methods for checkpoint conversion. Mean pooling appears to work best, followed by selecting a single head and then random initialization. Intuitively, results are ordered by the degree to which information is preserved from the pre-trained model.

Table 1: Inference time and average dev set performance comparison of T5 Large and XXL models with multi-head attention, and 5% uptrained T5-XXL models with multi-query and grouped-query attention on summarization datasets CNN/Daily Mail, arXiv, PubMed, MediaSum, and MultiNews, translation dataset WMT, and question-answering dataset TriviaQA.

| **Model** | \(T_{infer}\) (s) | **Average** | **CNN** R\(_1\) | **arXiv** R\(_1\) | **PubMed** R\(_1\) | **MediaSum** R\(_1\) | **MultiNews** R\(_1\) | **WMT** BLEU | **TriviaQA** F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MHA-Large | 0.37 | 46.0 | 42.9 | 44.6 | 46.2 | 35.5 | 46.6 | 27.7 | 78.2 |
| MHA-XXL | 1.51 | 47.2 | 43.8 | 45.6 | 47.5 | 36.4 | 46.9 | 28.4 | 81.9 |
| MQA-XXL | 0.24 | 46.5 | 43.0 | 45.0 | 47.2 | 36.1 | 46.5 | 28.5 | 81.3 |
| GQA-8-XXL | 0.28 | 47.1 | 43.5 | 45.4 | 47.7 | 36.3 | 47.2 | 28.4 | 81.6 |

Figure 4: Performance comparison of different checkpoint conversion methods for T5-Large uptrained to MQA with proportion \(\alpha=0.05\). ‘Mean’ mean-pools key and value heads, ‘First’ selects the first head and ‘Random’ initializes heads from scratch.

Figure 3: **Uptrained MQA yields a favorable trade-off compared to MHA with higher quality and faster speed than MHA-Large, and GQA achieves even better performance with similar speed gains and comparable quality to MHA-XXL.** Average performance on all tasks as a function of average inference time per sample for T5-Large and T5-XXL with multi-head attention, and 5% uptrained T5-XXL with MQA and GQA-8 attention.
**Uptraining steps.** Figure 5 shows how performance varies with uptraining proportion for T5 XXL with MQA and GQA. First, we note that GQA already achieves reasonable performance after conversion, while MQA requires uptraining to be useful. Both MQA and GQA gain from 5% uptraining, with diminishing returns from 10%.
**Number of groups.** Figure 6 demonstrates the effect of the number of GQA groups on inference speed. For larger models, the memory bandwidth overhead from the KV cache is less constraining (Shazeer, 2019), while the reduction in key-value size is sharper due to the increased number of heads. As a result, increasing the number of groups from MQA only results in modest slowdowns initially, with increasing cost as we move closer to MHA. We selected 8 groups as a favorable middle ground.
## 4 Related Work
This work is focused on achieving a better trade-off between decoder quality and inference time through reducing the memory bandwidth overhead (Williams et al., 2009) from loading keys and values. Shazeer (2019) first proposed reducing this overhead through multi-query attention. Follow-up work showed that multi-query attention is especially helpful for long inputs (Pope et al., 2022; de Jong et al., 2022).

A number of other methods have been proposed to reduce memory bandwidth overhead from keys and values, as well as parameters. Flash attention (Dao et al., 2022) structures the attention computation to avoid materializing the quadratic attention scores, reducing memory and speeding up training. Quantization (Dettmers et al., 2022; Zeng et al., 2022; Frantar et al., 2022) reduces the size of weights and activations, including keys and values, by lowering precision. Model distillation (Hinton et al., 2015; Gou et al., 2021) instead reduces model size at a given precision, using data generated from the larger model to finetune the smaller model. Layer-sparse cross-attention (de Jong et al., 2022) eliminates most cross-attention layers, which make up the primary expense for longer inputs. Speculative sampling (Chen et al., 2023; Leviathan et al., 2022; Kim et al., 2023) ameliorates the memory bandwidth bottleneck by proposing multiple tokens with a smaller model, which are then scored in parallel by a larger model.
Finally, the uptraining procedure we propose is inspired by Komatsuzaki et al. (2022), which uptrains standard T5 checkpoints into sparsely activated Mixture-of-Experts models.
## 5 Conclusion
Language models are expensive for inference primarily due to the memory bandwidth overhead from loading keys and values. Multi-query attention reduces this overhead at the cost of decreased model capacity and quality. We propose to convert multi-head attention models to multi-query models with a small fraction of original pre-training compute. Moreover, we introduce grouped-query attention, an interpolation of multi-query and multi-head attention that achieves quality close to multi-head at comparable speed to multi-query attention.
Figure 5: Performance as a function of uptraining proportion for T5 XXL models with MQA and GQA-8.
Figure 6: Time per sample for GQA-XXL as a function of the number of GQA groups with input length 2048 and output length 512. Going from 1 (MQA) to 8 groups adds modest inference overhead, with increasing cost to adding more groups.
## Acknowledgements
We thank Santiago Ontañón, Afroz Mohiuddin, William Cohen, and others at Google Research for insightful advice and discussion.
|
2304.04410 | Differentially Private Numerical Vector Analyses in the Local and
Shuffle Model | Numerical vector aggregation plays a crucial role in privacy-sensitive
applications, such as distributed gradient estimation in federated learning and
statistical analysis of key-value data. In the context of local differential
privacy, this study provides a tight minimax error bound of
$O(\frac{ds}{n\epsilon^2})$, where $d$ represents the dimension of the
numerical vector and $s$ denotes the number of non-zero entries. By converting
the conditional/unconditional numerical mean estimation problem into a
frequency estimation problem, we develop an optimal and efficient mechanism
called Collision. In contrast, existing methods exhibit sub-optimal error rates
of $O(\frac{d^2}{n\epsilon^2})$ or $O(\frac{ds^2}{n\epsilon^2})$. Specifically,
for unconditional mean estimation, we leverage the negative correlation between
two frequencies in each dimension and propose the CoCo mechanism, which further
reduces estimation errors for mean values compared to Collision. Moreover, to
surpass the error barrier in local privacy, we examine privacy amplification in
the shuffle model for the proposed mechanisms and derive precisely tight
amplification bounds. Our experiments validate and compare our mechanisms with
existing approaches, demonstrating significant error reductions for frequency
estimation and mean estimation on numerical vectors. | Shaowei Wang, Jin Li, Yuntong Li, Jin Li, Wei Yang, Hongyang Yan | 2023-04-10T06:44:15Z | http://arxiv.org/abs/2304.04410v1 | # Differentially Private Numerical Vector Analyses
###### Abstract
Numerical vector aggregation plays a crucial role in privacy-sensitive applications, such as distributed gradient estimation in federated learning and statistical analysis of key-value data. In the context of local differential privacy, this study provides a tight minimax error bound of \(O(\frac{d\alpha}{n\epsilon^{2}})\), where \(d\) represents the dimension of the numerical vector and \(s\) denotes the number of non-zero entries. By converting the conditional/unconditional numerical mean estimation problem into a frequency estimation problem, we develop an optimal and efficient mechanism called Collision. In contrast, existing methods exhibit sub-optimal error rates of \(O(\frac{d^{2}}{n\epsilon^{2}})\) or \(O(\frac{d\alpha^{2}}{n\epsilon^{2}})\). Specifically, for unconditional mean estimation, we leverage the negative correlation between two frequencies in each dimension and propose the CoCo mechanism, which further reduces estimation errors for mean values compared to Collision. Moreover, to surpass the error barrier in local privacy, we examine privacy amplification in the shuffle model for the proposed mechanisms and derive precisely tight amplification bounds. Our experiments validate and compare our mechanisms with existing approaches, demonstrating significant error reductions for frequency estimation and mean estimation on numerical vectors.
data aggregation, local differential privacy, shuffle model, minimax error bound, mean estimation.
## 1 Introduction
With increasingly stringent data privacy regulations being enacted (e.g., the General Data Protection Regulation [1] in the European Union, the California Consumer Privacy Act, and the Civil Code of the People's Republic of China), local differential privacy (LDP) has emerged as the _de facto_ standard for preserving data privacy in decentralized settings. Stemming from the classical notion of differential privacy in the database community [2], LDP operates without trusting data aggregators or other third parties. It enables users/agents to sanitize their personal data locally (e.g., on mobile devices, or IoT sensors) and offers information-theoretically rigorous privacy protection. In comparison to cryptography-based privacy preservation approaches (e.g., homomorphic encryption [3], secure multi-party computation [4]), LDP is highly efficient and scalable for data aggregation involving millions or billions of users. Currently, many large internet service providers (such as Apple [5], Google [6], and Microsoft [7]) are implementing LDP for regulatory compliance during user data collection and analysis.
Additionally, to address the unacceptably high error barriers resulting from stringent LDP constraints, researchers have recently introduced the shuffle model [8, 9] of differential privacy. In this model, messages from users are randomly permuted (by a shuffler, e.g., anonymous channels, trusted hardwares, and edge servers) before being sent to the aggregator/analyzer. This breaks the linkage between users and their messages, allowing messages to be concealed among others. Privacy is thus amplified after shuffling, enabling a lower local privacy level to satisfy a relatively higher privacy level (from the aggregator's perspective).
Numerical vectors are commonly found in user data for various applications, such as gradient estimation in federated learning [10, 11], sensor readings [12], and service usage histories [13] for user profile and usage analysis in web services. This study focuses on numerical vector analysis within the local and shuffle models of differential privacy. For clarity, we assume that the numerical vector \(\mathbf{x}_{i}\) for user \(i\) is a \(d\)-dimensional, \(s\)-sparse ternary vector [14, 15, 16, 17], belonging to the set \(\mathcal{X}^{s}\), defined as follows:
\[\mathcal{X}^{s}:=\{\mathbf{x}\mid\mathbf{x}\in\{-1,0,1\}^{d}\text{ and }\|\mathbf{x}\|_{0}=s\}.\]
This problem is pertinent to numerous real-world data aggregation tasks, including gradient estimation in federated learning and sensitive key-value data aggregation for user profile and usage analyses in web services.
### _Federated Gradient Estimation_
Federated learning [10] investigates machine learning systems in distributed settings, enabling each party to maintain the privacy of their raw data. During each gradient descent iteration for training or updating a machine learning model, locally computed gradients \(\mathbf{x}_{i}\) from participating parties (e.g., \(n\) mobile users) are averaged by the federation server (e.g., a parameter server):
\[\overline{\mathbf{x}}:=\frac{1}{n}\sum\nolimits_{i=1}^{n}\mathbf{x}_{i}. \tag{1}\]
To enhance communication efficiency, local gradients are often discretized and sparsified [14, 18].
The original work [10] considers sharing gradients to be more privacy-resistant than sharing raw data. However, recent studies show that the gradient \(\mathbf{x}_{i}\) still poses privacy risks, as local raw data may be inferred with confidence
from several transmitted gradients [19]. This highlights the need for rigorous privacy protection for local gradients.
### _Key-value Data Aggregation_
We refer to key-value data as paired (key, value) mappings, where the key \(j\in[d]\) represents an index, and the value \(\mathbf{x}_{j}\) is numerical. Note that a value is considered \(0\) if and only if the corresponding key is missing from or not defined in the key-value data; for any existing or defined key, the corresponding value is binary, taking a value in \(\{-1,1\}\). For instance, a user might represent preferences for watched movies as key-value data, assigning a value of \(1\) to movies they like and a value of \(-1\) to movies they dislike.
Common analyses of key-value data involve estimating unconditional and conditional mean statistics. The unconditional mean statistic for the key \(j\) is \(\overline{\mathbf{x}}_{j}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_{i,j}\), the non-missing frequency of the key \(j\) is:

\[\underline{\mathbf{x}}_{j}:=\frac{1}{n}\#\{i\in[n]\mid\mathbf{x}_{i,j}\neq 0\}, \tag{2}\]

and the conditional mean statistic is \(\widetilde{\mathbf{x}}_{j}:=\overline{\mathbf{x}}_{j}/\underline{\mathbf{x}}_{j}\).
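For reference, these three statistics can be computed from an \(n\times d\) data matrix in a few lines; the following NumPy sketch uses our own naming, and returns NaN for the conditional mean of a key that never appears.

```python
import numpy as np

def key_value_stats(X):
    """X: integer array in {-1, 0, 1} of shape (n, d)."""
    n = X.shape[0]
    uncond_mean = X.sum(axis=0) / n          # unconditional mean per key
    freq = (X != 0).sum(axis=0) / n          # non-missing frequency per key
    cond_mean = np.divide(uncond_mean, freq,
                          out=np.full(X.shape[1], np.nan),
                          where=freq > 0)    # conditional mean per key
    return uncond_mean, freq, cond_mean
```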
### _Existing Results_
Within the framework of \(\epsilon\)-LDP, theoretical minimax lower bounds for various statistical estimation problems have been established, including multinomial distribution estimation [20], logistic regression/generalized linear model estimation [21], and sparse covariance matrix estimation [22]. Specifically, [21] derives minimax lower bounds for multi-dimensional mean estimation in numerical vectors with bounded \(\ell_{1}\)-norm or \(\ell_{2}\)-norm. However, an \(s\)-sparse numerical vector is a special case of \(\ell_{1}\)-norm or \(\ell_{2}\)-norm bounded vector with identical absolute non-zero entries. It remains an open question whether it holds the same bounds as the general case or has tighter bounds. Recently, for a broad family of \(\epsilon\)-LDP estimation problems that can be cast as mean estimation problems, [23] studies sample complexity lower bounds under certain error tolerance \(\alpha\), but their sample complexity results for \(s\)-sparse numerical vectors exhibit at least a \(1/\alpha\) gap compared to our minimax optimal sample complexity results.
In practice, numerous \(\epsilon\)-LDP mechanisms have been proposed for statistical estimation, such as multinomial distribution estimation on categorical data [6, 24, 25, 20] and one-dimensional mean estimation on numerical values [26, 27]. For \(\epsilon\)-LDP numerical vector or key-value data aggregation, existing approaches handle both dense numerical vectors (e.g., in [21, 28, 29]) and sparse numerical vectors (e.g., in [15, 16, 17, 30]). Specifically, [15, 16] uniformly and randomly select one dimension from \([d]\) and transform the multi-dimensional estimation problem to a one-dimensional numerical/categorical problem. The work of [17] follows a similar paradigm, but randomly selects one non-empty dimension from \(s\) dimensions. However, as we will show in Section 3, these mechanisms are sub-optimal.
To mitigate the high noise needed for LDP, [8, 9] introduce a (semi-trusted) shuffler to hide private views in the crowd. The seminal work [9] shows that \(n\) shuffled \(\epsilon\)-LDP views can preserve \((O(\epsilon\sqrt{\log(1/\delta)/n}),\delta)\)-differential privacy. The work [31] derives a similar conclusion specifically for binary randomized response messages. A later work [32] considers private views from other users as a "privacy blanket" and derives tighter privacy amplification bounds \((O(\min\{\epsilon_{0},1\}e^{\epsilon_{0}}\sqrt{\log(1/\delta)/n}),\delta)\). Recent works [33, 34] analyze the mixture property of arbitrary \(\epsilon\)-LDP randomizers and derive an asymptotically optimal bound of \((O((e^{\epsilon_{0}/2}-e^{-\epsilon_{0}/2})\sqrt{\log(1/\delta)/n}),\delta)\). This work shows that for specific \(\epsilon\)-LDP mechanisms, such as the proposed Collision and CoCo, it is possible to obtain tighter privacy amplification bounds.
### _Our Contributions_
The contributions of this work are summarized as follows:
* **Minimax lower bounds.** The squared error (or total variation error) lower bound of \(\epsilon\)-LDP \(s\)-sparse numerical vector mean estimation is \(O(\frac{ds}{n\epsilon^{2}})\) (or \(O(d\sqrt{\frac{s}{n\epsilon^{2}}})\)). Our proof considers \(s\)-sparse numerical vectors that are decomposable, thus reducing the bounding procedure to the case of multiple multinomial distributions.
* **An optimal mechanism via frequency estimation.** Since existing approaches are sub-optimal, we design a new mechanism: Collision, which matches the minimax lower bound. It has computational complexity \(O(s)\) and communication complexity \(O(\log s)\).
* **An optimized mean estimation mechanism.** Exploiting the negative correlation between two frequencies for each dimension, we design an optimized mechanism, _CoCo_, specifically for mean estimation, which further reduces estimation error by \(15\%\).
* **Tight privacy amplification in the shuffle model.** In the shuffle model of differential privacy, we derive exactly tight privacy amplification bounds for both Collision and CoCo. The amplification bounds are independent of dimension \(d\) and sparsity parameter \(s\), thus are favorable for high dimensional or even dense numerical vectors. When compared with existing results, our tight bounds save about \(25\%\) privacy budget.
The structure of the remaining paper is as follows: Section 2 provides background knowledge. Section 3 reviews the design of existing mechanisms and highlights their sub-optimality. In Section 4, we establish the minimax lower bounds. Next, in Section 5, we propose the new mechanism, Collision, that matches the established lower bound. In Section 6, we derive privacy amplification upper and lower bounds for the proposed mechanism in the shuffle model. Later, in Section 7, we propose an optimized mechanism for mean estimation. Section 8 presents experimental results. Finally, in Section 9, we conclude this work.
## 2 Preliminaries
In this section, we introduce the definition of numerical vectors, differential privacy, and the minimax risks of private estimation. Commonly used notations are listed in Table I.
### _Numerical Vector_
We define a numerical vector \(\mathbf{x}_{i}\) from every user \(i\) as a \(d\)-dimensional, \(s\)-sparse ternary vector, with the domain defined as follows:
\[\mathcal{X}^{s}:=\{\mathbf{x}\mid\mathbf{x}\in\{-1,0,1\}^{d}\text{ and }\|\mathbf{x}\|_{0}=s\}.\]
Here, \(s\) is the sparsity parameter: the number of non-zero elements in the numerical vector \(\mathbf{x}_{i}\). Real-world real-valued numerical vectors can be transformed to \(s\)-sparse ternary vectors with limited precision loss, such as by max-min normalization and stochastic ternary discretization.
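As one concrete instance of such a transformation, the sketch below keeps the \(s\) largest-magnitude entries of a vector already normalized to \([-1,1]^{d}\) and rounds each kept entry stochastically so that it is unbiased up to the top-\(s\) truncation; this is our own illustrative discretization, not a procedure prescribed by the paper.

```python
import numpy as np

def ternarize_top_s(x, s, rng):
    """Stochastically ternarize x in [-1, 1]^d into an s-sparse vector."""
    out = np.zeros(len(x), dtype=int)
    top = np.argsort(-np.abs(x))[:s]      # indices of the s largest |x_j|
    for j in top:
        p_plus = (x[j] + 1.0) / 2.0       # P[+1], chosen so E[out_j] = x_j
        out[j] = 1 if rng.random() < p_plus else -1
    return out

# Example: ternarize_top_s(grad, s=8, rng=np.random.default_rng(0))
```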
Additionally, we use the set form representation for the \(s\)-sparse vector. Let \(j_{-}\) and \(j_{+}\) denote events where the \(j\)-th element of \(\mathbf{x}_{i}\) (i.e., \(\mathbf{x}_{i,j}\)) equals \(-1\) and \(1\), respectively. A numerical vector \(\mathbf{x}\) can be represented in the set form as:
\[\mathbf{Y}_{\mathbf{x}_{i}}:=\{j_{-}\mid j\in[d],\ \mathbf{x}_{i,j}=-1\}\bigcup \{j_{+}\mid j\in[d],\ \mathbf{x}_{i,j}=1\}.\]
### _Differential Privacy_
One common tool for measuring distance between two distributions is hockey-stick divergence (see Definition 1), which satisfies data processing inequality [35].
**Definition 1** (Hockey-stick divergence).: _The hockey-stick divergence between two random variables \(P\) and \(Q\) is:_
\[\mathcal{D}_{e^{e}}(P\|Q):=\int\max\{0,P(x)-e^{\epsilon}Q(x)\}\mathrm{d}x,\]
_where we use the notation \(P\) and \(Q\) to refer to both the random variables and their probability density functions._
Two variables \(P\) and \(Q\) are \((\epsilon,\delta)\)-indistinguishable if \(\max\{\mathcal{D}_{e^{\epsilon}}(P\|Q),\mathcal{D}_{e^{\epsilon}}(Q\|P)\}\leq\delta\). Datasets \(D\), \(D^{\prime}\) of the same size that differ in only one element are called _neighboring datasets_. The definition of differential privacy with budget/level \((\epsilon,\delta)\) is as follows.
**Definition 2** (\((\epsilon,\delta)\)-DP [2]).: _Let \(\mathcal{D}_{K}\) denote the output domain. A randomized mechanism \(K\) satisfies \((\epsilon,\delta)\)-differential privacy if, for any neighboring datasets \(D,D^{\prime}\), the \(K(D)\) and \(K(D^{\prime})\) are \((\epsilon,\delta)\)-indistinguishable._
Let \(K\) denote a randomized mechanism for sanitizing a single user's data. The differential privacy in the local model with privacy budget \(\epsilon\) is as follows.
**Definition 3** (\(\epsilon\)-LDP [20]).: _Let \(\mathcal{D}_{K}\) denote the output domain. A randomized mechanism \(K\) satisfies local \(\epsilon\)-differential privacy if, for any data pair \(\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X}^{s}\), the \(K(\mathbf{x})\) and \(K(\mathbf{x}^{\prime})\) are \((\epsilon,0)\)-indistinguishable._
#### 2.2.1 The Shuffle Model of Differential Privacy
In the shuffle model, a semi-trustable shuffler lies between the users and the data collector (e.g., the server/statistician) and uniform-randomly permutes randomized messages from users. We denote the randomization algorithm on the user side as \(\mathcal{R}\) and the shuffling algorithm as \(\mathcal{S}\). The privacy goal of the shuffle model is to ensure the shuffled messages \(\mathcal{S}\circ\mathcal{R}(D)=\mathcal{S}(\mathcal{R}(x_{1}),...,\mathcal{R}( x_{n}))\) satisfy \((\epsilon_{c},\delta)\)-DP for all neighboring datasets:
**Definition 4** (\((\epsilon,\delta)\)-Dp in the shuffle model).: _A protocol \((\mathcal{R},\mathcal{S})\) satisfies \((\epsilon,\delta)\)-differential privacy in the shuffle model if, for any neighboring datasets \(D,D^{\prime}\), the \(\mathcal{S}\circ\mathcal{R}(D)\) and \(\mathcal{S}\circ\mathcal{R}(D^{\prime})\) are \((\epsilon,\delta)\)-indistinguishable._
When the randomization algorithm \(\mathcal{R}\) is an \(\epsilon\)-LDP mechanism, the seminal work [9] shows that \(\mathcal{S}\circ\mathcal{R}\) actually preserves \((\epsilon\sqrt{144\log(1/\delta)/n},\delta)\)-DP, which decreases with the number of users \(n\). This phenomenon is known as _privacy amplification via shuffling_. A very recent work [34] improves the bound to the near-optimal \((O((e^{\epsilon/2}-e^{-\epsilon/2})\sqrt{\log(1/\delta)/n}),\delta)\). Specifically, when \(\mathcal{R}\) satisfies several mixture properties, [34] shows the divergence between \(\mathcal{S}\circ\mathcal{R}(D)\) and \(\mathcal{S}\circ\mathcal{R}(D^{\prime})\) is bounded by the divergence between a pair of two-dimensional variables as in Theorem 1.
**Theorem 1** (Stronger clone reduction [34]).: _Given any \(n+1\) inputs \(x_{1},x_{1}^{\prime},x_{2},...,x_{n}\in\mathcal{X}\), consider an algorithm \(\mathcal{R}\) such that the output domain is finite and_
\[\mathcal{R}(x_{1}) =e^{\epsilon}\alpha\mathcal{Q}_{1}+\alpha\mathcal{Q}_{1}^{\prime}+( 1-\alpha-e^{\epsilon}\alpha)\mathcal{Q}_{1}^{*},\] \[\mathcal{R}(x_{1}^{\prime}) =\alpha\mathcal{Q}_{1}+e^{\epsilon}\alpha\mathcal{Q}_{1}^{\prime} +(1-\alpha-e^{\epsilon}\alpha)\mathcal{Q}_{1}^{*},\] \[\forall i\in[2,n],\ \mathcal{R}(x_{i}) =\alpha\mathcal{Q}_{1}+\alpha\mathcal{Q}_{1}^{\prime}+(1-2\alpha) \mathcal{Q}_{i}\]
_holds for some \(\epsilon\geq 0,\alpha\in[0,(e^{\epsilon}-1)/(e^{\epsilon}+1)]\) and some probability distributions \(\mathcal{Q}_{1},\mathcal{Q}_{1}^{\prime},\mathcal{Q}_{1}^{*},\mathcal{Q}_{2},...,\mathcal{Q}_{n}\). Let \(C\sim Binomial(n-1,2\alpha)\), \(A\sim Binomial(C,1/2)\), \(\Delta_{1}\sim Bernoulli(e^{\epsilon}\alpha)\), and \(\Delta_{2}\sim Bernoulli\big{(}(1-\Delta_{1})\cdot\alpha/(1-e^{\epsilon}\alpha)\big{)}\); let \(P_{\alpha}=(A+\Delta_{1},C-A+\Delta_{2})\) and \(Q_{\alpha}=(A+\Delta_{2},C-A+\Delta_{1})\). Then for any distance measure \(\mathcal{D}\) that satisfies the data processing inequality,_
\[\mathcal{D}(\mathcal{S}(\mathcal{R}(x_{1}),..,\mathcal{R}(x_{n}))\|\mathcal{S}( \mathcal{R}(x_{1}^{\prime}),..,\mathcal{R}(x_{n})))\leq\mathcal{D}(P_{\alpha} \|Q_{\alpha}).\]
For any \(\epsilon_{0}\)-LDP mechanism \(\mathcal{R}\), it satisfies the mixture properties with parameters \(\epsilon=\epsilon_{0}\) and \(\alpha=(e^{\epsilon_{0}}-1)/(e^{\epsilon_{0}}+1)\). Furthermore, the distance \(\mathcal{D}(P_{\alpha}\|Q_{\alpha})\) increases with \(\alpha\) when \(\epsilon\) is fixed. Owing to the simplicity of the formulas for \(P\) and \(Q\), their hockey-stick divergence can be numerically computed in \(\tilde{O}(n)\) time [36] with a specified precision.
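Because \(P_{\alpha}\) and \(Q_{\alpha}\) are built from two binomials and a pair of coupled Bernoullis (the pair \((\Delta_{1},\Delta_{2})\) takes the values \((1,0)\), \((0,1)\), \((0,0)\) with probabilities \(e^{\epsilon}\alpha\), \(\alpha\), \(1-\alpha-e^{\epsilon}\alpha\)), their joint pmfs can be tabulated directly. Below is a straightforward \(O(n^{2})\) NumPy/SciPy reference sketch of this tabulation, assuming \(2\alpha\leq 1\); the function name and the dense grid representation are our own choices, and the \(\tilde{O}(n)\) method of [36] is more involved.

```python
import numpy as np
from scipy.stats import binom

def pq_joint_pmfs(n, alpha, eps):
    """Joint pmfs of P_alpha and Q_alpha from Theorem 1 on an (n+2)^2 grid."""
    # (Delta1, Delta2) takes (1,0), (0,1), (0,0) with these probabilities:
    w = {(1, 0): np.exp(eps) * alpha,
         (0, 1): alpha,
         (0, 0): 1.0 - alpha - np.exp(eps) * alpha}
    P = np.zeros((n + 2, n + 2))
    Q = np.zeros((n + 2, n + 2))
    for c in range(n):                            # C ~ Binomial(n-1, 2*alpha)
        pc = binom.pmf(c, n - 1, 2 * alpha)
        pa = binom.pmf(np.arange(c + 1), c, 0.5)  # A | C = c
        for (d1, d2), wp in w.items():
            for a in range(c + 1):
                P[a + d1, c - a + d2] += pc * pa[a] * wp
                Q[a + d2, c - a + d1] += pc * pa[a] * wp
    return P, Q
```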
### _Local Private Minimax Risks_
Assume samples \(x_{1},x_{2},...,x_{n}\) are drawn i.i.d. from a distribution \(P\in\mathcal{P}\). Let \(\mathcal{K}_{\epsilon}\) denote the set of all possible mechanisms \(\mathbf{K}=\{K_{1},...,K_{n}\}\) that each satisfy \(\epsilon\)-LDP. Taking the samples as input, a series of (adaptive or non-adaptive) mechanisms \(\mathbf{K}\in\mathcal{K}_{\epsilon}\) produces a list of sanitized views \(\{z_{1},z_{2},...,z_{n}\}\). If the parameter estimator:
\[\tilde{\theta}=\tilde{\theta}(\{z_{1},z_{2},...,z_{n}\})\]
is derived from these private views while having no access to input samples \(\{x_{j}\}_{j=1}^{n}\), the minimax MSE risk (under privacy budget \(\epsilon\)) is then:
| **Notation** | **Description** |
| --- | --- |
| \([i]\) | \(\{1,2,...,i\}\) |
| \([i:j]\) | \(\{i,i+1,...,j\}\) |
| \(\llbracket\cdot\rrbracket\) | Iverson bracket |
| \(n\) | the number of users (data owners) |
| \(d\) | the dimension of numerical vectors |
| \(s\) | the sparsity parameter of numerical vectors |
| \(\mathcal{X}^{s}\) | the domain of \(s\)-sparse numerical vectors |
| \(\mathbf{x}_{i}\) | the data of user \(i\) |
| \(\overline{\mathbf{x}}_{j}\) | the population mean of the \(j\)-th dimension |
| \(\underline{\mathbf{x}}_{j}\) | the non-missing frequency of the \(j\)-th dimension |
| \(\epsilon\) | the (local) privacy budget |
| \(t\) | the output domain size |
| \(\mathcal{D}\) | a distance measure over distributions |
| \(\mathcal{S}\) | the shuffling algorithm in the shuffle model |
| \(\epsilon_{c}\) | the amplified privacy level in the shuffle model |

TABLE I: List of notations.
\[\mathfrak{M}_{n}(\theta(\mathcal{P}),\|\cdot\|_{2}^{2},\epsilon):=\inf_{\mathbf{K}\in\mathcal{K}_{\epsilon}}\inf_{\widehat{\theta}}\sup_{P\in\mathcal{P}}\mathbb{E}_{P,\mathbf{K}}[\|\widehat{\theta}(z_{1},z_{2},...,z_{n})-\theta(P)\|_{2}^{2}].\]
## 3 Closely Related Works
Due to its broad applications, numerical vector aggregation with local and shuffle DP has been attracting increasing research attention. In addition to the literature reviewed in Section 1.3, we focus here on the most closely related works from [15, 16, 30, 37, 38].
### _Numerical Vectors with Local DP_
Existing works on \(\epsilon\)-LDP numerical vector aggregation can mainly be categorized into two types: those that perform dimension sampling in a data-agnostic manner (e.g., PrivKV in [15, 16]) and those that do so in a data-dependent manner (e.g., PCKV in [17]).
**The PrivKV Mechanism [15].** The seminal work by [15] on \(\epsilon\)-LDP key-value data suggests initially randomly sampling a dimension \(j\in[d]\) from the key domain, followed by applying an \(\epsilon\)-LDP categorical mechanism to the corresponding (key, value) pair, which takes a value from \((j,0),(j,1),(j,-1)\). Here, \((j,0)\) indicates that the key is empty in the key-value data. In essence, the PrivKV mechanism is akin to dividing a population of \(n\) into \(d\) groups, where each group is used to estimate \(\llbracket j_{+}\in\mathbf{Y_{x}}\rrbracket\) and \(\llbracket j_{-}\in\mathbf{Y_{x}}\rrbracket\) for each \(j\in[d]\) with a privacy budget of \(\epsilon\). Given that the minimax lower error bound for estimating frequencies in a population of \(n^{\prime}\) with privacy budget \(\epsilon\) and domain size \(d^{\prime}\) is \(\Theta(\frac{d^{\prime}}{n^{\prime}\epsilon^{2}})\)[21], the estimation error of \(\llbracket j_{+}\in\mathbf{Y_{x}}\rrbracket\) and \(\llbracket j_{-}\in\mathbf{Y_{x}}\rrbracket\) is \(\Theta(\frac{d}{n\epsilon^{2}})\), since \(n^{\prime}=\frac{n}{d}\) and \(d^{\prime}=3\). Consequently, its total estimation error for frequencies or mean values of a \(d\)-dimensional vector is \(O(\frac{d^{2}}{n\epsilon^{2}})\). This result exhibits a gap of \(d/s\) from the optimal error rate in Theorem 2. Analogous methodology and findings also apply to subsequent works in [16, 37, 38].
**The PCKV Mechanism [17].** The study by [17] suggests sampling one key from the existing \(s\) keys in key-value data. Subsequently, an \(\epsilon\)-LDP categorical mechanism is applied to the corresponding \(1\)-sparse numerical vector, which is equivalent to categorical data with a domain size of approximately \(2d\). Considering that the minimax lower error bound for estimating frequencies in a population of \(n^{\prime}\) with privacy budget \(\epsilon\) and domain size \(d^{\prime}\) is \(\Theta(\frac{d^{\prime}}{n^{\prime}\epsilon^{2}})\), the total estimation error for scaled \(\llbracket j_{+}\in\mathbf{Y_{x}}\rrbracket\) and \(\llbracket j_{-}\in\mathbf{Y_{x}}\rrbracket\) in the PCKV mechanism is \(\Theta(\frac{d}{n\epsilon^{2}})\), as \(n^{\prime}=n\) and \(d^{\prime}=2d\). Owing to the preceding sampling procedure, the scale factor is \(s\), and the estimation variance is amplified by \(s^{2}\). Consequently, the total estimation error for \(\llbracket j_{+}\in\mathbf{Y_{x}}\rrbracket\) and \(\llbracket j_{-}\in\mathbf{Y_{x}}\rrbracket\) in the PCKV mechanism is \(O(\frac{ds^{2}}{n\epsilon^{2}})\). This result presents a gap of \(s\) from the optimal error rate in Theorem 2.
**The Amplified PCKV-GRR Mechanism [17].** In the PCKV mechanism, which employs the generalized randomized response (GRR [39]) as the base randomizer, privacy levels are enhanced through dimension sampling [17]. Specifically, this mechanism can be applied with a privacy budget of \(\epsilon^{\prime}=\log(s(e^{\epsilon}-1)+1)\), where \(\epsilon\) represents the original privacy budget. The mean squared estimation error for this case is given by \(O(\frac{se^{\epsilon}(se^{\epsilon}+d-s-e^{\epsilon})+(d-s)(se^{\epsilon}+d-s-1)}{n(e^{\epsilon}-1)^{2}})\), which equates to \(O(\frac{d^{2}}{n\epsilon^{2}})\) when \(\epsilon=O(1)\). It is important to note that the achieved estimation error exhibits a multiplicative gap of \(d/s\) compared to the optimal error rate.
**The Succinct Mechanism [30].** Recently, [30] proposes mapping pseudo-random \(+1,-1\) values into a single bucket, clipping the bucket's summation to a norm of \(\eta=O(\sqrt{s\log(n/\beta)})\), and adding Laplace noise with a scale of \(2\eta/\epsilon\). The mean squared error of this approach is \(O(\frac{ds\log n}{n\epsilon^{2}})\), resulting in a multiplicative gap of \(\log n\) compared to the optimal rate. Moreover, the mechanism requires prior knowledge of the population size \(n\), which may be impractical in certain scenarios (e.g., data collection in mobile/edge computing [40]). Although the mechanism is theoretically proven to be rate-optimal under the \(\ell_{\infty}\) error (see Section 7.3.2 for more details), its empirical \(\ell_{\infty}\) errors are approximately \(30\%\) larger than those of our proposal in almost all settings (see Section 8.3).
### _Numerical Vectors in the Shuffle Model_
The shuffle model [9, 31, 33, 34, 40] and privacy amplification via shuffling has been successfully applied to numerical vectors, as demonstrated in recent studies (e.g., [41, 42, 38, 43]). Specifically, [43] independently sanitizes each dimension and transmits the sanitized vector to the shuffler; [41] further proposes separately transmitting each dimension to the shuffler, breaking the linkage of \(d\) dimensions for a single user. However, the local private mechanisms in [41, 43] are sub-optimal due to budget splitting for each dimension. The work by [42] addresses the sub-optimality issue through dimension sampling but does not exploit the sparsity in the gradient vector. The study [38] first selects \(s\) significant dimensions from the gradient vector, then applies local private mechanisms and shuffle amplification. Nonetheless, the local privacy mechanism in [38] is also sub-optimal due to budget splitting for every selected dimension, and there is no rigorous privacy guarantee for selected dimensions. In contrast, our local randomizer ensures all messages are differentially private and is minimax optimal. Additionally, the privacy amplification bounds in this work are strictly tight.
## 4 Minimax Lower Bounds
The Assouad's method [44] is a widely used tool for lower bounding through multiple hypothesis testing. It defines a hypercube \(\mathcal{V}=\{-1,1\}^{d}\) (\(d\in\mathbb{N}^{+}\)) and a family of distributions \(\{P_{\nu}\}_{\nu\in\mathcal{V}}\) indexed by the hypercube. A distribution family is said to induce a \(2\tau\)-Hamming separation for the loss \(\|\cdot\|_{2}^{2}\) if a vertex mapping (a function \(\kappa:\theta(\mathcal{P})\mapsto\{-1,1\}^{d}\)) exists, satisfying:
\[\|\theta-\theta(P_{\nu})\|_{2}^{2}\geq 2\tau\sum_{j=1}^{d}\llbracket[\kappa( \theta)]_{j}\neq\nu_{j}\rrbracket.\]
Assuming that nature first uniformly selects a vector \(V\in\mathcal{V}\), and the samples \(\mathbf{x}_{1},...,\mathbf{x}_{n}\) are drawn from the distribution \(P_{\nu}\) with \(V=\nu\), these samples are then used as input for \(\epsilon\)-LDP mechanisms \(\mathbf{K}\). The literature [21] presents an \(\epsilon\)-LDP version of Assouad's method, as follows.
**Lemma 1** (Private Assouad bound [21]).: _Let \(P_{+j}=\frac{1}{2^{d-1}}\sum_{\nu:\nu_{j}=1}P_{\nu}\) and \(P_{-j}=\frac{1}{2^{d-1}}\sum_{\nu:\nu_{j}=-1}P_{\nu}\), we have_

\[\mathfrak{M}_{n}(\theta(\mathcal{P}),\|\cdot\|_{2}^{2})\geq d\cdot\tau\Big{[}1-\Big{(}\frac{n(e^{\epsilon}-1)^{2}}{2d}F_{\mathbb{B}_{\infty}(\mathcal{X}^{s}),\mathcal{P}}\Big{)}^{1/2}\Big{]},\]
_where \(\mathbb{B}_{\infty}(\mathcal{X}^{s})\) denotes the collection of functions \(\gamma\) with supremum norm bounded by \(1\):_
\[\mathbb{B}_{\infty}(\mathcal{X}^{s}):=\{\gamma:\mathcal{X}^{s}\mapsto\mathbb{R }\ \ |\ \|\gamma\|_{\infty}\leq 1\},\]
_and maximum possible discrepancy \(F_{\mathbb{B}_{\infty}(\mathcal{X}^{s}),\mathcal{P}}\) is defined as:_
\[\sup_{\gamma\in\mathbb{B}_{\infty}(\mathcal{X}^{s})}\sum_{j=1}^{d}\big{(}\int_ {\mathcal{X}^{s}}\gamma(x)(\text{d}P_{+j}(x)-\text{d}P_{-j}(x))\big{)}^{2}.\]
We consider numerical vectors that can be decomposed into \(s\) buckets, with each bucket containing \(\frac{d}{s}\) indices and only one non-zero entry. We then define a hypercube of length \(d\) and construct a class of \(\frac{2\delta^{2}s^{2}}{d^{2}}\)-Hamming separated probability distributions. Following Lemma 1, we bound the maximum possible discrepancy \(F_{\mathbb{B}_{\infty}(\mathcal{X}^{s}),\mathcal{P}}\) by \(\frac{8\delta^{2}s}{d}\). Theorem 2 provides the final lower bounds for the problem of local private numerical vector mean estimation.
**Theorem 2**.: _For the numerical vector aggregation problem, for any \(\epsilon\)-LDP mechanism, there exists a universal constant \(c>0\) such that for all \(\epsilon\in(0,1]\),_
\[\mathfrak{M}_{n}(\theta(\mathcal{P}),\|\cdot\|_{2}^{2},\epsilon)\geq c\cdot \min\{\frac{s^{2}}{d},\frac{ds}{n\epsilon^{2}}\}.\]
Proof.: See Appendix A.
To understand the minimax rate, we can consider the non-private error rate of decomposable numerical vector aggregation, which is \(\mathbb{E}[\|\widehat{\theta}-\theta\|_{2}^{2}]\leq\sum_{i=1}^{d}\mathbb{E}[\|\widehat{\theta}_{i}-\theta_{i}\|_{2}^{2}]\leq\frac{4s}{n}\). Thus, enforcing \(\epsilon\)-LDP causes the effective sample size to decrease from \(n\) to \(O(n\epsilon^{2}/d)\).
Now consider the \(\ell_{1}\)-norm error metric, the estimation error lower bounds can be derived as \(O(\frac{d\sqrt{s}}{\sqrt{n\epsilon^{2}}})\) (see Theorem 3). Compared to the non-private error rate for decomposable numerical vector data:
\[\mathbb{E}[\|\widehat{\theta}-\theta\|_{1}]\leq\sum_{a=1}^{s}\sum_{j=1}^{d/s}\mathbb{E}\big{[}|\widehat{\theta}_{a,j}-\theta_{a,j}|\big{]}\leq 2s\sqrt{\frac{d/s}{n}},\]
this also demonstrates that the \(\epsilon\)-LDP reduces the effective sample size from \(n\) to \(O(n\epsilon^{2}/d)\).
**Theorem 3**.: _For the numerical vector aggregation problem, for any \(\epsilon\)-LDP mechanism, there exists a universal constant \(c>0\) such that for all \(\epsilon\in(0,1]\),_
\[\mathfrak{M}_{n}(\theta(\mathcal{P}),\|\cdot\|_{1},\epsilon)\geq c\cdot\min \{\frac{s}{2},\frac{d\sqrt{s}}{\epsilon\sqrt{n}}\}.\]
Proof.: The proof for the \(\|\cdot\|_{1}\) error follows a similar procedure to the one for the \(\|\cdot\|_{2}^{2}\) error, with some differences in multiplicative factors in steps 2 and 4. In step 2, Equation (12) now becomes:
\[\|\widehat{\theta}-\theta_{\nu}\|_{1}\geq\frac{\delta s}{d}\sum_{j=1}^{d/s} \sum_{a=1}^{s}\llbracket\widehat{\nu}_{a_{j}}\neq\nu_{a_{j}}\rrbracket.\]
Consequently, the Hamming separation parameter with respect to the \(\ell_{1}\)-norm is \(\frac{\delta s}{d}\). Later, in step 4, according to Lemma 1, we obtain:
\[\max_{\nu\in\mathcal{V}}\mathbb{E}_{P_{\nu}}[\|\widehat{\theta}-\theta_{\nu} \|_{1}]\geq\delta s\big{[}1-\big{(}4n(e^{\epsilon}-1)^{2}\delta^{2}s/d^{2}\big{)}^{1/2}\big{]}.\]
By choosing the parameter \(\delta^{2}\) at \(\min\{1,d^{2}/(16ns(e^{\epsilon}-1)^{2})\}\), we establish the lower bound as:
\[\mathfrak{M}_{n}(\theta(\mathcal{P}),\|\cdot\|_{1},\epsilon)\geq\min\{\frac{s} {2},\frac{d\sqrt{s}}{8(e^{\epsilon}-1)\sqrt{n}}\}.\]
## 5 Optimal Frequency Mechanism
In this section, we propose a frequency-based mechanism (i.e., Collision) for \(\epsilon\)-LDP numerical vector aggregation that matches minimax error lower bounds.
To mitigate the curse of dimension/density on the performance of numerical vector estimation, existing \(\epsilon\)-LDP mechanisms employ the paradigm of _dimension/key sampling & categorical randomization_, which, however, fails to achieve the optimal statistical rate. We propose to first condense the numerical vector to prevent interference from the original dimension, and then sample one element from the dense vector using the exponential mechanism [45] to avoid splitting the privacy budget (in order to prevent dependence on \(s^{2}\)). We define an element domain:
\[\mathcal{Y}=\{1_{-},1_{+},2_{-},2_{+},...,d_{-},d_{+}\},\]
and represent an input \(\mathbf{Y}_{\mathbf{x}}\) as a subset of \(\mathcal{Y}\) with size \(s\). We also define an output domain as \(\mathcal{Z}=\{1,2,...,t\}\). The Collision mechanism probabilistically outputs one item \(z\in\mathcal{Z}\), the probability of which corresponds to whether the item has a collision with hashed events in \(\mathbf{Y_{x}}\). Here, the hash function \(H:\mathcal{Y}\mapsto\mathcal{Z}\) is uniformly chosen at random from a finite domain \(\mathcal{H}\) by each user independently, with an identical (and often uniform) distribution \(P_{\mathcal{H}}:\mathcal{H}\mapsto[0,1]\). We present the design of the Collision mechanism in Definition 5.
**Definition 5** (\((d,s,\epsilon,t)\)-Collision Mechanism).: _Given a randomly chosen hash function \(H:\mathcal{Y}\mapsto\mathcal{Z}\) drawn according to the distribution \(P_{\mathcal{H}}\), and taking an \(s\)-sparse numerical vector \(\mathbf{Y_{x}}\subseteq\mathcal{Y}\) as input, the Collision mechanism randomly outputs an element \(z\in\mathcal{Z}\) according to the following probability design:_

\[\mathbb{P}[z\,|\,\mathbf{x}]=\begin{cases}\dfrac{e^{\epsilon}}{\Omega}&\text{if }z\in\{H(y)\mid y\in\mathbf{Y_{x}}\},\\[6pt]\dfrac{\Omega-e^{\epsilon}\cdot\#\{H(y)\mid y\in\mathbf{Y_{x}}\}}{\big{(}t-\#\{H(y)\mid y\in\mathbf{Y_{x}}\}\big{)}\cdot\Omega}&\text{otherwise}.\end{cases}\]
_The normalization factor \(\Omega=s\cdot e^{\epsilon}+t-s\). An unbiased estimator of indicator \([\![j_{b}\in\mathbf{Y_{x}}]\!]\) for \(b\in\{-1,1\}\) and \(j\in[d]\) is:_
\[\widehat{[\![j_{b}\in\mathbf{Y_{x}}]\!]}=\frac{[\![H(j_{b})=z]\!]-1/t}{e^{ \epsilon}/\Omega-1/t}.\]
Figure 1 (b) and (a) demonstrate the probability design of the Collision mechanism on the numerical vector \([0,0,1,0,-1,0]\) when hash values conflict with each other or not, respectively. It can be seen as an \(s\)-item generalization of the prevalent local hash [39] for \(1\)-item categorical data. When \(s\geq 2\), the hashed values may coincide with each other; thus, simply restraining the proportional probability in \(\{1,e^{\epsilon}\}\) as in [39] would cause inconsistency in the normalization factor \(\Omega^{\prime}=\#\{H(y)\mid y\in\mathbf{Y_{x}}\}\cdot(e^{\epsilon}-1)+t\) for different inputs \(\mathbf{Y_{x}}\), and hence violate \(\epsilon\)-LDP. Therefore, we fix \(\Omega\) at \(s\cdot e^{\epsilon}+t-s\) and uniformly redistribute the probability of coincided hash values to the remaining output domain as \(\frac{\Omega-e^{\epsilon}\cdot\#\{H(y)\mid y\in\mathbf{Y_{x}}\}}{(t-\#\{H(y)\mid y\in\mathbf{Y_{x}}\})\cdot\Omega}\). In the Collision mechanism, the proportional probability is relaxed to \([1,e^{\epsilon}]\). We note that sampling one item from \(\mathbf{Y_{x}}\) and then feeding it into local hash [39] leads to \(O(\frac{ds^{2}}{n\epsilon^{2}})\) squared error (see a similar analysis in Section 1.3 for PCKV [17]).
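For concreteness, here is a minimal NumPy sketch of the randomizer in Definition 5 together with its unbiased estimator; the event encoding and the per-user hash function argument are our own illustrative choices. Averaging the per-report estimates over all users (each with its own hash function) yields the frequency estimates, from which the mean values follow as \(\llbracket j_{+}\in\mathbf{Y_{x}}\rrbracket-\llbracket j_{-}\in\mathbf{Y_{x}}\rrbracket\).

```python
import numpy as np

def collision_randomize(Yx, eps, t, hash_fn, rng):
    """Report one z in {0, ..., t-1} under the (d, s, eps, t)-Collision
    mechanism, for a user holding the event set Yx (|Yx| = s, t > s)."""
    s = len(Yx)
    omega = s * np.exp(eps) + t - s                 # normalization factor
    hashed = {hash_fn(y) for y in Yx}               # collisions may shrink it
    # Buckets hit by a hashed event get e^eps / omega; the leftover mass is
    # spread uniformly over the remaining t - |hashed| buckets.
    probs = np.full(t, (omega - np.exp(eps) * len(hashed))
                       / ((t - len(hashed)) * omega))
    for z in hashed:
        probs[z] = np.exp(eps) / omega
    return rng.choice(t, p=probs)

def collision_estimate(z, event, eps, s, t, hash_fn):
    """Unbiased per-report estimate of the indicator [event in Y_x]."""
    omega = s * np.exp(eps) + t - s
    return (float(hash_fn(event) == z) - 1.0 / t) / (np.exp(eps) / omega - 1.0 / t)
```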
The local privacy guarantee of the mechanism is provided in Proposition 1, which is evident since \(s\geq\#\{H(y)\mid y\in\mathbf{Y_{x}}\}\). The utility-optimality guarantee of the mechanism is presented in Theorem 4. For \(\epsilon=O(1)\), its computational complexity is bounded by \(s+t^{*}\approx s+2s-1+s\cdot e^{\epsilon}=O(s)\), and its communication complexity is \(\log_{2}(2s-1+s\cdot e^{\epsilon})=O(\log s)\).
**Proposition 1**.: _The \((d,s,\epsilon,t)\)-Collision mechanism in Definition 5 satisfies \(\epsilon\)-LDP for numerical vector data._
**Theorem 4**.: _Given privacy budget \(\epsilon=O(1)\), with the optimal choice of the output parameter \(t^{*}\), the mean estimation error of the \((d,s,\epsilon,t)\)-Collision mechanism for numerical vectors is \(O(\frac{ds}{n\epsilon^{2}})\)._
Proof.: Recall that the \(j\)-th mean value \(\overline{\mathbf{x}}_{j}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_{i,j}\) equals \(\frac{1}{n}\sum_{i=1}^{n}\big{(}\llbracket j_{+}\in\mathbf{Y}_{\mathbf{x}_{i}}\rrbracket-\llbracket j_{-}\in\mathbf{Y}_{\mathbf{x}_{i}}\rrbracket\big{)}\). Assuming the uniform randomness and independence of hash functions in \(\mathcal{H}\), each observed indicator \(\llbracket H(j_{b})=z\rrbracket\) is a Bernoulli random variable with success rate \(\frac{e^{\epsilon}}{\Omega}\) (when \(j_{b}\in\mathbf{Y_{x}}\)) or success rate \(\frac{1}{t}\) (when \(j_{b}\notin\mathbf{Y_{x}}\)). Consequently, the mean squared error of the estimated frequencies is:
\[Var[\widehat{\mathbf{x}}]\leq 2\sum_{j=1}^{d}\sum_{b\in\{-1,1\}}Var \big{[}\widehat{\llbracket j_{b}\in\mathbf{Y_{x}}\rrbracket}\big{]} \leq\frac{2}{n}\cdot\frac{s\cdot\frac{e^{\epsilon}}{\Omega}(1-\frac{e^{\epsilon}}{\Omega})+(2d-s)\cdot\frac{1}{t}(1-\frac{1}{t})}{(\frac{e^{\epsilon}}{\Omega}-\frac{1}{t})^{2}}.\]
Taking the previous formula as a function of continuous \(t\), the function is indeed convex when \(d\geq t\geq s\). Choosing an approximate optimal \(t^{*}\) at around \(2s-1+s\cdot e^{\epsilon}\), we obtain:
\[Var[\widehat{\mathbf{x}}]\leq\frac{2d\cdot\Theta(s^{3})+\epsilon\cdot\Theta( s^{3})}{n\cdot\epsilon^{2}\cdot(-1+(2+\epsilon)\cdot s)^{2}}\leq O(\frac{ds}{n\epsilon^{2}}).\]
We note that a similar conclusion applies to non-missing frequency estimation (refer to the beginning of Section 7).
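To make the choice of \(t^{*}\) concrete, the variance bound from the proof of Theorem 4 can be evaluated directly and scanned over \(t\); a small sketch follows (the function name and the example parameter values are our own).

```python
import numpy as np

def collision_var_bound(t, d, s, eps, n):
    """Upper bound on the total variance of the frequency estimates
    (the final display in the proof of Theorem 4), requiring t > s."""
    omega = s * np.exp(eps) + t - s
    p_in, p_out = np.exp(eps) / omega, 1.0 / t
    num = s * p_in * (1 - p_in) + (2 * d - s) * p_out * (1 - p_out)
    return (2.0 / n) * num / (p_in - p_out) ** 2

# Scanning t near the analytic optimum t* ~ 2s - 1 + s * e^eps:
d, s, eps, n = 1000, 8, 1.0, 100_000
ts = np.arange(s + 1, 4 * s + int(s * np.exp(eps)))
best_t = ts[np.argmin([collision_var_bound(t, d, s, eps, n) for t in ts])]
```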
## 6 Privacy Amplification in Shuffle Model
When a semi-trusted shuffler is positioned between users and the aggregator, the aggregator only observes the shuffled private views \(\mathcal{S}(z_{1},z_{2},...z_{n})\), thereby amplifying the privacy level. This section aims to analyze the privacy amplification upper and lower bounds of \(n\) shuffled private views from the Collision mechanism.
### _Amplification Upper Bounds_
To prove the privacy amplification upper bounds based on Lemma 1, we begin by analyzing the mixture properties of the Collision mechanism. Given the hash function space \(\mathcal{H}\) and distribution \(P_{\mathcal{H}}\), with \(\mathbb{P}[H]\) denoting the probability \(P_{\mathcal{H}}[H]\) of selecting \(H\), we demonstrate in the following lemma that the \((d,s,\epsilon,t)\)-Collision mechanism has mixture parameter \(\beta\).
**Lemma 2** (Mixture Properties).: _Let \(x_{1}\), \(x^{\prime}_{1}\), \(x_{2}\), \(\ldots\), and \(x_{n}\) be elements of the set \(\mathcal{X}^{s}\). Let \(H(\mathbf{Y}_{x_{1}})\) denote the set of hashed values of \(\mathbf{Y}_{x_{1}}\), i.e., \(\{H(y)\mid y\in\mathbf{Y}_{x_{1}}\}\), and let \(\mathcal{R}\) denote the \((d,s,\epsilon,t)\)-Collision mechanism with \(t>s\). We demonstrate the existence of distributions \(\mathcal{Q}_{1}\), \(\mathcal{Q}^{\prime}_{1}\), \(\mathcal{Q}^{*}_{1}\), \(\mathcal{Q}_{2}\), \(\ldots\), and \(\mathcal{Q}_{n}\) that satisfy the following properties:_

\[\mathcal{R}(x_{1}) =e^{\epsilon}\beta\mathcal{Q}_{1}+\beta\mathcal{Q}^{\prime}_{1}+(1- \beta-e^{\epsilon}\beta)\mathcal{Q}^{*}_{1} \tag{3}\] \[\mathcal{R}(x^{\prime}_{1}) =\beta\mathcal{Q}_{1}+e^{\epsilon}\beta\mathcal{Q}^{\prime}_{1}+( 1-\beta-e^{\epsilon}\beta)\mathcal{Q}^{*}_{1}\] (4) \[\forall i\in[2:n],\ \mathcal{R}(x_{i}) =\beta\mathcal{Q}_{1}+\beta\mathcal{Q}^{\prime}_{1}+(1-2\beta) \mathcal{Q}_{i} \tag{5}\]

_where \(\beta=\sum_{H\in\mathcal{H}}\mathbb{P}[H]\cdot\frac{(e^{\epsilon}-1)(s-|H( \mathbf{Y}_{x_{1}})\bigcap H(\mathbf{Y}_{x^{\prime}_{1}})|)}{se^{\epsilon}+t-s}\)._
Proof.: See Appendix B.
Given Lemma 2, and combining the reduction in Theorem 1, the monotonic property of \(\mathcal{D}(P_{\beta}\|Q_{\beta})\)[34, Lemma 5.1], and the fact that \(\beta\leq\frac{s(e^{\epsilon}-1)}{se^{\epsilon}+t-s}\) (with equality when \(H(\mathbf{Y}_{x_{1}})\bigcap H(\mathbf{Y}_{x^{\prime}_{1}})=\emptyset\)), we arrive at the main theorem for privacy amplification upper bounds (Theorem 5).
**Theorem 5** (Amplification Upper Bounds).: _Let \(\mathcal{R}\) denote the \((d,s,\epsilon,t)\)-Collision mechanism (assumed \(t>s\)), and let \(\alpha=\frac{s(e^{\epsilon}-1)}{se^{\epsilon}+t-s}\), then for any neighboring datasets \(D,D^{\prime}\), we have:_
\[\mathcal{D}(\mathcal{S}\circ\mathcal{R}(D)\|\mathcal{S}\circ\mathcal{R}(D^{\prime} ))\leq\mathcal{D}(P_{\alpha}\|Q_{\alpha}). \tag{6}\]
Proof.: Without loss of generality, we consider two neighboring datasets, denoted as \(D\) and \(D^{\prime}\), that differ only in the first datum; that is, \(D=\{x_{1},x_{2},...,x_{n}\}\) and \(D^{\prime}=\{x^{\prime}_{1},x_{2},...,x_{n}\}\). Let \(\beta\) be defined as \(\sum_{H\in\mathcal{H}}\mathbb{P}[H]\cdot\frac{(e^{\epsilon}-1)(s-|H(\mathbf{Y}_{x_{1}})\cap H(\mathbf{Y}_{x^{\prime}_{1}})|)}{se^{\epsilon}+t-s}\). By invoking Lemma 2 and Lemma 1, we obtain the following result:
\[\mathcal{D}(\mathcal{S}\circ\mathcal{R}(D)\|\mathcal{S}\circ\mathcal{R}(D^{ \prime}))\leq\mathcal{D}(P_{\beta}\|Q_{\beta}).\]
In addition, for a fixed value of \(e^{\epsilon}\), the data processing inequality of the distance measure \(\mathcal{D}\) leads to the monotonically non-decreasing property of \(\mathcal{D}(P_{\beta}\|Q_{\beta})\) with respect to \(\beta\) [34, Lemma 5.1]. By taking into account the inequality \(\beta\leq\frac{s(e^{\epsilon}-1)}{se^{\epsilon}+t-s}=\alpha\), we are able to draw the final conclusion.
As the indistinguishable level between \(\mathcal{S}\circ\mathcal{R}(D)\) and \(\mathcal{S}\circ\mathcal{R}(D^{\prime})\) is upper bounded by the indistinguishable
Fig. 1: An illustration of the Collision mechanism without hash conflict (a) and with hash conflicts (b), where \(d=6\), \(s=2\), \(t=4\) and \(\epsilon=\log(2)\).
level between \(P_{\alpha}\) and \(Q_{\alpha}\) with \(\alpha=\frac{s(e^{\epsilon}-1)}{se^{\epsilon}+t-s}\), we now focus on deriving the indistinguishable level of the latter pair. It is common in practice that \(\delta\in(0,1]\) is fixed (e.g., \(\delta=O(1/n)\)), and one wants to know the minimum \(\epsilon_{c}\) such that \(P_{\alpha}\) and \(Q_{\alpha}\) are \((\epsilon_{c},\delta)\)-indistinguishable. Directly solving this optimization problem is intractable; however, when \(\epsilon_{c}\) is fixed, one can easily numerically compute \(\mathcal{D}_{e^{\epsilon_{c}}}(P_{\alpha}\|Q_{\alpha})\) and \(\mathcal{D}_{e^{\epsilon_{c}}}(Q_{\alpha}\|P_{\alpha})\) (see reference [36] for an \(\mathcal{O}(n)\) implementation). Finally, using the fact that the above hockey-stick divergence is monotonic w.r.t. \(\epsilon_{c}\in[0,\epsilon]\), one can solve the minimization problem with satisfactory precision via binary search (e.g., as in [33]).
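A hedged sketch of this binary-search outer loop is given below; the divergence oracle `hs_div(eps_c, alpha)` stands in for the numerical \(\mathcal{O}(n)\) procedure of [36] (it is an assumed callback, not an implementation provided here):

```
def min_eps_c(eps, alpha, delta, hs_div, iters=60):
    """Smallest eps_c such that both hockey-stick divergences are <= delta.
    hs_div(eps_c, alpha) must return the max of the two divergences and
    be monotonically non-increasing in eps_c, as argued above."""
    lo, hi = 0.0, eps
    for _ in range(iters):
        mid = (lo + hi) / 2
        if hs_div(mid, alpha) <= delta:
            hi = mid          # feasible: tighten from above
        else:
            lo = mid          # infeasible: raise the lower end
    return hi
```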
We compare our amplification upper bounds based on Theorem 5 with known bounds in the literature, including the closed-form amplification bound in [9] (denoted as _EFMRT19_), numerical bounds by privacy blanket [32] (with both general parameter \(1-e^{-\epsilon}\) and specific parameter \(\frac{t}{se^{\epsilon}+t-s}\) on total variation similarity), the numerical clone reduction [33], and the numerical stronger clone reduction [34]. Some representative results are presented in Figure 2, which implies our bounds are tighter and save about \(20\%\)-\(30\%\) privacy budget. This also indicates that existing bounds still overestimate privacy consumption, while our bounds match amplification lower bounds when local budget \(\epsilon>\log(1+1/s)\) (see the next subsection).
### _Amplification Lower Bounds_
In this section, we show that the amplification upper bounds in the former subsection are actually tight. Specifically, we provide worst-case scenarios where the quantity \(\mathcal{D}(\mathcal{S}\circ\mathcal{R}(D)\|\mathcal{S}\circ\mathcal{R}(D^{\prime}))\) is lower bounded by \(\mathcal{D}(P_{\alpha}\|Q_{\alpha})\). The underlying idea is to separately count the observed elements \(z\) in \([t]\) based on whether \(z\in H(\mathbf{Y}_{x_{1}})\) or \(z\in H(\mathbf{Y}_{x^{\prime}_{1}})\), and subsequently summarize them as Binomial counts.
**Theorem 6** (Amplification Lower Bounds).: _Let \(\mathcal{R}\) denote the \((d,s,\epsilon,t)\)-Collision mechanism (assuming \(t\geq 3s\)); then there exist \(\mathcal{H}\) and neighboring datasets \(D,D^{\prime}\) such that:_
\[\mathcal{D}(\mathcal{S}\circ\mathcal{R}(D)\|\mathcal{S}\circ\mathcal{R}(D^{\prime}))\geq\mathcal{D}(P_{\alpha}\|Q_{\alpha}).\]
Proof.: Consider hash functions \(\mathcal{H}\) and two neighboring datasets \(D=\{x_{1},x_{2}=x^{*},...,x_{n}=x^{*}\}\) and \(D^{\prime}=\{x_{1}^{\prime},x_{2}=x^{*},...,x_{n}=x^{*}\}\) such that for any \(H\in\mathcal{H}\), all three of the following equations hold (achievable when \(t\geq 3s\)):
\[H(\mathbf{Y}_{x_{1}})\cap H(\mathbf{Y}_{x_{1}^{\prime}}) =\emptyset,\] \[H(\mathbf{Y}_{x_{1}})\cap H(\mathbf{Y}_{x^{*}}) =\emptyset,\] \[H(\mathbf{Y}_{x_{1}^{\prime}})\cap H(\mathbf{Y}_{x^{*}}) =\emptyset.\]
Now consider the shuffled messages \(\mathcal{S}(\mathcal{R}(x_{1}),...,\mathcal{R}(x^{*}))\) and \(\mathcal{S}(\mathcal{R}(x_{1}^{\prime}),...,\mathcal{R}(x^{*}))\). We define a post-processing function \(g:\mathcal{H}\times\mathcal{Z}\mapsto\mathbb{N}^{2}\) on each message as follows (for any output \((H,z)\in\mathcal{H}\times\mathcal{Z}\)):
\[g(H,z):=\begin{cases}(1,0),&\text{if }z\in H(\mathbf{Y}_{x_{1}});\\ (0,1),&\text{if }z\in H(\mathbf{Y}_{x_{1}^{\prime}});\\ (0,0),&\text{else}.\end{cases}\]
Let us define a function \(g_{n}:(\mathcal{H}\times\mathcal{Z})^{n}\mapsto\mathbb{N}^{2}\), which maps a set of \(n\) shuffled messages \(S\) to the summation of \(g(s)\) over all \(s\in S\). It can be observed that \(g_{n}(\{\mathcal{R}(x_{1}),\mathcal{R}(x^{*}),...,\mathcal{R}(x^{*})\})\overset{d}{=}P_{\alpha}\) and \(g_{n}(\{\mathcal{R}(x_{1}^{\prime}),\mathcal{R}(x^{*}),...,\mathcal{R}(x^{*})\})\overset{d}{=}Q_{\alpha}\). Here, the notation \(\overset{d}{=}\) denotes that the two random variables have the same distribution. Finally, we use the data processing inequality of the hockey-stick divergence (or any other distance measure \(\mathcal{D}\) satisfying the data processing inequality) to arrive at the conclusion.
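To make the counting argument concrete, the following Monte Carlo sketch (our own illustration) samples \(g_{n}\) under the worst-case construction; since the three hash sets are pairwise disjoint (which requires \(t\geq 3s\)), the landing set of each message is a simple categorical draw:

```
import numpy as np
rng = np.random.default_rng(0)

def gn_samples(first_holds_x1, n, s, eps, t, trials=100_000):
    """Draws of g_n = (#hits in H(Y_{x1}), #hits in H(Y_{x1'}))."""
    omega = s * np.exp(eps) + t - s
    p_hi, p_lo = s * np.exp(eps) / omega, s / omega
    p1 = [p_hi, p_lo] if first_holds_x1 else [p_lo, p_hi]
    p1 = p1 + [1.0 - sum(p1)]                 # user 1's categorical law
    p_rest = [p_lo, p_lo, 1.0 - 2.0 * p_lo]   # users 2..n holding x*
    u1 = rng.multinomial(1, p1, size=trials)
    rest = rng.multinomial(n - 1, p_rest, size=trials)
    return u1[:, :2] + rest[:, :2]            # per-trial (hits1, hits2)
```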
We present the amplification lower bound in Theorem 6. Since the upper bound in Theorem 5 matches the lower bound, we conclude that the privacy amplification results in the former subsection are precisely tight (when \(t\geq 3s\)).
## 7 Optimized Mean Mechanism
Previous sections mainly consider frequency estimation over the event domain \(\mathcal{Y}=\{1_{-},1_{+},2_{-},2_{+},...,d_{-},d_{+}\}\), which acts as an intermediate result for both mean estimation and conditional mean estimation of numerical vectors. Specifically, the \(j\)-th mean value \(\overline{\mathbf{x}}_{j}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_{i,j}\) equals \(\frac{1}{n}\sum_{i=1}^{n}([j_{+}\in\mathbf{Y}_{\mathbf{x}_{i}}]-[j_{-}\in\mathbf{Y}_{\mathbf{x}_{i}}])\), and the \(j\)-th non-missing frequency \(\underline{\mathbf{x}}_{j}=\frac{1}{n}\#\{i\in[n]\mid\mathbf{x}_{i,j}\neq 0\}\) equals
Fig. 2: Comparison of amplification effects (base \(2\) logarithm of the amplification ratio \(\frac{\epsilon}{\epsilon_{c}}\), the higher the better, where \(\epsilon_{c}\) is the amplified privacy level in various amplification approaches) of the Collision mechanism with \(n=10^{4}\) or \(10^{5}\), sparsity parameter \(s=4\) or \(64\), and varying local budget \(\epsilon\in[0.1,5.0]\). The hyperparameter \(t\) is set to \(t^{*}=\lfloor se^{\epsilon}+2s-1\rfloor\) in the Collision mechanism.
\(\frac{1}{n}\sum_{i=1}^{n}([j_{+}\in\mathbf{Y}_{\mathbf{x}_{i}}]+[j_{-}\in\mathbf{Y}_{\mathbf{x}_{i}}])\). According to the variance bounds on the sum and difference of two random variables, we have:
\[Var\big[\widehat{[j_{+}\in\mathbf{Y}_{\mathbf{x}}]}\pm\widehat{[j_{-}\in\mathbf{Y}_{\mathbf{x}}]}\big]\leq 2\cdot Var\big[\widehat{[j_{+}\in\mathbf{Y}_{\mathbf{x}}]}\big]+2\cdot Var\big[\widehat{[j_{-}\in\mathbf{Y}_{\mathbf{x}}]}\big].\]
Consequently, both \(Var[\widehat{\overline{\mathbf{x}}}]\) and \(Var[\widehat{\underline{\mathbf{x}}}]\) are not greater than \(2\cdot\mathbb{E}\big[\sum_{j\in[d],\ b\in\{-1,1\}}|\widehat{[j_{b}\in\mathbf{Y}_{\mathbf{x}}]}-[j_{b}\in\mathbf{Y}_{\mathbf{x}}]|^{2}\big]=O(\frac{ds}{n\epsilon^{2}})\).
In many scenarios (e.g., federated gradient averaging), statisticians pay more attention to the mean value \(\overline{\mathbf{x}}\). In this section, we analyze the pitfalls of the Collision mechanism for mean estimation and propose the correlated Collision mechanism (termed CoCo), which exploits the negative correlation between \([j_{+}\in\mathbf{Y}_{\mathbf{x}}]\) and \([j_{-}\in\mathbf{Y}_{\mathbf{x}}]\) so as to reduce estimation error.
### _True/False/Opposite Collision Rate_
Recall that in the Collision mechanism, when \(j_{b}\in\mathbf{Y}_{\mathbf{x}}\) or \(j_{b}\notin\mathbf{Y}_{\mathbf{x}}\) holds, we have \(\mathbb{P}[H(j_{b})=z]=\frac{e^{\epsilon}}{\Omega}\) and \(\mathbb{P}[H(j_{b})=z]=\frac{1}{t}\), respectively. We denote such conditional collision probabilities over the outputting domain as the true/false/opposite collision rates (for \(j\in[d]\) and \(b\in\{+,-\}\)):
\[P_{t} :=\mathbb{P}[H(j_{b})=z\ \mid\ j_{b}\in\mathbf{Y_{x}}],\] \[P_{f} :=\mathbb{P}[H(j_{b})=z\ \mid\ j_{b}\notin\mathbf{Y_{x}}\ and\ j_{-b}\notin\mathbf{Y_{x}}],\] \[P_{o} :=\mathbb{P}[H(j_{b})=z\ \mid\ j_{-b}\in\mathbf{Y_{x}}].\]
The variance of the mean estimator can be expressed as \(Var\big[\widehat{[j_{+}\in\mathbf{Y}_{\mathbf{x}}]}-\widehat{[j_{-}\in\mathbf{Y}_{\mathbf{x}}]}\big]=\frac{Var\big[[H(j_{+})=z]-[H(j_{-})=z]\big]}{(P_{t}-P_{o})^{2}}\), which mainly depends on the discrepancy between the true and opposite collision rates. Meanwhile, in the Collision mechanism, we have \(P_{o}\equiv P_{f}\) and \(\frac{P_{t}}{P_{o}}<e^{\epsilon}\).
### _Mechanism Design_
To maximize the discrepancy between the true and opposite collision rates and thus reduce the variance of the mean estimator, the CoCo mechanism aims to achieve \(\frac{P_{t}}{P_{o}}\geq\frac{P_{t}}{P_{f}}\). To accomplish this goal, we enforce a stronger negative correlation between \([H(j_{+})=z]\) and \([H(j_{-})=z]\).
Assuming the size of the outputting domain \(t\) is even, we use two hash functions: \(H_{1}:[d]\mapsto[\frac{t}{2}]\) and \(H_{2}:\mathcal{Y}\mapsto\{-1,+1\}\). For any \(j_{b}\in\mathbf{Y_{x}}\), the overall hash function \(H:\mathcal{Y}\mapsto[t]\) on \(j_{b}\) is defined as:
\[H(j_{b}):=H_{1}(j)+\frac{b\cdot H_{2}(j_{+})+1}{2}\cdot\frac{t}{2}.\]
Then, we assign the entry \(H(j_{b})\) in the output domain a high relative probability \(e^{\epsilon}\) and the entry \(2\cdot H_{1}(j)+\frac{t}{2}-H(j_{b})\) a low relative probability \(1\). The overall procedure of CoCo for a single user is summarized in Algorithm 1. Here, the sub-procedure \(RandomPermute\) uniformly randomizes the order of elements in the given list or set, while the sub-procedure \(Sum\) calculates the summation of weights in the provided list.
```
0: A numerical data \(\mathbf{x}\in\{-1,0,1\}^{d}\) with \(s\) non-zero entries, privacy budget \(\epsilon\), outputting domain size \(t\in\mathbb{Z}^{+}\) such that \(t\geq 2s+2\) and \(t\ mod\ 2=0\), hash function families \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\).
0: A private view \(z\in[t]\) that satisfies \(\epsilon\)-LDP.
1:\(\triangleright\) Initialization
2: select hash function \(H_{1}:[d]\mapsto[\frac{t}{2}]\) from \(\mathcal{H}_{1}\) uniformly at random
3: select hash function \(H_{2}:\mathcal{Y}\mapsto\{-1,1\}\) from \(\mathcal{H}_{2}\) uniformly at random
4:\(W=\{0\}^{t}\)
5:\(\mathbf{Y}_{\mathbf{x}}^{\prime}=RandomPermute(\mathbf{Y}_{\mathbf{x}})\)
6:\(\triangleright\) Assign relative weights
7:for\(j_{b}\in\mathbf{Y}_{\mathbf{x}}^{\prime}\)do
8:\(H(j_{b})=H_{1}(j)+\frac{b\cdot H_{2}(j_{+})+1}{2}\cdot\frac{t}{2}\)
9:\(W_{H(j_{b})}=e^{\epsilon}\)
10:\(H^{\prime}(j_{b})=2\cdot H_{1}(j)+\frac{t}{2}-H(j_{b})\)
11:\(W_{H^{\prime}(j_{b})}=1\)
12:endfor
13:\(\Omega=(e^{\epsilon}+1)\cdot s+t-2\cdot s\)
14:\(w=\frac{\Omega-Sum(W)}{t-2\cdot Sum(W)/(e^{\epsilon}+1)}\)
15:for\(k\in[t/2]\)do
16:if\(W_{k}=0\) and \(W_{k+t/2}=0\)then
17:\(W_{k}=w\)
18:\(W_{k+t/2}=w\)
19:endif
20:endfor
21:\(\triangleright\) Sampling with relative weights
22: sample one element \(z\in[t]\) with probability \(\mathbb{P}[z=k]=\frac{W_{k}}{\Omega}\)
23:return\((H_{1},H_{2},z)\)
```
**Algorithm 1** CoCo Mechanism
When \(s>1\), the hashed values \(H_{1}(j)\) may conflict with each other for the non-zero entries \(\{j\mid\mathbf{x}_{j}\neq 0\}\). For every \(k,k+\frac{t}{2}\) bucket pair (\(k\in[\frac{t}{2}]\)), we simply overwrite the relative probabilities when there are conflicts (lines \(8\)-\(11\) in Algorithm 1). To ensure that the true/false/opposite collision rates are the same for every \(j\in[d]\), the order of non-zero entries in \(\mathbf{x}\) is first randomly permuted (line \(5\) in Algorithm 1). To ensure that the normalization factor \(\Omega=s\cdot(e^{\epsilon}+1)+(t-2\cdot s)\) is consistent for all possible inputs and hash functions, as in the Collision mechanism, the extra probability related to conflicted entries is uniformly redistributed to the remaining unassigned bucket pairs (lines \(14\)-\(20\)). The final output \(z\) is then sampled according to the relative probabilities of each outputting entry.
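For concreteness, here is a minimal Python sketch of the randomizer just described (a sketch of our own: hash functions are modeled as seeded pseudo-random maps, and names such as `coco_randomize` are illustrative, not from the paper):

```
import numpy as np

def coco_randomize(x, eps, t, seed):
    """Sketch of Algorithm 1; x in {-1,0,1}^d, t even and t >= 2s+2."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    d, s = len(x), int(np.count_nonzero(x))
    h1 = rng.integers(0, t // 2, size=d)        # H1: [d] -> [t/2]
    h2 = rng.choice([-1, 1], size=d)            # H2 evaluated at j_+
    W = np.zeros(t)
    for j in rng.permutation(np.flatnonzero(x)):    # random order (line 5)
        hj = h1[j] + (x[j] * h2[j] + 1) // 2 * (t // 2)
        W[hj] = np.exp(eps)                     # high weight (line 9)
        W[2 * h1[j] + t // 2 - hj] = 1.0        # paired low weight (line 11)
    omega = s * (np.exp(eps) + 1) + (t - 2 * s)
    free = (W[: t // 2] == 0) & (W[t // 2:] == 0)   # untouched bucket pairs
    w = (omega - W.sum()) / (2 * free.sum())    # leftover mass (line 14)
    W[: t // 2][free] = w
    W[t // 2:][free] = w
    z = rng.choice(t, p=W / W.sum())            # W sums to omega
    return z                                    # the hash seed travels with z
```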
```
0: A private view \((H_{1},H_{2},z)\) of unknown numerical data \(\mathbf{x}_{i}\).
0: Estimators of \([j_{+}\in\mathbf{Y}_{\mathbf{x}_{i}}]+[j_{-}\in\mathbf{Y}_{\mathbf{x}_{i}}]\) and \([j_{+}\in\mathbf{Y}_{\mathbf{x}_{i}}]-[j_{-}\in\mathbf{Y}_{\mathbf{x}_{i}}]\).
1:for\(j\in[d]\)do
2:for\(b\in\{-1,+1\}\)do
3:\(H(j_{b})=H_{1}(j)+\frac{b\cdot H_{2}(j_{+})+1}{2}\cdot\frac{t}{2}\)
4:endfor
5:\(\triangleright\) Estimator of \([j_{+}\in\mathbf{Y}_{\mathbf{x}}]+[j_{-}\in\mathbf{Y}_{\mathbf{x}}]\)
6:\(\widehat{\underline{\mathbf{x}}}_{i,j}=\frac{[H(j_{+})=z]+[H(j_{-})=z]-2\cdot P_{f}}{P_{t}+P_{o}-2\cdot P_{f}}\)
7:\(\triangleright\) Estimator of \([j_{+}\in\mathbf{Y}_{\mathbf{x}}]-[j_{-}\in\mathbf{Y}_{\mathbf{x}}]\)
8:\(\widehat{\overline{\mathbf{x}}}_{i,j}=\frac{[H(j_{+})=z]-[H(j_{-})=z]}{P_{t}-P_{o}}\)
9:endfor
10:return\(\{\widehat{\underline{\mathbf{x}}}_{i,j},\widehat{\overline{\mathbf{x}}}_{i,j}\}_{j\in[d]}\)
```
**Algorithm 2** CoCo Estimator
For better illustration, we depict an example of applying the \((d=10,s=3,\epsilon=\log 2,t=8)\)-CoCo mechanism on the numerical data \(\mathbf{x}=[0,0,1,0,-1,0,0,0,-1,0]\) in Figure 3. It shows a case where overwrite/conflict happens for the hash functions \((H_{1},H_{2})\).
We will now proceed to examine the behavior of CoCo in terms of true, false, and opposite collision rates. Let \(P_{ow}\) denote the probability that a non-zero entry \(j\) is overwritten on the outputting domain by other entries. The true collision rate is then:
\[P_{t}=P_{ow}\cdot\frac{e^{\epsilon}+1}{2\cdot\Omega}+(1-P_{ow})\cdot\frac{e^{ \epsilon}}{\Omega},\]
the false collision rate is:
\[P_{f}=\frac{1}{t},\]
and the opposite collision rate is:
\[P_{o}=P_{ow}\cdot\frac{e^{\epsilon}+1}{2\cdot\Omega}+(1-P_{ow})\cdot\frac{1}{ \Omega}.\]
The formula for \(P_{ow}\) is closed-form. Consider separately the permuted order of a non-zero entry \(j\): since there are exactly \(s\) entries in \(\mathbf{Y}_{\mathbf{x}}\), the probability that the entry \(j\) ranks \(k\)-th among the \(s\) entries is \(\frac{1}{s}\) (for \(k\in[s]\)). When \(j\) is the \(k\)-th entry, there remain \(s-k\) entries that have not been hashed, thus the conflict/overwrite probability is \(1-(\frac{t-2}{t})^{s-k}\). Therefore, we have:
\[P_{ow}=1-\frac{1}{s}\sum_{k=1}^{s}(\frac{t-2}{t})^{s-k}=1-\frac{t^{s}-(t-2)^{s }}{2t^{s-1}\cdot s}. \tag{7}\]
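A quick consistency check of Eq. (7) against simulation (our own sketch; it assumes the hash values of distinct entries are independent and uniform over the \(t/2\) bucket pairs):

```
import numpy as np

def p_ow_formula(s, t):
    return 1 - (t**s - (t - 2)**s) / (2 * t**(s - 1) * s)

def p_ow_mc(s, t, trials=50_000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        pairs = rng.integers(0, t // 2, size=s)  # H1 of the s entries
        pos = rng.integers(0, s)                 # uniform permuted rank
        # the tracked entry is overwritten iff a later entry shares its pair
        hits += bool(np.any(pairs[pos + 1:] == pairs[pos]))
    return hits / trials

print(p_ow_formula(3, 8), p_ow_mc(3, 8))   # the two should roughly agree
```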
When \(t\geq 2s+2\) and \(\epsilon>0\), \(P_{o}\) is always less than \(P_{f}\), which provides the opportunity for more accurate mean estimation. By comparison, the original Collision mechanism has \(P_{f}\equiv P_{o}\).
**Mean Estimator.** We now proceed to derive an unbiased estimator of the \(j\)-th mean value \(\frac{1}{n}\sum_{i\in[n]}\mathbf{x}_{i,j}\), which equals \(\frac{1}{n}\sum_{i\in[n]}([j_{+}\in\mathbf{Y}_{\mathbf{x}_{i}}]-[j_{-}\in\mathbf{Y}_{\mathbf{x}_{i}}])\). Observe that when some non-zero entry \(j^{\prime}_{b}\) (\(j^{\prime}\neq j\)) overwrites the bucket pair \((H_{1}(j),H_{1}(j)+\frac{t}{2})\), since the hash function \(H_{2}\) is uniformly pseudo-random, we have \(\mathbb{E}\big[[H(j_{+})=z]-[H(j_{-})=z]\big]=0\). Otherwise, when no overwrite happens to \((H_{1}(j),H_{1}(j)+\frac{t}{2})\), we have \(\mathbb{E}\big[[H(j_{+})=z]-[H(j_{-})=z]\big]=\frac{([j_{+}\in\mathbf{Y}_{\mathbf{x}}]-[j_{-}\in\mathbf{Y}_{\mathbf{x}}])(e^{\epsilon}-1)}{\Omega}\). Combining the two results, we have \(\mathbb{E}\big[[H(j_{+})=z]-[H(j_{-})=z]\big]=P_{ow}\cdot 0+(1-P_{ow})\cdot\frac{([j_{+}\in\mathbf{Y}_{\mathbf{x}}]-[j_{-}\in\mathbf{Y}_{\mathbf{x}}])(e^{\epsilon}-1)}{\Omega}=([j_{+}\in\mathbf{Y}_{\mathbf{x}}]-[j_{-}\in\mathbf{Y}_{\mathbf{x}}])\cdot(P_{t}-P_{o})\). Therefore, we arrive at an unbiased estimator of the \(j\)-th mean value (where \(H^{i}\) is the hash function used by user \(i\)):
\[\frac{1}{n}\sum_{i\in[n]}\frac{\llbracket H^{i}(j_{+})=z^{i}\rrbracket- \llbracket H^{i}(j_{-})=z^{i}\rrbracket}{P_{t}-P_{o}}.\]
**Non-missing Frequency Estimator.** In key-value data aggregation, statisticians are also interested in the non-missing frequency of each key: \(\underline{\mathbf{x}}_{j}=\frac{1}{n}\#\{i\in[n]\mid\mathbf{x}_{i,j}\neq 0\}\). When \(j\) is a non-missing entry in \(\mathbf{x}\), since \(H(j_{+})\neq H(j_{-})\), it is obvious that \(\mathbb{P}[z=H(j_{+})\ or\ z=H(j_{-})]=P_{t}+P_{o}=\frac{e^{\epsilon}+1}{\Omega}\); when \(j\) is a missing entry in \(\mathbf{x}\), we have \(\mathbb{P}[z=H(j_{+})\ or\ z=H(j_{-})]=2\cdot P_{f}\). Consequently, according to the transition matrix of the CoCo mechanism, we get:
\[\mathbb{E}\Big[\frac{[z=H(j_{+})]+[z=H(j_{-})]-2\cdot P_{f}}{\frac{e^{\epsilon}+1}{\Omega}-2\cdot P_{f}}\Big]=[\mathbf{x}_{i,j}\neq 0].\]
An unbiased estimator of the \(j\)-th non-missing frequency \(\underline{\mathbf{x}}_{j}\) is thus:
\[\frac{1}{n}\sum_{i\in[n]}\frac{\llbracket H^{i}(j_{+})=z^{i}\rrbracket+ \llbracket H^{i}(j_{-})=z^{i}\rrbracket-2\cdot P_{f}}{P_{t}+P_{o}-2\cdot P_{f}}.\]
We summarize these estimators in Algorithm 2, which relies on the transition probability matrix in Table II describing the probabilities of various output events conditioned on the input.
We now analyze the complexities of the proposed CoCo mechanism. On the user side, the computational cost is \(O(s)\), and the communication cost is \(O(\log t)=O(\epsilon+\log s)\). On the server side, the naive approach in Algorithm 2 that derives estimators for each \((H_{1},H_{2},z)\) has a computational cost of \(O(n\cdot d)\) and a memory cost of \(O(\log t)\). Alternatively, one can first record the frequencies of every \((H_{1},H_{2},z)\in\mathcal{H}_{1}\times\mathcal{H}_{2}\times[t]\), and then summarize \([H(j_{b})=z]\) with the frequency weights. Assuming the domain size of \(\mathcal{H}_{1}\times\mathcal{H}_{2}\) is constant, this approach has a computational cost of \(n+t\cdot d=O(n+dse^{\epsilon})\) and a memory cost of \(O(se^{\epsilon})\).
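A sketch of this frequency-recording aggregation for the mean estimator (our own illustration: `aggregate_means`, `h1_of`, and `h2_of` are hypothetical names, with the hash family assumed to be indexed by seeds over a constant-size domain):

```
import numpy as np
from collections import Counter

def aggregate_means(reports, d, t, p_t, p_o, h1_of, h2_of):
    """reports: iterable of (seed, z); h1_of/h2_of: seed -> length-d tables.
    Runs in O(n + #distinct_reports * d) <= O(n + t * d) time."""
    counts = Counter(reports)                  # O(n) frequency recording
    est = np.zeros(d)
    for (seed, z), c in counts.items():
        h1, h2 = h1_of(seed), h2_of(seed)
        for j in range(d):
            hp = h1[j] + (h2[j] + 1) // 2 * (t // 2)   # H(j_+)
            hm = 2 * h1[j] + t // 2 - hp               # H(j_-)
            est[j] += c * (int(z == hp) - int(z == hm))
    n = sum(counts.values())
    return est / (n * (p_t - p_o))             # unbiased mean estimates
```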
### _Theoretical Analyses_
In this part, we provide privacy and accuracy guarantees of the CoCo mechanism. The \(\epsilon\)-LDP guarantee of the mechanism is given in Proposition 2.
**Proposition 2**.: _The \((d,s,\epsilon,t)\)-CoCo mechanism in Algorithm 1 satisfies \(\epsilon\)-LDP for numerical vector data._
Proof.: First, the normalization factor \(\Omega\) in the CoCo mechanism is the same for any input \(\mathbf{x}\) and any hash functions \(H_{1}\in\mathcal{H}_{1},H_{2}\in\mathcal{H}_{2}\). Second, since the hash functions are selected identically (i.e., they follow the same distribution),
\begin{table}
\begin{tabular}{|c||c|c|} \hline & \(\llbracket H(j_{b})=z\rrbracket\) & \(\llbracket H(j_{-b})=z\rrbracket\) \\ \hline \hline \(j_{b}\in\mathbf{Y_{x}}\) & \(P_{t}\) & \(P_{o}\) \\ \hline \(j_{-b}\in\mathbf{Y_{x}}\) & \(P_{o}\) & \(P_{t}\) \\ \hline \(j_{b}\notin\mathbf{Y_{x}}\ and\ j_{-b}\notin\mathbf{Y_{x}}\) & \(P_{f}\) & \(P_{f}\) \\ \hline \end{tabular}
\end{table} TABLE II: Conditional probabilities relating the input and output events for \(j\in[d]\) and \(b\in\{-1,+1\}\). The probability takes into account the randomness of selecting the hash function, the uniform pseudo-randomness of the hash functions, and the randomness of sampling \(z\).
Fig. 3: An illustration of the CoCo mechanism with hash conflicts/overwrite, where \(d=10\), \(s=3\), \(t=8\) and \(\epsilon=\log(2)\).
we only need to consider the private view \((H_{1},H_{2},z)\) given fixed \(H_{1},H_{2}\). Third, given \(H_{1}\) and \(H_{2}\), the relative probabilities of every outputting entry range from \(1.0\) to \(e^{\epsilon}\); since \(t\geq 2s+2\) implies the per-entry probability \(w/\Omega\) produced at line \(14\) is lower than \((e^{\epsilon}+1)/(2\Omega)\) but never lower than \(1/\Omega\), then for any \(z=a\in[t]\) and any inputs \(\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X}^{s}\), we have \(\frac{\mathbb{P}[z=a\mid\mathbf{x},H_{1},H_{2}]}{\mathbb{P}[z=a\mid\mathbf{x}^{\prime},H_{1},H_{2}]}\leq\frac{e^{\epsilon}/\Omega}{1.0/\Omega}=e^{\epsilon}\).
#### 7.3.1 Mean Squared Error
With the outputting domain size parameter \(t\) fixed in the CoCo mechanism, the estimation errors of its various estimators (see Algorithm 2) are presented in Lemma 3.
**Lemma 3**.: _For the \((d,s,\epsilon,t)\)-CoCo mechanism, the mean squared errors of estimators are:_
\[\sum_{j=1}^{d}|\widehat{\underline{\mathbf{x}}}_{j}-\underline{\mathbf{x}}_{j}|^{2}=\frac{s(P_{t}+P_{o})(1-P_{t}-P_{o})+(d-s)2P_{f}(1-2P_{f})}{(P_{t}+P_{o}-2P_{f})^{2}}, \tag{8}\]
\[\sum_{j=1}^{d}|\widehat{\overline{\mathbf{x}}}_{j}-\overline{\mathbf{x}}_{j}|^{2}=\frac{s((P_{t}+P_{o})-(P_{t}-P_{o})^{2})+(d-s)(2P_{f})}{(P_{t}-P_{o})^{2}}. \tag{9}\]
Proof.: See Appendix C
Based on the error formulation, we further choose the parameter \(t\) in Theorem 7 (see Appendix D for the proof). Consequently, the mean squared errors are approximately minimized and reach the optimal \(O(\frac{ds}{\epsilon^{2}})\) bound.
**Theorem 7** (Mean Squared Error Bounds).: _When \(\epsilon=O(1)\) and taking \(\mathbf{x}\) as input, the \((d,s,\epsilon,t)\)-CoCo mechanism with \(t=\lceil e^{\epsilon}s+5s\rceil\) satisfies_
\[\sum\nolimits_{j=1}^{d}|\widehat{\mathbf{x}}_{j}-\underline{\mathbf{x}}_{j}| ^{2}\leq O\big{(}\frac{ds}{\epsilon^{2}}\big{)}; \tag{10}\]
_and the \((d,s,\epsilon,t)\)-CoCo mechanism with \(t=\lceil e^{\epsilon}s+s+2\rceil\) satisfies_
\[\sum\nolimits_{j=1}^{d}|\widehat{\mathbf{x}}_{j}-\overline{\mathbf{x}}_{j}|^{2 }\leq O\big{(}\frac{ds}{\epsilon^{2}}\big{)}. \tag{11}\]
To illustrate the impact of the parameter \(t\), we plot the variation of \(P_{t}/P_{o}/P_{f}\) and the mean squared errors in Figures 4 and 5, in comparison to the previously proposed Collision mechanism. According to the variance bounds on the sum and difference of two variables (see the beginning of Section 7), the Collision mechanism also satisfies the same error bound. By designing the CoCo mechanism to have \(P_{o}<P_{f}\), the constant factor in its error is reduced.
#### 7.3.2 Maximum Absolute Error
In this section, we derive the expected maximum absolute error of the proposed CoCo mechanism for mean estimation and demonstrate that it is rate-optimal. Based on the (discrete) probability distributions of the observed variable \([\![H(j_{+})=z]\!]-[\![H(j_{-})=z]\!]\), we present the maximum absolute error bounds of the CoCo mechanism in Theorem 8 (see Appendix E for proof). This implies that the error is bounded by \(O(\frac{1}{\epsilon}\sqrt{\frac{s\log d}{n}})\).
**Theorem 8** (Maximum Absolute Error of Mean Estimation in CoCo).: _With privacy budget \(\epsilon=O(1)\), for mean value estimation on \(n\) users, the error due to Algorithm 1 and 2 is bounded by_
\[\max\nolimits_{j=1}^{d}|\widehat{\mathbf{x}}_{j}-\overline{\mathbf{x}}_{j}| \leq O\big{(}\sqrt{\frac{s\log(d/\beta)}{\epsilon^{2}n}}\big{)}\]
_with probability \(1-\beta\) over the randomness of the user-specific hash functions and the randomization in Algorithm 1._
Recently, for mean estimation of \(s\)-sparse numerical vectors, [30] analyzed lower bounds on the maximum absolute error \(\max_{j=1}^{d}|\widehat{\mathbf{x}}_{j}-\overline{\mathbf{x}}_{j}|\) under \(\epsilon\)-LDP. We restate the minimax lower bound \(O(\frac{1}{\epsilon}\sqrt{\frac{s\log d/s}{n}})\) in Theorem 9, which follows the definitions in Section 4. Combining with the upper error bounds in Theorem 8, we can conclude that the CoCo mechanism is minimax optimal (when \(s\leq\sqrt{d}\)) under the measurement of maximum absolute error.
**Theorem 9** (Lower Bounds of Mean Estimation [30]).: _For the numerical vector mean estimation problem, for any \(\epsilon\)-LDP mechanism, there exists a universal constant \(c>0\) such that for all \(\epsilon\in(0,1]\),_
\[\mathfrak{M}_{n}(\underline{\theta}(\mathcal{P}),\|\cdot\|_{\infty},\epsilon) \geq\min\Big{\{}c\cdot\frac{1}{\epsilon}\sqrt{\frac{s\log d/s}{n}},1\Big{\}}.\]
### _Privacy Amplification in the Shuffle Model_
In this section, we consider privacy amplification of the CoCo mechanism in the shuffle model. Since CoCo has a probability design similar to that of Collision, let \(\mathcal{R}\) denote the \((d,s,\epsilon,t)\)-CoCo mechanism (assuming \(t>s\)), and let \(\alpha=\frac{s(e^{\epsilon}-1)}{se^{\epsilon}+t-s}\); then for any neighboring datasets \(D,D^{\prime}\), we also have:
\[\mathcal{D}(\mathcal{S}\circ\mathcal{R}(D)\|\mathcal{S}\circ\mathcal{R}(D^{ \prime}))\leq\mathcal{D}(P_{\alpha}\|Q_{\alpha}).\]
Fig. 4: The \(P_{t}\), \(P_{o}\), and \(P_{f}\) as they vary with the outputting domain size \(t\) when \(d=128\) and \(s=8\). Compared to the original Collision mechanism, the opposite collision rate \(P_{o}\) in CoCo is significantly lower and is smaller than \(P_{f}\).
Fig. 5: The estimation errors as they vary with the outputting domain size \(t\) with \(d=128\), \(s=8\), \(n=1\), and \(\epsilon=0.5\). The _Minus Error_ denotes \(\sum_{j=1}^{d}|\widehat{\overline{\mathbf{x}}}_{j}-\overline{\mathbf{x}}_{j}|^{2}\); the _Plus Error_ denotes \(\sum_{j=1}^{d}|\widehat{\underline{\mathbf{x}}}_{j}-\underline{\mathbf{x}}_{j}|^{2}\); the _Item Error_ denotes \(\sum_{j\in[d],\ b\in\{-1,1\}}|\widehat{[j_{b}\in\mathbf{Y}_{\mathbf{x}}]}-[j_{b}\in\mathbf{Y}_{\mathbf{x}}]|^{2}\). All results are the average values of \(10,000\) independent experiments. CoCo has about \(20\%\) lower MSE on the mean estimator.
The equality holds when there are no hash collisions for all user data in \(D\) and \(D^{\prime}\) (requires \(t\geq 4s\)).
## 8 Experiments
In this section, we mainly evaluate the statistical efficiency of the proposed Collision/CoCo mechanism for \(\epsilon\)-LDP numerical vector aggregation. Competing mechanisms include the PCKV mechanism with unary encoding as the base randomizer [17] (denoted as PCKV-UE), the PrivKV mechanism [15], the PCKV mechanism with generalized randomized response as the base randomizer (denoted as PCKV-GRR), its privacy amplified version (denoted as PCKV-AGRR), and the succinct mean estimation protocol [30] (denoted as SUCCINCT). Since the performances of all these mechanisms are data-independent, it is sufficient to utilize synthetic datasets for fair evaluation. The parameters of synthetic datasets are listed as follows (default values are in **bold** form), covering most cases encountered in real-world applications:
1. Number of users \(n\): 10,000 and **100,000**.
2. Dimension \(d\): 256 and **512**.
3. Sparsity parameter \(s\): 4, 8, 16, and 32.
4. Privacy budget \(\epsilon\): 0.001, 0.01, 0.1, 0.2, 0.4, 0.8, 1.0, 1.5, and 2.0.
Since competing mechanisms are data-independent (i.e., estimation errors are irrelevant of true values), during each simulation, the numerical vector of each user is independently and randomly generated, the non-zero entries are uniformly and randomly selected from \(d\) dimensions, and each dimension has an equal probability of being \(-1\) or \(1\).
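For illustration, one such synthetic dataset can be generated as follows (our own snippet; the names are not from the paper):

```
import numpy as np

def synth_dataset(n, d, s, seed=0):
    """n users, each an s-sparse vector in {-1,0,1}^d, as described above."""
    rng = np.random.default_rng(seed)
    X = np.zeros((n, d), dtype=np.int8)
    for i in range(n):
        idx = rng.choice(d, size=s, replace=False)   # uniform support
        X[i, idx] = rng.choice([-1, 1], size=s)      # equiprobable signs
    return X

X = synth_dataset(n=100_000, d=512, s=8)
```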
### _Evaluation Metric_
As frequency estimators are basic statistics for both the non-missing frequency estimation and mean estimation, we evaluate mechanisms with metrics TVE and MAE on \([j_{b}\in\mathbf{Y_{X}}]\). The total variation error (TVE) of frequency estimation is defined as:
\[\text{TVE}=\sum_{j\in[d],\ b\in\{-1,1\}}|\widehat{[j_{b}\in\mathbf{Y}_{\mathbf{X}}]}-[j_{b}\in\mathbf{Y}_{\mathbf{X}}]|,\]
and the maximum absolute error (MAE) is defined as:
\[\text{MAE}=\max_{j\in[d],\ b\in\{-1,1\}}|\widehat{[j_{b}\in\mathbf{Y}_{\mathbf{X}}]}-[j_{b}\in\mathbf{Y}_{\mathbf{X}}]|.\]
For mean estimation, we use the TVE and MAE metrics in a similar way.
Since the \(\frac{1}{s}\)-scaled frequencies lie in the \(2d\)-dimensional probability simplex, the estimated frequencies are post-processed by projecting them onto the \(\Delta_{2d}\)-simplex [46]. All experimental results are the mean natural logarithm values of 100 repeated simulations.
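A minimal NumPy sketch of these metrics and of a standard Euclidean simplex projection is given below (our own illustration; the exact projection routine of [46] may differ). For frequency vectors that sum to \(s\), one would call `project_simplex(freq_hat, radius=s)`, matching the \(\frac{1}{s}\)-scaling remark above:

```
import numpy as np

def tve(freq_hat, freq_true):
    """Total variation error over the 2d event frequencies."""
    return np.abs(freq_hat - freq_true).sum()

def mae(freq_hat, freq_true):
    """Maximum absolute error over the 2d event frequencies."""
    return np.abs(freq_hat - freq_true).max()

def project_simplex(v, radius=1.0):
    """Euclidean projection onto the scaled probability simplex
    (the standard O(k log k) sort-based routine)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - radius
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0)
```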
### _Frequency Estimation_
In this section, we measure the performance of frequency estimation under various settings, such as varying sparsity, dimension, and number of users.
#### 8.2.1 Effects of sparsity \(s\)
Assume that there are \(n=100,000\) users and the dimension is \(d=256\). When the number of non-zero entries in the numerical vectors varies from \(4\) to \(32\), the TVE and MAE error results are presented in Figure 6 and Figure 7, respectively. The PCKV-UE mechanism improves upon PrivKV in extremely sparse cases, but in other cases (e.g., \(s=32\)), the PCKV-UE and PrivKV mechanisms have similar performance. The Collision mechanism significantly outperforms all competing mechanisms in almost all cases, and on average reduces errors by more than \(60\%\). When the sparsity parameter and privacy budget are large (e.g., \(s\geq 16\) and \(\epsilon=2\)), the performance gap between PCKV-AGRR and Collision decreases.
#### 8.2.2 Effects of dimension \(d\)
Assume that there are \(n=100,000\) users, but the dimension now increases to \(d=512\). When the number of non-zero entries in the numerical vectors still varies from \(4\) to \(32\), the TVE and MAE results are shown in Figure 8 and Figure 9, respectively. Compared to the cases with \(d=256\) (i.e., the TVE results in Figure 6 and the MAE results in Figure 7), it is evident that the TVE/MAE value grows approximately with \(\sqrt{d}\).
Fig. 6: Frequency estimation TVE results on \(n=100,000\) users with dimension \(d=256\) when sparsity \(s\) ranges from \(4\) to \(32\).
Fig. 7: Frequency estimation MAE results on \(n=100,000\) users with dimension \(d=256\) when sparsity \(s\) ranges from \(4\) to \(32\).
#### 8.2.4 Effects of Number of Users \(n\)
Assume that there are only \(n=10,000\) users and the dimension is \(d=256\). When the number of non-zero entries in the numerical vectors varies from \(4\) to \(32\), the TVE and MAE results are listed in Figure 10 and Figure 11, respectively. Compared to the case with \(n=100,000\) (i.e., Figure 6 and Figure 7), the TVE/MAE value is about \(\sqrt{100000/10000}\) times larger (i.e., it decreases with approximately \(\sqrt{n}\)).
#### 8.2.5 After shuffling
In the shuffle model, given a global privacy goal \((\epsilon_{c},\delta)\), the local privacy budget approximately scales with \(\tilde{O}(\epsilon_{c}\sqrt{n/\log(1/\delta)})\). It is observed that the Collision mechanism outperforms existing approaches across all privacy regions. By combining the theoretical results that provide precisely tight privacy accounting for the Collision (see Theorem 5 and Figure 2), its performance in the shuffle model is assured.
of \(d=256\) (i.e., TVE results in Figure 14), it is easy to observe that the TVE grows roughly with \(\sqrt{d}\).
#### 8.4.3 Effects of number of users \(n\)
Simulated with \(n=10,000\) users and dimension \(d=256\), the TVE results are listed in Figure 16. Compared to the case of \(n=100,000\) (i.e., Figure 14), the TVE is about \(\sqrt{100000/10000}\) times larger.
### _Experimental summary_
Through experimental evaluation, we conclude that the Collision mechanism outperforms existing approaches in all cases for frequency estimation (especially when \(1\ll s\ll d\)), and the CoCo mechanism further improves accuracy by about \(15\%\) for mean estimation. Their performance gaps confirm our theoretical analyses of the error bounds.
## 9 Conclusion
Within the local and shuffle models of differential privacy, this work has presented several _simple yet optimal_ results for the problem of numerical vector statistical estimation, which has applications in federated learning and key-value data aggregation. We provided tight minimax error bounds for locally private estimation on numerical vectors. Our proof relies on a novel decomposition technique for data domains with sparse structure and an application of the locally private version of Assouad's method. Given that existing approaches suffer gaps from the minimax error bound, we further designed an optimal mechanism based on frequency estimation, and then gave an efficient implementation with \(O(s)\) computation and \(O(\epsilon+\log s)\) communication complexity. Specifically for mean estimation, we proposed the CoCo mechanism, which utilizes the negative correlation in frequencies to further reduce estimation error. To break the error bound of LDP, we considered numerical vector estimation in the shuffle model, and derived tight privacy amplification bounds for the proposed mechanisms. Experimental results show a \(30\%\)-\(60\%\) error reduction of our proposed optimal mechanisms when compared with current approaches.
**Future research.** While this work studied numerical vector analyses in the single-message shuffle model, it is promising to further improve utility with multi-message protocols, at the cost of more communication overhead (e.g., tens of messages) per user.
## Acknowledgements
This work is extended from [47] in the 30th International Joint Conference on Artificial Intelligence (IJCAI 2021). Shaowei Wang is supported by National Key Research and Development (R&D) Program (Young Scientist Scheme No. 2022YFB3102400), National Natural Science Foundation of China (No.62102108), Natural Science Foundation
Fig. 14: Mean estimation MAE results with post-processing on \(n=100,000\) users with dimension \(d=256\) when sparsity \(s\) ranges from 4 to 32.
Fig. 12: Mean estimation MAE results without post-processing on \(n=100,000\) users with dimension \(d=256\) when sparsity \(s\) ranges from \(4\) to \(32\).
Fig. 13: Mean estimation MAE results with post-processing on \(n=100,000\) users with dimension \(d=256\) when sparsity \(s\) ranges from \(4\) to \(32\).
Fig. 15: Mean estimation TVE results with post-processing on \(n=100,000\) users with dimension \(d=512\) when sparsity \(s\) ranges from 4 to 32.
of Guangdong Province of China (No.2022A1515010061), Guangzhou Basic and Applied Basic Research Foundation (No.202201010194, No.622191-098). This work is also supported by National Key Project of China (No. 2020YFB1005700), National Natural Science Foundation of China for Joint Fund Project (No. U1936218), and the Pazhou lab, Guangzhou, China.
|
2305.04739 | Casimir-Onsager matrix for weakly driven processes | Modeling of physical systems must be based on their suitability to
unavoidable physical laws. In this work, in the context of classical,
isothermal, finite-time, and weak drivings, I demonstrate that physical
systems, driven simultaneously at the same rate in two or more external
parameters, must have the Fourier transform of their relaxation functions
composing a positive-definite matrix to satisfy the Second Law of
Thermodynamics. By evaluating them in the limit of near-to-equilibrium
processes, I identify that such coefficients are the Casimir-Onsager ones. The
result is verified in paradigmatic models of the overdamped and underdamped
white noise Brownian motions. Finally, an extension to thermally isolated
systems is made by using the time-averaged Casimir-Onsager matrix, in which the
example of the harmonic oscillator is presented. | Pierre Nazé | 2023-05-08T14:41:14Z | http://arxiv.org/abs/2305.04739v3 | # Onsager matrix for finite-time and weak processes
###### Abstract
Modeling of physical systems must be based on their suitability to unavoidable physical laws. In this work, in the context of classical, isothermal, finite-time, and weak drivings, I demonstrate that physical systems, driven simultaneously at the same rate in two or more external parameters, must have the Fourier transform of their relaxation functions composing a positive-definite matrix in order to satisfy the Second Law of Thermodynamics. By evaluating them in the limit of near-to-equilibrium processes, I identify that such coefficients are nothing more than the Onsager ones. The result is verified in paradigmatic models, where the extended Onsager matrices of the overdamped and underdamped white noise Brownian motions, driven simultaneously at the same rate in the stiffening and moving laser traps, are positive-definite.
## I Introduction
Modeling physical systems could be a nightmare if we are not prepared to test them in the appropriate way. In this sense, unavoidable physical laws that the system should respect are on our side to help us in this unfortunate task. In Ref. [1], my co-author and I provided criteria for linear-response theory to be compatible with the Second Law of Thermodynamics for finite-time and weak processes. However, this was done for driven systems perturbed in a single parameter. In this work, I provide an extension of these criteria, considering in this case drivings at the same rate of two or more external parameters in the same context.
This extension is based again on the application of Bochner's theorem to a set of external parameters [2]. In this case, the Fourier transforms of the relaxation functions of the system must compose a positive-definite matrix. Given the similarity of this property with that of the Onsager matrix [3; 4], I investigated the behavior of this new matrix in the near-to-equilibrium regime. I verified that this new matrix reduces to the Onsager one in this case. Therefore, an extension of such a concept to finite-time and weak processes can be considered.
I remark that this property of positive-definiteness of the matrix composed of the Fourier transforms of the relaxation functions is implicit in considerations of energy dissipation in basic textbooks [5]. However, a direct connection between this property and the validity of the Second Law of Thermodynamics for arbitrary finite-time processes is not. Also, extensions of the Onsager reciprocal relations have been made in the literature [6; 7; 8; 9], but, as far as I know, only for specific types of systems and not in the regime proposed in this work.
Finally, to see the consistency of our result, I analyze two paradigmatic examples that are theoretically well-studied and experimentally verified: the overdamped and underdamped white noise Brownian motions. I then calculate the Onsager matrices related to the drivings at the same rates of the stiffening and moving laser traps and verify that they are indeed positive-definite.
## II Linear-response theory
Consider a classical system with a Hamiltonian \(\mathcal{H}(\mathbf{z}(\mathbf{z_{0}},t),\lambda(t))\), where \(\mathbf{z}(\mathbf{z_{0}},t)\) is a point in the phase space \(\Gamma\) evolved from the initial point \(\mathbf{z_{0}}\) until time \(t\), with \(\lambda(t)\) being a time-dependent external parameter. During a switching time \(\tau\), the external parameter is changed from \(\lambda_{0}\) to \(\lambda_{0}+\delta\lambda\), with the system being in contact with a heat bath at inverse temperature \(\beta\equiv\left(k_{B}T\right)^{-1}\), where \(k_{B}\) is Boltzmann's constant. The average work performed on the system during this interval of time is
\[\overline{W}\equiv\int_{0}^{\tau}\left\langle\overline{\partial_{\lambda} \mathcal{H}}(t)\right\rangle_{0}\dot{\lambda}(t)dt, \tag{1}\]
where \(\partial_{\lambda}\) is the partial derivative with respect to \(\lambda\) and the superscripted dot the total time derivative. The generalized force \(\left\langle\overline{\partial_{\lambda}\mathcal{H}}\right\rangle_{0}\) is calculated using the averaging \(\overline{\,\cdot\,}\) over the stochastic path and the averaging \(\langle\cdot\rangle_{0}\) over the initial canonical ensemble. The external parameter can be expressed as
\[\lambda(t)=\lambda_{0}+g(t)\delta\lambda, \tag{2}\]
where, to satisfy the initial conditions of the external parameter, the protocol \(g(t)\) must satisfy the following boundary conditions
\[g(0)=0,\quad g(\tau)=1. \tag{3}\]
We consider as well that \(g(t)\equiv g(t/\tau)\), which means that the intervals of time are measured according to the switching time unit.
Linear-response theory aims to express average quantities until the first-order of some perturbation parameter considering how this perturbation affects the observable to be averaged and the process of average [5]. In our
case, we consider that the parameter does not considerably changes during the process, \(|g(t)\delta\lambda/\lambda_{0}|\ll 1\), for all \(t\in[0,\tau]\). In that manner, using such framework, the generalized force can be approximated until first-order as
\[\begin{split}\left\langle\overline{\partial_{\lambda}\mathcal{H}}( t)\right\rangle_{0}&=\left\langle\partial_{\lambda}\mathcal{H} \right\rangle_{0}+\delta\lambda\left\langle\partial_{\lambda\lambda}^{2} \mathcal{H}\right\rangle_{0}g(t)\\ &\quad-\delta\lambda\int_{0}^{t}\phi_{0}(t-t^{\prime})g(t^{ \prime})dt^{\prime}.\end{split} \tag{4}\]
The quantity \(\phi_{0}(t)\) is the so-called response function [5], which can be conveniently expressed as the derivative of the relaxation function \(\Psi_{0}(t)\)[5]
\[\phi_{0}(t)=-\frac{d\Psi_{0}}{dt}. \tag{5}\]
In our particular case, the relaxation function is calculated as
\[\Psi_{0}(t)=\beta\left\langle\partial_{\lambda}\mathcal{H}(0)\overline{ \partial_{\lambda}\mathcal{H}}(t)\right\rangle_{0}-\mathcal{C}, \tag{6}\]
where the constant \(\mathcal{C}\) is calculated to vanish the relaxation function for long times [5]. The generalized force, written in terms of the relaxation function, can be expressed as
\[\begin{split}\left\langle\overline{\partial_{\lambda}\mathcal{H} }(t)\right\rangle_{0}&=\left\langle\partial_{\lambda}\mathcal{H }\right\rangle_{0}-\delta\lambda\widetilde{\Psi}_{0}g(t)\\ &\quad+\delta\lambda\int_{0}^{t}\Psi_{0}(t-t^{\prime})\dot{g}(t^{ \prime})dt^{\prime},\end{split} \tag{7}\]
where \(\widetilde{\Psi}_{0}\equiv\Psi_{0}(0)-\left\langle\partial_{\lambda\lambda}^{2}\mathcal{H}\right\rangle_{0}\). Finally, combining Eqs. (1) and (7), the average work performed at the linear response of the generalized force is
\[\begin{split}\overline{W}=&\,\delta\lambda\left\langle \partial_{\lambda}\mathcal{H}\right\rangle_{0}-\frac{\delta\lambda^{2}}{2} \widetilde{\Psi}_{0}\\ &+\delta\lambda^{2}\int_{0}^{\tau}\int_{0}^{t}\Psi_{0}(t-t^{ \prime})\dot{g}(t^{\prime})\dot{g}(t)dt^{\prime}dt.\end{split} \tag{8}\]
We observe that the double integral in Eq. (8) vanishes for long switching times [1]. Therefore the other terms are part of the contribution of the difference of free energy, since this quantity is exactly the average work performed for quasistatic processes in isothermal drivings. Thus, we can split the average work into the difference of free energy \(\Delta F\) and the irreversible work \(W_{\rm irr}\)
\[\Delta F=\delta\lambda\left\langle\partial_{\lambda}\mathcal{H}\right\rangle _{0}-\frac{\delta\lambda^{2}}{2}\widetilde{\Psi}_{0}, \tag{9}\]
\[W_{\rm irr}=\int_{0}^{\tau}\int_{0}^{t}\Psi_{0}(t-t^{\prime})\dot{\lambda}(t^ {\prime})\dot{\lambda}(t)dt^{\prime}dt. \tag{10}\]
In particular, the irreversible work can be rewritten using the symmetric property of the relaxation function [5]
\[W_{\rm irr}=\frac{1}{2}\int_{0}^{\tau}\int_{0}^{\tau}\Psi_{0}(t-t^{\prime}) \dot{\lambda}(t^{\prime})\dot{\lambda}(t)dt^{\prime}dt. \tag{11}\]
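As a numerical illustration of Eq. (11) (our own sketch: the exponential relaxation function and linear protocol below are assumed choices, not results from this work), one can check \(W_{\rm irr}\geq 0\) by direct quadrature:

```
import numpy as np

# Quadrature of Eq. (11) with Psi0(t) = Psi0(0) * exp(-|t|/tau_R)
# and the linear protocol lambda(t) = lambda0 + (t/tau) * dlam.
tau, tau_R, psi0, dlam = 1.0, 0.2, 1.0, 0.1

ts = np.linspace(0.0, tau, 400)
dt = ts[1] - ts[0]
lam_dot = dlam / tau                       # constant rate for a linear ramp
T1, T2 = np.meshgrid(ts, ts, indexing="ij")
Psi = psi0 * np.exp(-np.abs(T1 - T2) / tau_R)
W_irr = 0.5 * lam_dot**2 * Psi.sum() * dt**2
print(W_irr >= 0.0)                        # Second Law check: True
```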
I establish at this point the regimes where linear-response theory is able to describe thermodynamic processes. Those regimes are determined by the relative strength of the driving with respect to the initial value of the protocol, \(\delta\lambda/\lambda_{0}\), and the rate by which the process occurs with respect to the relaxation time of the system, \(\tau_{R}/\tau\). See Fig. 1 for a diagram depicting the regimes. In region 1, the so-called slowly-varying processes, the ratio \(\delta\lambda/\lambda_{0}\) is arbitrary, while \(\tau_{R}/\tau\ll 1\). By contrast, in region 2, the so-called finite-time and weak processes, the ratio \(\delta\lambda/\lambda_{0}\ll 1\), while \(\tau_{R}/\tau\) is arbitrary. In region 3, the so-called arbitrarily far-from-equilibrium processes, both ratios are arbitrary. Linear-response theory can only describe regions 1 and 2 [1].
The objective of this work is to find conditions to satisfy the Second Law of Thermodynamics, that is, \(W_{\rm irr}\geq 0\), when the physical system is driven simultaneously at the same rate by two or more external parameters. This proceeding will lead to the extension of the Onsager matrix to finite-time and weak processes.
## III Extended Onsager Matrix
Consider now that the Hamiltonian of interest presents a \(n\) number of external parameters \(\lambda_{l}(t)\), which are simultaneously driven at the same rate. In this case, the average work will be
\[\overline{W}\equiv\sum_{l=1}^{n}\int_{0}^{\tau}\left\langle\overline{ \partial_{\lambda_{l}}\mathcal{H}}(t)\right\rangle_{0}\dot{\lambda_{l}}(t)dt, \tag{12}\]
Proceeding in the same fashion as we have done in the Sec. II, the irreversible work in the linear-response regime
Figure 1: (Color online) Diagram of nonequilibrium regions. Region 1: slowly-varying processes, Region 2: finite-time but weak processes, and Region 3: arbitrarily far-from-equilibrium processes. Linear-response theory can describe regions 1 and 2.
will be
\[W_{\text{irr}}=\frac{1}{2}\sum_{l,m=1}^{n}\int_{0}^{\tau}\int_{0}^{\tau}\Psi_{lm }(t-t^{\prime})\dot{\lambda_{l}}(t^{\prime})\dot{\lambda_{m}}(t)dt^{\prime}dt, \tag{13}\]
where
\[\Psi_{lm}(t)=\beta\langle\partial_{l}\mathcal{H}(0)\partial_{m}\mathcal{H}(t )\rangle_{0}-\mathcal{C}_{lm}. \tag{14}\]
We remark that the even symmetry used to achieve Eq. (13) occurs for \(\Psi_{ll}(t)\) and \(\Psi_{lm}(t)+\Psi_{ml}(t)\), \(l\neq m\). Also, the \(C_{lm}\) are chosen in order to nullify the relaxation functions for long times. Analogously to what was done in Ref. [1], I demand as a basic condition that the Laplace transforms of the relaxation functions be finite at \(s=0\). We can then define the relaxation time for each relaxation function
\[\tau_{R}^{lm}=\int_{0}^{\infty}\frac{\Psi_{lm}(t)}{\Psi_{lm}(0)}dt. \tag{15}\]
To obtain \(W_{\text{irr}}\geq 0\), we must have the following matrix
\[\mathcal{O}=[\hat{\Psi}_{lm}(\omega)] \tag{16}\]
positive-definite for any \(\omega\in\mathbb{R}\). For an explicit deduction see Appendix A. This means that \(\mathcal{O}\) should have all its eigenvalues non-negative. Also, for instance, for \(2\times 2\) matrices, it is enough to show that \(\mathcal{O}\) is symmetric and
\[\det\mathcal{O}\geq 0,\quad\text{Tr}\,\mathcal{O}\geq 0, \tag{17}\]
where \(\det\) and \(\text{Tr}\) are respectively its determinant and trace. In particular, given the time-reversal symmetry property \(\Psi_{lm}(t)=\Psi_{ml}(-t)\), it holds that \(\hat{\Psi}_{lm}(\omega)=\hat{\Psi}_{ml}(-\omega)\). This means that when the Fourier transform does not depend on \(\omega\), the coefficients with reversed indexes are the same.
However, the question remains: what is the meaning of the coefficients \(\hat{\Psi}_{lm}(\omega)\)? I claim that they are the generalization of the Onsager coefficients to finite-time and weak processes. To see this, consider the near-to-equilibrium regime where the Onsager coefficients are defined. In this case, the following approximation holds [10]
\[\lim_{\tau/\tau_{R}^{lm}\gg 1}\Psi_{lm}(t)=2\tau_{R}^{lm}\Psi_{lm}(0)\delta(t). \tag{18}\]
Their Fourier transforms are
\[\lim_{\tau/\tau_{R}^{lm}\gg 1}\hat{\Psi}_{lm}=\sqrt{\frac{2}{\pi}}\tau_{R}^{lm} \Psi_{lm}(0). \tag{19}\]
Therefore, it holds
\[\lim_{\tau/\tau_{R}^{lm}\gg 1}\Psi_{lm}(t-t^{\prime})=\sqrt{2\pi}\left[\lim_{ \tau/\tau_{R}^{lm}\gg 1}\hat{\Psi}_{lm}\right]\delta(t-t^{\prime}). \tag{20}\]
The expression for the generalized force is
\[\langle\overline{\partial_{\lambda_{l}}\mathcal{H}}(t)\rangle_{0}=\sum_{m=1}^ {n}\int_{0}^{t}\Psi_{lm}(t-t^{\prime})\dot{\lambda}_{m}(t^{\prime})dt^{\prime}, \tag{21}\]
where I have omitted the part with instantaneous memory. Applying Eq. (20) in Eq. (21), we have
\[\langle\overline{\partial_{\lambda_{l}}\mathcal{H}}(t)\rangle_{0}=\sqrt{2\pi} \sum_{m=1}^{n}\left[\lim_{\tau/\tau_{R}^{lm}\gg 1}\hat{\Psi}_{lm}\right]\dot{ \lambda}_{m}(t), \tag{22}\]
which means that the coefficients of the matrix \(\mathcal{O}\) are the Onsager ones in the appropriate regime. Observe that such coefficients do not depend on \(\omega\), which is consistent with the predicted equality between the Onsager coefficients with reversed indexes.
In particular, returning to finite-time processes, the role that the extended Onsager coefficients play in connecting the generalized force and the perturbation is rather complicated. Indeed,
\[\langle\overline{\partial_{\lambda_{l}}\mathcal{H}}(t)\rangle_{0}=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\sum_{m=1}^{n}\hat{\Psi}_{lm}(\omega)(e^{i\omega t^{\prime}}*\dot{\lambda}_{m}(t^{\prime}))(t)\,d\omega, \tag{23}\]
where
\[(e^{i\omega t^{\prime}}*\dot{\lambda}_{m}(t^{\prime}))(t)=\int_{0}^{t}e^{i\omega(t-t^{\prime})}\dot{\lambda}_{m}(t^{\prime})dt^{\prime}. \tag{24}\]
Instead, a better connection between both quantities is made by the Laplace transform of the relaxation functions
\[\langle\overline{\partial_{\lambda_{l}}\mathcal{H}}(s)\rangle_{0}=\sum_{m=1}^ {n}\widetilde{\Psi}_{lm}(s)\widetilde{\dot{\lambda}}_{m}(s). \tag{25}\]
## IV Examples: Brownian motion
I shall illustrate the consistency of our result by showing the positive-definiteness of the matrices \(\mathcal{O}\) for the paradigmatic examples of the overdamped and underdamped white noise Brownian motions, driven simultaneously by the stiffening and moving laser traps.
### Overdamped case
I consider first an overdamped Brownian particle, whose dynamics are governed by the following Langevin equation
\[\dot{x}(t)+\frac{1}{\gamma}\partial_{x}\mathcal{V}(x(t),\lambda(t),\mu(t))= \eta(t), \tag{26}\]
where \(x(t)\) is its position at the instant \(t\), \(\gamma\) is the damping coefficient, \(\lambda(t)\) is the first control parameter, \(\mu(t)\) the second one and \(\eta(t)\) is a Gaussian white noise characterized by
\[\overline{\eta(t)}=0,\quad\overline{\eta(t)\eta(t^{\prime})}=\frac{2}{\gamma \beta}\delta(t-t^{\prime}). \tag{27}\]
The time-dependent potential will be a stiffening and moving laser trap
\[\mathcal{V}(x(t),\lambda(t),\mu(t))=\frac{\lambda(t)}{2}(x(t)-\mu(t))^{2}. \tag{28}\]
In this case, the extended Onsager matrix will be
\[\mathcal{O}=\begin{bmatrix}\sqrt{\frac{2}{\pi}}\frac{\gamma}{\beta\lambda_{0}} \frac{1}{4\lambda_{0}^{2}+\gamma^{2}\omega^{2}}&0\\ 0&\sqrt{\frac{2}{\pi}}\frac{\gamma}{\beta}\frac{1}{\lambda_{0}^{2}+\gamma^{2} \omega^{2}}\end{bmatrix}, \tag{29}\]
which clearly is positive-definite. Therefore, as expected, the overdamped white noise Brownian motion, when subjected to simultaneous driving in the stiffening and moving laser traps, obeys the Second Law of Thermodynamics, showing its consistency as a paradigmatic model.
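As a simple numeric cross-check of this claim (our own sketch; the parameter values \(\gamma=\beta=\lambda_{0}=1\) are an arbitrary illustrative choice), one can verify that the eigenvalues of \(\mathcal{O}\) stay positive over a range of frequencies:

```
import numpy as np

# Eigenvalue check for the overdamped matrix O(omega) of Eq. (29).
gamma = beta = lam0 = 1.0
pref = np.sqrt(2 / np.pi)

def O_overdamped(w):
    o11 = pref * gamma / (beta * lam0) / (4 * lam0**2 + gamma**2 * w**2)
    o22 = pref * gamma / beta / (lam0**2 + gamma**2 * w**2)
    return np.diag([o11, o22])

ws = np.linspace(-10, 10, 1001)
assert all(np.linalg.eigvalsh(O_overdamped(w)).min() > 0 for w in ws)
```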
### Underdamped case
I consider now an underdamped Brownian particle, whose dynamics are governed by the following Langevin equation
\[\frac{m}{\gamma}\ddot{x}(t)+m\dot{x}(t)+\frac{1}{\gamma}\partial_{x}\mathcal{V }(x(t),\lambda(t),\mu(t))=\eta(t), \tag{30}\]
where \(m\) is its mass, \(x(t)\) is its position at the instant \(t\), \(\gamma\) is the damping coefficient, \(\lambda(t)\) is the first control parameter, \(\mu(t)\) the second one and \(\eta(t)\) is a Gaussian white noise characterized by
\[\overline{\eta(t)}=0,\quad\overline{\eta(t)\eta(t^{\prime})}=\frac{2m}{\gamma \beta}\delta(t-t^{\prime}). \tag{31}\]
The time-dependent potential will be a stiffening and moving laser trap
\[\mathcal{V}(x(t),\lambda(t),\mu(t))=\frac{m\lambda(t)}{2}(x(t)-\mu(t))^{2}, \tag{32}\]
where \(4\lambda_{0}>\gamma^{2}\). The extended Onsager matrix will be
\[\mathcal{O}=\begin{bmatrix}\hat{\Psi}_{\lambda\lambda}(\omega)&0\\ 0&\hat{\Psi}_{\mu\mu}(\omega)\end{bmatrix}, \tag{33}\]
where
\[\hat{\Psi}_{\lambda\lambda}(\omega)=\frac{\sqrt{\frac{2}{\pi}}\gamma\left( \gamma^{4}+\gamma^{2}\left(\lambda_{0}+\omega_{0}^{2}\right)+\lambda_{0} \left(\omega^{2}+\omega_{0}^{2}\right)\right)}{\beta\lambda_{0}^{2}\left( \gamma^{2}+\omega^{2}\right)\left(\gamma^{4}+2\gamma^{2}\left(\omega^{2}+ \omega_{0}^{2}\right)+\left(\omega^{2}-\omega_{0}^{2}\right)^{2}\right)}, \tag{34}\]
and
\[\hat{\Psi}_{\mu\mu}(\omega)=\frac{4m\sqrt{\frac{2}{\pi}}\gamma\lambda_{0} \left(\gamma^{2}+\omega_{0}^{2}\right)}{\gamma^{4}+2\gamma^{2}\left(4\omega^ {2}+\omega_{0}^{2}\right)+\left(\omega_{0}^{2}-4\omega^{2}\right)^{2}} \tag{35}\]
with \(\omega_{0}=\sqrt{4\lambda_{0}-\gamma^{2}}>0\). Again, the extended Onsager matrix \(\mathcal{O}\) is clearly positive-definite. Therefore, as expected, the underdamped white noise Brownian motion, when subjected to simultaneous driving in the stiffening and moving laser traps, obeys the Second Law of Thermodynamics, showing its consistency as a paradigmatic model.
## V Final remarks
In this work, I have generalized the Onsager matrix to finite-time and weak processes. I did so in order to extend the compatibility criteria of linear-response theory with the Second Law of Thermodynamics to driven processes performed at the same rate in two or more external parameters. The positive-definiteness of the extended Onsager matrix is verified in two paradigmatic models of overdamped and underdamped white noise Brownian motions, driven simultaneously at the same rate in the stiffening and moving laser traps. Modeling the diffusive and thermoelectric phenomena of Irreversible Linear Thermodynamics for finite-time and weak processes in the Hamiltonian framework will be approached in future research.
###### Acknowledgements.
I acknowledge Jordan Horowitz and Akram Touil for the enlightening discussions which led me to such a result some years ago.
|
2303.11904 | Two-beam laser photon merging | Quasi-elastic scattering processes have long been thought of providing the
most promising signal for a first experimental detection of quantum vacuum
nonlinearity. A prominent example of such a process is vacuum birefringence.
However, these signals are typically strongly background dominated. This
problem can be circumvented by inelastic scattering processes. In this study,
we investigate the inelastic process of laser photon merging in the collision
of just two laser pulses under a finite angle, which provides signal photons of
a distinct frequency outside the frequency spectrum of the background. As a key
result, for the example of two laser beams of the same oscillation frequency we
demonstrate that by using high-intensity optical lasers and choosing an optimal
collision angle, photon merging should become accessible in experiment with
state-of-the-art technology. In this case three frequency $\omega$ laser
photons are merged to a single $3\omega$ photon. | Chantal Sundqvist, Felix Karbstein | 2023-03-21T14:50:08Z | http://arxiv.org/abs/2303.11904v1 | # Two-beam laser photon merging
###### Abstract
Quasi-elastic scattering processes have long been thought of providing the most promising signal for a first experimental detection of quantum vacuum nonlinearity. A prominent example of such a process is vacuum birefringence. However, these signals are typically strongly background dominated. This problem can be circumvented by inelastic scattering processes. In this study, we investigate the inelastic process of laser photon merging in the collision of just two laser pulses under a finite angle, which provides signal photons of a distinct frequency outside the frequency spectrum of the background. As a key result, for the example of two laser beams of the same oscillation frequency we demonstrate that by using high-intensity optical lasers and choosing an optimal collision angle, photon merging should become accessible in experiment with state-of-the-art technology. In this case three frequency \(\omega\) laser photons are merged to a single \(3\omega\) photon.
## I Introduction
The nature of the quantum vacuum is governed by quantum fluctuations. In the case of quantum electrodynamics (QED), these allow electromagnetic fields to interact nonlinearly by the coupling to virtual electron-positron pairs [1; 2; 3] (for recent reviews, see Refs. [4; 5; 6; 7; 8; 9; 10]). However, these effective couplings are parametrically suppressed by powers of \(|\vec{E}|/E_{\rm cr}\) and \(|\vec{B}|/B_{\rm cr}\) with the critical electric (magnetic) field strength \(E_{\rm cr}=m_{e}^{2}c^{3}/(e\hbar)\simeq 1.3\times 10^{18}\,\)V/m (\(B_{\rm cr}=E_{\rm cr}/c\simeq 4\times 10^{9}\,\)T). The strongest macroscopic electromagnetic fields presently available in the laboratory are generated by high power lasers reaching \(|\vec{E}|\approx{\cal O}(10^{14})\,\)V/m and \(|\vec{B}|\approx{\cal O}(10^{6})\,\)T in \(\mu\)m-sized focal volumes, such that generically \(|\vec{E}|\ll E_{\rm cr}\), \(|\vec{B}|\ll B_{\rm cr}\). These circumstances have so far prevented the direct observation of quantum vacuum signatures under controlled laboratory conditions. With ongoing advances in laser technology and the building of new dedicated high-intensity laser facilities, a particularly promising route to an experimental verification of QED vacuum nonlinearity is provided by all-optical pump-probe type setups. The attainable photonic signatures in this type of experiment can be divided in two main classes, namely _quasi-elastic_ and manifestly _inelastic_ processes.
_Quasi-elastic_ processes depend only on the oscillation frequency of one of the driving beams; in the monochromatic plane-wave limit they become strictly elastic. This results in signal photons with kinematic properties very similar to the probe photons. A prominent example of such a process is vacuum birefringence [11; 12; 13; 14; 15; 16]. Generically, _quasi-elastic_ processes provide satisfactorily large signal photon numbers, which, however, contend with the large background of the driving lasers.
For laser fields which can be modeled as paraxial beams, _inelastic_ processes depend on the frequencies of both lasers. Typically, inelastic signatures are suppressed relative to elastic ones. On the upside, the emission direction as well as the energy of the signal photons arising from inelastic scattering processes often differ significantly from those constituting the driving laser beams. Examples of inelastic signatures of quantum vacuum nonlinearity are photon splitting [17; 18; 19; 20; 21; 22; 23] and photon merging [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36].
So far, the great potential of inelastic quantum vacuum signatures for all-optical experiments has mainly been exemplified in scenarios involving multiple (\(>2\)) or specially tailored laser beams, cf., e.g. [37; 38; 39; 40; 41; 42; 43; 44]. The availability of just two fundamental-frequency high-intensity laser beams is typically considered as insufficient to achieve sizable inelastic signals in experiment. In the present work, we provide a thorough analysis of the effect of laser photon merging in the collision of two identical laser pulses at zero impact parameter. To this end, we analyze the emission characteristics of the merged signal photons in a scenario envisioning the collision of two pulsed, paraxial Gaussian laser beams under a finite angle.
Our paper is organized as follows: in Sec. II we briefly recall the theoretical foundations and detail the analytical modeling of our specific setup. This provides us with an analytic expression for the differential number of signal photons which we will use in Sec. III to deduce the emission characteristics of the merging signal. In Sec. IV we provide explicit results for the angular distribution and the total number of merged signal photons attainable in a polarization insensitive measurement. Finally, we end with Conclusions and an Outlook in Sec. V.
## II Theoretical foundations
Our analysis is based on the vacuum emission picture [45; 46], which allows one to recast all-optical signatures of quantum vacuum nonlinearity in prescribed macroscopic electromagnetic fields as signal photon emission processes. The leading processes are zero-to-single signal photon transitions. The central object is the zero-to-single signal photon transition amplitude \(\mathcal{S}_{(p)}(\vec{k})\) to a state with one signal photon of wave-vector \(\vec{k}\), energy \(k^{0}=|\vec{k}|\) and polarization \(p\). It is related to the differential number of signal photons to be measured far outside the interaction region of the driving laser fields as
\[\mathrm{d}^{3}N_{(p)}=\frac{\mathrm{d}^{3}k}{(2\pi)^{3}}\big{|}\mathcal{S}_{(p )}(\vec{k})\big{|}^{2}\,. \tag{1}\]
Regarding the study of quantum vacuum nonlinearities, the currently attainable laser fields can be considered as locally constant, weak fields, i.e. fields that vary on spatial scales much larger than the reduced Compton wavelength of the electron \(\lambda_{\mathrm{C}}=\hbar/(m_{e}c)\simeq 3.86\times 10^{-13}\,\mathrm{m}\) and fulfill \(\{|\vec{E}|,c|\vec{B}|\}\ll E_{\mathrm{cr}}\). A thorough derivation of the signal photon transition amplitude at one-loop order, recapitulating particularly the approximations made for locally constant, weak fields, can be found in Ref. [47]. All considerations presented are based on the leading correction term to classical Maxwell theory \(\mathcal{L}_{\mathrm{int}}\sim 4\mathcal{F}^{2}+7\mathcal{G}^{2}\) with the field invariants \(\mathcal{F}=(\vec{B}^{2}-\vec{E}^{2})/2\) and \(\mathcal{G}=-\vec{E}\cdot\vec{B}\).
Here, we study the process of laser photon merging in the collision of two identical, paraxial laser beams with linear polarization. Without loss of generality, we choose these to collide in the \(xz\)-plane with the \(x\)-axis as the bisecting line of the collision angle \(\vartheta_{\mathrm{coll}}\). See Fig. 1 for a sketch of the collision geometry. The unit wave vectors of the beams \(b\in\{1,2\}\) are \(\hat{\vec{k}}_{1}=\left(\cos\left(\frac{\vartheta_{\mathrm{coll}}}{2}\right),0,\sin\left(\frac{\vartheta_{\mathrm{coll}}}{2}\right)\right)\) and \(\hat{\vec{k}}_{2}=\left(\cos\left(\frac{\vartheta_{\mathrm{coll}}}{2}\right),0,-\sin\left(\frac{\vartheta_{\mathrm{coll}}}{2}\right)\right)\). As polarization vectors we use \(\hat{\vec{e}}_{\beta_{1}}=\left(-\sin\left(\frac{\vartheta_{\mathrm{coll}}}{2} \right)\cos\beta_{1},\sin\beta_{1},\cos\left(\frac{\vartheta_{\mathrm{coll}}}{ 2}\right)\cos\beta_{1}\right)\) and \(\hat{\vec{e}}_{\beta_{2}}=\left(\sin\left(\frac{\vartheta_{\mathrm{coll}}}{2} \right)\cos\beta_{2},\sin\beta_{2},\cos\left(\frac{\vartheta_{\mathrm{coll}}}{ 2}\right)\cos\beta_{2}\right)\). A particular choice of the angle parameter \(\beta_{b}\) fixes the polarization of beam \(b\in\{1,2\}\). The associated electric and magnetic fields are given by \(\vec{E}_{b}=\mathcal{E}_{b}\hat{\vec{E}}_{b}\) and \(\vec{B}_{b}=\mathcal{E}_{b}\hat{\vec{B}}_{b}\) with the amplitude profile \(\mathcal{E}_{b}\). They fulfill \(\hat{\vec{E}}_{b}\perp\hat{\vec{B}}_{b}\perp\hat{\vec{k}}_{b}\). We parameterize the wave vectors of the signal photons by \(\hat{\vec{k}}=(\cos\varphi\cos\vartheta,\sin\varphi,\cos\varphi\sin\vartheta)\)
with \(-\frac{\pi}{2}\leq\vartheta\leq\frac{\pi}{2}\) and \(-\pi\leq\varphi\leq\pi\). For \(\varphi=0\), \(\vartheta=0\) this matches the bisector of the collision angle \(\vartheta_{\text{coll}}\) between the incident beams. As polarization vector of the signal photons we use \(\hat{\vec{e}}_{\beta}=(-\cos\beta\sin\vartheta-\sin\beta\sin\varphi\cos \vartheta,\sin\beta\cos\varphi,-\sin\beta\sin\varphi\sin\vartheta+\cos\beta \cos\vartheta)\).
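The following minimal Python sketch (illustrative only; the collision and polarization angles are example values) constructs these unit vectors and verifies that they are normalized and mutually transverse:

```python
import numpy as np

tc = np.deg2rad(50.0)                               # example collision angle
b1, b2 = np.deg2rad(45.0), np.deg2rad(45.0)         # example polarization angles

k1 = np.array([np.cos(tc/2), 0.0,  np.sin(tc/2)])   # unit wave vector of beam 1
k2 = np.array([np.cos(tc/2), 0.0, -np.sin(tc/2)])   # unit wave vector of beam 2
e1 = np.array([-np.sin(tc/2)*np.cos(b1), np.sin(b1), np.cos(tc/2)*np.cos(b1)])
e2 = np.array([ np.sin(tc/2)*np.cos(b2), np.sin(b2), np.cos(tc/2)*np.cos(b2)])

for k, e in ((k1, e1), (k2, e2)):
    assert abs(k @ k - 1.0) < 1e-12                 # unit wave vector
    assert abs(e @ e - 1.0) < 1e-12                 # unit polarization vector
    assert abs(k @ e) < 1e-12                       # transversality, e perpendicular to k
```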
Using these notations and labeling the signal polarizations by the angle parameter \(\beta\), \(\mathcal{S}_{(p)}(\vec{k})\rightarrow\mathcal{S}_{\beta}(\vec{k})\), we obtain
\[\mathcal{S}_{\beta}(\vec{k})= \text{i}\frac{8}{45}\frac{\alpha^{2}}{m_{e}^{4}}\sqrt{\frac{k^{ 0}}{2}}\sin^{2}\left(\frac{\vartheta_{\text{coll}}}{2}\right)\sum_{m=1}^{2} \mathcal{I}_{m,3-m}(\vec{k})\] \[\times\left\{\left[\cos\varphi-\cos\left(\vartheta-(-1)^{m}\frac{ \vartheta_{\text{coll}}}{2}\right)\right]f\left(\beta_{1}+\beta_{2},\beta+ \beta_{3-m}\right)\right. \tag{2}\] \[\left.-\sin\varphi\sin\left(\vartheta-(-1)^{m}\frac{ \vartheta_{\text{coll}}}{2}\right)f\left(\beta_{1}+\beta_{2},\beta+\beta_{3-m }+\frac{\pi}{2}\right)\right\}\,,\]
where we have made use of the shorthand notations \(f(\mu,\nu)=4\cos\mu\cos\nu+7\sin\mu\sin\nu\) and
\[\mathcal{I}_{mn}(\vec{k})=\int\mathrm{d}^{4}x\,\mathrm{e}^{\mathrm{i}k^{0}(\hat{\vec{k}}\cdot\vec{x}-t)}\,\mathcal{E}_{1}^{m}(x)\mathcal{E}_{2}^{n}(x)\,. \tag{3}\]
The leading contribution to the signal photon emission from the laser-driven QED vacuum amounts to a vacuum-fluctuation-mediated four-field interaction; see Fig. 2. One of these four fields is the signal photon field, induced by the effective interaction of three laser fields. A single paraxial field does not induce signal photons. Consequently, the powers of the field profiles \(\mathcal{E}_{b}\) in Eq. (3) in a two-beam collision are limited to \((m,n)\in\{(1,2),(2,1)\}\).
Figure 1: Sketch of the collision geometry. The wave vectors of the two colliding laser pulses are \(\vec{k}_{1}\), \(\vec{k}_{2}\). The collision takes place in the \(xz\)-plane under a collision angle \(\vartheta_{\text{coll}}\). Its bisector is identified with the \(x\)-axis. The wave vector of the signal photons is \(\vec{k}\) and parameterized by the angles \(\varphi\), \(\vartheta\).
Aiming at an analytic evaluation of Eq. (3), we model the fields of the driving lasers as paraxial Gaussian pulses in the infinite Rayleigh range approximation [9; 43; 47],
\[\mathcal{E}_{b}(x)=\mathfrak{E}_{b}\,\mathrm{e}^{-\left(\frac{\vec{x}\cdot\hat{\vec{k}}_{b}-t}{\tau_{b}/2}\right)^{2}}\mathrm{e}^{-\frac{\vec{x}^{\,2}-(\vec{x}\cdot\hat{\vec{k}}_{b})^{2}}{w_{0,b}^{2}}}\cos\left\{\omega_{b}\left(\vec{x}\cdot\hat{\vec{k}}_{b}-t\right)\right\}\,. \tag{4}\]
This approximation is well justified as long as the spatial extents of the interaction region of the laser beams are governed by a length scale much smaller than the Rayleigh range \(z_{R,b}=\frac{w_{0,b}^{2}\omega_{b}}{2}\). For collision angles fulfilling \(\left\{\frac{w_{0,1}}{z_{R,2}},\frac{w_{0,2}}{z_{R,1}}\right\}\lesssim\sin\vartheta_{\mathrm{coll}}\) this is ensured automatically; cf. the detailed discussion in [48]. This corresponds to the constraint \(18.6^{\circ}\lesssim\vartheta_{\mathrm{coll}}\lesssim 161.4^{\circ}\) for diffraction-limited beams with \(w_{0}\approx\lambda\), i.e. \(w_{0}\omega=2\pi\). For less tightly focused laser beams, this approximation can be applied to a wider range of collision angles.
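As a quick numerical cross-check of these bounds, the following sketch evaluates the constraint for diffraction-limited beams with \(w_{0}\omega=2\pi\), assuming \(z_{R}=w_{0}^{2}\omega/2\) as above:

```python
import numpy as np

w0_omega = 2*np.pi                       # diffraction-limited focusing, w0 = lambda
ratio = 2.0/w0_omega                     # w0/z_R for z_R = w0**2*omega/2
th_min = np.degrees(np.arcsin(ratio))
print(f"{th_min:.1f} deg < theta_coll < {180 - th_min:.1f} deg")
# -> 18.6 deg < theta_coll < 161.4 deg
```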
The profiles (4) are chosen such that both beams \(b\in\{1,2\}\) reach the peak field amplitude \(\mathfrak{E}_{b}\) in their common beam focus at \(\vec{x}=0\) at exactly the same time. The peak field amplitude is related to the laser pulse energy \(W_{b}\), pulse duration \(\tau_{b}\) and waist size \(w_{0,b}\) as [49]
\[\mathfrak{E}_{b}\simeq 2\Big{(}\frac{8}{\pi}\Big{)}^{\frac{1}{4}}\sqrt{\frac{W_ {b}}{\tau_{b}w_{0,b}^{2}\pi}}\,. \tag{5}\]
Note that \(\tau_{b}\) and \(w_{0,b}\) are measured at \(1/\mathrm{e}^{2}\) of the peak intensity. The conversion of these parameters into quantities at half maximum (HM) is carried out according to
\[\tau_{b}^{\mathrm{HM}}=\tau_{b}\sqrt{\frac{\ln 2}{2}}\qquad\mathrm{and}\qquad w _{0,b}^{\mathrm{HM}}=w_{0,b}\sqrt{\frac{\ln 2}{2}}\,. \tag{6}\]
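As an illustration, the sketch below applies Eqs. (5) and (6) to the laser parameters quoted in Sec. IV (HPLS and CALA); since we only compare the two systems, the constant prefactor of Eq. (5) is dropped and the peak-field ratio \(\mathfrak{E}_{\rm HPLS}/\mathfrak{E}_{\rm CALA}\) is evaluated:

```python
import numpy as np

hc = 1239.84                               # eV*nm, photon energy-wavelength conversion

def tau_1e2(tau_hm):                       # invert Eq. (6)
    return tau_hm/np.sqrt(np.log(2)/2)

def field_scale(W, tau, w0):               # Eq. (5) without the constant prefactor
    return np.sqrt(W/(tau*w0**2))

tau_h, tau_c = tau_1e2(24.0), tau_1e2(20.0)        # fs, HPLS and CALA (Sec. IV)
w0_h, w0_c   = hc/1.51, 800.0                      # nm, with w0 = lambda for HPLS
print(f"tau: HPLS {tau_h:.1f} fs, CALA {tau_c:.1f} fs")     # -> 40.8 fs, 34.0 fs
print(f"peak-field ratio: {field_scale(244, tau_h, w0_h)/field_scale(24, tau_c, w0_c):.2f}")
# -> peak-field ratio: 2.84
```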
Figure 2: Feynman diagram of the leading vacuum-fluctuation-induced corrections to classical Maxwell theory in the limit of weak fields, interpreted as a vacuum emission process. The crosses “\(\times\)” mark couplings to the laser fields. The depicted process results in signal photons \(\gamma\) with wave vector \(\vec{k}\) and polarization \(p\).
Performing the Fourier integration in Eq. (3) for the field profiles (4) with equal laser parameters, i.e. \(\tau_{1}=\tau_{2}\equiv\tau\), \(w_{0,1}=w_{0,2}\equiv w_{0}\), \(\omega_{1}=\omega_{2}\equiv\omega\) and \(W_{1}=W_{2}\equiv W\), we obtain
\[\mathcal{I}_{mn}(\vec{k})=\left(\frac{\pi}{2}\right)^{2}\left(\frac{\mathfrak{E}}{2}\right)^{m+n}\frac{w_{0}^{3}\tau^{2}}{(m+n)\sqrt{mnH}\sin\left(\frac{\vartheta_{\rm coll}}{2}\right)}\sum_{l=0}^{m}\sum_{j=0}^{n}\binom{m}{l}\binom{n}{j}\mathrm{e}^{-\frac{w_{0}^{2}(k^{0})^{2}}{4(m+n)}\frac{h_{+}h_{-}}{\sin^{2}\left(\frac{\vartheta_{\rm coll}}{2}\right)}} \tag{7}\] \[\times\mathrm{e}^{-\frac{\tau^{2}}{16(m+n)}\left[k^{0}+(m-2l+n-2j)\omega\right]^{2}}\,\mathrm{e}^{-\frac{k^{0}\omega\tau^{2}w_{0}^{2}}{4H(m+n)}\left(h_{+}+h_{-}-2\sin^{2}\left(\frac{\vartheta_{\rm coll}}{2}\right)\right)(2j-n+2l-m)}\] \[\times\mathrm{e}^{-\frac{w_{0}^{2}\tau^{2}}{16mn(m+n)H\sin^{2}\left(\frac{\vartheta_{\rm coll}}{2}\right)}\left[4\omega(mj-nl)\sin^{2}\left(\frac{\vartheta_{\rm coll}}{2}\right)+k^{0}(h_{-}m-h_{+}n)\right]^{2}}\]
with
\[h_{\pm}=\cos\varphi\cos\left(\vartheta\pm\frac{\vartheta_{\rm coll}}{2} \right)-1 \tag{8}\]
and
\[H=\tau^{2}\cos^{2}\left(\frac{\vartheta_{\rm coll}}{2}\right)+4w_{0}^{2}\sin^ {2}\left(\frac{\vartheta_{\rm coll}}{2}\right)\,. \tag{9}\]
We can straightforwardly infer from the exponential factors in Eq. (7) that for sufficiently large pulse durations \(\{\omega_{1}\tau_{1},\omega_{2}\tau_{2}\}\gg 1\), as considered throughout this work, the energy of the signal photons is essentially determined by the oscillation frequencies/photon energies of the driving laser beams. In particular, the integers \(m\) and \(n\) count the number of couplings of the fields of beam 1 and beam 2 to the electron-positron loop, respectively, and the integers \(l\) and \(j\) specify whether the laser fields absorb or release energy. Here the following cases are possible: For \(l=0\) and \(j=0\) energy is absorbed by the respective laser field. Conversely, for \(l=m\) and \(j=n\) energy is released by the laser field. Finally, for either \((m,l)=(2,1)\) or \((n,j)=(2,1)\) one of the beams effectively does not influence the energy of the signal photons because the same energy is absorbed at one coupling and released at the other coupling of the same field. Hence, these cases describe quasi-elastic scattering processes where the signal photon energy depends only on the photon energy of one of the beams.
The absolute square of the transition amplitude, Eq. (2), required in the calculation of the differential number of signal photons, Eq. (1), can be expressed as a sum of terms proportional to \(\mathcal{I}_{m_{1}n_{1}}^{*}\mathcal{I}_{m_{2}n_{2}}\) with \((m_{i},n_{i})\in\{(1,2),(2,1)\}\), i.e. \(\mathrm{d}^{3}N\sim|\mathcal{I}_{12}+\mathcal{I}_{21}|^{2}=|\mathcal{I}_{12}| ^{2}+|\mathcal{I}_{21}|^{2}+\mathcal{I}_{12}\mathcal{I}_{21}^{*}+\mathcal{I} _{12}^{*}\mathcal{I}_{21}\). Based on the fact that for slowly varying pulse envelopes as considered here the energy dependence in Eq. (7) is essentially described by \(\mathcal{I}_{mn}\sim\exp\left\{-\frac{\tau^{2}}{16(m+n)}\left[k^{0}+(m-2l+n-2j )\omega\right]^{2}\right\}\), this leads to the condition
\[\left[k^{0}+(m_{1}-2l_{1}+n_{1}-2j_{1})\omega\right]^{2}+\left[k^{0}+(m_{2}-2 l_{2}+n_{2}-2j_{2})\omega\right]^{2}=0 \tag{10}\]
to yield a sizeable signal. This results in the following energy for the signal photons
\[k^{0}\simeq \omega\left[l_{1}+j_{1}+l_{2}+j_{2}-3\pm\mathrm{i}\left(l_{2}+j_{2}- l_{1}-j_{1}\right)\right]\,. \tag{11}\]
Of course, the signal photon energy \(k^{0}\) has to take on a real, positive value. We therefore conclude that the dominant signals are encoded either in contributions with \((l_{i},j_{i})\in\{(0,2),(2,0),(1,1)\}\) which lead to signal photons of energy \(k^{0}\approx\omega\), or in \((l_{i},j_{i})=(m_{i},n_{i})\in\{(1,2),(2,1)\}\) leading to \(k^{0}\approx 3\omega\).
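These selection rules can be made explicit by enumerating all admissible couplings; the short sketch below (with \(\omega\) set to unity) lists the channels with positive signal photon energy:

```python
from itertools import product

omega = 1.0                                 # photon energy of the driving lasers
channels = {}
for m, n in [(1, 2), (2, 1)]:               # couplings of beams 1 and 2
    for l, j in product(range(m + 1), range(n + 1)):
        k0 = (2*(l + j) - (m + n))*omega    # peak of the Gaussian in Eq. (7)
        if k0 > 0:
            channels.setdefault(k0, []).append((m, n, l, j))

for k0, combos in sorted(channels.items()):
    print(f"k0 = {k0:.0f}w :", combos)
# k0 = 1w : [(1, 2, 0, 2), (1, 2, 1, 1), (2, 1, 1, 1), (2, 1, 2, 0)]
# k0 = 3w : [(1, 2, 1, 2), (2, 1, 2, 1)]
```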
## III \(3\omega\)-signal
In the remainder of this work, we will exclusively focus on the contributions to Eq. (7) which induce a signal at a photon energy \(k^{0}\simeq 3\omega\) and refer to this as the \(3\omega\)-signal. This signal is particularly interesting because the signal photon energy lies outside of the frequency spectrum of the driving lasers. On a microscopic level, two frequency \(\omega\) photons of one laser beam merge together with a frequency \(\omega\) photon of the other _assisting_ laser beam to form a single outgoing signal photon. For completeness, we note that this process persists in the zero frequency limit for the assisting field, where two photons of the same beam merge to yield a \(2\omega\) signal.
As the \(3\omega\)-signal photons clearly differ from the background of the frequency \(\omega\) laser photons in energy, they should be discernible in experiment without the immediate need to consider additional quantum vacuum induced modifications of the signal such as polarization. Therefore, we sum over the two transverse polarization states characterized by polarization vectors with \(\beta\) and \(\beta+\frac{\pi}{2}\) resulting in the polarization insensitive differential number of signal photons with \(k^{0}\approx 3\omega\),
\[\begin{split}\frac{\mathrm{d}^{3}N^{3\omega}}{\mathrm{d}k^{0}\mathrm{d}\varphi\,\mathrm{d}\!\sin\vartheta}=&\frac{\pi}{8}\left(\frac{7}{45}\right)^{2}\frac{\alpha^{4}}{m_{e}^{8}}\sin^{2}\left(\frac{\vartheta_{\mathrm{coll}}}{2}\right)\frac{(k^{0})^{3}w_{0}^{6}\tau^{4}}{9H}\left(\frac{\mathfrak{E}}{2}\right)^{6}\left(16+33\sin^{2}\left(\beta_{1}+\beta_{2}\right)\right)\\ &\times\sum_{p=1}^{2}\sum_{q=1}^{2}\Bigg\{\bigg[\prod_{m=p,q}\left(\cos\varphi-\cos\left(\vartheta-(-1)^{m}\frac{\vartheta_{\mathrm{coll}}}{2}\right)\right)\\ &\qquad+\sin^{2}\varphi\prod_{m=p,q}\left(\sin\left(\vartheta-(-1)^{m}\frac{\vartheta_{\mathrm{coll}}}{2}\right)\right)\bigg]\cos\left(\beta_{3-p}-\beta_{3-q}\right)\\ &\quad+2\sin\varphi\sin\left(\frac{\vartheta_{\mathrm{coll}}}{2}\right)h(q-p)\sin\left(\beta_{3-p}-\beta_{3-q}\right)\Bigg\}\\ &\times\mathrm{e}^{-\frac{\tau^{2}}{24}(k^{0}-3\omega)^{2}}\,\mathrm{e}^{-\frac{w_{0}^{2}(k^{0})^{2}\sin^{2}\varphi}{12}}\,\mathrm{e}^{-\frac{2(k^{0})^{2}w_{0}^{4}h^{2}}{3H}}\\ &\times\mathrm{e}^{-\frac{(k^{0})^{2}w_{0}^{2}\tau^{2}}{96H\sin^{2}\left(\frac{\vartheta_{\mathrm{coll}}}{2}\right)}\left[9(h_{-}^{2}+h_{+}^{2})+16\sin^{2}\left(\frac{\vartheta_{\mathrm{coll}}}{2}\right)\left(h_{+}+h_{-}+\sin^{2}\left(\frac{\vartheta_{\mathrm{coll}}}{2}\right)\right)+3\Theta_{pq}(h_{-}^{2}-h_{+}^{2})\right]}\,,\end{split} \tag{12}\]
where
\[h=\cos\varphi\cos\vartheta-\cos\left(\frac{\vartheta_{\mathrm{coll}}}{2}\right) \tag{13}\]
\[\Theta_{pq}=\left\{\begin{array}{ll}-1,&\mbox{for $p=q=1$}\\ 0,&\mbox{for $p\neq q$}\\ 1,&\mbox{for $p=q=2$}\end{array}\right.. \tag{14}\]
Here, we used the notation \(\prod_{m=p,q}c_{m}=c_{p}c_{q}\).
The terms with \(p=q\) correspond to the two _direct_ contributions (\(\sim\left|\mathcal{I}_{12}\right|^{2}\) and \(\sim\left|\mathcal{I}_{21}\right|^{2}\), respectively) and the two terms with \(p\neq q\) are the interference terms or _indirect_ contributions (\(\sim\mathcal{I}_{12}\mathcal{I}_{21}^{*}\) and \(\sim\mathcal{I}_{12}^{*}\mathcal{I}_{21}\), respectively). The direct contributions to the differential number of \(3\omega\)-signal photons are invariant under the transformations \(\vartheta_{\rm coll}\to-\vartheta_{\rm coll}\) and \(\vartheta\to-\vartheta\) for the specific setup considered. Furthermore, the differential number of signal photons associated with the direct terms, i.e. \(p=q\), depends on the polarization of the incident beams only via the overall prefactor \(16+33\sin^{2}(\beta_{1}+\beta_{2})\). In order to maximize the signal, the two polarizations should thus be related via \(\beta_{1}+\beta_{2}=\frac{\pi}{2}\). In addition to the same overall factor, for the interference terms, i.e. \(p\neq q\), there appear additional factors depending on the polarization of the incident beams. Interestingly, these render the optimal choice for \(\beta_{1}\), \(\beta_{2}\) dependent on both the collision angle and the signal photon emission direction encoded in \(\varphi\), \(\vartheta\) and \(\vartheta_{\rm coll}\). However, because of the symmetry of the considered collision geometry, we can safely assume that the signal's maximum lies in the collision plane where \(\varphi=0\). Upon insertion of \(\varphi=0\) into Eq. (12), the directional and polarization dependences factorize. We conclude that \(\cos(\beta_{p}-\beta_{q})\) should be maximized to maximize the signal photon numbers. This leaves us with the two conditions \(\beta_{1}+\beta_{2}=\frac{\pi}{2}\) and \(\beta_{p}-\beta_{q}=0\) to be simultaneously fulfilled to yield the largest signal. From these it is easy to infer that the interference term contributes most to the signal for \(\beta_{1}=\beta_{2}=\frac{\pi}{4}\). Being exclusively interested in the maximum signal we will adopt this choice for the polarizations of the incident beams in the remainder of this article.
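A brute-force numerical maximization confirms this choice; in the sketch below the relative weights \(A\) (direct) and \(B\) (interference) at \(\varphi=0\) are positive placeholders, since only their positivity matters for the location of the optimum:

```python
import numpy as np

# A and B are placeholder positive weights of the direct and interference
# contributions at phi = 0; only their positivity affects the optimum.
A, B = 1.0, 1.0
b = np.deg2rad(np.arange(0.0, 90.25, 0.25))
B1, B2 = np.meshgrid(b, b)
signal = (16 + 33*np.sin(B1 + B2)**2)*(A + B*np.cos(B1 - B2))
i, j = np.unravel_index(np.argmax(signal), signal.shape)
print(np.rad2deg(b[j]), np.rad2deg(b[i]))          # -> 45.0 45.0
```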
### Emission characteristics
In a next step we aim at deriving relatively simple analytical scalings. To perform the integration over \(k^{0}\) in Eq. (12) we use the fact that in the parameter regime of interest to us the signal is strongly peaked at \(k^{0}=3\omega\); see also Refs. [9; 52]: first, we identify all factors of \(k^{0}\) in the prefactor to the exponential functions in Eq. (12) with \(k^{0}=3\omega\). Second, we formally extend the integration limits to \(\pm\infty\), such that \(\int{\rm d}k^{0}\to\int_{-\infty}^{\infty}{\rm d}k^{0}\). The resulting integral of Gaussian type can be readily integrated analytically and be expressed in terms of elementary functions. Therewith, we obtain the following (approximate) expression for
the emission-angle resolved differential signal photon number
\[\frac{\mathrm{d}^{2}N^{3\omega}}{\mathrm{d}\varphi\mathrm{d}\!\sin \vartheta}\approx\frac{(3\pi)^{\frac{3}{2}}}{\sqrt{2}}\left(\frac{7}{45}\right)^ {2}\frac{\alpha^{4}}{m_{e}^{8}}\sin^{3}\left(\frac{\vartheta_{\mathrm{coll}}}{ 2}\right)\left(\frac{\mathfrak{E}}{2}\right)^{6}\frac{\omega^{3}w_{0}^{6}\tau ^{4}}{H}\sum_{p,q=1,2}\] \[\times\left[\prod_{m=p,q}\left(\cos\varphi-\cos\left(\vartheta-(- 1)^{m}\frac{\vartheta_{\mathrm{coll}}}{2}\right)\right)+\sin^{2}\varphi\prod_{m =p,q}\left(\sin\left(\vartheta-(-1)^{m}\frac{\vartheta_{\mathrm{coll}}}{2} \right)\right)\right]\] \[\times\left\{4\tau^{2}\sin^{2}\left(\frac{\vartheta_{\mathrm{coll} }}{2}\right)+8w_{0}^{2}\left[\sin^{2}\varphi\sin^{2}\left(\frac{\vartheta_{ \mathrm{coll}}}{2}\right)+2h^{2}\right]\right. \tag{15}\] \[\left.\hskip 56.905512pt+\frac{w_{0}^{2}\tau^{2}}{H}\left[9(h_{-} ^{2}+h_{+}^{2})-4(h_{-}+h_{+})^{2}+3\Theta_{pq}(h_{-}^{2}-h_{+}^{2})\right] \right\}^{-\frac{1}{2}}\] \[\times\mathrm{e}^{-\frac{3\omega^{2}\tau^{2}}{8}\frac{8w_{0}^{2} \left[\sin^{2}\varphi\sin^{2}\left(\frac{\vartheta_{\mathrm{coll}}}{2}\right) +2h^{2}\right]+\frac{w_{0}^{2}\tau^{2}}{H}\left[9(h_{-}^{2}+h_{+}^{2})-4(h_{- }+h_{+})^{2}+3\Theta_{pq}(h_{-}^{2}-h_{+}^{2})\right]}{4\tau^{2}\sin^{2} \left(\frac{\vartheta_{\mathrm{coll}}}{2}\right)+8w_{0}^{2}\left[\sin^{2} \varphi\sin^{2}\left(\frac{\vartheta_{\mathrm{coll}}}{2}\right)+2h^{2}\right] +\frac{w_{0}^{2}\tau^{2}}{H}\left[9(h_{-}^{2}+h_{+}^{2})-4(h_{-}+h_{+})^{2}+3 \Theta_{pq}(h_{-}^{2}-h_{+}^{2})\right]}}\]
with \(h_{\pm}\), \(H\) and \(h\) as introduced in Eqs. (8), (9) and (13). A sizable signal can only be generated if the exponential suppression is minimized. For the \(3\omega\)-signal as presented in Eq. (15) the exponential suppression is minimized for co-propagating beams, i.e. \(\vartheta_{\mathrm{coll}}=0\). This is also in line with plane wave considerations, [51]. However, for paraxial beams, no signal photons can be generated for this collision angle because \(\mathcal{F}=\mathcal{G}=0\), which manifests itself in the prefactor of Eq. (15). Thus, we have to bear in mind that considering only the exponent of the expression for the differential number of signal photons (15) is insufficient to infer the properties of the signal photons' emission characteristics. However, since the exponential suppression outweighs the influence of the prefactor, we can still say, without having a quantitative prediction at this stage, that the optimal collision angle \(\vartheta_{\mathrm{coll}}^{\mathrm{max}}\), i.e. the collision angle yielding the maximum signal, must be rather small (especially compared to the best choice of \(\vartheta_{\mathrm{coll}}^{\mathrm{max}}=\pi\) for the frequency \(\omega\) signal).
From Eq. (15) we can furthermore corroborate what we expected from the system's symmetry: firstly that the signal is maximal in the collision plane, i.e. for \(\varphi^{\mathrm{max}}=0\), and secondly that the signal photons originating from the interference terms are primarily emitted at \(\vartheta=0\), which is the bisector of the collision angle \(\vartheta_{\mathrm{coll}}\).
On the other hand, we obtain an estimate on the emission direction of the direct signals by considering the limit of large pulse durations \(\omega\tau\gg 1\) and weak focusing \(\omega w_{0}\gg 1\). In this parameter regime, one can expect the signal photons of energy \(k^{0}\approx 3\omega\) to be emitted in the vicinity of the direction \(\hat{\vec{k}}_{\mathrm{max}}^{\mathrm{pw}}=(m\hat{\vec{k}}_{1}+(3-m)\hat{\vec{k}}_{2})/|m\hat{\vec{k}}_{1}+(3-m)\hat{\vec{k}}_{2}|\) with \(m\in\{1,2\}\).
Accordingly, in this plane wave limit, the polar and azimuthal angles of the emission direction maximizing the amplitude of \(\frac{\mathrm{d}^{2}N^{3\omega}}{\mathrm{d}\varphi\,\mathrm{d}\!\sin\vartheta}\) are
\[\varphi_{\mathrm{max}}^{\mathrm{pw}}= 0\,, \tag{16}\] \[\vartheta_{\mathrm{max}}^{\mathrm{pw}}= \arcsin\left(\frac{(2m-3)\sin\left(\frac{\vartheta_{\mathrm{coll}} }{2}\right)}{\sqrt{5+4\cos\vartheta_{\mathrm{coll}}}}\right)\,.\]
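Evaluated at the optimal collision angle determined later in Sec. IV, Eq. (16) gives the \(\pm 8.9^{\circ}\) quoted there; a minimal sketch:

```python
import numpy as np

def theta_max_pw(tc, m):
    """Plane-wave estimate, Eq. (16), for the direct signal with m photons of beam 1."""
    return np.arcsin((2*m - 3)*np.sin(tc/2)/np.sqrt(5 + 4*np.cos(tc)))

tc = np.deg2rad(50.2)                     # optimal collision angle found in Sec. IV
for m in (1, 2):
    print(f"m = {m}: {np.degrees(theta_max_pw(tc, m)):+.1f} deg")
# -> m = 1: -8.9 deg ; m = 2: +8.9 deg
```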
We emphasize that \(\varphi_{\mathrm{max}}=0\) is expected to hold true also beyond the plane wave limit and for the interference terms (\(p\neq q\) in Eq. (15)) since the field profiles (4) introduce momentum components perpendicular to the collision plane only symmetrically. On the other hand, we expect deviations from \(\vartheta_{\mathrm{max}}^{\mathrm{pw}}\) when taking the full spatial and temporal beam profile into account.
### Analytical scalings
The difficulty of the integration over \(\vartheta\) and \(\varphi\) of Eq. (15) arises from the trigonometric dependencies on these angles, particularly in the exponent. As will be detailed below, this can be bypassed by approximating the signal with a function of Gaussian form, i.e. eliminating the dependence on the angles in the prefactor by substituting them with constants and expanding the argument of the exponent up to \(\mathcal{O}(\vartheta^{2})\) and \(\mathcal{O}(\varphi^{2})\), respectively.
First, we want to perform the integration over \(\varphi\). To this end, we perform the expansion around \(\varphi=0\) in both the overall prefactor and the arguments of the exponential functions. In the prefactor we keep only the leading term while in the exponent we keep contributions up to quadratic order in \(\varphi\). This is motivated by the fact that the maximum of the differential signal photon number, Eq. (15), can be found at this angle. As the partial signals are localized in a small angular region, it is possible to formally extend the integration limits to the complete real domain, leading to an integral of Gaussian form. The resulting expression after the \(\varphi\)-integration, \(\mathrm{d}N^{3\omega}/\mathrm{d}\mathrm{sin}\,\vartheta\), is rather lengthy without providing any additional insight and therefore not given explicitly.
We proceed similarly for the integration of \(\mathrm{d}N^{3\omega}/\mathrm{d}\mathrm{sin}\,\vartheta\) over \(\vartheta\): In a first step, we expand the argument of the exponential function around \(\vartheta=0\) up to \(\mathcal{O}(\vartheta^{2})\). The validity of this approach is initially only apparent for the indirect contributions, which maximize for \(\vartheta=0\). Notably, for the direct terms (\(p=q\)) the expansion of the exponential function around \(\vartheta=0\) features a term linear in \(\vartheta\), which causes a shift of the maximum of the exponential function to
\[\vartheta_{\mathrm{shift}}^{p,q}=\frac{3\tau^{2}\tilde{H}_{1}\Theta_{pq}\sin \left(\frac{\vartheta_{\mathrm{coll}}}{2}\right)}{\tilde{H}_{1}\tilde{H}_{2}-18 \tau^{4}w_{0}^{2}\sin^{2}(\frac{\vartheta_{\mathrm{coll}}}{2})\Theta_{pq}^{2}} \tag{17}\]
with
\[\begin{split}\tilde{H}_{1}=& w_{0}^{2}\sin^{2}\left( \frac{\vartheta_{\rm coll}}{4}\right)(8H+\tau^{2})+2\tau^{2}H\cos^{2}\left(\frac {\vartheta_{\rm coll}}{4}\right)\,,\\ \tilde{H}_{2}=& 20\tau^{2}\cos^{2}\left(\frac{ \vartheta_{\rm coll}}{4}\right)-(8H+\tau^{2})\,.\end{split} \tag{18}\]
For strongly focused beams with \(\tau\gg w_{0}\), Eq. (17) can be approximated by the simpler expression \(\vartheta_{\rm shift}^{p,q}\approx 3\Theta_{pq}\sin\left(\frac{\vartheta_{\rm coll}}{2}\right)/(5+10\cos\left(\frac{\vartheta_{\rm coll}}{2}\right)-4\cos\vartheta_{\rm coll})\), whose magnitude stays well below 1, justifying the expansion around \(\vartheta=0\) also for the direct contributions. In addition, based on our estimations on the emission direction in the plane wave limit given in Eq. (16) together with the expectation of a small optimal collision angle, the expansion in \(\vartheta\) should be justified and give reasonable results also for weakly focused beams.
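For the HPLS-like parameters of Sec. IV (\(\tau\approx 40.8\) fs, \(w_{0}=\lambda\approx 0.82\,\mu\mathrm{m}\), \(\vartheta_{\rm coll}\approx 50.2^{\circ}\)), Eq. (17) and its strong-focusing limit can be evaluated directly; the sketch below reproduces the \(\vartheta_{\rm shift}^{1,1}\approx-6.4^{\circ}\) quoted in Sec. IV (note that \(c\tau\) and \(w_{0}\) must carry the same length unit):

```python
import numpy as np

c   = 0.299792458                 # um/fs, so that c*tau and w0 share the unit um
tau = c*40.77                     # 1/e^2 duration of 40.77 fs -> c*tau = 12.22 um
w0  = 0.8211                      # waist in um (w0 = lambda for 1.51 eV photons)
tc  = np.deg2rad(50.2)            # collision angle
Th  = -1.0                        # Theta_pq for the direct term p = q = 1

H   = tau**2*np.cos(tc/2)**2 + 4*w0**2*np.sin(tc/2)**2          # Eq. (9)
H1  = w0**2*np.sin(tc/4)**2*(8*H + tau**2) + 2*tau**2*H*np.cos(tc/4)**2
H2  = 20*tau**2*np.cos(tc/4)**2 - (8*H + tau**2)                # Eq. (18)
num = 3*tau**2*H1*Th*np.sin(tc/2)
den = H1*H2 - 18*tau**4*w0**2*np.sin(tc/2)**2*Th**2
print(f"Eq. (17): {np.degrees(num/den):.2f} deg")               # -> -6.36 deg

approx = 3*Th*np.sin(tc/2)/(5 + 10*np.cos(tc/2) - 4*np.cos(tc)) # tau >> w0 limit
print(f"strong-focusing limit: {np.degrees(approx):.2f} deg")   # -> -6.34 deg
```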
The maximum of the differential number of signal photons is expected to be close to \(\vartheta=\vartheta_{\rm shift}^{p,q}\). Therefore, we set \(\vartheta=\vartheta_{\rm shift}^{p,q}\) in the prefactor of the expression for \({\rm d}N^{3\omega}/{\rm d}\!\sin\vartheta\). After integrating over \(\vartheta\), we obtain the total number of \(3\omega\)-signal photons
\[\begin{split} N^{3\omega}\approx&\left(\frac{7}{45 }\right)^{2}\frac{\alpha^{4}}{m_{e}^{8}}\frac{2^{7}\sqrt{3}}{\pi^{2}}\frac{ \omega W^{3}}{\pi^{3}w_{0}^{2}}\left(\frac{\tilde{H}_{1}}{H}\right)^{\frac{3 }{2}}\sin\left(\frac{\vartheta_{\rm coll}}{4}\right)\sin\left(\frac{\vartheta_ {\rm coll}}{2}\right)\\ &\times\sum_{p,q=1,2}\frac{\prod_{m=p,q}\left[1-\cos\left( \vartheta_{\rm shift}^{p,q}-(-1)^{m}\frac{\vartheta_{\rm coll}}{2}\right) \right]}{\sqrt{\tilde{H}_{1}\tilde{H}_{2}-18\tau^{4}w_{0}^{2}\sin^{2}(\frac{ \vartheta_{\rm coll}}{2})\Theta_{pq}^{2}}}\cos\vartheta_{\rm shift}^{p,q}\\ &\times\frac{\sqrt{2H(4w_{0}^{2}\mathfrak{h}^{2}+\tau^{2}\sin^{ 2}(\frac{\vartheta_{\rm coll}}{2}))+\tau^{2}w_{0}^{2}\left(\mathfrak{a}_{+}^{2 }+9\mathfrak{a}_{-}^{2}-6\Theta_{pq}\mathfrak{a}_{+}\mathfrak{a}_{-}\right)} }{\sqrt{-8H\left(\mathfrak{a}_{+}^{2}-\mathfrak{a}_{-}^{2}+\mathfrak{a}_{+} \right)-\tau^{2}\left[\mathfrak{a}_{+}^{2}+9\mathfrak{a}_{-}^{2}+\mathfrak{a}_ {+}-3\Theta_{pq}(2\mathfrak{a}_{+}+1)\mathfrak{a}_{+}\right]}}\\ &\times\exp\Bigg{\{}-\frac{3\omega^{2}\tau^{2}w_{0}^{2}}{8\tilde {H}_{1}}\frac{\sin^{2}(\frac{\vartheta_{\rm coll}}{4})}{\tilde{H}_{1}\tilde{H} _{2}-18\tau^{4}w_{0}^{2}\Theta_{pq}^{2}\sin^{2}(\frac{\vartheta_{\rm coll}}{2} )}\bigg{\{}(\tau^{2}+8H)\tilde{H}_{1}\tilde{H}_{2}\\ &\qquad\qquad\qquad\qquad-36\tau^{4}\Theta_{pq}^{2}\cos^{2}( \frac{\vartheta_{\rm coll}}{4})\left[2\tilde{H}_{1}-3\tau^{2}H\cos^{2}(\frac{ \vartheta_{\rm coll}}{4})\right]\bigg{\}}\Bigg{\}}\end{split} \tag{19}\]
with
\[\begin{split}\mathfrak{a}_{+}=&\frac{1}{2}\left. \left(h_{+}+h_{-}\right)\right|_{\varphi=0,\vartheta=\vartheta_{\rm shift}^{ p,q}}=\cos\vartheta_{\rm shift}^{p,q}\cos\left(\frac{\vartheta_{\rm coll}}{2} \right)-1\\ \mathfrak{a}_{-}=&\frac{1}{2}\left.\left(h_{+}-h_{-} \right)\right|_{\varphi=0,\vartheta=\vartheta_{\rm shift}^{p,q}}=\sin\vartheta _{\rm shift}^{p,q}\sin\left(\frac{\vartheta_{\rm coll}}{2}\right)\end{split} \tag{20}\]
and
\[\mathfrak{h}=h\big{|}_{\varphi=0,\vartheta=\vartheta_{\rm shift}^{p,q}}=\cos \vartheta_{\rm shift}^{p,q}-\cos\left(\frac{\vartheta_{\rm coll}}{2}\right)\,. \tag{21}\]
To better display the dependence of the number of signal photons on the laser parameters, in Eq. (19) we expressed the peak field amplitude \(\mathfrak{E}\) in terms of the laser pulse energy via Eq. (5).
Eq. (19) simplifies considerably if we expand prefactor and exponent separately to leading
order in \(\vartheta_{\rm coll}\), resulting in
\[N^{3\omega}\approx\left(\frac{7}{45}\right)^{2}\frac{\alpha^{4}}{m_{e}^{8}}\frac{\omega W^{3}}{\tau^{2}w_{0}^{2}}\frac{\vartheta_{\rm coll}^{6}}{11^{3}\sqrt{33}}\sum_{p,q=1,2}\frac{(121-57\Theta_{pq}^{2})^{2}}{\sqrt{121-13\Theta_{pq}^{2}}}\,{\rm e}^{-\frac{27\omega^{2}w_{0}^{2}}{2816}(11-2\Theta_{pq}^{2})\vartheta_{\rm coll}^{2}}\,. \tag{22}\]
This approach is motivated by the good agreement between the full expressions and the respective leading-order expansions in the relevant parameter ranges of pulse durations \(20\,\mathrm{fs}\lesssim\tau\lesssim 150\,\mathrm{fs}\) and of beam waists \(0.02\,\mu\mathrm{m}\lesssim w_{0}\lesssim 2\,\mu\mathrm{m}\). The deviations are below \(15\%\) for collision angles \(\vartheta_{\rm coll}\lesssim 65^{\circ}\). Since the optimal collision angle is found precisely in this angle regime (see below), Eq. (22) is expected to give accurate predictions for the maximal number of merged signal photons attainable in the collision of two laser pulses.
From Eq. (22), we can directly read off that \(N^{3\omega}\sim\tau^{-2}\) and \(N^{3\omega}\sim W^{3}\). For the beam waist \(w_{0}\), the scaling depends on the collision angle \(\vartheta_{\rm coll}\) and the photon energy \(\omega\). The larger \(\vartheta_{\rm coll}\) and \(\omega\), the fewer signal photons will be emitted when increasing \(w_{0}\).
In turn, the scaling with \(\omega\) depends on the collision angle \(\vartheta_{\rm coll}\) and the beam waist \(w_{0}\). On the one hand, the prefactor of Eq. (22) increases when \(\omega\) is increased. On the other hand, the exponential function causes the signal photon number to decrease with growing \(\omega\). The latter effect is larger the larger \(\vartheta_{\rm coll}\) and \(w_{0}\) are, and for realistic laser configurations it typically outweighs the enhancement from the prefactor. However, laser beams with larger photon energy can also be focused to smaller beam waists, such that the beam divergence \(\theta=2/(w_{0}\omega)\) can be kept constant over a large range of \(\omega\). For fixed \(\theta\), it is beneficial for the signal to use large photon energies \(\omega\).
From Eq. (22) we also infer that the optimal collision angle \(\vartheta_{\rm coll}^{\rm max}\) moves to smaller values when \(w_{0}\) or \(\omega\) are increased. It even allows us to derive a formula for the optimal collision angle \(\vartheta_{\rm coll}^{\rm max,pq}\) for each partial signal, which reads
\[\vartheta_{\rm coll}^{\rm max,pq}=\frac{16\sqrt{11}}{3\omega w_{0}\sqrt{11-2 \Theta_{pq}^{2}}}\,. \tag{23}\]
We obtained this result by setting the derivative of each summand in Eq. (22) with respect to \(\vartheta_{\rm coll}\) to zero separately. Actually we are interested in the collision angle \(\vartheta_{\rm coll}\) which maximizes the total number of signal photons. However, adopting the same procedure for the total number of signal photons, i.e. for all summands together, leads to an expression which is not analytically solvable for \(\vartheta_{\rm coll}\). Nevertheless, Eq. (23) is very useful to make an educated guess about the optimal collision geometry because \(\vartheta_{\rm coll}^{\rm max,pq}\) for \(p=q\) and \(p\neq q\) do not differ very much. In fact, the product \(\omega w_{0}\) has a lower bound due to the diffraction limit. Assuming \(w_{0}=\lambda\) this bound becomes \(\omega w_{0}=2\pi\) and we can estimate that the optimal collision angles for the direct and the indirect signal are at most \(\sim 5^{\circ}\) apart. For \(w_{0}=\lambda\), the optimal collision angle is at \(\vartheta_{\rm coll}^{\rm max,p=q}\approx 53.8^{\circ}\) (\(\vartheta_{\rm coll}^{\rm max,p\neq q}\approx 48.6^{\circ}\)) for the direct (indirect) signal. For beams with larger beam waist \(w_{0}\) the optimal collision angle is at lower values.
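The two optimal angles quoted above follow directly from Eq. (23); as a minimal check:

```python
import numpy as np

w0_omega = 2*np.pi                          # diffraction-limited case, w0 = lambda
for Th2, label in ((1, "direct (p = q)"), (0, "indirect (p != q)")):
    tc = 16*np.sqrt(11)/(3*w0_omega*np.sqrt(11 - 2*Th2))        # Eq. (23)
    print(f"{label}: {np.degrees(tc):.1f} deg")
# -> direct (p = q): 53.8 deg ; indirect (p != q): 48.6 deg
```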
At the angles given by Eq. (23) the exponential dependency of the signal photon number on the laser parameters drops out and we have
\[\begin{split} N_{\text{max}}^{3\omega}\approx&\left( \frac{7}{45}\right)^{2}\frac{\alpha^{4}}{m_{e}^{8}}\frac{W^{3}}{\tau^{2}\omega ^{5}w_{0}^{8}}\frac{2^{24}}{9^{3}\sqrt{33}}\text{e}^{-3}\left[1+\left(\frac{2^ {11}\sqrt{3}}{9^{4}}-1\right)\Theta_{pq}^{2}\right]\\ \approx& 4.83\frac{\alpha^{4}}{m_{e}^{8}}\frac{W^{3}}{ \tau^{2}\omega^{5}w_{0}^{8}}\left[1-0.46\Theta_{pq}^{2}\right]\,.\end{split} \tag{24}\]
The scaling of the number of merged photons with the various laser parameters becomes particularly evident from this expression.
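The numerical constants in the second line of Eq. (24) follow from the first line; as a two-line check:

```python
import numpy as np

print(round((7/45)**2 * 2**24/(9**3*np.sqrt(33)) * np.exp(-3), 2))  # -> 4.83
print(round(2**11*np.sqrt(3)/9**4 - 1, 2))                          # -> -0.46
```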
## IV Results
In the following we present example results for the \(3\omega\)-signal for laser parameters of the high-power laser system (HPLS) available at ELI-NP [53; 54], providing two identical laser pulses of photon energy \(\omega=1.51\) eV, duration \(\tau^{\text{HM}}=24\) fs and energy \(W=244\) J at a repetition rate of \(1/60\) Hz. Moreover, we assume these lasers to be focused to \(w_{0}=\lambda\).
We first discuss the spatial distribution of the signal photons in the collision of two identical optical laser beams at the numerically determined optimal collision angle \(\vartheta_{\text{coll}}^{\text{max,num}}\approx 50.2^{\circ}\). Eq. (15) allows us to illustrate the contributions from the underlying microscopic processes separately, i.e. one photon of beam 1 merging with two photons of beam 2 (top panel of Fig. 3a), two photons of beam 1 merging with one photon of beam 2 (middle panel of Fig. 3a) and the interference of both (bottom panel of Fig. 3a). As expected, the direct contributions (top, middle) are mirror images of each other with the mirror plane at \(\vartheta=0\). From Fig. 3a we can read off that the main emission directions of the direct contributions at this collision angle are only slightly shifted away from the angle bisector of the two incident beams, to \(\vartheta\approx\pm 1.5^{\circ}\), respectively. The angular extent of the signal components is much larger: The direct contributions drop to \(1/e^{2}\) of their maximal amplitude at \(\Delta\vartheta\approx 10.5^{\circ}\) and \(\Delta\varphi\approx 15.5^{\circ}\) away from their main emission directions. Consequently, the contribution of the interference term is almost as large as the direct contribution at its main emission direction, i.e. at \(\vartheta=0=\varphi\). One has to keep in mind that the interference term as shown in the bottom panel of Fig. 3a contributes twice to the total signal. Thus, the interference is responsible for a fraction of about one third to one half of the total number of signal photons at the optimal collision angle. Note, however, that the interference term can also reduce the number of signal photons in some angular regions. In the case shown here, the contribution of the interference term is slightly negative around \(\vartheta\approx 0^{\circ}\) and \(\varphi\approx\pm 12^{\circ}\).
Although it is very interesting to understand how the single terms of Eq. (15) contribute, they can of course never be isolated in an experiment because the signal photons associated with the different contributions are indistinguishable, and only the total amplitude squared
is physical and strictly positive. Therefore, one will always only measure the total signal shown in Fig. 3b. The main emission direction of the total signal is found at \(\vartheta=0=\varphi\) as long as the direct contributions are not too far apart. Otherwise, the total signal will exhibit two main emission directions at \(\varphi=0\) and some \(\vartheta=\pm\vartheta_{\rm max}\). In the particular case considered here, the total signal is almost circularly distributed with \(1/e^{2}\)-widths of about \(\Delta\vartheta\approx 10^{\circ}\) and \(\Delta\varphi\approx 12^{\circ}\). For comparison note that the full beam divergences of the driving laser beams are given by \(\Theta=2/\pi\approx 36^{\circ}\).
While the direction of maximal emission of the indirect contribution is known to be \(\vartheta_{\rm max}=0=\varphi_{\rm max}\) for symmetry reasons, the emission directions of the direct contributions depend on the collision angle and the laser parameters. Therefore, we now analyze the emission directions of the direct contributions in more detail with special focus on their dependence on the chosen collision angle. To this end, we numerically determine the direction of maximal emission \(\vartheta_{\rm max}^{\rm num}\) for the partial contributions separately from Eq. (15). With this as a reference, we test the quality of our approximation of the emission direction \(\vartheta_{\rm max}^{\rm pw}\) deduced from plane wave considerations, Eq. (16), and also that of \(\vartheta_{\rm shift}^{p,q}\) as given in Eq. (17). Without loss of generality we consider only the partial contribution with \(p=1=q\), which gives us Fig. 4. The second direct contribution (\(p=2=q\)) would give us the same picture but mirrored at \(\vartheta_{\rm max}=0\).
The direction of maximal emission \(\vartheta_{\rm max}^{\rm num}\) exhibits several interesting features. Firstly, it is much closer to the angle bisector between the two incident beams (\(\vartheta=0\)) than one might have expected from the plane wave considerations, i.e. compared to \(\vartheta_{\rm max}^{\rm pw}\). Secondly, for small collision angles, the signal photons are even scattered closer towards the beam contributing only one photon than towards the beam contributing two photons. At a collision angle of \(\vartheta_{\rm coll}\approx 45.5^{\circ}\), this behavior is reversed. The appearance of this extremum in the curve of the direction of maximal emission hints at the existence of two opposing effects. The second suggested expression for the direction of maximal emission, \(\vartheta_{\rm shift}^{p,q}\), approximates the numerically determined curve better than the plane wave solution \(\vartheta_{\rm max}^{\rm pw}\) but also fails to reproduce this essential feature. For a weaker focusing of the laser beams, \(\vartheta_{\rm max}^{\rm num}\) as well as \(\vartheta_{\rm shift}^{p,q}\) converge towards \(\vartheta_{\rm max}^{\rm pw}\).
To give an impression not only of the deviation of the predicted angles of maximal emission but also of the corresponding signal amplitudes, we encode the deviation of the differential signal photon numbers at the angles \(\vartheta_{\rm max}^{\rm pw}\) and \(\vartheta_{\rm shift}^{p,q}\) from that at \(\vartheta_{\rm max}^{\rm num}\) in the color of the curves, i.e. the color is given by \(\left(\frac{{\rm d}^{2}N^{3\omega}}{{\rm d}\varphi{\rm d}\sin\vartheta}\right)\big{|}_{\vartheta=\vartheta_{\rm max}^{\rm approx}}/\left(\frac{{\rm d}^{2}N^{3\omega}}{{\rm d}\varphi{\rm d}\sin\vartheta}\right)\big{|}_{\vartheta=\vartheta_{\rm max}^{\rm num}}\) with \(\vartheta_{\rm max}^{\rm approx}=\vartheta_{\rm max}^{\rm pw}\) and \(\vartheta_{\rm max}^{\rm approx}=\vartheta_{\rm shift}^{p,q}\), respectively.
As we are interested in getting the signal as large as possible, we pay special attention to the properties at the optimal collision angle \(\vartheta_{\rm coll}^{\rm max,num}\approx 50.2^{\circ}\), marked with dashed lines in Fig. 4. Here, the different approaches predict \(\vartheta_{\rm max}^{\rm num}=-1.4^{\circ}\), \(\vartheta_{\rm shift}^{1,1}=-6.4^{\circ}\) and \(\vartheta_{\rm max}^{\rm pw}=-8.9^{\circ}\). Evaluating the differential signal photon number at \(\vartheta_{\rm shift}^{1,1}\) and \(\vartheta_{\rm max}^{\rm pw}\) instead of \(\vartheta_{\rm max}^{\rm num}\) leads to a \(37\%\), respectively \(65\%\), smaller signal. Despite these discrepancies, \(\vartheta_{\rm shift}^{p,q}\) and \(\vartheta_{\rm max}^{\rm pw}\) serve as valid points around which to integrate numerically in order to obtain the total number of signal photons. Furthermore, we can use these approximations for theoretical purposes, e.g. to be able to determine an approximate expression for the total number of signal photons analytically, as done in Sec. III.2.
Figure 3: Angular distribution for the collision of two HPLS pulses according to Eq. (15) at \(\vartheta_{\rm coll}=\vartheta_{\rm coll}^{\rm max,num}\approx 50.2^{\circ}\). Dashed lines indicate contours with the same differential contribution, with the outermost line being at \(1/e^{2}\) of the maximum value \({\rm d}^{2}N_{\rm max}^{3\omega}/({\rm d}\varphi{\rm d}\sin\vartheta)\). Left (from top to bottom): partial contributions for \(p=q=1\), \(p=q=2\) (both direct terms), and \(p\neq q\) (interference term). Right: total signal (partial contribution of interference term included twice). Note the different scales for the partial contributions and the total signal.
We obtain the number of signal photons \(N^{3\omega}\) as a function of the collision angle \(\vartheta_{\rm coll}\) by numerically integrating Eq. (15). In Fig. 5, the numerically determined direct (\(p=1=q\), Fig. 5a), indirect (\(p=1\), \(q=2\), Fig. 5b), and total \(3\omega\)-signal (Fig. 5c) are presented together with the corresponding approximate results according to Eq. (19) and Eq. (22).
While the direct contribution is underestimated by the approximations in Eq. (19) and Eq. (22), the indirect contribution is overestimated. The discrepancy between the numerical result and the approximations for the indirect contributions is mainly related to the neglect of higher-order terms in the expansion in \(\varphi\) and \(\vartheta\) of \(\mathrm{d}^{2}N^{3\omega}/\mathrm{d}\varphi\,\mathrm{d}\!\sin\vartheta\). In the case of the direct contributions, there is also the fact that \(\vartheta=0\) is not optimal as an expansion point. As pointed out above, for a beam which is more plane-wave-like, the direction of emission can be well approximated by \(\vartheta_{\rm max}^{\rm pw}\), which fulfills \(|\vartheta_{\rm max}^{\rm pw}|>|\vartheta_{\rm max}^{\rm num}|\) for the parameters considered here. Therefore, the level of agreement between the numerical and the approximated results, which involve an expansion in \(\vartheta\ll 1\) and thus also require \(\vartheta_{\rm max}\ll 1\), is expected to worsen with increasing beam waist and pulse duration relative to the results presented in Fig. 5a. Interestingly, the numerical and the approximate analytical results are in good agreement again for the total signal. Also, the position of the optimal collision angle is reproduced with a deviation of less than \(2^{\circ}\) for the total signal. The deviations of the predicted optimal collision angle are larger for the partial contributions, but still below \(5^{\circ}\). While the optimal collision angles predicted by the approximate expressions are shifted to larger values for the direct contribution, they are shifted to lower values for the indirect contribution.
In summary, for the specific laser parameters stated at the beginning of this section we find the maximal number of merged photons to be 1.02 per shot, or, taking the repetition rate of 1/60 Hz into account, 61.2 per hour. The indirect contribution contributes approximately one third of the total signal photons. The collision angle under which these numbers can be reached is \(\vartheta_{\rm coll}\approx 50.2^{\circ}\).
## V Conclusions and Outlook
In the present work we have studied laser photon merging in the collision of two identical laser beams, modeled as pulsed Gaussian beams in the infinite Rayleigh range approximation. We have derived approximate expressions for the optimal collision angle and the signal photon number of the \(3\omega\)-signal and have compared them to numerical results. Within their limits of validity our approximations give results consistent with numerical calculations over a large parameter range. For the example of the collision of two optical high-intensity lasers of the 10 PW class available at ELI-NP at zero impact parameter, the total signal photon number was reproduced to an accuracy of \(\approx 20\%\). This suggests that our analytical approximations provide a convenient way to analyze the \(3\omega\)-signal with little numerical effort.
We have found the attainable number of merged photons per hour at the optimal collision angle of \(\vartheta_{\rm coll}\approx 50.2^{\circ}\) to be \(\approx 61.2\) at a repetition rate of 1/60 Hz for these parameters. Note also that for 1 PW class laser facilities the merging signal is still sizable. E.g. for the parameters of the Center for Advanced Laser Applications (CALA) in Munich (\(\omega=1.55\) eV, \(\tau^{\rm HM}=20\) fs, \(W=24\) J and \(w_{0}=800\) nm) [55; 56], one can expect to obtain about 5.5 merged photons per hour. The considerably lower laser energy is here compensated by the higher repetition rate of the laser of 1 Hz. Although these numbers are much lower than the signal photon numbers associated with vacuum birefringence, the merging signal can still be considered promising for a discovery experiment of quantum vacuum nonlinearity. Its clear advantage is that the frequency \(3\omega\) signal photons can be clearly distinguished from the driving laser beams. In contrast, the signal of vacuum birefringence typically competes with the large background of laser photons of the same frequency as the signal. Correspondingly, the discernible number of signal photons can be considered to be of the same order as the merging signal for the respective best options of collision geometry and laser systems available today, cf. e.g. [57]. Note also that three-beam scenarios as discussed recently, e.g., in Refs. [42; 43; 51] provide much larger merging signals than the two-beam scenario
discussed here. However, the experimental implementation of a collision of three beams at a well defined spacetime point is considerably more difficult than the already extremely demanding task of colliding two tightly focused high-intensity laser beams.
In the present study we have emphasized the prospects of the nonlinear quantum vacuum signature of laser photon merging in a two-beam configuration. We have identified interesting features of the merging signal such as the acute optimal collision angle and the considerable number of signal photons. With the quantum vacuum signal of two-beam laser photon merging we have revealed a further possibility for the discovery of QED vacuum nonlinearities in an all-optical experiment. Most notably, our proposal requires the collision of only two fundamental-frequency laser beams and thus avoids many experimental complications inherent to collision scenarios involving three or more high-intensity laser beams.
###### Acknowledgements.
This work has been funded by the Deutsche Forschungsgemeinschaft (DFG) under Grant No. 416607684 within the Research Unit FOR2783/2.
|
2306.05177 | Modeling and Harmonic Balance Analysis of Parametric Amplifiers for
Qubit Read-out | Predicting the performance of traveling-wave parametric amplifiers (TWPAs)
based on nonlinear elements like superconducting Josephson junctions (JJs) is
vital for qubit read-out in quantum computers. The purpose of this article is
twofold: (a) to demonstrate how nonlinear inductors based on combinations of
JJs can be modeled in commercial circuit simulators, and (b) to show how the
harmonic balance (HB) is used in the reliable prediction of the amplifier
performance e.g., gain and pump harmonic power conversion. Experimental
characterization of two types of TWPA architectures is compared with
simulations to showcase the reliability of the HB method. We disseminate the
modeling know-how and techniques to new designers of parametric amplifiers. | Daryoush Shiri, Hampus Renberg Nilsson, Pavan Telluri, Anita Fadavi Roudsari, Vitaly Shumeiko, Christian Fager, Per Delsing | 2023-06-08T13:18:22Z | http://arxiv.org/abs/2306.05177v2 | # Modeling and Harmonic Balance Analysis of Parametric Amplifiers for Qubit Read-out
###### Abstract
Predicting the performance of traveling-wave parametric amplifiers (TWPAs) based on nonlinear elements like superconducting Josephson junctions (JJs) is vital for qubit read-out in quantum computers. The purpose of this article is twofold: (a) to demonstrate how nonlinear inductors based on combinations of JJs can be modeled in commercial circuit simulators, and (b) to show how the harmonic balance (HB) is used in the reliable prediction of the amplifier performance, _e.g._, gain and pump harmonic power conversion. Experimental characterization of two types of TWPA architectures is compared with simulations to showcase the reliability of the HB method. We disseminate the modeling know-how and techniques to new designers of parametric amplifiers.
Harmonic Balance, Josephson Junction, SNAIL, Parametric Amplifier, Nonlinear Circuits, Pump Depletion, Qubit Read-out.
## I Introduction
Parametric modulation of reactive elements in circuits for amplification purposes dates back to WWI when, for example, Alexanderson and others proposed magnetic amplifiers for radio transmitters [1, 2, 3, 4]. The low noise figure of the first parametric amplifiers, a result of using passive elements, was on par with that of masers [5]. Later, the quest for wide-band amplification and obviating the bandwidth-gain trade-off led to the proposal of traveling-wave parametric amplifiers (TWPA).
The proposals were based on modulating the inductance or capacitance values of a nonlinear transmission line by a traveling, high-amplitude pump, a process akin to that in nonlinear optical materials where the refractive index (or light velocity) is power-dependent [6, 7, 8]. After high-quality, low-loss germanium and silicon varactors became available, traveling-wave parametric amplifiers were developed in different labs. Bell Laboratories implemented the first example of a TWPA with \(200\,\mathrm{MHz}\) bandwidth and \(10\,\mathrm{dB}\) gain, centered at \(700\,\mathrm{MHz}\) [5]. With the advent of low-cost, highly integrated, and high-yield semiconductor transistor amplifiers in the 1970s, the interest in varactor-based parametric amplifiers declined. The interest in TWPAs was sparked anew by the quest for low-noise amplification of faint signals in astronomy and quantum physics experiments. The first parametric amplifier using JJs was reported by H. Zimmer in 1967, where the amplified, reflected signal wave was attributed to the nonlinear inductance of a Josephson junction [9]. Thereafter, the physics community witnessed a rapid development of JJ-based low-noise parametric amplifiers [10, 11, 12, 13]. Quantum-noise-limited microwave amplifiers based on Josephson junctions are vital in, _e.g._, increasing the sensitivity of dark matter detectors [14] and the fidelity of qubit read-out [15]. Today, a myriad of designs based on JJs, superconducting quantum interference devices (SQUIDs) [16], and superconducting nonlinear asymmetric inductive elements (SNAILs) [17] are reported for the amplification of microwave signals on which the information about the state of a quantum bit (qubit) is carried. The low noise figure of the amplifier results in high fidelity in discriminating the states \(0\) and \(1\) of the qubit. Gains of \(15\)-\(20\,\mathrm{dB}\) and bandwidths of \(700\,\mathrm{MHz}\)-\(1\,\mathrm{GHz}\), with large tunability and low noise (less than one photon), have been reported [18, 19, 15].
Predicting the performance of parametric amplifiers and capturing the processes behind amplification is vital, as this provides insight into how to enhance the gain mechanism and how to avoid those processes which impede or compete with the amplification. This, in turn, shortens the design-to-fabrication cycle. However, the complex mechanisms of power conversion between pump harmonics, undesired pump depletion effects, the compression point of the amplifier, etc., cannot be captured by traditional (linear) small-signal methods like AC analysis and \(S\)-parameters. These methods are based on linearizing the circuit around the operating point by assuming a low-amplitude stimulus applied around that point. The small-signal transfer functions like signal transmission (\(S_{21}\)) and reflection (\(S_{11}\)) are then extracted for each given bias and frequency [20, 21, 22]. However, in a strongly nonlinear circuit such as a TWPA, the injected high-amplitude tone leads to oscillations of the operating point. Furthermore, in the case of having two or more input tones, the intermixing products make the mapping between the input and transmitted/reflected waves less straightforward, and \(S\)-parameters become less useful. Extracting the spectral content at the output of the amplifier, or at any stage of it, is possible by performing a time-domain transient analysis followed by a Fourier analysis. However, this requires a long simulation time and a very small time-step. The former is because the initial reflections and ringing at the output of the circuit must settle down so that the circuit reaches a periodic steady state. The latter is because the maximum frequency of interest (_e.g._, the 5th harmonic of the pump) sets a limit on the time-step by virtue of the Nyquist criterion. Moreover, to extract the gain spectrum, the above process should be repeated for every given signal and pump frequency and power level, which renders the simulation very time-consuming and
impractical.
Harmonic Balance (HB) is the most reliable and dominant method for the large-signal analysis of highly nonlinear microwave circuits. It is used to analyze circuits with periodic or quasi-periodic steady-state responses. The origin of this method goes back to Galerkin, Krylov, and Bogoliubov, who proposed that the solution of a nonlinear system can be written as a sum of known functions [23]. The unknown coefficients in the sum are found by inserting a trial solution into the governing equations of the nonlinear system and solving an algebraic equation. If the known functions are sinusoids, they are called harmonics, and the method is called harmonic balance. It applies when the output of the system is expected to be a steady-state periodic waveform (not necessarily sinusoidal). Modern versions of this method for solving nonlinear circuits appeared in the 1960s-1980s through the work of Baily, Lindenlaub, Nakhla, Vlach, and others [24, 25]. Nowadays it is also used by mechanical and civil engineers in structural stability analysis.
In the HB method, the circuit is split into a linear sub-circuit and a nonlinear sub-circuit. In the linear sub-circuit, the elements can be expressed by their frequency-domain admittance or impedance, and the analysis is done in the Fourier domain. The nonlinear sub-circuit is studied in the time domain using algebraic methods or by solving time-domain differential equations, depending on whether the circuit is quasi-static or not [26]. Details of the HB method and its essentials are explained in Appendix A. Suffice it to say that the maximum number of harmonics \(k\), the method chosen to solve the error equation (equation 22 in Appendix A), and the time-step for the transient-analysis part of HB are the most important choices at the outset.
Before the HB simulation, the first step in the design/analysis of a parametric amplifier is the mathematical modeling of the nonlinear element. The nonlinear element in a superconducting parametric amplifier is a Josephson junction (JJ) or a combination of JJs, such as a SNAIL or a SQUID. All these elements generally act as nonlinear inductors whose value (the parameter) is a function of a bias current and/or an applied magnetic flux. An essential feature of JJ-based devices is that all higher-order nonlinear effects are known from the Josephson relations [27, 28].
The modeling of JJs and SNAILs using the symbolic device definition (SDD) is discussed in Section II, where the scattering parameters of a SNAIL are also calculated and compared with the analytic method. Section III presents the design process of a TWPA with JJs and its gain simulation and measurement. It also shows how convergence of the HB method is achieved using information gained from the transient analysis of the amplifier. In Section IV, the gain spectrum of a SNAIL-TWPA is simulated with the HB method and compared with experimental data. The agreement between simulation and measurement proves the efficiency and reliability of the HB method. Section V focuses on the modeling of power exchange between pump harmonics, a process which is detrimental to the gain of TWPAs. We show that the results of coupled-mode theory and the HB analysis agree. The insight obtained from this part leads to creative solutions for achieving TWPAs of higher gain and bandwidth.
## II Symbolically Defined Model of Josephson Junction and SNAIL
Equation-based modeling of nonlinear components is possible in modern circuit simulators like Microwave Office [29], APLAC [30], and, in general, using Verilog-A code. In this work, we use the symbolically defined device (SDD) in Keysight ADS, which allows direct mathematical modeling of a device in both the large-signal and small-signal regimes. The mathematical functions relating the device port voltages and currents can be modeled implicitly, _i.e._, using \(f(I,V)=0\), or explicitly by giving the relation between the current and voltage of each element or port, _i.e._, \(I=g(V)\), where \(f(\cdot)\) and \(g(\cdot)\) are arbitrary nonlinear functions that can be derived from the Josephson relations. The device can have as many ports as required. Here we show how a JJ and a SNAIL are modeled as black boxes using the SDD.
### _Josephson Junction (JJ)_
The current and voltage of a Josephson junction in the superconducting state are related through the Josephson relations below [27, 28, 31]. The junction is composed of a metal-insulator-metal sandwich. At temperatures below the critical temperature (\(T<T_{\text{c}}\)), the metal becomes superconducting and the electrons pair up into so-called Cooper pairs. For aluminum, the critical temperature is \(T_{\text{c}}=1.175\,\mathrm{K}\). The wave function of the Cooper-pair condensate is a complex quantity whose norm squared equals the density of Cooper pairs. The phase difference between the wave functions of the Cooper pairs on the two JJ terminals is \(\phi\). The voltage is the time derivative of the magnetic flux, which is proportional to this phase, _i.e._, \(\Phi=\frac{\Phi_{0}}{2\pi}\phi\), where \(\Phi_{0}=2.07\times 10^{-15}\,\mathrm{Wb}\) is the quantum of magnetic flux. \(I_{\text{c}}\) is the critical current. If the current is higher than this value, the JJ exits the superconducting state and becomes dissipative, and its behavior approaches that of a normal resistor with resistance \(R_{N}\). Note that this resistor is fixed only if the voltage applied across the JJ is larger than \(2\Delta(T)/e\) and \(T=0\), where \(\Delta(T)\) is the energy needed to break the Cooper pairs and \(e\) is the electron charge [31]. In that case, \(R_{N}=\pi\Delta(T=0)/2eI_{\text{c}}\). Otherwise, a nonlinear voltage- and temperature-dependent resistor, \(R_{N}(V,T)\), must be added in parallel with the model of Figure 1. As we always work in the \(I<I_{\text{c}}\) and \(T=0\) regime, modeling the dissipative operation of the JJ is not necessary.
\[\left\{\begin{array}{c}I=I_{\text{c}}\sin(\phi)\quad\text{for}\quad I<I_{\text{c}}\\ V=\frac{\mathrm{d}\Phi}{\mathrm{d}t}=\frac{\Phi_{0}}{2\pi}\frac{\mathrm{d}\phi}{\mathrm{d}t}\Rightarrow\phi=\frac{2\pi}{\Phi_{0}}\int V\,\mathrm{d}t. \end{array}\right. \tag{1}\]
The potential energy stored in the JJ, \(U\), is found from:
\[U=\int V\cdot Idt=-E_{J}\cos(\phi), \tag{2}\]
where \(E_{J}=\frac{\Phi_{0}I_{\text{c}}}{2\pi}\) is called the Josephson energy. The Josephson relations mentioned above show that a JJ behaves like a nonlinear inductor, and its value is controlled by the bias current \(I\). The nonlinear inductance is found from \(\frac{1}{L_{J}}=\frac{\partial^{2}U}{\partial\Phi^{2}}\) which yields:
\[L_{J}=\frac{L_{J0}}{\sqrt{1-(\frac{I}{I_{\rm c}})^{2}}}, \tag{3}\]
where the zero-bias inductance \(L_{J0}\) is \(\frac{\Phi_{0}}{2\pi I_{\rm c}}\). To implement the JJ model, it is necessary to emulate the quantum mechanical phase mathematically. From the voltage-phase relation in Josephson relations, the phase is found by integrating the input voltage. An ideal integrator is implemented by injecting a current into a capacitor. Therefore a nonlinear voltage-to-current converter ('NonlinVCCS') is used to convert the voltage to current, and then it is injected into a capacitor. The coefficient and the capacitor values are chosen to obtain the phase quantity with the correct dimension _i.e._, radian. With \(C_{int}=1\,\mathrm{pF}\) (any value is possible), the coefficient in NonlinVCCS is \(C_{int}\times(\frac{2\pi}{\Phi_{0}})=3038.5349\). The phase is then fed to the input port of a _two-port_ SDD. To guarantee an infinite input impedance at the input port (to avoid loading), we set the current of the input port to zero by setting \(I[1,0]=0\). The output port current is then set to \(I[2,0]=I_{\rm c}\sin(\phi)\). The output terminals of two-port SDD are connected to both electrodes of JJ to guarantee that this is the current which passes through the JJ. The physical capacitance of the metal-insulator-metal junction, \(C_{j}\), is added in parallel to the input ports as shown in Figure 1. The typical value for a JJ capacitance made of aluminum _i.e._, a sandwich of Al-Al\({}_{2}\)O\({}_{3}\)-Al is about \(C_{j}=6-8\,\mathrm{fF}\). Note that the ground terminals in the model are only mathematical zero reference voltage in the model, and they do not correspond to any physical ground terminal in the real device.
The first simulation to check the correct operation of the JJ model is applying a constant DC voltage, \(V_{\rm dc}\), in the time domain and observing the current oscillations. In this topology, the JJ works as a frequency modulator (FM) or a voltage-controlled oscillator (VCO) if a piece-wise constant DC voltage is applied, because the oscillation frequency of the current is proportional to the applied DC voltage. Using equation (1), the angular frequency of the current oscillations is \(\omega=\frac{2\pi}{\Phi_{0}}V_{\rm dc}\), _i.e._, \(f=V_{\rm dc}/\Phi_{0}\). Variation of the DC bias current changes the inductance according to equation (3). This can be tested by an AC or \(S\)-parameter analysis of a single JJ with a parallel capacitor acting as an LC low-pass filter: by observing the change of the bandwidth (\(\omega_{-3\,\mathrm{dB}}\)) as a function of the bias current, the inductance variation with current according to equation (3) can be confirmed.
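To make this check concrete outside the simulator, a minimal Python sketch (our own construction, not the SDD netlist; the function name and the \(10\,\upmu\)V drive are illustrative choices) integrates an applied voltage into the phase, as in equation (1), and recovers the expected Josephson oscillation frequency:

```python
import numpy as np

PHI0 = 2.067833848e-15  # magnetic flux quantum [Wb]

def jj_current(v_t, dt, Ic=1.4e-6):
    """Behavioral JJ: integrate the terminal voltage into the phase
    (equation (1)) and return I = Ic*sin(phi)."""
    phi = (2 * np.pi / PHI0) * np.cumsum(v_t) * dt
    return Ic * np.sin(phi)

# Constant-voltage (VCO) check: expect oscillations at f = Vdc/PHI0.
Vdc = 10e-6                        # 10 uV -> f ~ 4.84 GHz
f_expect = Vdc / PHI0
dt = 1 / (64 * f_expect)           # time step well above the Nyquist limit
t = np.arange(0, 32 / f_expect, dt)
i = jj_current(np.full_like(t, Vdc), dt)

# Locate the spectral peak of the simulated current (skip the DC bin).
spec = np.abs(np.fft.rfft(i))
f_sim = np.fft.rfftfreq(t.size, dt)[1 + np.argmax(spec[1:])]
print(f"expected {f_expect/1e9:.3f} GHz, simulated {f_sim/1e9:.3f} GHz")
```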
### _Superconducting Nonlinear Asymmetric Inductance eLement (SNAIL)_
Tunability of the device inductance via an applied DC magnetic flux is attractive for amplifiers based on the 3-wave mixing mechanism [17, 18, 32], as will be discussed in Section IV. The circuit topology of a SNAIL element with three JJs in one branch is shown in Figure 2(a). The right branch (branch 2) in Figure 2(a) is composed of \(N=3\) equal JJs. The junctions of the two branches differ in size by the asymmetry factor \(\alpha\); since the Josephson energy is proportional to the critical current, this asymmetry is summarized by the relation between the critical currents of the two branches, \(I_{\rm c1}=\alpha I_{\rm c2}\). Starting from KCL, we can write:
\[\left\{\begin{array}{c}I=I_{1}+I_{2},\\ I_{1}=I_{\rm c1}\sin(\phi_{1}),\quad I_{2}=I_{\rm c2}\sin(\phi_{2}),\end{array}\right. \tag{4}\]
where \(I\) is the total current passing through the SNAIL. Since the junctions in branch 2 are of the same size, the phase drop across each of them is the same. The total phase change summed around the loop, including the contribution of the external flux \(\Phi_{\rm ext}\) threading it, must equal zero; therefore:
\[\phi_{1}-N\phi_{2}+\frac{2\pi\Phi_{\rm ext}}{\Phi_{0}}=0 \tag{5}\]
from which, after setting \(\frac{\pi\Phi_{\rm ext}}{\Phi_{0}}=F\), we have
\[\phi_{2}=\frac{2F+\phi_{1}}{N}. \tag{6}\]
The SNAIL model (Figure 2) is implemented by building two nonlinear current sources corresponding to the two branches in equation (4) and joining them to form the total current \(I\). Figure 2(b) shows the model implemented in ADS. The left branch uses a two-port SDD to implement \(I_{1}=I_{\rm c1}\sin(\phi_{1})\). The phase \(\phi_{1}\) is created by integrating and scaling the voltage drop \(V\) across the JJ (the leftmost branch of the SNAIL), as mentioned before for the single JJ. For the current of the rightmost branch (\(N\) JJs), the mathematical phase variable \(\phi_{1}\) and \(2F\) are supplied and added together as two high-impedance voltage ports, \(v_{1}\) and \(v_{3}\), of a _three-port_ SDD. The rightmost port (port 2, the output) is then a sinusoidal function of the inputs, scaled by \(I_{\rm c2}\), _i.e._, we have:
\[I_{2}=I_{\rm c2}\sin\left(\frac{2F+\phi_{1}}{N}\right) \tag{7}\]
Note that to simulate a SQUID, which is a SNAIL with the same number of JJs in each branch, it suffices to set \(N=1\) and \(I_{\rm c1}=I_{\rm c2}\), _i.e._, \(\alpha=1\). The physical capacitance of the junctions can be added in parallel to the corresponding JJ voltage ports, as shown by \(C_{j1}\) and \(\frac{1}{3}C_{j2}\). Note that the three series capacitors in Figure 2(b), which model the physical capacitance of the three JJs in the right branch, are grouped into one capacitor. Since we always assume the circuit operates in the superconducting state, the normal resistances of the JJs, \(R_{N}(V,T)\), are not added to the SNAIL model.
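For reference, the two controlled sources of the model reduce to a single current-phase relation; a minimal sketch (our helper function; the critical currents are passed explicitly, so no particular asymmetry ratio is hard-coded):

```python
import numpy as np

def snail_current(phi1, F, Ic1, Ic2, N=3):
    """Total SNAIL current from equations (4) and (7): branch 1 is the
    single junction, branch 2 the array of N equal junctions; F is the
    reduced external flux, F = pi * Phi_ext / Phi0."""
    return Ic1 * np.sin(phi1) + Ic2 * np.sin((2 * F + phi1) / N)

# A SQUID is recovered with N = 1 and equal critical currents:
i_squid = snail_current(0.3, np.pi / 4, 1e-6, 1e-6, N=1)
```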
Fig. 1: The circuit symbol (left) and pseudo-schematics of the mathematical model (right) of JJ behavior based on equation (1).
The equivalent inductance of the SNAIL is found from \(\frac{1}{L_{\text{tot}}}=\frac{\partial^{2}E_{\text{tot}}}{\partial\Phi^{2}}\), where \(E_{\text{tot}}\) is the total Josephson energy of the SNAIL, written as:
\[E_{\text{tot}}=-E_{J1}\cos(\phi_{1})-NE_{J2}\cos(\frac{2F+\phi_{1}}{N}) \tag{8}\]
\(E_{J1}\) and \(E_{J2}\) are Josephson energies of JJs on the left and right branches of SNAIL, respectively. From the above, it can be shown that \(L_{\text{tot}}\) is,
\[\frac{1}{L_{\text{tot}}}=(\frac{1}{L_{J0,\text{left}}})\cos(\phi_{1})+(\frac{ \alpha}{NL_{J0,\text{left}}})\cos\left(\frac{2F+\phi_{1}}{N}\right), \tag{9}\]
The individual inductance of the left branch at zero bias, \(L_{J0,\text{left}}\), has a form similar to that of equation (3). Equation (9) shows that the inductance is a function of the externally applied magnetic flux (\(F\)). For analytic calculations, the value of \(\phi_{1}\) is found by solving equation (4) for a given input current \(I\). The SDD-based model, however, simplifies the design process, as the models of Figures 1 and 2 inherently include the nonlinear inductance, and solving any extra equation, _e.g._, equations (4) and (9), is not necessary.
Furthermore, the phase \(\phi_{1}\) is always accessible in the SDD-based models as an extra node. This is helpful in the design/simulation of logic gates based on flux quanta, _e.g._, rapid single flux quantum (RSFQ) gates [31], where counting the number of flux quanta in the SNAIL or SQUID loop is necessary. The value of the phase node shows how many flux quanta each voltage pulse carries, since the flux is the area under the voltage pulse (see the Josephson relations).
We conclude this section by showing the \(S\)-parameter analysis of an LC low-pass filter (LPF) whose inductor is a SNAIL-13. The index 13 means one and three JJs in the left and right branches of the SNAIL, respectively, _i.e._, \(N=3\). The bottom electrode of the SNAIL is attached to the top electrode of a \(C=100\,\mathrm{fF}\) capacitor (see the inset in Figure 3). The \(S_{21}\) of the LPF is found both with the numerical ABCD-matrix method [33] and with an \(S\)-parameter analysis using the SDD model of the SNAIL. The results of the two methods are the same. The transmission coefficient, \(S_{21}\), varies with the applied magnetic flux with a periodicity of one flux quantum (\(\Phi_{0}\)), as is evident from the flux dependence of the inductance in equation (9).
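The analytic reference curve of Figure 3(a) can be reproduced along the following lines; the sketch below (our own, with placeholder junction parameters) finds the small-signal SNAIL inductance at the minimum of the Josephson energy, equation (8), and evaluates \(S_{21}\) of one unit cell from its ABCD matrix [33]:

```python
import numpy as np
from scipy.optimize import minimize_scalar

PHI0 = 2.067833848e-15

def snail_L(F, Ic1, Ic2, N=3):
    """Zero-current SNAIL inductance: phi1 is the local energy minimum
    of equation (8) in [-pi, pi] (adequate for illustration); 1/L is
    the phase derivative of the total branch current, cf. equation (9)."""
    EJ1, EJ2 = PHI0 * Ic1 / (2 * np.pi), PHI0 * Ic2 / (2 * np.pi)
    U = lambda p: -EJ1 * np.cos(p) - N * EJ2 * np.cos((2 * F + p) / N)
    p1 = minimize_scalar(U, bounds=(-np.pi, np.pi), method="bounded").x
    invL = (2 * np.pi / PHI0) * (Ic1 * np.cos(p1)
                                 + (Ic2 / N) * np.cos((2 * F + p1) / N))
    return 1.0 / invL

def s21(f, L, C, Z0=50.0):
    """Series-L / shunt-C cell: ABCD = [[1 + ZY, Z], [Y, 1]]."""
    Z, Y = 2j * np.pi * f * L, 2j * np.pi * f * C
    return 2.0 / ((1 + Z * Y) + Z / Z0 + Y * Z0 + 1)

# |S21| repeats with a period of one flux quantum (F has period pi):
for F in np.linspace(0.0, np.pi, 5):
    print(F / np.pi, abs(s21(10e9, snail_L(F, 3e-6, 0.8e-6), 100e-15)))
```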
## III JJ-based TWPA with 4-Wave Mixing (4WM)
The JJ-based traveling-wave amplifier (JJ-TWPA) in this example is composed of \(1000\) to \(2000\) JJ-and-capacitor unit cells, as shown in Figure 4. The bias current of the circuit, which passes through all series junctions, determines the inductance of each JJ. If the critical current of the JJ is \(I_{\text{c}}=1.4\,\upmu\)A, then with \(I_{\text{dc}}=0.7\,\upmu\)A the inductance of each unit cell is \(L=0.2714\,\mathrm{nH}\). The TWPA is the discrete implementation of a transmission line whose input impedance is determined by the inductance and capacitance of each unit cell, _i.e._, \(Z=\sqrt{L/C}\). For a \(Z=50\,\Omega\) input impedance, the required capacitance is \(C=108.6\,\mathrm{fF}\). The magnitude of the transmission, \(S_{21}\), for a JJ-TWPA with 2000 JJs is shown in Figure 4. The ringing in the pass band results from the discreteness of the circuit. The cut-off frequency (bandwidth) is found from the dispersion of the circuit, which gives \(\omega=\frac{2}{\sqrt{LC}}\)[34, 35, 32]. Note that the pump frequency must be lower than the cut-off and within the linear part of the dispersion to simultaneously satisfy phase matching (conservation of photon momentum) and conservation of energy. For \(I_{\text{dc}}=0.7\,\upmu\)A and the aforementioned \(C\), the bandwidth is 35 GHz. The bandwidth of the TWPA can be adjusted by the DC bias current (\(I_{\text{dc}}\)), as it changes the inductance according to equation (3). When the DC bias current of the JJ-TWPA is zero, the nonlinearity of the TWPA is akin to the \(\chi^{(3)}\) process in nonlinear optical materials. It is called the 4-wave mixing (4WM) process, where two pump photons at \(f_{pump}\) add up and create a signal and an idler photon, _i.e._, \(f_{pump}+f_{pump}=f_{signal}+f_{idler}\). Because of this, the TWPA gain spectrum has mirror symmetry around the pump frequency, at which there is no gain. This is problematic, as the high-power pump is close to the qubit frequencies and may cause unwanted excitation of the qubits. Putting the signal far away from the pump, on the other hand, yields less gain.
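The unit-cell values quoted above follow directly from equation (3) and \(Z=\sqrt{L/C}\); a quick numerical check:

```python
import numpy as np

PHI0, Ic, Idc, Z = 2.067833848e-15, 1.4e-6, 0.7e-6, 50.0
L0 = PHI0 / (2 * np.pi * Ic)              # zero-bias inductance
L = L0 / np.sqrt(1 - (Idc / Ic) ** 2)     # -> 0.2714 nH, equation (3)
C = L / Z ** 2                            # -> 108.6 fF for Z = sqrt(L/C)
print(L * 1e9, C * 1e15)
```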
Fig. 2: (a) Circuit topology of a SNAIL with three JJs in branch two, _i.e._, \(N=3\). (b) Implementation of the SNAIL model based on equations (1) and (7), using the SDD and nonlinear controlled sources. Note that in general \(N\) can differ from 3; _e.g._, for a SQUID, use \(N=1\) and \(\alpha=1\).
### _Input Impedance with \(S\)-parameter and HB_
To see the shortcomings of small-signal methods, the input impedance of a unit cell (JJ and capacitor) obtained with the small-signal (linear \(S\)-parameter) analysis and with the large-signal, nonlinear (HB) analysis are compared. In the small-signal analysis, the input impedance is found from the reflection coefficient, \(S_{11}\) or \(\Gamma\) [33],
\[S_{11}=\Gamma=\frac{Z_{\text{in}}-Z_{0}}{Z_{\text{in}}+Z_{0}}, \tag{10}\]
where \(Z_{0}=50\,\Omega\) is the characteristic impedance of the transmission line. In the harmonic-balance analysis, the input impedance is found by dividing the first harmonic of the input voltage by the first harmonic of the input source current. For a low input power, _e.g._, \(P_{\text{in}}=-140\,\mathrm{dBm}\), the results of the HB method and the \(S\)-parameters coincide (see the dashed traces in Figure 5). However, upon increasing the input power to \(P_{\text{in}}=-80\,\mathrm{dBm}\), the large-signal and small-signal results deviate, as shown by the solid green trace in Figure 5. This shows why a linear analysis like \(S\)-parameters cannot fully represent the behavior of a strongly nonlinear circuit, and why HB analysis is necessary.
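The two routes to the input impedance compared in Figure 5 amount to the following post-processing (a sketch, not ADS code; `v_t` and `i_t` are assumed to be steady-state waveforms spanning an integer number of periods):

```python
import numpy as np

def zin_small_signal(S11, Z0=50.0):
    """Invert equation (10)."""
    return Z0 * (1 + S11) / (1 - S11)

def zin_harmonic_balance(v_t, i_t, dt, f1):
    """Ratio of the fundamental Fourier components of the port
    voltage and the source current."""
    k = int(round(f1 * len(v_t) * dt))   # FFT bin of the first harmonic
    return np.fft.rfft(v_t)[k] / np.fft.rfft(i_t)[k]
```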
### _Transient Time Analysis_
To perform an HB analysis, it is better to first run a transient (time-domain) simulation to see when the circuit reaches the steady state. Thereafter, the settings of the HB analysis and the required parameters (discussed below) can be chosen accordingly. The transient-assisted harmonic balance (TAHB) uses the steady-state time-domain solution of the circuit as an initial guess for HB. The TAHB option can be selected along with three important parameters: _Stop Time_, _Max Time Step_, and _Min Time for detecting steady-state_. Once the simulation time reaches the _Min Time for steady-state_, the HB solver starts working. The transient simulation results of a 2000-JJ TWPA are shown in Figure 6, in which the circuit is fed by a DC current of \(0.7\,\upmu\)A and a single-tone pump of amplitude \(I_{\text{p}}=200\,\mathrm{nA}\) and frequency \(f_{\text{p}}=8\,\mathrm{GHz}\). The frequency must be in the pass band of the TWPA, as shown in Figure 4, to avoid attenuation. The initial delay of \(1\,\mathrm{ns}\) is intentional, for visibility. The waveforms at the input, the 1000'th, and the 2000'th unit cell start at \(1\,\mathrm{ns}\), \(7.36\,\mathrm{ns}\), and
Fig. 4: (Top) The schematic of a JJ-TWPA. Each unit cell is composed of a series JJ and a parallel capacitor. (Bottom) The magnitude of the transmission, \(S_{21}\), from an \(S\)-parameter analysis for a TWPA with 2000 unit cells, \(I_{\text{c}}=1.4\,\upmu\)A, \(I_{\text{dc}}=I_{\text{c}}/2\), and \(C=108.6\,\mathrm{fF}\).
Fig. 3: (a) The magnitude of transmission, \(S_{21}\), versus external magnetic flux based on analytic extraction from the ABCD matrix. (b) The same data from \(S\)-parameter analysis in ADS® using SDD-based model of SNAIL-13. The flux is normalized to the quantum of flux, \(\Phi_{0}\). \(S_{21}\) changes periodically with the applied flux. Inset shows the schematic of the LC low pass filter, which later on is called the unit cell of the amplifier.
\(12.7\,\mathrm{ns}\), respectively. That means it takes \(t_{d}=11.7\,\mathrm{ns}\) for the wave to travel through the 2000 junctions. This information is useful for determining the three parameter settings of TAHB.
Additionally, the wave group velocity in the amplifier chain, the impedance mismatch, and the inductance or capacitance per unit length can be extracted from this analysis. For example, knowing that the length of a single JJ is \(15\,\upmu\)m, the total length of the TWPA is \(l=2000\times 15\,\upmu\mathrm{m}=3\,\mathrm{cm}\). The group velocity of the microwave in the TWPA is then \(v=\mathrm{distance}/\mathrm{travel\ time}=3\,\mathrm{cm}/11.7\,\mathrm{ns}=2.564\times 10^{6}\,\mathrm{m/s}\). The group velocity is given by \(v=\frac{1}{\sqrt{L_{l}C_{l}}}\) (where the subscript \(l\) means that the quantities are per unit length). As a test, we can assume the value of the parallel capacitance in each unit cell is known, \(C=108.6\,\mathrm{fF}\); therefore, \(C_{l}=108.6\,\mathrm{fF}/15\,\upmu\mathrm{m}\). From this, the inductance of each JJ is found to be \(L=0.26\,\mathrm{nH}\). This is very close to the \(L=0.27\,\mathrm{nH}\) calculated from the JJ nonlinear inductance in equation (3). The asymmetry of the output waveform around the DC bias is caused by the even harmonics of the injected 8 GHz pump, which will be explained in Appendix B. The step-like jump in the input wave is the reflection of the output wave back to the input due to a slight impedance mismatch. For the HB simulation, the minimum time to reach the steady state is set to \(t>40\,\mathrm{ns}\) to ensure better convergence.
### _Gain: Simulation vs. Measurement_
A nonlinear circuit like a TWPA can be considered a mixer in which the signal and pump tones mix and different harmonics are generated at the output. ADS gives access to these harmonics by calling them with their indices, as follows, which is a useful way to obtain the gain spectrum of the amplifier.
If a nonlinear circuit has two input frequencies, _i.e._, \(f_{\mathrm{s}}\) and \(f_{\mathrm{p}}\), then the computed output spectrum contains mixing (up-converted) products at \(nf_{\mathrm{s}}+mf_{\mathrm{p}}\), where \(m\) and \(n\) are integers. To address each component of the spectrum at an arbitrary node, the command 'mix(node name,\(\{n,m\}\))' is used. Note that \(f_{\mathrm{s}}\) and \(f_{\mathrm{p}}\) should appear in the same order as in the list of _Fundamental Frequencies_ in the HB settings [36]. To access the power of the signal component at the output, the power of the tone corresponding to \(\{n,m\}=\{1,0\}\) is plotted. Likewise, to see how the amplitude of the pump harmonics changes as a function of the signal frequency or the length of the amplifier, the mixed component \(\{0,m\}\) is chosen. To access the idler frequency, \(\{n,m\}=\{-1,1\}\) is chosen, because the idler frequency is obtained from \(f_{\mathrm{i}}=-f_{\mathrm{s}}+f_{\mathrm{p}}\). The power gain in dB is then found by subtracting the input power from the output power, both at \(\{1,0\}\) and expressed in dBm. The signal frequency, in this case, is chosen as a sweep parameter to plot the gain versus \(f_{\mathrm{s}}\).
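Outside ADS, the same bookkeeping reduces to reading single FFT bins; a hypothetical helper (assuming the record spans an integer number of beat periods and that \(nf_{\mathrm{s}}+mf_{\mathrm{p}}>0\)) illustrates the \(\{n,m\}\) indexing:

```python
import numpy as np

def mix_power_dbm(v_t, dt, n, m, fs, fp, R=50.0):
    """Power of the mixing product at n*fs + m*fp, e.g. the signal
    {1,0}, the idler {-1,1}, or a pump harmonic {0,m}."""
    f = n * fs + m * fp
    k = int(round(f * len(v_t) * dt))               # FFT bin of the tone
    amp = 2 * abs(np.fft.rfft(v_t)[k]) / len(v_t)   # single-sided amplitude
    return 10 * np.log10(amp ** 2 / (2 * R) / 1e-3)

# Gain in dB: mix_power_dbm(out, dt, 1, 0, fs, fp) minus the input
# power of the signal tone, both expressed in dBm.
```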
In Figure 7, the simulated and measured gain spectra of a JJ-TWPA with 1000 unit cells are shown. This amplifier has JJs with critical current \(I_{\mathrm{c}}=1.318\,\upmu\)A and operates with 4-wave mixing. The capacitance in each unit cell is \(C=93\,\mathrm{fF}\), to obtain \(Z_{\mathrm{in}}=50\,\Omega\). The DC bias current is zero, and the pump is injected at \(f_{\mathrm{p}}=6.0102\,\mathrm{GHz}\) with an amplitude of \(I_{\mathrm{p}}=I_{\mathrm{c}}/2\). The pump frequency in the simulation is chosen slightly different from \(6\,\mathrm{GHz}\) to avoid the generation of mixing products that land at the same frequency. As can be seen in Figure 7, the simulated and measured values are close, especially within the band of interest (\(4\)-\(8\,\mathrm{GHz}\)), where the gain reaches a maximum of \(10\,\mathrm{dB}\). The measurement setup of the TWPA is shown in Appendix C and discussed in detail in [18].
Fig. 5: The input impedance of a single unit cell (JJ and capacitor) of the JJ-TWPA. HB method and \(S\)-parameter analyses give the same result at low power (\(P_{\mathrm{in}}=-140\,\mathrm{dBm}\)) (blue and yellow lines). The deviation appears as the input power is increased (\(P_{\mathrm{in}}=-80\,\mathrm{dBm}\)) (green line) while the \(S\)-parameter result is the same despite the power increase. For both cases \(I_{\mathrm{dc}}=0.5I_{\mathrm{c}}\).
Fig. 6: The voltage waveforms of the JJ-TWPA at the input, the 1000’th, and the 2000’th (output) node in response to a single tone at \(f_{\mathrm{p}}=8\,\mathrm{GHz}\) and \(I_{\mathrm{p}}=200\,\mathrm{nA}\). The input-to-output delay is \(t_{d}=11.7\,\mathrm{ns}\). Steps at \(18\,\mathrm{ns}\) and \(23\,\mathrm{ns}\) are due to back and forth reflection of the wave.
## IV SNAIL-based TWPA and 3-Wave Mixing
It has been shown that TWPAs based on SNAILs promise higher gain with a smaller number of unit cells [17, 18, 32]. This is because a SNAIL at non-zero flux bias enhances 3-wave mixing (3WM). In the 3WM process, one pump photon is down-converted into a signal and an idler photon, _i.e._, \(f_{\text{p}}=f_{\text{s}}+f_{\text{i}}\). This means that the pump frequency is always above, and outside, the gain spectrum, as a result of which the unwanted excitation of qubits by the strong pump is mitigated. Designing the SNAIL-TWPA starts with finding the optimum flux bias. First, the total potential energy of the SNAIL in equation (8) is expanded as a power series in \(\phi\) around the minimum of the potential. The terms proportional to \(\phi^{3}\) and \(\phi^{4}\) are responsible for the 3WM and 4WM processes, respectively. By adjusting the flux bias, it is possible to enhance the 3WM term and reduce the 4WM term in the potential energy [17, 18, 32, 37]. It is shown that the optimum flux bias is \(0.4\,\Phi_{0}\).
Here the simulation of a SNAIL-TWPA is reported, which has 440 unit cells. Each unit cell contains one SNAIL-13 and a capacitor to ground. The schematic of the amplifier and each unit cell are shown in Figure 8(a). The Josephson junctions are designed so that \(I_{\text{c1}}=3\,\upmu\)A, \(C_{\text{J1}}=8.2\,\mathrm{fF}\), and \(\alpha=1/3.75\). The capacitor in each unit cell is \(C=150\,\mathrm{fF}\) to make the input impedance \(50\,\Omega\).
Before running the HB analysis, an \(S\)-parameter analysis is performed to check the cut-off frequency of the amplifier and its dependence on the applied DC flux. Figure 8(b) shows that when the external flux bias is changed from \(0.4\Phi_{0}\) to \(0.45\Phi_{0}\), the bandwidth of the circuit is reduced due to increased inductance of the SNAILs. Note that for each flux value, the capacitance is readjusted to keep the input impedance of the TWPA the same.
### _Gain: Simulation vs. Measurement_
A time-domain (transient) analysis is performed, similar to the one presented for the JJ-TWPA. From this step, the important parameters for TAHB are chosen, which leads to the convergence of the HB analysis. Thereafter, the gain and the harmonic content of the output in the presence of a pump are investigated using the HB method. The signal amplitude is less than one-thousandth of the pump, _i.e._, \(I_{\text{s}}<I_{\text{p}}/1000\). The amplitude of the input pump current is \(I_{\text{p}}=100\,\mathrm{nA}\) and its frequency is \(f_{\text{p}}=8.5\,\mathrm{GHz}\). The magnetic flux bias is \(0.4\,\Phi_{0}\). Figure 9 shows the gain spectrum of the SNAIL-TWPA, which agrees well with the one measured at approximately the same input pump power and the same pump frequency, \(f_{\text{p}}=8.5\,\mathrm{GHz}\). The oscillation in the gain spectrum is due to the back-and-forth traveling of the wave in the TWPA. In the yellow trace (HB analysis) of Figure 9, the frequency difference between two consecutive ripples is about \(\Delta f=160\,\mathrm{MHz}\), which corresponds to a time delay of \(t_{d}=6.25\,\mathrm{ns}\). The transient simulation shows that this is indeed one period during which the wave traverses the length of the TWPA back and forth. At frequencies below the pump frequency, there is a flat gain range.
As expected from the 3WM process, the pump sits at the high end of the gain spectrum _i.e._, \(f_{\text{p}}=8.5\,\mathrm{GHz}\). The exchange of
Fig. 8: (a) The schematic of a SNAIL-TWPA with 440 unit cells. Each unit cell has a SNAIL-13 and a parallel capacitor, \(C\). (b) The magnitude of the transmission, \(S_{21}\), for flux biases from 0.4 to 0.45 of a flux quantum.
Fig. 7: The gain spectrum of a JJ-TWPA with 1000 unit cells obtained by the HB method and measurement. The experimental data is for two different pump powers measured at room temperature (\(P_{RT}\)) at the signal generator output. Note that the gain is due to the 4WM process as \(I_{\text{dc}}=0\), and it is symmetric around the pump frequency.
power between the harmonics of the pump is a mechanism that causes a reduction of gain in the 3WM process. In the next section, we show where this exchange mechanism emanates from.
## V Power exchange between pump harmonics
Due to the strong nonlinearity and high power of the pump tone, higher harmonics of the pump are generated inside the bandwidth of the TWPA. The inter-mixing and power exchange between these harmonics cause a reduction of gain [32, 38, 18]. This section briefly reviews the coupled mode equations for power conversion between pump harmonics, and compares their predictions with HB simulations. We show agreement between HB analysis and coupled-mode theory in capturing this effect. Proposals to stop the power exchange between harmonics of the pump and enhance the gain in the 3WM regime are discussed in [32, 18, 39].
The coupled-mode equations describe the evolution of the amplitude of each pump tone as it propagates along the amplifier. The derivation starts by writing the discrete form of the wave equation for the phase \(\phi\), which resembles the wave equation of a transmission line except for extra nonlinear terms due to the dependence of the group velocity on the inductance, via equations (3) and (9) for the JJ and SNAIL, respectively. Using a trial solution that is a linear combination of pump tones, _i.e._, \(\phi(x,t)=\sum_{m=1}^{M}A_{m}e^{i(k_{m}x-\omega_{m}t)}\), the following re-scaled coupled-mode equation is obtained. This equation couples each harmonic \(m\) to the rest of the harmonics above and below its frequency within the range of \(M\) harmonics [32].
\[\begin{split}\frac{da_{m}}{d\xi}=m&\Bigg{(}\sum_{n =m+1}^{M}\!\!a_{n}a_{n-m}^{*}\mathrm{e}^{i\mu\xi d_{n,m}}\\ &\quad-\frac{1}{2}\sum_{n=1}^{m-1}a_{n}a_{m-n}\mathrm{e}^{-i\mu \xi d_{n,m}}\Bigg{)}\end{split} \tag{11}\]
where \(\xi\) is the effective (normalized) length and is given by,
\[\xi(x)=\frac{c_{3}\omega_{1}^{2}A_{1}(0)x}{4a\omega_{0}^{2}} \tag{12}\]
The normalized amplitude of the wave, \(a_{m}\) is,
\[a_{m}=m\frac{A_{m}(x)}{A_{1}(0)} \tag{13}\]
where \(A_{1}(0)\) is the amplitude of the first pump harmonic at the input port. The effective phase mismatch is,
\[\mu=\frac{k_{2}-2k_{1}}{c_{3}\omega_{1}^{2}A_{1}(0)/(4a\omega_{0}^{2})} \tag{14}\]
and \(d_{m,n}=\frac{1}{2}mn(m-n)\) is a numerical factor, \(c_{3}\) is the 3WM coefficient in the power series expansion of potential energy versus \(\phi\), \(\omega_{1}\) is the frequency of the first harmonic, \(a\) is the physical length of a unit cell, \(x\) is the physical length variable, \(k_{1},k_{2}\) are the wave numbers for the first and second harmonic, and \(\omega_{0}\) is the resonance frequency of each unit cell _i.e._, \(1/\sqrt{LC}\).
The simplest case is \(M=2\) with perfect phase matching (\(\mu=0\)), for which equation (11) has an exact analytic solution. The solution shows how the power of the main pump tone is converted to the second harmonic as the wave propagates along the amplifier. This process, second-harmonic generation (SHG), has also been observed in nonlinear optical materials. The cases \(M=2\) and \(M=3\) (third-harmonic generation) were studied in [40, 41], corroborating what is calculated here for a TWPA.
Figure 10 shows that the power exchange between \(a_{1}\) and \(a_{2}\) takes place over a characteristic length. This length is called the _frequency conversion distance_ (FCD), at which the
Fig. 10: The power exchange between the first and the second harmonic of the pump. The power of the second harmonic \(a_{2}\) reaches that of the first harmonic \(a_{1}\) after the frequency conversion distance \(\xi=1\). Note that the phase matching is assumed,_i.e._, \(\Delta k=0\) or \(\mu=0\).
Fig. 9: The gain spectrum for the SNAIL-TWPA with 440 unit cells. Blue and yellow traces are measurement and HB simulation, respectively. In simulation the signal current is 1/1000th of the pump. In experiment the input pump power is \(-97\,\mathrm{d}\mathrm{B}\mathrm{m}\). The pump frequency is \(8.5\,\mathrm{G}\mathrm{H}\mathrm{z}\).
power of the generated second harmonic becomes as strong as the first harmonic. Note that throughout the exchange process, power conservation is satisfied, _e.g._, \(\left|a_{1}\right|^{2}+\left|a_{2}\right|^{2}=1\). When the pump strength is large relative to the phase mismatch, _i.e._, when \(\mu\) is small, more harmonics are generated, and one needs to include them in the coupled-mode equation (11) by increasing \(M\). Furthermore, the assumption of phase matching is not always valid, which leads to oscillations of power along the length of the amplifier. For the simulations in this section, we use a SNAIL with \(I_{\mathrm{c1}}=2.53\,\upmu\)A, \(I_{\mathrm{c2}}=1.26\,\upmu\)A, and \(C_{\mathrm{SNAIL}}=25\,\mathrm{fF}\), and a ground capacitance of \(C_{0}=159\,\mathrm{fF}\) to make the input impedance approximately \(50\,\Omega\). We use a flux bias of \(\Phi_{\mathrm{ext}}=0.45\Phi_{0}\), which eliminates 4WM while giving a 3WM coefficient of \(c_{3}=1.11\).
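The coupled-mode reference curves (the dashed lines in Figures 10 and 11) can be generated with a straightforward numerical integration of equation (11); a minimal sketch of our discretization is given below. With \(\mu=0\) and \(M=2\) it reproduces the depletion behavior of Figure 10, and the total power \(\sum_{m}|a_{m}|^{2}\) stays constant:

```python
import numpy as np
from scipy.integrate import solve_ivp

def cme(xi, a, M, mu):
    """Right-hand side of the coupled-mode equation (11)."""
    # numerical factor d; the sign convention is immaterial here (mu = 0)
    d = lambda n, m: 0.5 * n * m * (n - m)
    da = np.zeros(M, complex)
    for m in range(1, M + 1):
        up = sum(a[n - 1] * np.conj(a[n - m - 1]) * np.exp(1j * mu * xi * d(n, m))
                 for n in range(m + 1, M + 1))
        dn = sum(a[n - 1] * a[m - n - 1] * np.exp(-1j * mu * xi * d(n, m))
                 for n in range(1, m))
        da[m - 1] = m * (up - 0.5 * dn)
    return da

# Pump only at the input, a = [1, 0], with perfect phase matching.
sol = solve_ivp(cme, (0.0, 3.0), np.array([1, 0], complex),
                args=(2, 0.0), max_step=0.01)
p = np.abs(sol.y) ** 2
print(p[:, -1], p.sum(axis=0).max() - p.sum(axis=0).min())  # ~0: conserved
```

Increasing `M` adds the higher pump harmonics of Figure 11 with no change to the integrator.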
In Figure 11, the exchange of power between 5 harmonics of the pump tone is shown as a function of the amplifier length. As can be observed, there is good agreement between HB (solid lines) and the numerical solution of the coupled-mode equations (dashed lines); HB, however, resolves more detailed features and oscillations. The pump current and the frequency of its first tone are \(I_{\mathrm{p}}=400\,\mathrm{nA}\) and \(f_{\mathrm{p}}=10\,\mathrm{GHz}\), respectively. As can be seen in Figure 11, the power in the second harmonic oscillates approximately every 40 unit cells, a length of \(\approx 40\times 15\,\upmu\)m. Increasing the pump power and/or the frequency of the main pump tone leads to faster oscillation of the power exchange between the pump harmonics, as predicted by equation (12).
This study suggests that by adding a stop band in the TWPA (_i.e._, a band gap in its dispersion), it is possible to suppress the second harmonic. As a result, the coupling between the higher harmonics and the main tone is cut. It was shown that a dispersion-engineered band gap at the second harmonic of the pump indeed leads to gain improvement [18], as it quenches the power-oscillation mechanism of Figure 11.
The same process of power conversion as in Figure 11 was observed for a JJ-TWPA with 500 unit cells using a combination of time-domain and Fourier analysis [38], with the JJ modeled in WRspice [42]. We observe similar behavior (cf. Figures 11 and 12) for the JJ-TWPA with 2000 unit cells as well, which shows the computational power the HB method offers [43]. In Appendix B, we show with a simple model how the number of output pump harmonics in a JJ-TWPA depends on the DC bias current.
Plotting the power of the output pump harmonics versus the power of the main input tone contains useful information. In the HB analysis, if there is only one input tone, the power of the output harmonic at each node of the amplifier is accessible using the 'mix' command as 'dBm(mix(node name,\(N\)))', where \(N\) is the harmonic number. The power of each harmonic at each node can also be plotted versus a given sweep parameter; in this case the useful command is 'plot_vs', where the harmonic number \(N\) is used as 'plot_vs(dB(node name[:..,\(N\)]), Sweep parameter)'. With this, the power of the first three pump harmonics at the output of the SNAIL-TWPA is plotted versus the input power. As can be seen in Figure 12, in the low-power regime (below \(-110\,\mathrm{dBm}\)), the powers of the three harmonics grow linearly, and the ratio of their slopes is \(1:2:3\). This is expected from a nonlinear transfer function of the form \(y(x)=y_{o}+ax+bx^{2}+cx^{3}+...\), where \(y\) is the output and \(x=A\sin(\omega t+\beta)\) is the input. This transfer function is implemented by the nonlinear inductance of the JJ or SNAIL.
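The 1:2:3 slopes follow from the polynomial transfer function alone, independent of the device details; a quick numerical confirmation with generic coefficients:

```python
import numpy as np

a, b, c = 1.0, 0.1, 0.05                 # y = a*x + b*x^2 + c*x^3
t = np.linspace(0, 1, 4096, endpoint=False)
for A in (1e-3, 1e-2, 1e-1):             # input amplitude stepped by x10
    x = A * np.sin(2 * np.pi * 8 * t)    # 8 cycles in the record
    Y = 2 * np.abs(np.fft.rfft(a*x + b*x**2 + c*x**3)) / t.size
    print([round(20 * np.log10(Y[8 * k]), 1) for k in (1, 2, 3)])
# Each x10 step in A raises the three harmonics by 20, 40 and 60 dB,
# i.e. slopes in the ratio 1:2:3, as in Figure 12.
```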
Note that at high pump power, _i.e._, \(P_{in}>-100\,\mathrm{dBm}\), compression of the main input tone starts as the power exchange between all harmonics takes place.
## VI Conclusion
The harmonic balance method provides a wealth of information about the traveling-wave parametric amplifiers based on JJs and SNAILs. Firstly, we showed how JJs and SNAILs
Fig. 11: The power exchange between five harmonics of the input pump tone in a 100-unit cell SNAIL-TWPA. The main tone frequency, current, and flux bias are \(f_{\mathrm{p}}=10\,\mathrm{GHz}\), \(I_{\mathrm{p}}=400\,\mathrm{nA}\), and \(F=0.45\), respectively. Solid lines are HB simulations, and dashed lines are the solutions of the coupled-mode equations.
Fig. 12: Power of three pump harmonics at the output versus the power of the main pump tone at the input of a 440-unit cell SNAIL-TWPA. The slopes have a ratio of 1:2:3 at low power. The main tone frequency is \(f_{\mathrm{p}}=8.5\,\mathrm{GHz}\), and the flux bias is \(F=0.4\).
can be modeled mathematically using symbolically defined devices. The nonlinearity of these devices is included in its entirety in the model, without any approximations or extra analytical/numerical analysis. Secondly, the HB analysis provided quantities such as the gain spectrum, the harmonic content of the amplifier output (or of any intermediate node), and the mechanism of power conversion between pump harmonics. The close match of the simulation results with the experimental measurements and with coupled-mode theory demonstrates the reliability of equation-based modeling and HB analysis in addressing the nonlinear physics of these amplifiers.
## Acknowledgments
This research was funded by the Knut and Alice Wallenberg (KAW) Foundation through the Wallenberg Center for Quantum Technology (WACQT). The authors acknowledge the use of the Nanofabrication Laboratory (NFL) at Chalmers University of Technology.
## Appendix A Harmonic Balance Method
In the HB method, a circuit is split into a linear and a nonlinear part, as in Figure 13, where the voltages of the \(N\) ports common to the linear and nonlinear sections are denoted \(V_{1},V_{2},...,V_{N}\). The circuit also has a few excitation ports, attached to the linear sub-circuit, labeled \(N+1\), \(N+2\), etc. Without loss of generality, we assume the split circuit has two common ports between the linear and nonlinear sub-circuits, _i.e._, \(N=2\), and one excitation port from which a single tone at frequency \(\omega_{\text{p}}\) drives the circuit. The currents of the linear and nonlinear parts are denoted by the subscripts \(L\) and \(NL\), respectively, and the common port voltages by \(V_{1}\) and \(V_{2}\). Equation (15) relates the currents of the linear sub-circuit (\(L\)) and the excitation port to the port voltages. Assuming the voltages are known at the beginning (this is what we call the _initial guess_ later), the currents are found from the following matrix multiplication:
\[\begin{bmatrix}\mathbf{I}_{L1}\\ \mathbf{I}_{L2}\\ \mathbf{I}_{L3}\end{bmatrix}=\begin{bmatrix}\mathbf{Y}_{11}&\mathbf{Y}_{12}&\mathbf{Y}_{13}\\ \mathbf{Y}_{21}&\mathbf{Y}_{22}&\mathbf{Y}_{23}\\ \mathbf{Y}_{31}&\mathbf{Y}_{32}&\mathbf{Y}_{33}\end{bmatrix}\cdot\begin{bmatrix}\mathbf{V}_{1} \\ \mathbf{V}_{2}\\ \mathbf{V}_{3}\end{bmatrix} \tag{15}\]
where \(\mathbf{I}\) and \(\mathbf{V}\) are vectors which contain the Fourier components of port currents and voltages, respectively. If \(k\) harmonics of the pump (\(\omega_{\text{p}}\)) are included in the Fourier spectrum of current and voltage, then each vector is \(k+1\) long (by including the DC component or \(\omega=0\)). Each vector is then written as follows:
\[\mathbf{I}_{Ln}=\begin{bmatrix}I_{Ln}(0)\\ I_{Ln}(\omega_{\text{p}})\\ \vdots\\ I_{Ln}(k\omega_{\text{p}})\end{bmatrix},\mathbf{V}_{n}=\begin{bmatrix}V_{n}(0)\\ V_{n}(\omega_{\text{p}})\\ \vdots\\ V_{n}(k\omega_{\text{p}})\end{bmatrix},n=1,2,... \tag{16}\]
The components of the admittance matrix in equation (15) form \((k+1)\times(k+1)\) diagonal matrices whose diagonal elements are the values of \(\mathbf{Y}\) (admittance) at the pump harmonics. For example, \(\mathbf{Y}_{31}\) is,
\[\mathbf{Y}_{31}=\begin{bmatrix}Y_{31}(0)&0&\ldots&0\\ 0&Y_{31}(\omega_{\text{p}})&\ldots&0\\ \vdots&\vdots&\ddots&0\\ 0&0&0&Y_{31}(k\omega_{\text{p}})\end{bmatrix} \tag{17}\]
The current solved from equation (15) must be opposite to the terminal currents of nonlinear ports to satisfy Kirchhoff's current law (KCL). This means,
\[I_{L}+I_{NL}=0. \tag{18}\]
The linear terminal currents, \(\mathbf{I}_{L1}\) and \(\mathbf{I}_{L2}\), are found from equation (19) using the sub-matrices of \(\mathbf{Y}\). The contribution of the source (drive) voltage, \(\mathbf{V}_{3}\), to the linear currents is denoted \(\mathbf{I}_{\text{s}}\). In the notation of Figure 13, \(\mathbf{I}_{\text{s}}\) is the same as \(\mathbf{I}_{N+1}\), with \(N=2\). The part of the admittance matrix which relates the port voltages to the currents is denoted \(\mathbf{\tilde{Y}}\).
\[\mathbf{I}_{L}=\begin{bmatrix}\mathbf{I}_{L1}\\ \mathbf{I}_{L2}\end{bmatrix}=\begin{bmatrix}\mathbf{Y}_{11}&\mathbf{Y}_{12}\\ \mathbf{Y}_{21}&\mathbf{Y}_{22}\end{bmatrix}\begin{bmatrix}\mathbf{V}_{1}\\ \mathbf{V}_{2}\end{bmatrix}+\mathbf{V}_{3}\begin{bmatrix}\mathbf{Y}_{13}\\ \mathbf{Y}_{23}\end{bmatrix}=\mathbf{\tilde{Y}}\mathbf{V}+\mathbf{I}_{\text{s}}. \tag{19}\]
The currents of the nonlinear terminals are found from:
1. Inverse Fourier transform of the port voltages \(\mathbf{V}_{1}\) and \(\mathbf{V}_{2}\) to find \(v_{1}(t)\) and \(v_{2}(t)\),
2. Finding the time-domain currents of nonlinear sub-circuit terminals from the functions of nonlinear elements (in this example two different or equal functions \(g_{1}\) and \(g_{2}\)), and
3. Finding the frequency-domain nonlinear currents using Fourier transform, \(\mathcal{F}\).
The summary of the above steps is written as,
\[\mathbf{I}_{NL}=\mathcal{F}\begin{Bmatrix}i_{NL1}(t)=g_{1}(\mathcal{F}^{-1}\{\bm {V}_{1},\mathbf{V}_{2}\})\\ i_{NL2}(t)=g_{2}(\mathcal{F}^{-1}\{\mathbf{V}_{1},\mathbf{V}_{2}\})\end{Bmatrix}. \tag{20}\]
The following algebraic equation is now formed by substituting equations (19) and (20) into equation (18), and it is solved to find the new port-voltage vector, \(\mathbf{V}\),
\[\mathbf{\tilde{Y}}\mathbf{V}+\mathbf{I}_{\text{s}}+\mathbf{I}_{NL}=0. \tag{21}\]
Fig. 13: Partitioning a circuit into linear and nonlinear sub-circuits. The HB algorithm finds the unknown vector of common port voltages.
The left side of equation (21) is a function of the voltage vector, called the current error \(\mathbf{F(V)}\). The solutions of this equation are the points where the multidimensional surface \(\mathbf{F(V)}\) crosses the coordinate axes. The number of crossing points is the number of common ports between the two sub-circuits (in this example \(N=2\)). While there are different algorithms to minimize the norm of the current error, the most common and powerful method for solving \(\mathbf{F(V)}=0\) is the Newton-Raphson method [22, 23, 44]. This method iteratively solves equation (21), starting from an initial guess \(\mathbf{V}_{\text{old}}\) for the zero of \(\mathbf{F(V)}\),
\[\mathbf{V}_{\text{new}}=\mathbf{V}_{\text{old}}-\left(\frac{\partial\mathbf{F}}{\partial\mathbf{V}}\right)^{-1}_{\mathbf{V}_{\text{old}}}\cdot\mathbf{F}(\mathbf{V}_{\text{old}}). \tag{22}\]
The new estimate of the solution, \(\mathbf{V}_{\text{new}}\), is found by calculating the inverse of the Jacobian matrix in every iteration, which is,
\[\mathbf{J}=\partial\mathbf{F}/\partial\mathbf{V}. \tag{23}\]
The size of this matrix is \(2N(k+1)\times 2N(k+1)\) where \(N\) and \(k\) are the number of ports and harmonics, respectively. The factor of two is for imaginary and real parts of each port voltage at each harmonic. Details about convergence criteria, matrix solving algorithms, and methods of avoiding the local minima are discussed in [26].
Usually there are two different methods to calculate the inverse of Jacobian matrix in equation (22). The first method is a _direct_ method based on lower-upper (\(LU\)) factorization which is suitable for small circuits with a few nonlinear elements. In \(LU\) factorization, a square matrix, \(A\), is written as a product of a lower- and an upper triangular matrix _i.e._, \(A=LU\). The second method is the _Krylov_ sub-space method based on the generalized minimum residual (GMRES) method. The Krylov method is useful for circuits with a large number of harmonics or many nonlinear elements. Interested readers can find the details of the above algorithms in [45].
Choosing the maximum number of harmonics (\(k\)) is critical to obtaining good convergence and correctly predicting the circuit operation. A large value of \(k\) makes the solution of equation (22) very slow; on the other hand, a very small value of \(k\) does not capture the real operation of the circuit. The initial guess for equation (22) is also important and should be chosen based on the expected behavior of the circuit. For example, if the output of the nonlinear circuit has a diode which clips the signal, it is better to start with a clipped waveform as the initial condition instead of a full sinusoid.
Keysight's ADS uses the DC operating points of the circuit as the default initial guess. There is also an option called transient-assisted harmonic balance (TAHB). With this option, the steady-state waveform resulting from a transient analysis is used as the initial guess for \(V\) in equation (22). In this article, TAHB is used. Note that the time-step of the transient simulation should follow the Nyquist criterion, which is in turn imposed by the maximum number of harmonics chosen for the pump (or large-signal tone) in HB, _i.e._, \(k\). Another option for the initial guess is running an HB analysis with fewer harmonics and using its solution for a second round of HB simulation with a higher number of harmonics. This is called HB-assisted HB, or HBAHB. The solution of each HB analysis can thus be saved and reused as an initial guess for another round of HB simulations of the same or a similar circuit.
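To summarize the whole loop — the frequency-domain linear part, the time-domain nonlinear part, the Fourier bridge of equation (20), and the Newton iteration of equation (22) — the toy solver below treats a single node in which a cosine current source drives a parallel \(G\)-\(C\) linear part and a cubic conductance \(i=g_{3}v^{3}\). All element values are arbitrary placeholders; this is a sketch of the algorithm, not a TWPA model:

```python
import numpy as np

K = 7                                            # harmonics k = 0..K
w0, G, Cap, g3, I0 = 2*np.pi*1e9, 0.02, 1e-12, 5e-3, 1e-3
th = np.linspace(0, 2*np.pi, 64, endpoint=False)      # one pump period
Y = G + 1j * w0 * Cap * np.arange(K + 1)              # linear admittances
Is = np.zeros(K + 1, complex); Is[1] = I0 / 2         # one-sided drive

def split(V): return np.concatenate([V.real, V.imag])
def merge(x): return x[:K + 1] + 1j * x[K + 1:]

def F(x):
    """Current error of equation (21): Y*V + I_NL(V) - I_s per harmonic."""
    V = merge(x)
    v_t = V[0].real + sum(2 * np.real(V[k] * np.exp(1j * k * th))
                          for k in range(1, K + 1))   # inverse transform
    i_nl = g3 * v_t ** 3                              # nonlinear sub-circuit
    I_nl = np.array([(i_nl * np.exp(-1j * k * th)).mean()
                     for k in range(K + 1)])          # forward transform
    return split(Y * V + I_nl - Is)

x = np.zeros(2 * (K + 1))                  # initial guess V = 0
for _ in range(20):                        # Newton-Raphson, equation (22)
    f0 = F(x)
    J = np.empty((x.size, x.size))         # numerical Jacobian, equation (23)
    for j in range(x.size):
        dx = np.zeros_like(x); dx[j] = 1e-6
        J[:, j] = (F(x + dx) - f0) / 1e-6
    x = x - np.linalg.solve(J, f0)
print(np.round(np.abs(merge(x)), 6))   # odd harmonics dominate, as expected
```

The same skeleton extends to the multi-port case of Figure 13 by stacking the port vectors of equation (16); production solvers differ mainly in how they factor the Jacobian (direct \(LU\) versus Krylov/GMRES, as discussed above).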
## Appendix B Output Harmonics of a JJ-TWPA
The output spectrum of a JJ-TWPA with 2000 unit cells is shown in Figure 14 for zero and non-zero DC bias currents and a single input tone at \(f=7\,\mathrm{GHz}\) with \(P_{\text{in}}=-100\,\mathrm{dBm}\). When there is no DC current, only odd harmonics of the input tone are generated at the output. By applying a non-zero DC bias current (\(I_{\text{dc}}=0.5I_{\text{c}}\)), even harmonics (_e.g._, at 14, 28, and 42 GHz) are also generated at the output. The reason is understood by considering a single JJ (nonlinear inductor) biased by a current source, \(I=I_{\text{dc}}+\tilde{I}\sin(\omega t)\). The voltage appearing across the JJ follows from \(v(t)=L_{J}\cdot\mathrm{d}I/\mathrm{d}t\),
\[v(t)=\frac{L_{J0}}{\sqrt{1-\left(\frac{I_{\text{dc}}+\tilde{I}\sin(\omega t)}{I_{\text{c}}}\right)^{2}}}\times(\omega\tilde{I}\cos\omega t) \tag{24}\]
Using the abbreviations \(\alpha=\frac{I_{\text{dc}}}{I_{\text{c}}}\) and \(\beta=\frac{\tilde{I}}{I_{\text{c}}}\), the Taylor expansion of equation (24) gives
\[v(t)=L_{J0}\omega\beta I_{\text{c}}\cos(\omega t)(\frac{1}{ \sqrt{1-\alpha^{2}}}+\frac{\alpha\beta\sin(\omega t)}{(1-\alpha^{2})^{1.5}}+\\ \frac{(2\alpha^{2}+1)\beta^{2}\sin^{2}(\omega t)}{2(1-\alpha^{2} )^{2.5}}+\\ \frac{\alpha(2\alpha^{2}+3)\beta^{3}\sin^{3}(\omega t)}{2(1-\alpha ^{2})^{3.5}}+...). \tag{25}\]
It can be shown that when \(\alpha=0\), _i.e._, \(I_{\text{dc}}=0\), only the frequency components at \(\omega\), \(3\omega\), \(5\omega\), and in general \((2k+1)\omega\), exist in the Fourier spectrum of the JJ voltage. On the other hand, if
Fig. 14: The output spectrum of a JJ-TWPA with 2000 unit cells in response to a single tone at 7 GHz with \(-100\,\mathrm{dBm}\) power when (red) \(I_{\text{dc}}=0\) and (blue) \(I_{\text{dc}}=0.5I_{\text{c}}\). In the latter case, even harmonics are also created.
\(I_{\text{dc}}\neq 0\), then the even harmonics (\(2\omega\), \(4\omega\), and in general \(2k\omega\)) are also generated due to mixing. The symmetry (or asymmetry) of the steady-state output voltage around the bias value (the output DC voltage) likewise indicates whether the output spectrum contains only odd harmonics, or both odd and even harmonics.
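The parity argument is easy to verify numerically; the sketch below evaluates the waveform shape of equation (24) over one period (the constant prefactor is dropped and \(\beta=0.2\) is chosen arbitrarily):

```python
import numpy as np

N, beta = 256, 0.2
th = np.linspace(0, 2 * np.pi, N, endpoint=False)
for alpha in (0.0, 0.5):                  # alpha = Idc/Ic
    v = np.cos(th) / np.sqrt(1 - (alpha + beta * np.sin(th)) ** 2)
    V = 2 * np.abs(np.fft.rfft(v)) / N    # single-sided harmonic amplitudes
    print(f"alpha={alpha}:", [f"{V[k]:.2e}" for k in range(1, 6)])
# alpha=0:   harmonics 2 and 4 vanish (half-wave antisymmetry, odd only);
# alpha=0.5: even harmonics appear, as in Figure 14.
```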
## Appendix C Experimental Setup
The TWPA chips are fabricated on high-resistivity (100) silicon wafers, and the circuits are patterned in the evaporated aluminum using nano-lithography and etching. The silicon chip is \(7\,\mathrm{mm}\times 5\,\mathrm{mm}\), and its aluminum ground plane is wire-bonded to the copper package. The TWPA chain resembles a CPW line (Figure 15(a)). The packages are designed with special care to mitigate package and chip modes. The TWPA is characterized at a cryogenic temperature of \(10\,\mathrm{mK}\) in a dilution refrigerator (Figure 15(b)), where the two-level energy splitting of the qubits (typically \(E=h\times 4\,\mathrm{GHz}\)) is protected from thermal noise and unwanted excitations by the attenuators and filters on lines 1 and 2. For the amplification of a qubit read-out signal, the signal is fed through line 1 in Figure 15(b), and the pump enters through line 2; they are combined by a coupler before entering the TWPA. The amplified signal is sent back to the room-temperature stage via line 3. For characterizing the TWPA itself (_e.g._, measuring the gain spectrum), the qubit is bypassed, and the signal and pump are combined at room temperature and then sent down via line 2. The reference \(50\,\Omega\) line is used to extract the background noise of the whole chain and subtract it from the measurement. The DC copper coil hanging above the chip applies the external flux and tunes the bandwidth and the 3WM strength of the amplifier.
|
2306.02618 | Enhance Diffusion to Improve Robust Generalization | Deep neural networks are susceptible to human imperceptible adversarial
perturbations. One of the strongest defense mechanisms is \emph{Adversarial
Training} (AT). In this paper, we aim to address two predominant problems in
AT. First, there is still little consensus on how to set hyperparameters with a
performance guarantee for AT research, and customized settings impede a fair
comparison between different model designs in AT research. Second, the robustly
trained neural networks struggle to generalize well and suffer from tremendous
overfitting. This paper focuses on the primary AT framework - Projected
Gradient Descent Adversarial Training (PGD-AT). We approximate the dynamic of
PGD-AT by a continuous-time Stochastic Differential Equation (SDE), and show
that the diffusion term of this SDE determines the robust generalization. An
immediate implication of this theoretical finding is that robust generalization
is positively correlated with the ratio between learning rate and batch size.
We further propose a novel approach, \emph{Diffusion Enhanced Adversarial
Training} (DEAT), to manipulate the diffusion term to improve robust
generalization with virtually no extra computational burden. We theoretically
show that DEAT obtains a tighter generalization bound than PGD-AT. Our
empirical investigation is extensive and firmly attests that DEAT universally
outperforms PGD-AT by a significant margin. | Jianhui Sun, Sanchit Sinha, Aidong Zhang | 2023-06-05T06:36:18Z | http://arxiv.org/abs/2306.02618v2 | # Enhance Diffusion to Improve Robust Generalization
###### Abstract.
Deep neural networks are susceptible to human imperceptible adversarial perturbations. One of the strongest defense mechanisms is _Adversarial Training_ (AT). In this paper, we aim to address two predominant problems in AT. First, there is still little consensus on how to set hyperparameters with a performance guarantee for AT research, and customized settings impede a fair comparison between different model designs in AT research. Second, the robustly trained neural networks struggle to generalize well and suffer from tremendous overfitting. This paper focuses on the primary AT framework - Projected Gradient Descent Adversarial Training (PGD-AT). We approximate the dynamic of PGD-AT by a continuous-time Stochastic Differential Equation (SDE), and show that the diffusion term of this SDE determines the robust generalization. An immediate implication of this theoretical finding is that robust generalization is positively correlated with the ratio between learning rate and batch size. We further propose a novel approach, _Diffusion Enhanced Adversarial Training_ (DEAT), to manipulate the diffusion term to improve robust generalization with virtually no extra computational burden. We theoretically show that DEAT obtains a tighter generalization bound than PGD-AT. Our empirical investigation is extensive and firmly attests that DEAT universally outperforms PGD-AT by a significant margin.
Adversarial Training (AT), Projected Gradient Descent Adversarial Training (PGD-AT), Robust Generalization, Stochastic Differential Equation (SDE), Diffusion Enhanced Adversarial Training (DEAT)
## 1. Introduction

In this paper, we aim to address two predominant problems in AT:

1. **There is little consensus on how to set hyperparameters in AT.** Though the configuration of hyperparameters is known to play an essential role in the performance of AT, there is little consensus on how to set hyperparameters with a performance guarantee. For example, in Figure 1 we plot a list of recent AT papers in the (learning rate, batch size, weight decay) space according to each paper's specification, and we observe that the hyperparameter choices differ considerably from paper to paper, with little consensus. Moreover, the completely customized settings make it extremely difficult to understand which approach really works, as the misspecification of hyperparameters could cancel out the improvements from the methods themselves. Most importantly, the lack of theoretical understanding also exhausts practitioners with time-consuming tuning efforts.
2. **The robust generalization gap in AT is surprisingly large.** Overfitting is a dominant problem in adversarially trained deep networks (Krizhevsky et al., 2017). To demonstrate this, we run both standard (non-adversarial) training and adversarial training on CIFAR10 with VGG (Krizhevsky et al., 2017) and SENet (Shi et al., 2017). Training curves are reported in Figure 2. We observe that the robust test accuracy is much lower than the standard test accuracy. Further training will continue to improve the robust training loss of the classifier, to the extent that the robust training loss can closely track the standard training loss (Krizhevsky et al., 2017), but fails to further improve the robust testing loss. Early stopping is advocated to partially alleviate overfitting (Krizhevsky et al., 2017; Li et al., 2018), but there is still huge room for improvement.
### Contribution
In this paper, to address the aforementioned problems, we consider PGD-AT as an alternating stochastic gradient descent. Motivated by theoretical work that approximates the discrete-time dynamics of stochastic gradient algorithms with continuous-time Stochastic Differential Equations (SDEs) (Srivastava et al., 2017; Li et al., 2018; Li et al., 2018; Li et al., 2019), we derive the continuous-time SDE dynamics of PGD-AT. The SDE contains a drift term and a diffusion term, and we further prove that the diffusion term determines the robust generalization performance.

As the diffusion term is determined by (A) the ratio of learning rate \(\alpha\) to batch size \(b\) and (B) the gradient noise, an immediate implication of our theorem is that robust generalization is positively correlated with the size of both (A) and (B). In other words, we could improve robust generalization by scaling up (A) and (B). Although it is fairly simple to scale up (A) by increasing \(\alpha\) and decreasing \(b\), adjusting \(\alpha\) and \(b\) can be a double-edged sword. One reason is that a small batch improves generalization but significantly increases training time. Considering that the computational cost of adversarial training is already extremely high (e.g., the PGD-10 training of ResNet on CIFAR-10 takes several days on a single GPU), large-batch training is apparently more desirable. Moreover, \(\alpha\) is allowed to increase only within a very small range to ensure convergence of the AT algorithm.

To overcome the aforementioned limitations, we propose a novel algorithm, DEAT (_Diffusion Enhanced Adversarial Training_), that instead adjusts (B) to improve robust generalization (see Algorithm 2). Our approach adds virtually no extra computational burden, and universally achieves better robust testing accuracy than vanilla PGD-AT by a large margin. We theoretically prove that DEAT achieves a tighter robust generalization gap. Our extensive experimental investigation strongly supports our theoretical findings and attests to the effectiveness of DEAT.
We summarize our contributions as follows:
1. Theoretically, we approximate PGD-AT with a continuous-time SDE, and prove that the diffusion term of this SDE determines the robust generalization. The theorem guides how to tune \(\alpha\) and \(b\) in PGD-AT. To the best of our knowledge, this is the first study that rigorously proves the role of hyperparameters in AT.
2. Algorithmically, we propose a novel approach, DEAT (Diffusion Enhanced Adversarial Training), to manipulate the diffusion term with virtually no additional computational cost, and manage to universally improve over vanilla PGD-AT by a significant margin. We also theoretically show DEAT is guaranteed to generalize better than PGD-AT. Interestingly, DEAT also improves the generalization performance in non-adversarial tasks, which further verifies our theoretical findings.
**Organization** In Section 2, we formally introduce adversarial training and PGD-AT, which are pertinent to this work. In Section 3, we present our main theorem that derives the robust generalization bound of PGD-AT. In Section 4, motivated by the theoretical findings and in recognition of the drawbacks in adjusting \(\alpha\) and \(b\), we present our novel DEAT (Diffusion Enhanced Adversarial Training). We theoretically show DEAT has a tighter generalization bound. In Section 5, we conduct extensive experiments to verify our theoretical findings and the effectiveness of DEAT. Related works are discussed in Section A.1. Proofs of all our theorems and corollaries are presented in Appendix.
Figure 2. Classification accuracy for standard training (non-adversarial) and adversarial training on CIFAR10 with VGG and SENet. Table 1 summarizes the experimental setting. The adversarial test accuracy (dashed line) is far from the non-adversarial test accuracy (solid line) in both architectures. The generalization gap for the robust accuracy is significant and much larger than normal training.
## 2. Background: PGD-AT
In this section, we formally introduce PGD-AT which is the main focus of this work.
**Notation:** This paragraph summarizes the notation used throughout the paper. Let \(\theta\), \(\mathcal{D}\), and \(l_{\theta}(x_{i},y_{i})\) be the trainable model parameter, data distribution, and loss function, respectively. Let \(\{z_{i}=(x_{i},y_{i})\}_{i=1}^{N}\) denote the training set, with \(\{x_{i}\}_{i=1}^{N}\subset\mathbb{R}^{d}\). The expected risk function is defined as \(\mathcal{R}(\theta)\triangleq\mathbb{E}_{z\sim\mathcal{D}}\,l_{\theta}(z)\). The empirical risk \(\mathcal{R}_{\zeta}(\theta)\) is an unbiased estimator of the expected risk function, defined as \(\mathcal{R}_{\zeta}(\theta)\triangleq\frac{1}{b}\sum_{j\in\zeta}\mathcal{R}_{j}(\theta)\), where \(\mathcal{R}_{j}(\theta)\triangleq l_{\theta}(z_{j})\) is the contribution to the risk from the \(j\)-th data point. \(\zeta\) represents a mini-batch of random samples and \(b\triangleq|\zeta|\) represents the batch size. Similarly, we define \(\nabla_{\theta}\mathcal{R}\), \(\nabla_{\theta}\mathcal{R}_{j}\), and \(\nabla_{\theta}\mathcal{R}_{\zeta}\) as their gradients, respectively. We denote the empirical gradient as \(\hat{g}(\theta)\triangleq\nabla_{\theta}\mathcal{R}_{\zeta}\) and the exact gradient as \(g(\theta)\triangleq\nabla_{\theta}\mathcal{R}\) for simplicity of notation.
In standard training, most learning tasks could be formulated as the following optimization problem:
\[\min_{\theta}\mathcal{R}(\theta)=\min_{\theta}\mathbb{E}_{(x_{i},y_{i})\sim\mathcal{D}}\,l_{\theta}(x_{i},y_{i}), \tag{1}\]
Stochastic Gradient Descent (SGD) and its variants are most widely used to optimize (1). SGD updates with the following rule:
\[\theta_{t+1}=\theta_{t}-\alpha_{t}s_{t}, \tag{2}\]
where \(\alpha_{t}\) and \(s_{t}\) are the learning rate and search direction at \(t\)-th step, respectively. SGD uses \(\hat{g}_{t}\triangleq\hat{g}(\theta_{t})\) as \(s_{t}\).
The performance of learning models depends heavily on whether SGD is able to reliably find a solution of (1) that generalizes well to unseen test instances.
An adversarial attacker aims to add a human-imperceptible perturbation to each sample, i.e., transform \(\{z_{i}=(x_{i},y_{i})\}_{i=1}^{N}\) to \(\{\tilde{z}_{i}=(\tilde{x}_{i}=x_{i}+\delta_{i},y_{i})\}_{i=1}^{N}\), where the perturbations \(\{\delta_{i}\}_{i=1}^{N}\) are constrained by a pre-specified budget \(\Delta\) (\(\delta_{i}\in\Delta\)), such that the loss \(l_{\theta}(\tilde{x}_{i},y_{i})\) is large. The choice of budget is flexible; a typical formulation is \(\{\delta\in\mathbb{R}^{d}:\|\delta\|_{p}\leq\varepsilon\}\) for \(p=1,2,\infty\). In order to defend against such attacks, we resort to solving the following objective function:
\[\min_{\theta}\rho(\theta),\text{ where }\rho(\theta)=\mathbb{E}_{(x_{i},y_{i})\sim\mathcal{D}}[\max_{\delta_{i}\in\Delta}l_{\theta}(x_{i}+\delta_{i},y_{i})] \tag{3}\]
Objective function (3) is a composition of an inner maximization problem and an outer minimization problem. The inner maximization problem simulates an attacker who aims to find an adversarial version of a given data point \(x_{i}\) that achieves a high loss, while the outer minimization problem is to find model parameters so that the "adversarial loss" given by the inner attacker is minimized. Projected Gradient Descent Adversarial Training (PGD-AT) (Goodfellow et al., 2016) solves this min-max game by gradient ascent on the perturbation parameter \(\delta\) before applying gradient descent on the model parameter \(\theta\).
The detailed pseudocode of PGD-AT is given in Algorithm 1. Basically, projected gradient descent (PGD) is applied for \(K\) steps on the negative loss function to produce strong adversarial examples in the inner loop, which can be viewed as a multi-step variant of the Fast Gradient Sign Method (FGSM) (Kang et al., 2017), while every training example is replaced with its PGD-perturbed counterpart in the outer loop to produce a model for which an adversary cannot easily find adversarial examples.
```
Input: Loss function \(l_{\theta}(z_{i})\), initialization \(\theta_{0}\), total training steps \(T\), PGD steps \(K\), inner/outer learning rates \(\alpha_{I}/\alpha_{O}\), batch size \(b\), perturbation budget set \(\Delta\);
for \(t\in\{1,2,...,T\}\) do
    Sample a mini-batch of random examples \(\zeta=\{(x_{i_{j}},y_{i_{j}})\}_{j=1}^{b}\);
    Set \(\delta_{0}=0\), \(\hat{x}_{j}=x_{i_{j}}\);
    for \(k\in\{1,...,K\}\) do
        \(\delta_{k}=\Pi_{\Delta}(\delta_{k-1}+\frac{\alpha_{I}}{b}\sum_{j=1}^{b}\nabla_{x}l_{\theta_{t-1}}(\hat{x}_{j}+\delta_{k-1},y_{i_{j}}))\);
    end for
    \(\theta_{t}=\theta_{t-1}-\frac{\alpha_{O}}{b}\sum_{j=1}^{b}\nabla_{\theta}l_{\theta_{t-1}}(\hat{x}_{j}+\delta_{K},y_{i_{j}})\);
end for
return \(\theta_{T}\)
```
**Algorithm 1** PGD-AT (Projected Gradient Descent Adversarial Training) (Goodfellow et al., 2016)
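For concreteness, here is a minimal PyTorch-style sketch of one PGD-AT training step under the \(l_{\infty}\) threat model. This is our own illustrative sketch (the function and variable names are ours, and we use the common sign-gradient PGD step with projection onto the \(l_{\infty}\) ball), not the authors' code.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, steps=10):
    """Inner maximization: K steps of sign-gradient ascent, projected onto the l_inf ball."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + step * grad.sign()).clamp(-eps, eps).detach()
        delta = (x + delta).clamp(0, 1) - x   # keep the perturbed input in the valid range
        delta.requires_grad_(True)
    return delta.detach()

def pgd_at_step(model, optimizer, x, y):
    """Outer minimization: one SGD step on the adversarially perturbed mini-batch."""
    delta = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```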
## 3. Theory: Robust Generalization Bound of PGD-AT
In this section, we describe our logical framework of deriving the robust generalization gap of PGD-AT, and then identify the main factors that determine the generalization.
To summarize the entire section before we dive into details: we consider PGD-AT as an alternating stochastic gradient descent and approximate the discrete-time dynamics of PGD-AT with a continuous-time Stochastic Differential Equation (SDE), which contains a drift term and a diffusion term, and we show that the diffusion term determines the robust generalization. Our theorem immediately points out that robust generalization is positively correlated with the ratio between learning rate \(\alpha\) and batch size \(b\).

Let us first introduce our logical framework in Section 3.1 before we present the main theorem in Section 3.2.
### Roadmap to robust generalization bound
**Continuous-time dynamics of gradient based methods**
A powerful analysis tool for stochastic gradient based methods is to model their continuous-time dynamics with stochastic differential equations and then study the limit behavior (Kang et al., 2017; Goodfellow et al., 2016; Goodfellow et al., 2016; Goodfellow et al., 2016; Goodfellow et al., 2016). (Xie et al., 2017) characterizes the continuous-time dynamics of constant-step-size SGD (2) applied to the standard training task (1).
Lemma 1 ((Xie et al., 2017)).: _Assume the risk function 1 is locally quadratic, and gradient noise is Gaussian with mean 0 and covariance \(\frac{1}{b}H\), and \(H=BB^{T}\) for some \(B\). The following two statements hold,_
Footnote 1: Without loss of generality, we assume the minimum of the risk function is at \(\theta=0\), as we could always translate the minimum to \(0\).
1. _Constant-step size SGD (_2_) could be recast as a discretization of the following continuous-time dynamics:_ \[d\theta=-\alpha g(\theta)dt+\frac{\alpha}{\sqrt{b}}BdW_{t}\] (4) _where_ \(dW_{t}=\mathcal{N}(0,Idt)\) _is a Wiener process._
2. _The stationary distribution of stochastic process (_4_) is Gaussian and its covariance matrix_ \(Q\) _is explicit._
\(\alpha g(\theta)\) and \(\frac{\alpha}{\sqrt{b}}B\) are referred to as the drift and the diffusion, respectively. Many variants of SGD (e.g., heavy ball momentum (Srivastava et al., 2017) and Nesterov's accelerated gradient (Nesterov, 2018)) can also be cast as modified versions of (4), and their stationary distributions can be written out explicitly as well.
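The limiting behavior in Lemma 1 can be checked numerically. Below is a small Euler-Maruyama simulation of dynamics (4) for a one-dimensional quadratic risk \(\mathcal{R}(\theta)=a\theta^{2}/2\); all constants here are illustrative assumptions of ours. The stationary variance of \(\theta\) grows with \(\alpha/b\), reflecting the diffusion term \(\frac{\alpha}{\sqrt{b}}B\):

```python
import numpy as np

def stationary_var(alpha, b, a=1.0, B=1.0, T=100_000, dt=1.0, seed=0):
    """Euler-Maruyama discretization of d(theta) = -alpha*a*theta dt + (alpha/sqrt(b))*B dW."""
    rng = np.random.default_rng(seed)
    theta, samples = 0.0, []
    for t in range(T):
        dW = rng.normal(0.0, np.sqrt(dt))
        theta += -alpha * a * theta * dt + (alpha / np.sqrt(b)) * B * dW
        if t > T // 2:                      # discard burn-in, keep stationary samples
            samples.append(theta)
    return np.var(samples)

# Doubling alpha (or halving b) roughly doubles the stationary variance of theta,
# i.e., the injected noise scale grows with alpha/b.
for alpha, b in [(0.01, 128), (0.02, 128), (0.02, 64)]:
    print(f"alpha={alpha}, b={b}, stationary var ~ {stationary_var(alpha, b):.2e}")
```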
**Assumption 1** ((27; 49; 86)): _Suppose the risk function is approximately convex and twice differentiable in the region close to the minimum, i.e., there exists a \(\delta_{0}>0\) such that \(\mathcal{R}(\theta)=\frac{1}{2}(\theta-\theta^{*})^{T}A(\theta-\theta^{*})\) if \(\|\theta-\theta^{*}\|\leq\delta_{0}\), where \(\theta^{*}\) is a minimizer of \(\mathcal{R}(\theta)\). Here \(A\) is the Hessian matrix \(\nabla_{\theta}^{2}\mathcal{R}\) around the minimizer and is positive definite. Without loss of generality, we assume a minimizer of the risk is zero, i.e., \(\theta^{*}=0\)._
Though we assume a locally quadratic risk function here, all results in this study apply to locally smooth and strongly convex objectives. Note that the assumption of a locally quadratic structure of the loss function, even for extremely nonconvex objectives, can be justified empirically. ((42)) visualized the loss surfaces of deep architectures like ResNet ((28)) and DenseNet ((36)), observing quadratic geometry around the local minimum in both cases. Moreover, certain network architecture designs (e.g., skip connections) can further make the neural loss geometry show no noticeable nonconvexity; see e.g. Figure 3.
**Assumption 2** ((27; 49; 86)) (_Unbiased Gradients with Bounded Noise Variance_): _Suppose at each step \(t\), the gradient noise is Gaussian with mean 0 and covariance \(\frac{1}{b}\Sigma(\theta_{t})\), i.e.,_
\[\hat{g}(\theta_{t})\approx g(\theta_{t})+\frac{1}{\sqrt{b}}\Delta g(\theta_{ t}),\quad\Delta g(\theta_{t})\sim\mathcal{N}(0,\Sigma(\theta_{t}))\]
_We further assume that the noise covariance matrix \(\Sigma(\theta_{t})\) is approximately constant with respect to \(\theta\), i.e., \(\Sigma(\theta_{t})\approx\Sigma=CC^{T}\). And noises from different iterates \(\{\Delta g(\theta_{t})\}_{t\geq 1}\) are mutually statistically independent._
Gaussian gradient noise is natural to assume as the stochastic gradient is a sum of \(b\) independent, uniformly sampled contributions. Invoking the central limit theorem, the noise structure could be approximately Gaussian. Assumption 2 is standard when approximating a stochastic algorithm with a continuous-time stochastic process (see e.g. (49)) and is justified when the iterates are confined to a restricted region around the minimizer.
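The \(1/b\) scaling of the gradient-noise covariance in Assumption 2 can be illustrated on a toy least-squares problem. This is purely an illustrative sketch; the problem, constants, and names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 10_000, 5
X = rng.normal(size=(N, d))
y = X @ rng.normal(size=d) + 0.5 * rng.normal(size=N)
theta = np.zeros(d)   # evaluate the gradient noise at a fixed parameter value

def minibatch_grad(b):
    idx = rng.choice(N, size=b, replace=False)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ theta - yb) / b   # gradient of the (half) mean squared error

for b in [32, 128, 512]:
    grads = np.stack([minibatch_grad(b) for _ in range(2000)])
    print(b, grads.var(axis=0).sum())     # total variance shrinks roughly like 1/b
```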
## 4. Algorithm: DEAT - a 'Free' Booster to PGD-AT
Theorem 1 indicates that the key factor impacting robust generalization is the diffusion \(\sqrt{\frac{\alpha}{b}}AB\). The definitive relationship is that a larger diffusion level positively benefits the generalization performance of PGD-AT.

Though increasing \(\frac{\alpha}{b}\) is straightforward, there are two main drawbacks. First, decreasing the batch size is impractical as it significantly lengthens training time. Adversarial training already takes a notoriously long time compared to standard supervised learning (as the inner maximization is essentially several steps of gradient ascent); thus, a small batch size is simply not an economical option. Second, the room to increase \(\alpha\) is very limited, as \(\alpha\) has to be relatively small to ensure convergence.

Furthermore, we also desire an approach that could universally improve robust generalization independent of the specifications of \(\alpha\) and \(b\), as they could potentially complement each other to achieve an even better performance. Thus, we propose to manipulate the remaining factor in the diffusion:
_Can we manipulate the gradient noise level \(B\) in PGD-AT dynamic to improve its generalization?_
Our proposed Diffusion Enhanced AT (DEAT) (i.e. Algorithm 2) provides a positive answer to this question. The basic idea of DEAT is simple. Inspired by the idea from (86), instead of using one single gradient estimator \(\hat{g}\), Algorithm 2 maintains two gradient estimators \(h_{t}\) and \(h_{t-1}\) at each iteration. A linear interpolation of these two gradient estimators is still a legitimate gradient estimator, while the noise (variance) of this new estimator is larger than any one of the base estimators. \(k_{1}\) and \(k_{2}\) are two hyperparameters.
We would like to emphasize that when \(h_{t}\) and \(h_{t-1}\) are two unbiased and independent gradient estimators, the linear interpolation is apparently unbiased (due to linearity of expectation) and the noise of this new estimator increases. However, DEAT (and the following Theorem 2) does not require \(h_{t}\) and \(h_{t-1}\) to be unbiased or independent. In fact, DEAT showcases a general idea of linearly combining two estimators which goes far beyond our current design. We could certainly devise other formulations of \(h_{t}\) or \(h_{t-1}\), which may be unbiased or biased as in our current design.
One may naturally ask: why not directly inject some random noise into the gradient to improve generalization? However, existing works point out that random noise does not have such an appealing effect; only noise with a carefully designed covariance structure and distribution class works ((81)). For example, (95) and (15) point out that if the noise covariance aligns with the Hessian of the loss surface to some extent, the noise helps generalization. Thus, (79) proposes to inject noise using the (scaled) Fisher as covariance and (95) proposes to inject noise using the gradient covariance of SGD as covariance, both requiring access to and storage of second-order (Hessian) information, which is very expensive in computation and memory.
Compared with the existing literature, DEAT is the first such algorithm for adversarial training, and it injects noise that does not require second-order information and is "free" in memory and computation.
```
Input: Loss function \(J(\theta,x,\delta)=l(\theta,x+\delta,y)-\lambda R(\delta)\), initialization \(\theta_{0}\), total training steps \(T\), PGD steps \(K\), inner/outer learning rates \(\alpha_{I}/\alpha_{O}\), batch size \(b\);
1  for \(t\in\{1,2,...,T\}\) do
2      Sample a mini-batch of random examples \(\zeta=\{(x_{i_{j}},y_{i_{j}})\}_{j=1}^{b}\);
3      Set \(\delta_{0}=0\), \(\hat{x}_{j}=x_{i_{j}}\);
4      for \(k\in\{1,...,K\}\) do
5          \(\delta_{k}=\delta_{k-1}+\frac{\alpha_{I}}{b}\sum_{j=1}^{b}\nabla_{\delta}J(\theta_{t-1},\hat{x}_{j},\delta_{k-1})\);
6      end for
7      \(h_{t}=k_{2}h_{t-2}+(1-k_{2})\hat{g}_{t}\);
8      \(\theta_{t+1}=\theta_{t}-\alpha_{O}^{\prime}[(1+k_{1})h_{t}-k_{1}h_{t-1}]\), where \(\alpha_{O}^{\prime}=\frac{\alpha_{O}}{\sqrt{(1+k_{1})^{2}+k_{1}^{2}}}\);
9  end for
return \(\theta_{T}\)
```
**Algorithm 2** Diffusion Enhanced AT (DEAT)
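Lines 7-8 of Algorithm 2 amount to a drop-in replacement for the outer SGD update. Below is a minimal PyTorch-style sketch of this update as an optimizer-like wrapper; this is our own illustrative code (class and variable names are ours), using the paper's default \(k_{1}=1.0\), \(k_{2}=0.8\):

```python
import torch

class DEATUpdate:
    """Maintains h_t and h_{t-1} per parameter and applies lines 7-8 of Algorithm 2."""
    def __init__(self, params, lr, k1=1.0, k2=0.8):
        self.params = [p for p in params]
        self.lr_eff = lr / (((1 + k1) ** 2 + k1 ** 2) ** 0.5)   # alpha'_O
        self.k1, self.k2 = k1, k2
        # hist[i] = [h_{t-1}, h_{t-2}] for parameter i
        self.hist = [[torch.zeros_like(p), torch.zeros_like(p)] for p in self.params]

    @torch.no_grad()
    def step(self):
        for p, h in zip(self.params, self.hist):
            if p.grad is None:
                continue
            h_prev, h_prev2 = h
            h_t = self.k2 * h_prev2 + (1 - self.k2) * p.grad              # line 7
            p -= self.lr_eff * ((1 + self.k1) * h_t - self.k1 * h_prev)   # line 8
            h[0], h[1] = h_t, h_prev
```

In use, one would compute the adversarial loss on the PGD-perturbed batch, call `loss.backward()`, and then call `step()` in place of the usual SGD update.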
Theorem 2 provides a theoretical guarantee that DEAT obtains a tighter generalization bound than PGD-AT.
Theorem 2.: _Let \(H_{1}\) and \(H_{2}\) be the covariance matrices of the gradient noise from PGD-AT and DEAT, respectively. Let \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) be the upper bounds of the generalization error of PGD-AT (Algorithm 1) and DEAT (Algorithm 2), respectively. The following statement holds:_
\[H_{2}=kH_{1},\quad\text{where}\quad k>1, \tag{8}\] \[\mathcal{G}_{1}\geq\mathcal{G}_{2}\]
_i.e., Algorithm 2 generates larger gradient noise than Algorithm 1, and such gradient noise boosts robust generalization._
Proof.: We keep only the primary proof steps and omit most of the algebraic transformations. Recall the updating rule of conventional heavy ball momentum:
\[\begin{split} d_{t+1}&=(1-\beta)\hat{g}_{t}+\beta d_{t}\\ \theta_{t+1}&=\theta_{t}-\alpha d_{t}\end{split} \tag{9}\]
where \(\beta\) is the momentum factor.
By some straightforward algebraic transformations, we know the momentum can be written as \(d_{t}=(1-\beta)\sum_{\tau=1}^{t}\beta^{t-\tau}\hat{g}_{\tau}\).
Suppose \(H\) is the noise covariance of \(\hat{g}_{t}\) and \(\nu^{2}\) is the scale of \(H\), i.e., \(\|H\|\leq\nu^{2}\). The noise level of \(d_{t}\) is \(\propto(1-\beta)\frac{1-\beta^{t+1}}{1-\beta}\nu^{2}\approx\nu^{2}\).
Momentum \(d\) therefore does not alter the gradient noise level. We instead maintain two momentum terms \(d^{(1)}\) and \(d^{(2)}\), and use the linear interpolation \((1+p)d^{(1)}-pd^{(2)}\) as our iterate.
The advantage is that although the noise levels of \(d^{(1)}\) and \(d^{(2)}\) are both \(\nu^{2}\), the noise level of \((1+p)d^{(1)}-pd^{(2)}\) is \(\approx((1+p)^{2}+p^{2})\nu^{2}\) (Srivastava et al., 2017).
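This variance amplification, namely that the interpolation \((1+p)d^{(1)}-pd^{(2)}\) has noise level \(\approx((1+p)^{2}+p^{2})\nu^{2}\) when the two estimators carry (approximately) independent noise, is easy to check numerically. The following is an illustrative sketch under that independence assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
nu2, p = 1.0, 0.5
# Two independent noise sequences with variance nu2 standing in for d^(1), d^(2)
d1 = rng.normal(0.0, np.sqrt(nu2), size=1_000_000)
d2 = rng.normal(0.0, np.sqrt(nu2), size=1_000_000)
interp = (1 + p) * d1 - p * d2
# Empirical variance vs. the predicted ((1+p)^2 + p^2) * nu2 = 2.5; both agree
print(interp.var(), ((1 + p) ** 2 + p ** 2) * nu2)
```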
Thus, if we could show our proposed DEAT is indeed maintaining two momentum terms, we complete the proof of the statement \(H_{2}=kH_{1}\) and \(k>1\) in Theorem 2.
Recall lines 7-8 in Algorithm 2:
\[\begin{split} h_{t}&=k_{2}h_{t-2}+(1-k_{2})\hat{g}_{t},\\ \theta_{t+1}&=\theta_{t}-\alpha_{O}^{\prime}[(1+k_{1})h_{t}-k_{1}h_{t-1}],\end{split} \tag{10}\]
We could transform it into,
\[\begin{split} h_{t}&=k_{2}h_{t-2}+(1-k_{2})\hat{g}_{t},\\ (\theta_{t+1}+\alpha_{O}^{\prime}k_{1}h_{t})&=(\theta_{t}+\alpha_{O}^{\prime}k_{1}h_{t-1})-\alpha_{O}^{\prime}h_{t},\end{split} \tag{11}\]
We could further write it into,
\[\begin{split} x_{t}&=\theta_{t}+\alpha_{O}^{\prime}k_{1}h_{t-1},\\ x_{t+1}&=x_{t}-\xi\hat{g}_{t}+k_{2}(x_{t-1}-x_{t-2}),\end{split} \tag{12}\]
where \(\xi=\alpha_{O}^{\prime}(1-k_{2})\). We know a conventional momentum iteration can be written as
\[\theta_{t+1}=\theta_{t}-\alpha\hat{g}_{t}+\beta(\theta_{t}-\theta_{t-1}) \tag{13}\]
where \(\alpha\) and \(\beta\) are the learning rate and momentum factor, respectively. Note that in Equation (12), the second line has exactly the same form as Equation (13), indicating that \(x_{t}\) behaves like a momentum iterate. Further note that \(x_{t+1}-x_{t}\) involves \(\xi\hat{g}_{t}\), i.e., we maintain two momentum terms by alternately using odd-step and even-step gradients. Combining everything together, we complete the proof of Theorem 2.
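The change of variables from (10) to (12) can also be verified numerically: running the DEAT recursion on an arbitrary gradient sequence and forming \(x_{t}=\theta_{t}+\alpha_{O}^{\prime}k_{1}h_{t-1}\) reproduces the momentum-like recursion exactly. Below is a sketch with random stand-ins for \(\hat{g}_{t}\):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, k1, k2, T = 0.1, 1.0, 0.8, 60
g = rng.normal(size=T)                 # stand-in stochastic gradients g_t
xi = alpha * (1 - k2)

# Run the DEAT recursion (10), recording x_t = theta_t + alpha*k1*h_{t-1}
theta, h_prev, h_prev2 = 0.0, 0.0, 0.0
xs = []
for t in range(T):
    xs.append(theta + alpha * k1 * h_prev)
    h_t = k2 * h_prev2 + (1 - k2) * g[t]
    theta = theta - alpha * ((1 + k1) * h_t - k1 * h_prev)
    h_prev2, h_prev = h_prev, h_t

# Check that x satisfies the momentum-like recursion (12)
for t in range(2, T - 1):
    lhs = xs[t + 1]
    rhs = xs[t] - xi * g[t] + k2 * (xs[t - 1] - xs[t - 2])
    assert abs(lhs - rhs) < 1e-10
print("transformation verified")
```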
One advantage of DEAT is that it adds virtually no extra parameters or computation. Though it introduces two more hyperparameters \(k_{1}\) and \(k_{2}\), performance is highly insensitive to their values according to our experimental investigation.
Our experimental results in Figure 4 and Table 2 firmly attest that DEAT outperforms PGD-AT by a significant 1.5% to 2.0% margin with nearly no extra burden. We would like to emphasize that a 1.5% to 2.0% improvement with virtually no extra cost is nontrivial in robust accuracy. To put 1.5% to 2.0% in perspective, the difference among the robust accuracies of all popular architectures is only about 2.5% (see (Srivastava et al., 2017)). Our approach is nearly "free" in cost, while modifying architectures involves tremendously many extra parameters and careful model design. 2.0% is also on par with some other techniques, e.g., label smoothing and weight decay, that are already overwhelmingly used to improve robust generalization.
Training curves in Figure 5 reveal that DEAT can beat PGD-AT in adversarial testing accuracy even when PGD-AT has better adversarial training accuracy, which shows DEAT does alleviate overfitting.
## 5. Experiments
We conduct extensive experiments to verify our theoretical findings and proposed approach. We include different architectures, and sweep across a wide range of hyperparameters, to ensure the robustness of our findings. All experiments are run on 4 NVIDIA
Figure 3. 2D visualization of the loss surface of Wide-ResNet-56 on CIFAR-10 both without shortcut connections in Figure 2(a) and with shortcut connections in Figure 2(b) (Figure 6 in (Xu et al., 2018)). 3D visualization of the loss surface of ResNet-56 on CIFAR-10 both with shortcut connections in Figure 2(d) and without shortcut connections in Figure 2(c) (from [http://www.telesens.co/loss-landscape-viz/viewer.html](http://www.telesens.co/loss-landscape-viz/viewer.html)).
Quadro RTX 8000 GPUs, and the total computation time for the experiments exceeds 10K GPU hours. Our code is available at [https://github.com/jsyscjh/DEAT](https://github.com/jsyscjh/DEAT).
We aim to answer the following two questions:
1. _Do hyperparameters impact robust generalization in the same pattern as Theorem 1 indicates?_
2. _Does DEAT provide a 'free' booster to robust generalization?_
**Setup** We test on CIFAR-10 under the \(l_{\infty}\) threat model with perturbation budget \(\frac{8}{255}\), without additional data. Both the vanilla PGD-AT framework and DEAT are used to produce adversarially robust models. The models are evaluated under a 10-step PGD attack (PGD-10) (Vinyals et al., 2017). Note that this paper mainly focuses on the PGD attack instead of other attacks like AutoAttack (Krizhevsky et al., 2017) / RayS (Krizhevsky et al., 2017), for consistency with our theorem. The architectures we test include VGG-19 (Zhu et al., 2017), SENet-18 (Zhu et al., 2018), and Preact-ResNet-18 (Zhu et al., 2018). Every single data point is an average of 3 independent, repeated runs under exactly the same settings (i.e., every single robust accuracy in Table 2 is an average of 3 runs to avoid stochasticity). Table 1 summarizes the default settings in our experiments.
Note that most of our experimental results are reported in terms of robust test accuracy instead of the robust generalization gap. On one hand, test accuracy is the metric that we really aim to optimize in practice. On the other hand, robust test accuracy, though not the whole picture of the generalization gap, actually reflects the gap very well, especially in the overparameterized regime, because minimizing the empirical risk is relatively easy with deep models 5, even in an adversarial environment (Zhu et al., 2017). Therefore, we report only robust test accuracy following (Zhu et al., 2017; Li et al., 2017) by default. To ensure our proposed approach actually closes the generalization gap, we report the actual generalization gap in Figure 5, and observe that DEAT beats vanilla PGD-AT by a non-trivial margin in testing performance even with sub-optimal training performance.
Footnote 5: In the setting of over-parametrized learning, there is a large set of global minima, all of which have zero training error but the test error can be very different (Li et al., 2017; Li et al., 2017).
### Hyperparameters Are Impactful in Robust Generalization
Our theorem indicates that the learning rate and batch size impact robust generalization via their effect on the diffusion. Specifically, Theorem 1 predicts that a larger learning-rate/batch-size ratio improves robust generalization. We sweep through a wide range of learning rates \(0.010,0.012,0.014,\cdots,0.50\), and report the adversarial testing accuracy of both vanilla PGD-AT and DEAT for a selection of learning rates in Table 2 and Figure 4. Considering that the computational time for AT is already very long, decreasing the batch size to improve robust generalization is simply economically prohibitive; thus, we mainly focus on \(\alpha\).
Table 2 exhibits a strong positive correlation between robust generalization and learning rate. The pattern is consistent across all three architectures. Figure 4 provides a better visualization of this positive correlation.
We further test whether such correlations are statistically significant. We calculate Pearson's \(r\), Spearman's \(\rho\), and Kendall's \(\tau\) rank-order correlation coefficients 6, and the corresponding \(p\)-values, to investigate the statistical significance of the correlations. The procedure to calculate the \(p\)-values is as follows: for Tables 3 and 4, we regard each data point in Table 2 as the accuracy for the corresponding \(\alpha\), and calculate the rank correlation coefficient (RCC) between accuracy and \(\alpha\) together with its \(p\)-value, following the same procedure as (Zhu et al., 2017).
Footnote 6: They measure the statistical dependence between the rankings of two variables, and how well the relationship between two variables can be described using a monotonic function.
We report the test results in Table 3. The closer the correlation coefficient is to \(+1\) (or \(-1\)), the stronger the positive (or negative) correlation. If \(p<0.005\), the correlation is statistically significant 7.
Footnote 7: The criterion of ‘statistically significant’ has various versions, such as \(p<0.05\) or \(p<0.01\). We use a more rigorous \(p<0.005\).
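As a sketch of this procedure, the three coefficients and their \(p\)-values can be computed with scipy. Here we feed in the Preact-ResNet PGD-AT column of Table 2; since this is only the selection of learning rates shown in the table rather than the full sweep, the resulting numbers will differ slightly from Table 3:

```python
from scipy import stats

# Learning rates and PGD-AT robust accuracies for Preact-ResNet (from Table 2)
lr  = [0.010, 0.012, 0.014, 0.018, 0.020, 0.022, 0.024, 0.028, 0.030,
       0.100, 0.150, 0.200, 0.250, 0.300]
acc = [44.11, 44.92, 45.26, 46.21, 46.30, 45.92, 46.47, 46.24, 46.61,
       47.21, 48.05, 49.04, 49.34, 50.01]

for name, fn in [("Pearson's r", stats.pearsonr),
                 ("Spearman's rho", stats.spearmanr),
                 ("Kendall's tau", stats.kendalltau)]:
    coef, p = fn(lr, acc)
    print(f"{name}: coefficient={coef:.3f}, p-value={p:.2e}")
```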
Our theorem indicates that the ratio of learning rate to batch size (instead of batch size itself) determines generalization, which justifies the linear scaling rule in (Zhu et al., 2017): scaling the learning rate up when using a larger batch, thereby maintaining the ratio between learning rate and batch size, effectively preserves robust generalization.
| Setting | Value | Setting | Value |
|---|---|---|---|
| Batch Size | 128 | Label Smoothing | False |
| Weight Decay | \(5\times 10^{-4}\) | BN Mode | eval |
| Activation | ReLU | Total Epoch | 110 |
| LR Decay Factor | 0.1 | LR Decay Epochs | 100, 105 |
| Attack | PGD-10 | Maximal Perturbation | \(\epsilon=8/255\) |
| Attack Step Size | 2/255 | Threat Model | \(l_{\infty}\) |
| \(k_{1}\) | 1.0 | \(k_{2}\) | 0.8 |

Table 1. Experimental Settings
Figure 4. Adversarial testing accuracy on CIFAR10 for vanilla PGD-AT and our proposed DEAT across a wide spectrum of learning rates. The figure demonstrates a strongly positive correlation between robust generalization and learning rate. We could also observe DEAT obtains a significant improvement over PGD-AT.
The side effects of adjusting the batch size also demonstrate the necessity of our proposed approach, which manipulates the diffusion to boost generalization without extra computational burden.
### DEAT Effectively Improves Robust Generalization
We compare the robust generalization of vanilla PGD-AT and DEAT in Figure 4 and Table 2.
The improvement is consistent across all different learning rates/model architectures. The improvement is even more significant when learning rate is fairly large, i.e. when the baseline is working well, in both Table 2 and Figure 4. Our proposed DEAT improves 1.5% on VGG, and over 2.0% on SENet and Preact-ResNet.
Note that a 1.5% to 2.0% improvement is very significant in robust generalization. It actually surpasses the performance gap between different model architectures. In Figure 4, the boosted VGG obtains robust generalization similar to SENet and ResNet. (Wang et al., 2017) measures the robust generalization of virtually all popular architectures, and the range is only approximately 3%. Considering that adjusting architectures would potentially introduce millions more parameters and carefully hand-crafted design, our proposed approach is nearly "free" in cost.
We plot the adversarial training and adversarial testing curves (for one specific learning rate) for all three architectures in Figure 5. It is very interesting to observe that our proposed approach may not be better in terms of training performance (e.g., on ResNet and SENet), but it beats vanilla PGD-AT by a non-trivial margin in testing performance. It is safe to say that DEAT effectively controls the level of overfitting in adversarial training.
We further perform a t-test to check the statistical significance of the improvement and report the results in Table 4. Note that the mean improvement in the table (e.g., 1.22%) is averaged across all learning rates, and does not completely reflect the extent of improvement (as we pay more attention to the improvement at larger learning rates, where it exceeds 1.5%). The p-values clearly indicate a statistically significant improvement across models.
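In the same spirit, a paired t-test over the per-learning-rate (PGD-AT, DEAT) accuracy pairs can be run with scipy. Here we use the Preact-ResNet columns of Table 2; since the table shows only a selection of learning rates, the resulting mean improvement differs slightly from the full-sweep numbers in Table 4:

```python
from scipy import stats

pgd_at = [44.11, 44.92, 45.26, 46.21, 46.30, 45.92, 46.47, 46.24, 46.61,
          47.21, 48.05, 49.04, 49.34, 50.01]
deat   = [45.07, 46.12, 46.25, 46.76, 46.94, 47.30, 47.64, 47.19, 47.46,
          48.84, 50.24, 51.38, 51.99, 52.50]

t, p = stats.ttest_rel(deat, pgd_at)
mean_impr = sum(d - a for d, a in zip(deat, pgd_at)) / len(deat)
print(f"mean improvement = {mean_impr:.2f}%")
print(f"paired t-test: t = {t:.2f}, p = {p:.2e}")
```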
| Architecture | Statistical Significance of Improvement |
|---|---|
| Preact-ResNet | 1.22% (**2.10e-09**) |
| SENet | 1.21% (**2.11e-09**) |
| VGG | 1.11% (**7.462e-10**) |

Table 4. Statistical test of significance of improvement. The p-values indicate a strongly significant improvement across all architectures.
| \(\alpha\) | PGD-AT (Preact-ResNet) | DEAT (Preact-ResNet) | Acc\(_{d}\) | PGD-AT (SENet) | DEAT (SENet) | Acc\(_{d}\) | PGD-AT (VGG) | DEAT (VGG) | Acc\(_{d}\) |
|---|---|---|---|---|---|---|---|---|---|
| 0.010 | 44.11% | 45.07% | 0.96% | 43.38% | 44.16% | 0.78% | 40.34% | 41.00% | 0.66% |
| 0.012 | 44.92% | 46.12% | 1.20% | 44.33% | 45.25% | 0.92% | 40.97% | 41.03% | 0.06% |
| 0.014 | 45.26% | 46.25% | 0.99% | 45.00% | 45.90% | 0.90% | 40.75% | 41.11% | 0.36% |
| 0.018 | 46.21% | 46.76% | 0.55% | 45.91% | 47.25% | 1.34% | 40.93% | 42.32% | 1.39% |
| 0.020 | 46.30% | 46.94% | 0.64% | 46.45% | 47.51% | 1.06% | 41.46% | 42.08% | 0.62% |
| 0.022 | 45.92% | 47.30% | 1.38% | 46.42% | 47.81% | 1.39% | 41.81% | 43.20% | 1.39% |
| 0.024 | 46.47% | 47.64% | 1.17% | 46.52% | 48.06% | 1.54% | 42.35% | 43.45% | 1.10% |
| 0.028 | 46.24% | 47.19% | 0.95% | 47.19% | 48.20% | 1.01% | 43.07% | 43.84% | 0.77% |
| 0.030 | 46.61% | 47.46% | 0.85% | 47.19% | 48.16% | 0.97% | 42.42% | 44.63% | 2.21% |
| 0.100 | 47.21% | 48.84% | 1.63% | 48.36% | 50.29% | 1.93% | 45.84% | 47.74% | 1.90% |
| 0.150 | 48.05% | 50.24% | 2.19% | 48.99% | 51.36% | 2.37% | 46.99% | 48.70% | 1.71% |
| 0.200 | 49.04% | 51.38% | 2.34% | 49.36% | 52.00% | 2.64% | 47.94% | 49.18% | 1.24% |
| 0.250 | 49.34% | 51.99% | 2.65% | 50.24% | 52.19% | 1.95% | 48.07% | 49.28% | 1.21% |
| 0.300 | 50.01% | 52.50% | 2.49% | 50.83% | 52.90% | 2.07% | 48.76% | 49.33% | 0.57% |

Table 2. Adversarial testing accuracy for both vanilla PGD-AT and DEAT on Preact-ResNet (Wang et al., 2017), SENet (Wang et al., 2017), and VGG (Wang et al., 2017). Acc\(_{d}\) represents the accuracy difference between diffusion enhanced adversarial training and vanilla PGD-AT, i.e., Acc\(_{\text{DEAT}}-\)Acc\(_{\text{PGD-AT}}\).
| Rank Correlation Coefficient | Preact-ResNet PGD-AT | Preact-ResNet DEAT | SENet PGD-AT | SENet DEAT | VGG PGD-AT | VGG DEAT |
|---|---|---|---|---|---|---|
| Pearson's \(r\) (\(p\)-value) | 0.889 (**5.5e-10**) | 0.896 (**2.8e-10**) | 0.711 (**1.4e-04**) | 0.762 (**2.4e-05**) | 0.916 (**3.3e-10**) | 0.862 (**5.8e-08**) |
| Spearman's \(\rho\) (\(p\)-value) | 0.965 (**3.4e-16**) | 0.922 (**7.5e-07**) | 0.998 (**<2.2e-16**) | 0.982 (**<2.2e-16**) | 0.988 (**<2.2e-16**) | 0.992 (**8.9e-07**) |
| Kendall's \(\tau\) (\(p\)-value) | 0.907 (**3.3e-11**) | 0.818 (**1.9e-12**) | 0.982 (**5.7e-11**) | 0.927 (**6.3e-10**) | 0.932 (**1.8e-10**) | 0.956 (**<2.2e-16**) |

Table 3. Rank correlation coefficients (and corresponding significance levels) between robust generalization and learning rate. All correlation coefficients indicate a strong positive relationship (close to +1). The p-values are all highly statistically significant.
## 6. Conclusions
To the best of our knowledge, this paper is the first study that rigorously connects the dynamics of adversarial training to robust generalization. Specifically, we derive a generalization bound for PGD-AT and, based on this bound, point out the role of the learning rate and batch size. We further propose a novel training approach, Diffusion Enhanced Adversarial Training. Our extensive experiments demonstrate that DEAT universally outperforms PGD-AT by a large margin at little cost, and could potentially serve as a new strong baseline in AT research.
|
2306.15364 | The Architecture of a Biologically Plausible Language Organ | We present a simulated biologically plausible language organ, made up of
stylized but realistic neurons, synapses, brain areas, plasticity, and a
simplified model of sensory perception. We show through experiments that this
model succeeds in an important early step in language acquisition: the learning
of nouns, verbs, and their meanings, from the grounded input of only a modest
number of sentences. Learning in this system is achieved through Hebbian
plasticity, and without backpropagation. Our model goes beyond a parser
previously designed in a similar environment, with the critical addition of a
biologically plausible account for how language can be acquired in the infant's
brain, not just processed by a mature brain. | Daniel Mitropolsky, Christos H. Papadimitriou | 2023-06-27T10:25:22Z | http://arxiv.org/abs/2306.15364v1 | # The Architecture of a Biologically Plausible Language Organ
###### Abstract
We present a simulated biologically plausible _language organ_, made up of stylized but realistic neurons, synapses, brain areas, plasticity, and a simplified model of sensory perception. We show through experiments that this model succeeds in an important early step in language acquisition: the learning of nouns, verbs, and their meanings, from the grounded input of only a modest number of sentences. Learning in this system is achieved through Hebbian plasticity, and _without_ backpropagation. Our model goes beyond a _parser_ previously designed in a similar environment, with the critical addition of a biologically plausible account for how language can be acquired in the infant's brain, not just processed by a mature brain.
## 1 Introduction
It is beyond doubt that cognitive phenomena such as language, reasoning, and planning are the direct product of the activity of neurons and synapses. However, there is no extant overarching theory explaining exactly how this is done. In the words of 2004 Nobel laureate Richard Axel (Axel, 2018): _"We do not have a Logic for the transformation of neural activity into thought and action."_ Making progress on this open question, often called the _bridging problem_ (Papadimitriou and Friederici, 2022), is identified by Axel (ibid.) as the most important challenge facing neuroscience today.
In recent years, a computational approach to the bridging problem has been undertaken. In (Papadimitriou et al., 2020), a computational system called the Assembly Calculus was proposed, based on a simplified mathematical model of spiking neurons and synapses, which reflects the basic elements and tenets of Neuroscience: brain areas, excitatory neurons, local inhibition, plasticity (see the next section for a detailed description of the enhanced version of this model used here). Within this framework, neuromorphic computational systems simulating certain large-scale cognitive phenomena were implemented: a system for planning in the blocks world (d'Amore et al., 2022); a system for learning to classify representations through few-shot training (Dabagia et al., 2022); and, perhaps more surprisingly, a system for _parsing sentences_ in English and other languages (Mitropolsky et al., 2021, 2022).
We believe that pursuing this research program of constructing more and more ambitious neuromorphic artifacts simulating cognitive phenomena is important, for at least two reasons. First, each step on this path entails concrete progress in the bridging problem, as more and more advanced domains of cognition are explored through artifacts consisting of reasonably realistic and brain-like, if stylized, systems of neurons and synapses. Second, further progress in this direction may be of interest to Artificial Intelligence: Despite amazing advances over the past ten years, arguably AI still lags behind human brains in several important dimensions: grounded language, continual learning, originality and inventiveness, emotional and social intelligence, and energy usage. Creating intelligent artifacts
that are more brain-like, and rely on modes of learning other than backpropagation, may eventually point to new possible avenues of progress for AI.
The biologically plausible parser of Mitropolsky et al. (2021) includes a lexicon containing neural representations of words. It is assumed that each neural representation of a word is wired so that, when excited by an outside stimulus, it sets in motion specific neural activities inhibiting and/or disinhibiting remote brain areas that are associated with the word's syntactic role (verb, subject, etc.). This works fine for the purposes of the parser, except that it leaves open perhaps the most important questions: How are these word representations created? How are these neural activities set up in the infant brain and how are they associated with the representation of each word, thus implementing the word's part of speech? And how are those other brain areas labeled with the appropriate syntactic roles? In other words, _how is language acquired in the human brain?_
This is the question we set out to answer in this paper.
We seek to create a _neuromorphic language organ:_ a tabula rasa of neural components -- roughly, a collection of brain areas with randomly connected neurons, with certain additional neural populations, all consistent with basic Neuroscience and plausibly set in place during the infant's development -- which, upon the input of modest amounts of grounded language, in any natural language, will acquire the ability to comprehend and generate syntactically and semantically correct sentences in the same language -- definitions of all these terms forthcoming.
One important remark is in order: By designing such a system, we are not articulating a scientific theory about the precise way in which language is implemented in the human brain -- a theory to be tested by experiments on human subjects. The artifact we create is a _proof of concept,_ an existence theorem stating that something akin to a language organ can be put together with basic neuroscientific materials which can be plausibly delivered by a biological developmental apparatus. We believe that this has not been done before. But, having said that, we have taken care that aspects of the system we present here are consistent with the consensus in neurolinguistics about the nature of the language organ, _wherever such consensus exists;_ we point out instances of such convergence throughout the paper.
## 2 The Model
We next turn to discussing the _neural model_, henceforth referred to as nemo, that we use to build our neuromorphic language organ. Neuron biology (Kandel et al., 1991) is rich and complex -- there are apparently thousands of different types of neural cells, hundreds of kinds of neurotransmitters, and complex and very partially understood mechanisms by which axons grow and synapses are created and synaptic "weights" (if one assumes that such a parameter exists) change through plasticity. It is impossible to capture everything we know in neuroscience by a model of the brain that is useful for our purposes. Our desiderata for nemo are these:
* We want the model to be in basic agreement with what we know in neuroscience -- for example, _it should not entail backpropagation._
* We want it to be simple and elegant, mathematically rigorous, and amenable to mathematical proof of its properties.
* Importantly, we need to simulate it efficiently, if approximately, at the scale of tens of millions of neurons and trillions of synapses.
nemo is very much influenced by the Assembly Calculus (AC) (Papadimitriou et al., 2020), a model that was proposed a few years ago as a simplified though realistic mathematical description of brain computation, capturing a few of the most established principles in neuroscience: the brain is a finite collection of _brain areas_, each with distinct cytological and functional properties. Individual neurons fire when they receive sufficient excitatory input from presynaptic neurons, and firing is an atomic operation. Synapses between neurons in the same area are essentially _random;_ nemo assumes the strong randomness of Erdos-Renyi random graphs denoted by \(G_{n,p}\) (Erdos and Renyi, 1960), where all pairs of different neurons have the same probability \(p\) of being connected, independently. While it is known that the randomness of synaptic connectivity is more complex than \(G_{n,p}\) and influenced by locality and neuron type, see for example Song et al. (2005), the randomness of \(G_{n,p}\) is a robust and productive assumption -- for example, alternative models of randomness based on
locality bring about very similar behaviors. Certain pairs of different brain areas can be connected by fibers of axons, in either or both directions, and this results in random _bipartite_ connectivity between the neurons in these areas.
It is well known that a large minority of neurons in the mammalian brain are _inhibitory_ -- or _GABA-ergic neurons_, as the most common type is called, or _interneurons_ -- and that inhibition serves two distinct functions: _Local inhibition_ establishes in each area excitatory-inhibitory balance (EI balance), keeping the number of spiking neurons to a fixed fraction and thus preventing seizures. Importantly, in nemo local inhibitory neurons are not modeled explicitly; their effect is captured by the _\(k\)-cap operation_ explained below.
It is assumed in nemo that _all neurons spike in synchrony_, in distinct time steps -- implicitly assumed to run at approximately 50 Hz in the brain. This is a necessary assumption for making nemo mathematically tractable (so its properties can be proved analytically) and susceptible to efficient simulation. This synchrony assumption is definitely unrealistic: It is well known in Neuroscience that neuron spiking is asynchronous. However, this assumption is _not distortive:_ It has been established through simulations of asynchronous neural models that the basic behaviors of the AC and nemo are maintained in those models, see for example Pokorny et al. (2019).
At each step, which neurons spike? It is assumed that, in each area, \(k\) of the area's \(n\) neurons fire, where \(k\) is a number much smaller than \(n\) -- think of it as the square root of \(n\). In particular, the \(k\) neurons that received the largest synaptic input from presynaptic neurons -- in the same area or in other areas -- are selected to spike. This is the \(k\)-cap operation (or \(k\)-winners-take-all), the mechanism through which the excitatory-inhibitory balance of each area, effected by its local inhibitory neurons, is captured. It is a productive simplification of the underlying process, in which the initial firing of many excitatory neurons excites the local inhibitory population (which reacts much faster than its excitatory counterpart); these inhibitory neurons fire and inhibit many of the excitatory neurons, so that in return fewer inhibitory neurons fire, in an oscillation that quickly converges to the excitatory-inhibitory balance modeled by the \(k\)-cap.
Finally, nemo features a simple version of _plasticity_. Plasticity, the ability of neural systems to incorporate the organism's experiences, mostly through changes in synaptic weights, is a fundamental characteristic of brains, considered the basis of all learning. There are many kinds of plasticity, and new kinds are discovered all the time; here we assume the most basic kind of Hebbian plasticity: if neuron \(i\) spikes at time \(t\), neuron \(j\) at time \(t+1\), and there is a synapse from \(i\) to \(j\), then the weight of this synapse, originally one, is multiplied by \(1+\beta\), where \(\beta>0\) is a plasticity parameter, typically \(5-10\%\). There are more complex and biologically accurate models of plasticity (such as STDP); however, simulations show that the simple Hebbian version adopted in nemo is not inaccurate in any essential way (Constantinides and Nassar, 2021).
We now have all the ingredients of nemo required to describe the _dynamical system_ that carries out brain computation. We start with a finite number of brain areas named \(A,B,\ldots\), any pair of which may or may not be connected to one another through a fiber. One area \(I\) is called the _input_ area; representations of stimuli in this area typically initiate the computation. Each area has \(n\) excitatory neurons, and at each step precisely \(k\) of these fire, selected through the \(k\)-cap operation. The neurons of each area are interconnected by a \(G_{n,p}\) directed graph of synapses, where \(p\) is a second parameter of the model (typically between \(0.001\) and \(0.01\)). To summarize, the equations of the dynamical system are as follows (a minimal simulation sketch follows the list):
* (State) the state of the system at time \(t\) consists of, for each neuron \(i\), a bit \(f_{i}^{t}\in\{0,1\}\) denoting whether or not \(i\) fires at time \(t\), and the synaptic weights \(w_{i,j}^{t}\) for all synapses \((i,j)\).
* (Synaptic input) \(I_{i}^{t}\), the synaptic input of neuron \(i\) at time \(t\), is computed as \(I_{i}^{t}=\sum_{(j,i)\in E\,:\,f_{j}^{t}=1}w_{j,i}^{t}\);
* (\(k\)-cap) for \(i\) in area \(A\), \(f_{i}^{t+1}=1\) if \(I_{i}^{t}\) is in the top-\(k\) of \(\{I_{j}^{t}\ :\ j\in A\}\);
* (Plasticity) for each synapse \((i,j)\), \(w_{i,j}^{t+1}=w_{i,j}^{t}(1+f_{i}^{t}f_{j}^{t+1}\beta)\).
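To make these update rules concrete, here is a minimal NumPy sketch of one nemo time step in a single recurrently connected area. The function and variable names, the parameter values, and the dense-matrix representation are ours for illustration only; a simulator at the scales discussed above would use sparse data structures.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_area(n, p):
    """Random G_{n,p} recurrent connectivity; all initial weights are 1."""
    mask = rng.random((n, n)) < p        # synapse (j, i) exists iff mask[j, i]
    np.fill_diagonal(mask, False)        # no self-synapses
    return mask, mask.astype(float)      # adjacency, weights

def step(fires, weights, mask, k, beta):
    """One nemo time step in one area: synaptic input, k-cap, plasticity."""
    # Synaptic input: I_i = sum of w_{j,i} over presynaptic j with f_j = 1
    synaptic_input = fires @ weights
    # k-cap: the k neurons with the largest input fire at t + 1
    new_fires = np.zeros_like(fires)
    new_fires[np.argsort(synaptic_input)[-k:]] = 1.0
    # Hebbian plasticity: w_{j,i} *= (1 + beta) iff j fired at t, i fires at t+1
    weights *= 1.0 + beta * mask * np.outer(fires, new_fires)
    return new_fires

n, k, p, beta = 1000, 31, 0.05, 0.1      # k is roughly sqrt(n)
mask, weights = make_area(n, p)
fires = np.zeros(n)
fires[rng.choice(n, size=k, replace=False)] = 1.0  # an initial stimulus k-cap
for _ in range(10):
    fires = step(fires, weights, mask, k, beta)
```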
Although not used explicitly in the main algorithm of this paper, our nemo has another type of _long-range interneurons_, or _LRIs_, a feature absent in the AC: LRIs are distinct populations of inhibitory neurons, extrinsic to the brain areas, which have _inhibitory_ synaptic connections to certain brain areas or other LRIs (all other synapses in nemo are excitatory), and have excitatory connections
_from_ certain brain areas. LRIs can be thought of as the _control elements_ of brain computation, and are crucial in making nemo a hardware language capable of universal computation. LRIs are well attested in the neuroscience literature (Jinno et al., 2007; Melzer et al., 2012); in particular, there is evidence that they are necessary for establishing the \(\gamma\) rhythm of the brain thought to be coterminous with brain computation (Roux et al., 2017). LRIs rectify a marked weakness of the AC: Computation in the Assembly Calculus is represented in Papadimitriou et al. (2020) by Python-like programs with variables, conditionals and loops. It is unclear how these AC programs have evolved or how they are deployed in development, where they are stored, or how they are loaded and interpreted in the brain. LRIs replace these programs by a simple and biologically plausible framework. We discuss their use in extensions and future directions of our model, particularly for syntax, in Section 5.
#### The power of the model
At first glance, nemo as described above appears to be extremely simple; however, powerful behaviors can be accomplished in this framework. One important example, studied in Papadimitriou et al. (2020), is called _projection_. Let \(A\) and \(B\) be two areas (with a fiber from \(A\) to \(B\)) and suppose there is a fixed set \(a\subset A\) of \(k\) neurons in \(A\) that fires into \(B\) at each time step. This setup is very simple and important: it models a _fixed stimulus_ firing into a brain area.
How does the system evolve? At \(t=1\), \(a\) fires, resulting in some \(k\)-cap set of neurons \(b_{1}\) in \(B\). At \(t=2\), \(a\) _and_ \(b_{1}\) both fire into \(B\), resulting in some other \(k\)-cap \(b_{2}\) in \(B\), and so on for \(t=3,4,\ldots\). A priori, it is not clear that the \(b_{1},b_{2},b_{3},\ldots\) converge, because as new neurons in \(B\) fire, they might recruit more new neurons in \(B\). Without plasticity (i.e., \(\beta=0\)), the \(b_{t}\) do not converge. However, as confirmed in both experiment and proof in Papadimitriou et al. (2020), for \(\beta>0\) the \(b_{t}\) do converge to a stable set \(b\subset B\). In particular, after some time step \(\tilde{t}\), firing \(a\) will reliably activate \(b\) (by activate we mean that \(b\) fires at the next time step), just as firing any reasonably-sized subset of \(b\) inside \(B\) also activates all of \(b\). Such a stable set of neurons is called an _assembly_ or _ensemble_, and the assembly \(b\) is called the _projection_ of \(a\) into \(B\).
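The convergence of projection is easy to observe numerically. The following self-contained sketch (our naming and illustrative parameters, not the authors' implementation) drives area \(B\) with a fixed stimulus from \(A\) and tracks how much of each \(k\)-cap survives into the next one; with \(\beta>0\) the overlaps approach 1, while with \(\beta=0\) they do not.

```python
import numpy as np

def project(n=1000, k=31, p=0.05, beta=0.1, steps=30, seed=0):
    """Fixed stimulus a in A fires into B at every step; track the k-caps b_t."""
    rng = np.random.default_rng(seed)
    mask_ab = rng.random((n, n)) < p            # random bipartite fibers A -> B
    mask_bb = rng.random((n, n)) < p            # recurrent synapses within B
    np.fill_diagonal(mask_bb, False)
    w_ab = mask_ab.astype(float)                # all synaptic weights start at 1
    w_bb = mask_bb.astype(float)
    a = np.zeros(n); a[:k] = 1.0                # the fixed stimulus assembly
    b = np.zeros(n)
    overlaps = []
    for _ in range(steps):
        inputs = a @ w_ab + b @ w_bb            # total synaptic input into B
        b_new = np.zeros(n)
        b_new[np.argsort(inputs)[-k:]] = 1.0    # k-cap in B
        # Hebbian updates on both the fiber and the recurrent synapses
        w_ab *= 1.0 + beta * mask_ab * np.outer(a, b_new)
        w_bb *= 1.0 + beta * mask_bb * np.outer(b, b_new)
        overlaps.append(float(b @ b_new) / k)   # fraction of b_t kept in b_{t+1}
        b = b_new
    return overlaps

print(project()[-5:])   # expected near 1.0 for beta > 0; churns for beta = 0
```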
There is a substantial consensus that highly interconnected sets of neurons that fire together (called assemblies, ensembles, engrams) are the fundamental unit underlying cognitive mechanisms Buzsaki (2010), and the Assembly Calculus was proposed as a model that explains and models the emergence and dynamics of assemblies. nemo has several other operations: (1) merge, which is the formation of an assembly in an area when _multiple_ areas fire into it, (2) reciprocal project, when two assemblies are connected into each other both ways, and (3) sequence formation, that is a chain of projections from \(A\) to \(B\) that memorizes the order of projection. While the model is certainly an abstraction of neuronal activity, it is based on sound neurobiological principles, and each of these operations is a plausible abstraction of complex neural processes that are thought to underlie cognition.
#### Language in the Brain
When it comes to language in the brain, much less is known with certainty than about neuron cellular dynamics; see (Kemmerer, 2015; Friederici, 2017; Brennan, 2022) for recent books on the subject. It is impossible to survey the entire field here; instead, we summarize the state of our knowledge of the language organ that is most pertinent to this work.
The language organ carries out two main functions: speech production and speech comprehension. There is a broad consensus that, in the systems responsible for both functions, there exist abstract representations for each word in the language within a centralized lexical area; this area can be thought of as a hub-like interface between the phonological subsystem and the semantic representations of each word. Though not uncontroversial, there is also growing evidence that these representations are _shared_ between production and comprehension systems -- they are believed to reside in the mid and mid-posterior MTG Indefrey and Levelt (2004). On the other hand, the _semantics_ of nouns and verbs are represented in a distributed way across many brain areas, many at the periphery of the motor, visual, and other sensory cortex, and the aforementioned word representations are richly connected to these areas Martin (2007); Kiefer and Pulvermuller (2012); Popham et al. (2021).
Nouns and verbs differ in some of the context areas with which they are most strongly connected. Parts of the motor cortex are much more strongly involved in the processing of verbs than that of nouns (namely the PLTC and the pSTS subarea), whereas a different part of the motor cortex is more active in the processing of nouns involving action (i.e., tools and limbs) than verbs; see Gennari
(2012) and Watson et al. (2013) for surveys. Furthermore, areas of the motor cortex that are activated in response to perceiving someone _else_ perform an action, known as _mirror cells_, are activated much more for verbs than for nouns; for a review on motor and mirror area recruitment for verbs vis-a-vis nouns, see Kemmerer and Gonzalez-Castillo (2010); Fernandino and Iacoboni (2010). In addition to involving different context areas, it is also known that there is a separation between the systems for the perception and generation of nouns and those of verbs, and there is growing evidence that this may, at least partly, be because noun and verb lexical representations reside in different subparts of the mid and mid-posterior MTG, that is, the lexical area Matzig et al. (2009); Vigliocco et al. (2011); Kemmerer et al. (2012). Our language organ model will reflect these principles by having separate lexical areas for nouns and verbs, and featuring context areas that are connected _exclusively_ to each of the noun and verb areas -- in addition to many shared context areas.
Neurolinguists also strongly suspect that there is an abstract phonological representation for each word (in an area called Spt) that is connected to the word's centralized lexical representation, and that the same representation is used in both perception and production Hickok et al. (2003); Okada and Hickok (2006). This representation can be thought of as containing implicit representations of sequences of phonemes and interfacing to the sensorimotor subsystems for the perception and production of these words. In our work, we abstract away phonological processing and acquisition, and will have a special phonological input/output area that is shared by production and perception.
To summarize, we have the following simplified picture of language in the brain: each word has a root representation in a lexical hub area, likely within different sub-areas for nouns and verbs, which is connected to a phonological representation of the word -- representations which are used both for recognizing and for articulating words. The lexical hubs are richly connected to many sensory and semantic areas across the brain through which the many complex shades of meaning and nuances of a word are represented; crucially, nouns and verbs have strong connections to different context areas.
#### Psycholinguistic theories
The most important comparisons to our work are the existing psycho- and neurolinguistic models of language processing. Among the most influential and established theories are the Lemma Model for production Levelt et al. (1999), the Dual Stream Model of language perception Hickok and Poeppel (2007), and the Hub-and-Spoke model for the semantic representation of words Ralph (2014). While there is much debate regarding in which ways these models can be combined, they all have in common the basic consensuses, or near-consensuses, outlined in the previous section. One important contribution of our work is that it constitutes a _concrete, neuronal implementation_ of the underlying common core of these three mainstream models of language processing. That is, whereas at the highest level the Lemma Model posits the existence of word lemmas connected to phonological representations and a hub-and-spoke like semantic network, and the Dual Stream Model predicts a lexical interface between the integration of phonological input and the semantic and syntactic features of a word, our work _fully implements_, in terms of realistic stylized neurons, the basic underlying mechanisms of these models. Importantly, our model explains how the _lexical representations_ common to these models can be acquired from grounded input.
A toy language. We will shortly define a language organ in nemo that will learn from sentences of a toy language with \(l\) nouns and \(l\) intransitive verbs, where \(l\) is a small parameter that we vary in our experiments. We denote the combined lexicon as \(L\). In this language, all sentences are of length two: "cats jump" and "dogs eat." Importantly -- and this is the hard part of our experiment -- the language can have either SV (subject-verb) principal word order (as in English, Chinese and Swahili) or VS (as in Irish, Classical Arabic, and Tagalog), and our model should succeed in either scenario.
## 3 The Language Organ
Our language organ, denoted \(\mathcal{O}\), consists of two separate _lexical areas_ for nouns and verbs, \(\textsc{Lex}_{\textsc{N}}\) and \(\textsc{Lex}_{\textsc{V}}\), and an area Phon containing the phonological representations of words. It also has several _context_ areas: Visual and Motor are the two basic ones, but there are several others which we denote \(\textsc{Context}_{i}\) for \(i\in[C]\). (\(C\), the number of additional context areas, is a parameter of the model; here \(C=10\).) Phon is connected through fibers with \(\textsc{Lex}_{\textsc{N}}\) and \(\textsc{Lex}_{\textsc{V}}\), whereas Visual is connected with \(\textsc{Lex}_{\textsc{N}}\), and Motor with \(\textsc{Lex}_{\textsc{V}}\). All other context areas \(\textsc{Context}_{i}\) are connected
to both \(\textsc{Lex}_{\textsc{N}}\) and \(\textsc{Lex}_{\textsc{V}}\); all these connections are two-way (see Figure 1). For each word \(W\), we additionally pre-select a random subset of \([C]\), representing which extra context areas are implicated for the word \(W\) (for instance an olfactory area for \(W=\) _flower_, an emotional affect area for _hug_, and so on). In our experiment, this set has only one element, denoted \(i[W]\).
Hearing each word \(W\) by the learner is modeled as the activation of a unique corresponding assembly \(\textsc{Phon}[W]\) for that word in \(\textsc{Phon}\) for the duration of the perception of the word, that is, for \(\tau\) time steps, where \(\tau\) is another parameter of the model. We further assume that our input is _grounded_: whenever a noun \(W\in L\) is heard it is also seen -- that is, an assembly corresponding to the static visual _perception_ of the object (cat, dog, mom, etc.) is active in Visual, denoted \(\textsc{Visual}[W]\). Similarly, for a verb \(W\in L\), an assembly corresponding to the intransitive action (jump, run, eat, etc.) is active in Motor, denoted \(\textsc{Motor}[W]\). (_These areas represent the union of the differing somatosensory cortical areas feeding into nouns and verbs covered in Section 2._) We also activate an assembly \(\textsc{Context}_{i[W]}[W]\) in the extra context area corresponding to \(W\). Importantly, the assemblies in the contextual areas (\(\textsc{Visual}\), \(\textsc{Motor}\) and the \(\textsc{Context}_{i}\)) are activated throughout the perception of the entire sentence (that is, for \(\tau\times|s|\) steps, where \(|s|\) is the length of the sentence \(s\)), _not_ just when the corresponding word is perceived. This corresponds to the fact that the learner perceives the sentence as a whole, associated with the world-state perceived at that moment through shared attention with the tutor.
Effectively, the above means that in our experiment, whereas \(\textsc{Lex}_{\textsc{N}}\) and \(\textsc{Lex}_{\textsc{V}}\) are pristine _tabulae rasae_, areas with random connectivity devoid of special structure, \(\textsc{Phon}\) is pre-initialized with assemblies for each word in the lexicon; \(\textsc{Visual}\) has assemblies for each noun, as does Motor for each verb. This reflects that we seek to model the acquisition of highly grounded, core lexical items, and are abstracting away phonological acquisition -- which is of course a highly interesting direction in its own right. These lexical items are acquired before more abstract nouns and verbs (such as _peace_ and _explain_) that may require a variant of this representation scheme. We are confident that appropriate extensions of our basic model will handle abstract language -- see Section 5 for a discussion of this and other extensions.
To summarize, a sentence \(s=W_{1}W_{2}\) of our language in the SV setting (the VS setting is analogous) is input into \(\mathcal{O}\) as follows: the corresponding assemblies in all the context areas, that is \(\textsc{Visual}[W_{1}]\), \(\textsc{Motor}[W_{2}]\), \(\textsc{Context}_{i[W_{1}]}\) and \(\textsc{Context}_{i[W_{2}]}\) fire for \(t-1\in[2\times\tau]\), while \(\textsc{Phon}[W_{1}]\) fires for \(t-1\in[\tau]\), and then \(\textsc{Phon}[W_{2}]\) fires for \(t-\tau-1\in[\tau]\). We will denote these steps of the dynamical system by the shorthand \(\textsc{Feed}(s)\).
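A schematic rendering of this input protocol, with a hypothetical `organ` object standing in for the full dynamical system (its `activate` and `step` methods are our inventions for illustration, not part of nemo):

```python
def feed(sentence, tau, organ):
    """Present a two-word sentence (SV order) to the organ, mirroring Feed(s).

    `organ` is a hypothetical stand-in: organ.activate(area, word) clamps the
    corresponding assembly so that it fires at the next step, and
    organ.step() advances the nemo dynamical system by one time step.
    """
    w1, w2 = sentence                     # noun, verb in the SV setting
    for t in range(2 * tau):
        organ.activate("Visual", w1)      # context assemblies fire throughout
        organ.activate("Motor", w2)
        organ.activate("Context", w1)     # i.e., the area Context_{i[w1]}
        organ.activate("Context", w2)     # i.e., the area Context_{i[w2]}
        organ.activate("Phon", w1 if t < tau else w2)  # Phon[W1], then Phon[W2]
        organ.step()
```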
Figure 1: The architecture of the language organ \(\mathcal{O}\) in the nemo model of neuronal computation. This example shows the state of a trained \(\mathcal{O}\) after hearing the word _dog_ in a grounded setting when the listener also sees a dog jumping (this could be part of a sentence like "the dog jumps"). The corresponding assemblies are active in Visual (the image of a dog) and Motor (the action of jumping); assemblies can also be active in \(\textsc{Context}_{i}\) areas, representing additional semantic contextual stimuli such as an emotional affect.
## 4 The learning experiment
We first select the parameters \(n,k,p,\beta\) (which may vary across different areas); \(l\) (the lexicon size), \(\tau\) (how many times each word fires), and \(C\) (the number of extra context areas). To _train_\(\mathcal{O}\), we generate random sentences \(s_{1},s_{2},\ldots\) in our toy language, executing Feed(\(s_{i}\)) for each \(s_{i}\).
Our experiments reveal that, for varying settings of the parameters (such as \(n=10^{6},k=10^{3},\beta=0.1,l=5,\tau=2\)), after some number of training sentences the model accomplishes something interesting and nontrivial, and _necessary for language acquisition:_ it forms assemblies for nouns in Lex\({}_{\text{N}}\) but not in Lex\({}_{\text{V}}\), assemblies for verbs in Lex\({}_{\text{V}}\) but not in Lex\({}_{\text{N}}\)1, and in addition, the assemblies in these areas are reliably connected to each word's corresponding assemblies in Phon, Motor, and Visual, and also reasonably well connected to the other context areas. In other words, and in a concrete sense, the model has learned which words are nouns and verbs, and has formed correct semantic representations of each word.
Footnote 1: To see why this is highly nontrivial, the reader is reminded that this is done in the absence of knowledge of whether, in the language being learned, subject precedes verb or the other way around.
We say that an experiment _succeeded_ after \(m\) training sentences if we have that for each word \(W\in L\), the resulting _synaptic weights_ of \(\mathcal{O}\) satisfy properties \(P\) and \(Q\). Property \(P\) captures a kind of _production_ ability -- that is, ability to go from semantic representations to phonological form, much like the mapping from lemma to lexeme in psycholinguistics; properties \(Q\) guarantee that a stable representation for each word is formed in the word's correct area -- Lex\({}_{\text{N}}\) or Lex\({}_{\text{V}}\) -- and not in the other area.
We start by defining the \(P\) property: A noun (respectively, verb) \(W\) satisfies property \(P\) if firing Visual\([W]\) (resp. Motor\([W]\)) and \(\textsc{Context}_{i[W]}\) activates, via Lex\({}_{\text{N}}\) (resp. Lex\({}_{\text{V}}\)), almost all of the representation Phon\([W]\); in our tests, we define "almost" as at least 75% of the cells in that assembly. We say the experiment satisfies \(P\) if every word satisfies property \(P\).
For the \(Q\) properties, suppose \(W\) is a noun and that Phon\([W]\) fires once. Let \(\nu\) be the resulting \(k\)-cap in Lex\({}_{\text{N}}\), and \(\mu\) the resulting \(k\)-cap in Lex\({}_{\text{V}}\). The properties \(Q_{i}\) are defined as follows.
1. \(Q_{1}\): the synaptic input into \(\nu\) is _greater_ than that into \(\mu\) by a factor of two.
2. \(Q_{2}\): if we fire \(\nu\), it activates Phon\([W]\) and Visual\([W]\); whereas if we fire \(\mu\), it does not activate any of the predefined assemblies in Phon or Motor.
3. \(Q_{3}\): if we fire \(\nu\), it activates \(\nu\) within Lex\({}_{\text{N}}\) itself; whereas if we fire \(\mu\), the next \(k\)-cap in Lex\({}_{\text{V}}\) has small overlap with \(\mu\) (less than 50%).
If \(W\) is a verb, the \(Q_{i}\) are defined as above but swapping noun with verb, and Motor with Visual. Intuitively, the \(Q_{i}\) capture that _a stable hub representation of each word has been formed in the correct part-of-speech lexical area for that word_. The experiment satisfies \(Q\) if every word satisfies the \(Q_{i}\).
**Results.** We run our nemo-based language organ with a variety of parameters, feeding it random sentences until success, that is, until \(P\) and \(Q\) are satisfied, and report the resulting training time. Despite representing a dynamical system of _millions of neurons and synapses_, the system converges and yields stable representations (satisfying \(P\) and \(Q\)) for reasonable settings of the parameters.
The results are summarized in Figure 2, where we see that the number of training sentences grows roughly linearly with the lexicon, or number of words acquired. While the number of training sentences may appear somewhat large, there are a few points to keep in mind. Our model describes the acquisition of one's "first words", the most contextually rich and consistent, for which 10-20 overhead sentences per word does not seem unrealistic. Furthermore, to our knowledge ours is the first simulation of a non-trivial part of language acquisition performed entirely in a bioplausible model of neurons and synapses. Nevertheless, reducing the number of training sentences is a crucial goal of this line of research: we propose a heuristic for this in the following subsection, and discuss ideas for future research in Section 5. We also experimented with running the model with varying \(\beta\) (the plasticity parameter), revealing that the convergence to stable representations accelerates roughly exponentially with increasing \(\beta\). In experiments with or without extra context areas, the training time remains roughly the same. See Figure 2 for details.
### Individual word tutoring
Our model is able to learn word semantics from _full_ sentences, without ever being presented isolated words. While it is known that children can acquire language in this way, in our experiments the number of sentences required is rather large, and scales linearly with the size of the lexicon. An important problem for our theory is to understand how to reduce this size, especially to model later stages of acquisition, since humans acquire language from small amounts of data. We believe this is done two ways: At the early stages with _individual word tutoring_, and at later stages through _functional words_ (see the next section). To test individual word tutoring, after every fixed number of sentences we randomly select a single word \(W\in L\) and fire \(\textsc{Phon}[W]\) and its contextual areas for some \(\tau\) time-steps. We find that this greatly decreases the total training _time_. In particular, at early stages of acquisition, individual word tutoring reduced the training time by over 40%.
Figure 2: Results of our experiments. In (a) the learning experiment of Section 4 is performed for varying sizes of the lexicon, revealing a linear trend (\(n=10^{5},p=0.05,\beta=0.06,k_{\textsc{LEX}_{\textsc{N}}}=k_{\textsc{LEX}_{\textsc{V}}}=50,k_{\textsc{Context}_{i}}=20\), other areas \(k=100\), \(C=20\) and \(\tau=2\)). In (b) the learning experiment is repeated for varying \(\beta\) and \(C=0\), always for a lexicon of size \(4\). In (c) we run learning experiments as in (a) (green) along with two variants, one in which a round of individual word tutoring is performed after every \(2\) random sentences (blue), and another after every \(5\) random sentences (green): individual word tutoring decreases training time significantly, particularly when a smaller set of words is taught at a given time. (a)-(c) were performed with both NV and VN word orders with similar results; NV results are shown here and both are available in the supplementary data. Each experiment is repeated 5 times; means and standard deviations are reported.
## 5 Future Work
Multilinguality. We believe that our model can be extended to handle _multilinguality_ by adding an additional area Lang, connected into \(\textsc{Lex}_{\textsc{N}}\) and \(\textsc{Lex}_{\textsc{V}}\). Like the contextual areas, Lang would have several assemblies, one for every language the multilingual child is exposed to, with strong input into \(\textsc{Lex}_{\textsc{N}}\) and \(\textsc{Lex}_{\textsc{V}}\). For learning to succeed in the sense of Section 4, separate assemblies for each concept in each language must form in the lexical areas; we expect that this will require more training time -- reflecting the fact that multilingual children may begin to speak later than monolingual children (Hambly et al., 2013).
Functional words and faster learning. Functional words are words in closed lexical classes that have limited semantic content but have important syntactic roles (such as English prepositions, determiners, etc.); more broadly, functional categories include morphemes and inflectional paradigms of this type (e.g. the possessive marker "'s", the adverbializer "-ly" and so on). Functional categories are somewhat of a paradox: cross-linguistically, children begin to accurately _produce_ them much later than lexical words (verbs and nouns), but in recent decades, an explosion in language acquisition research has come to establish that young children are extremely sensitive to them, likely forming representations of them well before they can produce them, and utilizing them in many ways: to aid understanding, for learning lexical items (a word that follows "the" is likely to be a noun), and for bootstrapping syntax (Dye et al., 2018).
An important open problem is handling functional words, and, possibly, using them to accelerate word acquisition (reducing the learning times of Section 4, particularly important for modeling words with less contextual consistency). As a starting point, suppose we extend our language to have a mandatory article "a" before every noun (with no semantic content); that is, in the NV version of our language, every sentence has the form "a Noun Verb". Extending the model to acquire "a" (perhaps as a representation in an area for functional words, Func) is an important goal; then, it can be used to quickly identify any following word as a noun (i.e., forming an initial representation in \(\textsc{Lex}_{\textsc{N}}\)).
Abstract words and contextual ambiguity. Currently, our model of grounded context is rather simplistic: we assume only _object nouns_ and _action verbs_, we have two areas that are specific to each kind of input, and several other unspecified contextual areas that fire randomly when we hear the word. Eventually, we would like to be able to handle abstract words like "disagreement" and "aspire". Extending our model, in particular its representation of semantics, to handle such words is one of our main future directions.
Generation and Syntax. _Perhaps the most important direction left open by our work is syntax._ As a first step, we want the model to learn whether the toy language has NV or VN order. Concretely, this would entail the following experiment: after exposure to some number of random sentences (as in the current model), we can _generate_ sentences by activating the assemblies in contextual areas corresponding to every word in the sentence, and, letting the dynamical system run, it will fire the assemblies in Phon in the correct order of the language (NV or VN). This itself is but a small piece of syntax; transitive verbs and objects would be the next step, which we believe can be carried out by modest, and hardly qualitative, extensions of our setup and methods.
## 6 Conclusion
We have defined and implemented a dynamical system, composed of millions of simulated neurons and synapses in a realistic but tractable mathematical model of the brain, and in line with what is known about language in the brain at a high level, that is capable of learning representations of words from grounded language input. We believe this is a first and crucial step in neurally plausible modeling of the language organ and of language acquisition. We have outlined a number of future directions of research, within the reach of our approach, that are necessary for a complete theory of language in the brain.
|
2310.06398 | On the sub-parsec scale core composition of FR 0 radio galaxies | Although Fanaroff-Riley (FR) type 0 radio galaxies are known to be the most
numerous jet population in the local Universe, they are much less explored than
the well-established class of FR I and FR II galaxies due to their intrinsic
weakness. Observationally, their nuclear radio, optical and X-ray properties
are comparable to the nuclear environment of FR Is. The recent detection of two
FR 0s in the high-energy band suggests that like in FR Is, charged particles
are accelerated there to energies that enable gamma-ray production. Up to now,
only the lack of extended radio emission from FR 0s distinguishes them from FR
Is. By comparing the spectral energy distribution of FR 0s with that of FR Is
and in particular with that of M87 as a well-studied reference source of the FR
I population, we find the broadband spectrum of FR 0s exceptionally close to
M87's quiet core emission. Relying on that similarity, we apply a
lepto-hadronic jet-accretion flow model to FR 0s. This model is able to explain
the broadband spectral energy distribution, with parameters close to
particle-field equipartition and matching all observational constraints. In
this framework, FR 0s are multi-messenger jet sources, with a nature and highly
magnetized environment similar to that of the naked quiet core of FR Is. | Margot Boughelilba, Anita Reimer | 2023-10-10T08:06:00Z | http://arxiv.org/abs/2310.06398v1 | # On the sub-parsec scale core composition of FR 0 radio galaxies
###### Abstract
Although Fanaroff-Riley (FR) type 0 radio galaxies are known to be the most numerous jet population in the local Universe, they are much less explored than the well-established class of FR I and FR II galaxies due to their intrinsic weakness. Observationally, their nuclear radio, optical and X-ray properties are comparable to the nuclear environment of FR Is. The recent detection of two FR 0s in the high-energy band suggests that like in FR Is, charged particles are accelerated there to energies that enable gamma-ray production. Up to now, only the lack of extended radio emission from FR 0s distinguishes them from FR Is. By comparing the spectral energy distribution of FR 0s with that of FR Is and in particular with that of M87 as a well-studied reference source of the FR I population, we find the broadband spectrum of FR 0s exceptionally close to M87's quiet core emission. Relying on that similarity, we apply a lepto-hadronic jet-accretion flow model to FR 0s. This model is able to explain the broadband spectral energy distribution, with parameters close to particle-field equipartition and matching all observational constraints. In this framework, FR 0s are multi-messenger jet sources, with a nature and highly magnetized environment similar to that of the naked quiet core of FR Is.
Margot Boughelilba
Anita Reimer
## 1 Introduction
Following the Unified Model for Radio-Loud Active Galactic Nuclei (AGN), radio galaxies have their jets misaligned with the line of sight (Urry and Padovani, 1995). For that reason, radio galaxies form the dominant jetted AGN population. Because of this misalignment, the Doppler boosting enhancing the observed flux is small; hence, only a few sources have so far been detected in the gamma-ray band (see, e.g. Ajello et al., 2022; H. E. S. S. Collaboration et al., 2018; MAGIC Collaboration et al., 2018). Blazars, on the other hand, with their jets pointing towards Earth, are brighter, but also rarer.
Based on their extended radio morphology, radio galaxies are usually classified as either faint edge-darkened Fanaroff-Riley type I (FR I) or bright edge-brightened type II (FR II) galaxies. The low-power FR Is are often linked to radiatively inefficient accretion flows, while the more powerful FR IIs are usually associated with more efficient accretion. Recently, a new type of radio galaxy has emerged, named FR 0 galaxies (Baldi et al., 2018). From the radio perspective, FR 0s are similar to FR Is, except for the lack of extended emission (i.e. on a kiloparsec scale). The optical properties of
FR 0s are comparable to FR Is, as they are also located in red massive early-type galaxies, and are classified as Low-Excitation Radio Galaxies from a spectroscopic point of view. An X-ray study of a subsample of FR 0s (Torresi et al., 2018) showed that FR 0s have a comparable X-ray luminosity to FR Is in the \(2-10\) keV band, confirming the similarity of the nuclear properties of the two classes. This study also indicates low Eddington-scaled luminosities, hinting towards radiatively inefficient accretion. In the high-energy domain, the detection of gamma rays from two of them (namely LEDA 55267 and LEDA 58287) has recently been reported (Paliya, 2021) (a third source is mentioned in that paper but it has been removed from the FR0CAT, see Baldi et al., 2019). The stacking analysis of the Fermi-LAT data in Paliya (2021) shows that the whole population could be considered as a gamma-ray-emitting class. Previously, Grandi et al. (2016) reported the first association of one FR 0, Tol 1326-379, with a gamma-ray source in the Fermi 3FGL catalogue (Acero et al., 2015). The 4FGL source catalogue (Abdollahi et al., 2020), however, reports no gamma-ray counterpart associated with Tol 1326-379, and it is unclear whether this FR 0 is a gamma-ray emitter or not (see, e.g. Fu et al., 2022). As of now, \(\gtrsim 100\) FR 0s (FR0CAT, Baldi et al., 2018; Torresi et al., 2018) have been collected, sharing the following properties: they reside at redshift \(z\lesssim 0.05\), the radio sources are located at most \(2^{\prime\prime}\) from the optical centre, and they have a minimum FIRST flux of 5 mJy at 1.4 GHz. With these properties, FR 0s are shown to be on the order of \(\sim 5\) times more numerous than the FR I radio galaxies in the local Universe, which makes them the dominating jet population there (Baldi & Capetti, 2009, 2010).
Several hypotheses have been proposed so far to explain the lack of extended radio emission from FR 0s. Evolutionary models consider FR 0s as young sources that evolve into more extended sources. These models are, however, ruled out due to the distribution of radio sizes in the sample (Baldi et al., 2019). Alternatively, Garofalo et al. (2010) discussed the impact of the spin of the SMBH on the power of the associated jets. In this view, FR 0s have been proposed as being driven by a prograde, low-spin SMBH (Garofalo & Singh, 2019), with most of them not reaching spin values for which non-negligible jets are inferred.
Another approach to gain insight into the true nature of this jet population is linked to their broadband spectral energy distribution (SED). In a recent work, Merten et al. (2021) compiled an average SED of FR 0s to collect information on their radiative environment. Here, we compare for the first time the broadband emission of FR 0s to that of FR Is, and in particular to M87 as one of the most extensively studied archetypal FR I galaxies. M87 has been deeply studied, both in its quiet, steady state and in its flaring state. In particular, in 2017, a multi-wavelength campaign focused on the quiet core emission of M87, providing constraints on the core magnetic field, the emission region and the jet properties of M87 (EHT MWL Science Working Group et al., 2021; Event Horizon Telescope Collaboration et al., 2021).
Section 2 presents the SED data we collect and discusses the implications taken from the comparison of FR 0s and FR Is. These motivate a model setup for the core region of FR 0s that we describe in Section 3. Section 4 presents the results of our broadband modelling of FR 0s. Our conclusion from this study is discussed in Section 5.
## 2 SED data

To build the broadband SED of a sample of 114 FR 0s we collected their available data from the NASA/IPAC Extragalactic Database (NED) 20191, following the method described in Merten et al. (2021). 104 sources are taken from the FR0CAT (Baldi et al., 2018) (note that 4 of the sources included in the original catalogue have been removed since then, see Baldi et al. (2019) for more details). The 10 additional sources come from a sample of 19 FR 0s studied in the X-ray band (Torresi et al., 2018), among which 11 were not in the FR0CAT. From these 11 sources, we removed J004150.47-0, which is mentioned to be at the centre of its cluster (Abell 85, see Torresi et al., 2018), to avoid flux contamination from the cluster. For these 10 sources, additional observational data from the SSDC SED builder2 were collected. We only use X-ray data if taken with the Neil Gehrels _Swift_ Observatory, _XMM_-Newton or _Chandra_ telescopes; observations from instruments with a larger angular resolution are discarded to avoid flux contamination from the sources' surroundings. Most Chandra data are taken from the Chandra Point Source Catalog 2.0.1 3 (Evans et al., 2020), where we use the _flux_aper90_ fluxes (i.e. the reported fluxes represent the background-subtracted fluxes in the modified elliptical aperture). In order to have the most complete data collection available for the two individually gamma-ray detected sources, we built _Swift_-XRT spectra using the online tool4. For LEDA 55267, we used XSPEC (Arnaud, 1996) to have a binning of 20 counts per bin to present the data. The gamma-ray data of LEDA 55267 (SDSS J153016.15+270551.0) and LEDA 58287 (SDSS J162846.13+252940.9) are taken from Paliya (2021). As mentioned in Section 1, it is unclear if Tol 1326-379 is a gamma-ray emitter, since no association is reported in the 4FGL catalogue (Abdollahi et al., 2020). Nevertheless, for completeness, the SED of Tol 1326-379 is included in Figure 1, with the high-energy emission butterfly representation for an integral photon flux \(F_{>1\rm GeV}=(3.1\pm 0.8)\times 10^{-10}\,\rm phot\,cm^{-2}\,s^{-1}\) and a spectral index \(\Gamma=2.78\pm 0.14\), taken from Grandi et al. (2016). Its high-energy slope is much steeper than that of other gamma-ray emitting FR 0s. Fu et al. (2022), however, show that Tol 1326-379 could be associated with 4FGL J1331.0-3818, with a gamma-ray flux compatible with the other two gamma-ray-detected sources LEDA 55267 and LEDA 58287, although the association remains ambiguous. For these reasons, we do not consider Tol 1326-379 as a gamma-ray emitting source.
Footnote 1: [https://ned.ipac.caltech.edu/](https://ned.ipac.caltech.edu/)
Footnote 2: [https://tools.ssdc.asi.it/SED/](https://tools.ssdc.asi.it/SED/)
Footnote 3: [https://cxc.cfa.harvard.edu/csc/](https://cxc.cfa.harvard.edu/csc/)
Footnote 4: [https://www.swift.ac.uk/user_objects/](https://www.swift.ac.uk/user_objects/)
We also collected data for 216 FR I sources listed in the FRICAT (Capetti et al., 2017) in the same way. Three sources (namely FRICAT 1053+4929, FRICAT 1428+4240, and FRICAT 1518+0613) are also listed as low-luminosity BL Lac Objects (Capetti et al., 2017; Capetti and Raiteri, 2015) and are therefore not included in our sample.
The multi-wavelength observations of M87's quiet core emission taken in 2017 (EHT MWL Science Working Group et al., 2021) are included, in order to compare the broadband SED of the two classes to a typical FR I source in a quiet state. The flaring states of M87 derived by H.E.S.S in 2005 (Aharonian et al., 2006), MAGIC in 2008 (Albert et al., 2008) and VERITAS in 2007 and 2010 (Acciari et al., 2008; Aliu et al., 2012) are also used for the comparison. We use the average fitted values and uncertainties reported all together in MAGIC Collaboration et al. (2020). The list of sources used in this work is shown in Table 3 in the Appendix.
Figure 1 shows the resulting broadband SEDs of FR 0s as compared to those of M87 and all the other FR Is, all scaled to the mean distance of FR 0s (i.e. \(z\sim 0.04\)). Obviously, FR 0s and FR Is show a very similar spectrum, as expected from the observations in the wavebands discussed in Section
1. The flaring-state SED of M87 (defined here in contrast to the quiet state shown in red in Figure 1, and including most of the observations shown in blue) unsurprisingly follows the FR I trend. The lack of radio emission from FR 0s as compared to FR Is is apparent below \(10^{11}\) Hz. What stands out in this comparison is the extreme similarity between M87's quiet core emission and the spectral behaviour of FR 0s, at all wavelengths.
Contrary to FR 0s, M87 has been extensively studied. Taking the above-highlighted similarity to be more than a chance coincidence motivates applying the knowledge deduced from M87's core observations to gain a deeper, model-based understanding of the FR 0s. The quiet core study of M87 infers a magnetic field strength of order \(1-30\,\)G near the core, as well as the presence of an
Figure 1: Broadband SED of FR Is (grey dots), FR 0s (green dots and green butterfly), M87 in its 2017 quiet state (red stars). The blue star symbols are the SED of M87 with all the observations available in the NED. The butterfly plots in the very-high-energy gamma-ray range are the power-law spectra fitted to observations of the flaring state of M87 with H.E.S.S in 2005 (pink region), VERITAS in 2007 and 2010 (purple and brown regions respectively) and with MAGIC in 2008 (cyan region). The SED of Tol 1326-379 is shown in yellow, including the butterfly plot at high-energy gamma rays, using the values derived by Grandi et al. (2016). The fluxes from FR Is have been rescaled from their mean distance to the mean distance of FR 0s (i.e. from a luminosity distance \(d_{\rm L}\sim 1.5\times 10^{27}\) cm to \(d_{\rm L}\sim 5.4\times 10^{26}\) cm). In the same way, M87 was rescaled to the mean distance of FR 0s. The data behind this Figure are available in the online version at the journal webpage. The package contains three FITS table files, a Python script, and a ReadMe. Included are the SED tables for the 114 FR 0s, the 216 FR Is, and M87. The script can be used to read the data files. A list of all the sources is given in the Appendix Table 3.
advection-dominated accretion flow (Event Horizon Telescope Collaboration et al., 2021). Applying such values for the magnetic field to the simplest jet emission model, a one-zone Synchrotron Self-Compton (SSC) model, would result in a synchrotron-dominated SED, with a Compton-dominance \((\nu_{\rm comp}L_{\nu,{\rm comp}})/(\nu_{\rm syn}L_{\nu,{\rm syn}})\ll 1\), where \(\nu_{\rm syn}\), \(\nu_{\rm comp}\) are the synchrotron and Compton peak frequencies respectively, and \(L_{\nu,{\rm syn}}\), \(L_{\nu,{\rm comp}}\) the corresponding spectral luminosities at those peak energies, see Tavecchio et al. (1998); EHT MWL Science Working Group et al. (2021). The observed high-energy gamma rays from FR 0s would then have to originate in an emission region further down in the jet. Baldi et al. (2019), however, disfavour the large-scale origin of the high-energy radiation from FR 0s.
A model that reproduces the radio-to-gamma-ray quiet core emission of M87 in a one-zone setup was proposed by Boughelilba et al. (2022). This model focuses on the central region of the AGN, with a jet emission region of a few gravitational radii. Given the compactness of the FR 0s and the SED similarities, we explore here the same type of model for the FR 0 source class. In this model, the high-energy data are explained by the emission of protons, radiating in a high magnetic field. The model also accounts for the accretion flow that is expected in such low-luminosity objects.
## 3 Model
### Jet
In this paper, we follow the same approach as in Boughelilba et al. (2022). We consider a continuous cylindrical jet of radius \(R^{\prime}_{\rm em}\) and proper length \(l^{\prime}=\Gamma_{\rm j}l\), with \(l\) being the observed length. We assume that the emission region contains primary relativistic electrons and protons that are isotropically and homogeneously distributed in the comoving jet frame, and follow a power-law energy spectrum cutting off exponentially, such that the spectral number density \(n^{\prime}_{e,p}(E^{\prime})\propto E^{\prime-p_{e,p}}e^{-E^{\prime}/E^{ \prime}_{\rm max,e,p}}\) cm\({}^{-3}\), for \(E^{\prime}\geq E^{\prime}_{\rm min,e,p}\) (where e,p denotes the electrons or the protons, respectively).
The primary particles are continuously injected into the emission region at a rate \(q_{i}\) (cm\({}^{-3}\)s\({}^{-1}\)), where they experience energy losses caused by various interactions. Specifically, we consider photo-meson production, Bethe-Heitler pair production, inverse-Compton scattering, \(\gamma\)-\(\gamma\) pair production, decay of all unstable particles, synchrotron radiation (from electrons and positrons, protons, and \(\pi^{\pm}\), \(\mu^{\pm}\) and \(K^{\pm}\) before their respective decays), and particle escape at a rate \(\propto c/R^{\prime}_{\rm em}\). Positrons are treated the same way as electrons. Hence, in the following we will use electrons to refer to the two populations irrespective of their type.
To compute the time-dependent direct emission and cascade component from the jet's particles, we use a particle and radiation transport code (see, e.g. Reimer et al. (2019)) that is based on the matrix multiplication method described in Protheroe and Stanev (1993) and Protheroe and Johnson (1996). The interaction rates and secondary particles' and photons' yields are calculated by Monte Carlo event generator simulations (except for synchrotron radiation, for which they are calculated semi-analytically). These are then used to create transfer matrices that describe how each particle spectrum changes after a given timestep \(\delta t\). To ensure numerical stability, we set \(\delta t\) equal to the smallest interaction time for any given simulation. In each timestep, energy conservation is verified. The steady-state spectra are calculated by running the simulation until convergence is reached, defined here as \(|F_{\nu}(t+\delta t)/F_{\nu}(t)-1|<10^{-3}\).
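In skeletal form, the iteration could look as follows (our naming; all of the physics lives in the precomputed transfer matrices, which we do not model here):

```python
import numpy as np

def evolve_to_steady_state(n0, transfer, source, tol=1e-3, max_steps=1_000_000):
    """Iterate the matrix-multiplication transport scheme to a steady state.

    n0       -- particle and photon spectra stacked into one vector of energy bins
    transfer -- matrix encoding, per timestep delta_t, the energy losses,
                secondary yields and escape (in the actual code these entries
                come from Monte Carlo event generators and semi-analytic
                synchrotron calculations)
    source   -- injection added in every timestep
    The stopping rule mirrors the criterion in the text:
    |F(t + delta_t) / F(t) - 1| < 1e-3 in every populated bin.
    """
    n = np.asarray(n0, dtype=float)
    for _ in range(max_steps):
        n_next = transfer @ n + source
        filled = n > 0
        if filled.any() and np.all(np.abs(n_next[filled] / n[filled] - 1.0) < tol):
            return n_next
        n = n_next
    raise RuntimeError("no steady state within max_steps")
```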
### Accretion flow

Low-luminosity AGNs are expected to host accretion flows in a radiatively inefficient state. This is characterised by the formation of geometrically thick, optically thin, very hot accretion flows, called Advection-Dominated Accretion Flows (ADAFs, introduced by Rees et al., 1982; Ichimaru, 1977 and further developed by e.g. Narayan and Yi, 1995; Abramowicz et al., 1995). ADAFs exist only when the accretion rate is sufficiently low (\(\dot{M}\lesssim 0.01\dot{M}_{\rm Edd}\)), and consist of a plasma of thermal electrons and ions, where both components may have different temperatures, \(T_{e}\) and \(T_{i}\) respectively. Here, we investigate if and how an ADAF component would affect the global SED of FR 0s. We use the ADAF model described in Boughelilba et al. (2022) and will summarize here only the main points. In the following, we use the normalized quantities \(r=R/R_{\rm S}\), with the Schwarzschild radius \(R_{\rm S}=2r_{g}=2.95\times 10^{5}\,m_{\rm BH}\,{\rm cm}\), \(m_{\rm BH}=M_{\rm BH}/M_{\odot}\), and \(\dot{m}=\dot{M}/\dot{M}_{\rm Edd}=\eta_{\rm eff}\dot{M}c^{2}/L_{\rm Edd}\), where \(\eta_{\rm eff}\) is the radiation efficiency of the standard thin disk (\(\eta_{\rm eff}\approx 0.1\)) and the Eddington luminosity is \(L_{\rm Edd}\simeq 1.3\times 10^{47}\,m_{\rm BH,9}\,{\rm erg\,s^{-1}}\). We obtain the electron temperature by varying \(T_{e}\) using a bisection method to solve the balance equation \(q^{e+}=q^{e-}\) at each radius. Here \(q^{e+}\) is the electrons' heating rate, and \(q^{e-}\) is their cooling rate. The cooling mechanisms that we consider are synchrotron radiation, bremsstrahlung, and Comptonization of the two previous components. The heating mechanisms consist of Coulomb collisions between ions and electrons, and viscous energy dissipation. We make use of the one-zone, height-integrated, self-similar solutions of the slim disc equations derived by Narayan and Yi (1995) to describe the hot plasma. These solutions are appropriate only beyond the sonic point (Narayan et al., 1997), corresponding to \(\gtrsim 2-5\,r_{g}\). The quantities governing the accretion flow depend on the plasma parameter \(\beta\), which is the ratio between the gas and the total pressure (i.e., the sum of the magnetic and gas pressure), on the viscosity \(\alpha\), and on the heating fraction \(\delta_{e}\), which represents the fraction of viscous energy directly transmitted to the electrons of the plasma.
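The temperature solve itself is a standard one-dimensional root search; a minimal sketch, with the heating and cooling rates left as placeholder callables for the actual ADAF physics:

```python
def electron_temperature(q_heat, q_cool, t_lo=1e8, t_hi=1e12, rel_tol=1e-6):
    """Bisection on the balance q^{e+}(T_e) = q^{e-}(T_e) at a given radius.

    q_heat, q_cool -- callables returning the electron heating rate (Coulomb
    coupling to the ions plus the delta_e share of the viscous dissipation)
    and the cooling rate (synchrotron, bremsstrahlung and their
    Comptonization); both stand in for the actual ADAF physics.
    """
    balance = lambda t: q_heat(t) - q_cool(t)
    assert balance(t_lo) * balance(t_hi) < 0, "bracket must straddle the root"
    while (t_hi - t_lo) / t_lo > rel_tol:
        t_mid = 0.5 * (t_lo + t_hi)
        if balance(t_lo) * balance(t_mid) <= 0:
            t_hi = t_mid
        else:
            t_lo = t_mid
    return 0.5 * (t_lo + t_hi)
```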
Furthermore, we take \(\dot{m}\) of the form \(\dot{m}=\dot{m}_{\rm out}\left(r/r_{\rm out}\right)^{s}\), where \(r_{\rm out}\) is the outer radius of the ADAF and is associated with an accretion rate \(\dot{m}_{\rm out}\), and \(s\) is a mass-loss parameter (introduced by Blandford and Begelman, 1999) that is used to include the presence of outflows or winds from the ADAF. Upon obtaining the electron temperature, the emitted spectrum from the ADAF is computed by integrating over the radius of the ADAF.
## 4 Results
Motivated by the similarity of the broadband SED of FR 0s to that of M87's quiet core, we explore parameter sets for the modelling of the FR 0s' emission that are close to the M87 core model of Boughelilba et al. (2022). For the ADAF, we use the same viscosity \(\alpha=0.1\) and heating fraction \(\delta_{e}=5\times 10^{-3}\). We fix the value of the plasma \(\beta\) parameter to \(\beta=0.99\), which makes the magnetic field strength in the central region of the ADAF of the order of the estimated jet core magnetic field strength. Lower values of \(\beta\) would imply unreasonably large magnetic field strengths. For the radial dependence of the accretion rate, parameterized by the index \(s\), we explored values from 0.1 to 1 (the larger \(s\) is, the more powerful the outflow). Fixing \(s\) to \(s=0.1\) appears to be a reasonable trade-off between the expected lower power of the jets (compared to M87's jet, where \(s\) is set to \(s=0.4\)) and the radiative flux resulting from such ADAF configurations. We fix \(r_{\rm out}=5\times 10^{3}\), which is a typical value for an ADAF's extension and is well below the size of FR 0s, in the absence of other constraints.
For a black hole mass range of \(10^{7.4}\leq M_{\rm BH}/M_{\odot}\leq 10^{9}\)(Baldi et al., 2018) (with a mean value of \(M_{\rm BH}\approx 10^{8.4}\,M_{\odot}\)) for the FR 0 source class, one expects a lower ADAF X-ray luminosity than for the
FR Is possessing black holes with a mass range of \(10^{8}\leq M_{\rm BH}/M_{\odot}\leq 10^{9.5}\) (and with a mean value of \(M_{\rm BH}\approx 10^{8.55}\,M_{\odot}\)).
For adjusting the accretion rate in order to match the observations, we follow a step-by-step procedure. First, the accretion rate is set to the highest allowed value (for a given \(\alpha\), \(\beta\) and \(M_{\rm BH}\), Narayan and Yi, 1995). Then, we compute the associated magnetic field in the central region (namely where \(R\leq R^{\prime}_{\rm em}\)). If the magnetic field strength in the ADAF there exceeds the value of the jet core magnetic field, we decrease the accretion rate accordingly to reach this value. The accretion rate can be further reduced if needed to match the observations. The ADAF spectrum is then calculated with the method described above. We do so for the two gamma-ray detected sources, as well as for the 23 other sources where X-ray data are available. The resulting SED is a combination of the ADAF component, the jet component and the host galaxy's modified blackbody.
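Schematically, this adjustment procedure amounts to the following (the callables are placeholders for the full ADAF computation, and the step factor is our choice):

```python
def tune_outer_mdot(mdot_max, b_jet_core, b_central, xray_flux, flux_obs,
                    shrink=0.9):
    """Step-by-step adjustment of the outer accretion rate, as described above.

    b_central(mdot) -- ADAF magnetic field strength in the region R <= R'_em
    xray_flux(mdot) -- model X-ray flux for a given outer accretion rate
    Both callables (and the shrink factor) are placeholders for the full
    ADAF computation.
    """
    mdot = mdot_max                       # 1. start at the highest allowed value
    while b_central(mdot) > b_jet_core:   # 2. cap the central ADAF field at the
        mdot *= shrink                    #    jet core value
    while xray_flux(mdot) > flux_obs:     # 3. reduce further if the model still
        mdot *= shrink                    #    overshoots the observed X-ray flux
    return mdot
```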
FR 0s' jets are expected to be less powerful than FR Is' and only mildly relativistic (Giovannini et al., 2023). Therefore, we explored parameters similar to those used to model M87 (Boughelilba et al., 2022), except for a lower value for the average relative jet bulk velocity \(\beta_{\rm j}\), namely \(\beta_{\rm j}=0.55\), and a jet inclination with respect to the line of sight of \(20^{\circ}\). We consider a magnetic field strength in the range \(\sim 10-50\,\)G, primary particle spectral indices of \(1.7-2.3\), and an emission region of a few to hundreds of gravitational radii in size. Lower values for the magnetic field strength imply X-ray fluxes that do not reach the observed level: for the same ADAF parameters, lowering the magnetic field strength implies decreasing the accretion rate, which results in correspondingly lower X-ray luminosities. Satisfactory results are obtained when using magnetic field strengths in the range \(25-50\,\)G. The emission region's size varies from \(R^{\prime}_{\rm em}=4\times 10^{15}\,\)cm for \(B=25\,\)G to \(R^{\prime}_{\rm em}=1.2\times 10^{15}\,\)cm for \(B=50\,\)G, in order not to overshoot the available jet power (predicted in the range \(10^{42.5}-10^{43.5}\,\)erg\(\,\)s\({}^{-1}\) for FR 0s; Merten et al., 2021; Heckman and Best, 2014).
To allow the jet emission to reach the X-ray energies and corresponding flux levels, a hard slope is preferred, and better fits are achieved with an electron spectral index of \(p_{\rm e}=1.7\). The proton spectral index is mainly constrained by the resulting jet power. For that reason, we keep models with \(p_{\rm p}=1.7\). The maximum proton energy varies from \(E^{\prime}_{\rm max,p}=10^{9}\,\)GeV to \(E^{\prime}_{\rm max,p}=5.5\times 10^{9}\,\)GeV. We model the two gamma-ray detected sources individually; for the subthreshold sample, we aim at an average description of the population. The injection parameters and some resulting quantities for the different models are given in Tables 1 and 2.
Our best-fit accretion rate values depend on the magnetic field strength present in this region. We find values for the accretion rate at the outer boundary of the flow of \(\dot{m}(r=r_{\rm out})\sim 6\times 10^{-4}-2\times 10^{-3}\) when the jet's magnetic field strength is \(B=25\,\)G whereas \(\dot{m}(r=r_{\rm out})\sim 1\times 10^{-3}-4\times 10^{-3}\) for \(B=50\,\)G.
In Figure 2, we present the SED of LEDA 55267 and its model representations for a jet magnetic field strength of 25G and 50G, from left to right, respectively.
The same is shown in Figure 3 for the second gamma-ray detected source, namely LEDA 58287.
As described above, the 23 subthreshold sources with X-ray data possess a modelled ADAF and a jet. All the 112 subthreshold sources are modelled with the same jet parameters. For each source, the observed flux is calculated from the emitted luminosity, given its respective distance to Earth. The corresponding SEDs are displayed in Figure 4, in faint purple, for two magnetic field strengths, 25G on the left panel and 50G on the right panel, respectively.
The average SED of the 112 FR 0s is shown as a solid plain blue line there. The TeV flux predicted by our models lies far below the sensitivity curves of the current Cherenkov telescopes5.
Footnote 5: The MAGIC differential sensitivity is available in machine-readable format at [https://magic.mpp.mpg.de/newcomers/magic-team/technical-implementation0/](https://magic.mpp.mpg.de/newcomers/magic-team/technical-implementation0/) and the H.E.S.S curve is adapted from Holler et al. (2015), see [https://www.cta-observatory.org/science/ctao-performance/](https://www.cta-observatory.org/science/ctao-performance/)
We predict a strong MeV contribution from the ADAF to the overall sources' SED (even if slightly less important in the case of \(B=25\,{\rm G}\)). This component could be probed by future MeV gamma-ray
\begin{table}
\begin{tabular}{|l||l|l|l|} \hline & LEDA & LEDA & Subthreshold \\ & 55267 & 58287 & sample \\ \hline \(R^{\prime}_{\rm em}\) (cm) & \(4.0\times 10^{15}\) & \(4.0\times 10^{15}\) & \(4.0\times 10^{15}\) \\ \(n^{\prime}_{\rm inj,e}\) (cm\({}^{-3}\,{\rm s}^{-1}\)) & \(1.4\times 10^{-2}\) & \(1.6\times 10^{-3}\) & \(4.8\times 10^{-4}\) \\ \(n^{\prime}_{\rm inj,p}\) (cm\({}^{-3}\,{\rm s}^{-1}\)) & \(3.9\times 10^{-6}\) & \(3.7\times 10^{-6}\) & \(1.8\times 10^{-6}\) \\ \(E^{\prime}_{\rm min,e}\) (MeV) & 0.5 & 0.5 & 0.5 \\ \(E^{\prime}_{\rm max,e}\) (MeV) & \(1.2\times 10^{4}\) & \(8.0\times 10^{3}\) & \(1.5\times 10^{4}\) \\ \(E^{\prime}_{\rm min,p}\) (GeV) & 1.0 & 1.0 & 1.0 \\ \(E^{\prime}_{\rm max,p}\) (GeV) & \(3.0\times 10^{9}\) & \(5.5\times 10^{9}\) & \(2.0\times 10^{9}\) \\ \(p_{\rm e}=p_{\rm p}\) & 1.7 & 1.7 & 1.7 \\ \(u^{\prime}_{\rm part,ss}/u^{\prime}_{\rm B}\) & \(2.1\times 10^{-1}\) & \(9.7\times 10^{-2}\) & \(3.5\times 10^{-2}\) \\ \(L^{\prime}_{\rm jet,ss}\) (erg\(\,{\rm s}^{-1}\)) & \(3.6\times 10^{43}\) & \(3.3\times 10^{43}\) & \(3.1\times 10^{43}\) \\ \hline \end{tabular}
\end{table}
Table 1: Jet parameters used in the case \(B=25\,{\rm G}\). The size of the emission region is \(R^{\prime}_{\rm em}\), \(n^{\prime}_{\rm inj,e(p)}\) is the electron (proton) number density injection rate, and both types of particles are injected with spectral indices \(p_{\rm e,p}=1.7\), following the spectral shape described in 3.1. \(u^{\prime}_{\rm part,ss}/u^{\prime}_{\rm B}\) and \(L^{\prime}_{\rm jet,ss}\) represent the energy density ratio and jet power respectively, after the steady state is reached in the emission region.
\begin{table}
\begin{tabular}{|l||l|l|l|} \hline
 & LEDA & LEDA & Subthreshold \\
 & 55267 & 58287 & sample \\ \hline
\(R^{\prime}_{\rm em}\) (cm) & \(1.2\times 10^{15}\) & \(1.2\times 10^{15}\) & \(1.2\times 10^{15}\) \\
\(n^{\prime}_{\rm inj,e}\) (cm\({}^{-3}\,{\rm s}^{-1}\)) & \(4.2\times 10^{-1}\) & \(3.2\times 10^{-2}\) & \(1.9\times 10^{-2}\) \\
\(n^{\prime}_{\rm inj,p}\) (cm\({}^{-3}\,{\rm s}^{-1}\)) & \(1.3\times 10^{-4}\) & \(1.9\times 10^{-4}\) & \(8.2\times 10^{-5}\) \\
\(E^{\prime}_{\rm min,e}\) (MeV) & 0.5 & 0.5 & 0.5 \\
\(E^{\prime}_{\rm max,e}\) (MeV) & \(8.0\times 10^{3}\) & \(8.0\times 10^{3}\) & \(8.0\times 10^{3}\) \\
\(E^{\prime}_{\rm min,p}\) (GeV) & 1.0 & 1.0 & 1.0 \\
\(E^{\prime}_{\rm max,p}\) (GeV) & \(3.0\times 10^{9}\) & \(4.0\times 10^{9}\) & \(1.5\times 10^{9}\) \\
\(p_{\rm e}=p_{\rm p}\) & 1.7 & 1.7 & 1.7 \\
\(u^{\prime}_{\rm part,ss}/u^{\prime}_{\rm B}\) & \(4.8\times 10^{-1}\) & \(2.9\times 10^{-1}\) & \(1\times 10^{-1}\) \\
\(L^{\prime}_{\rm jet,ss}\) (erg\(\,{\rm s}^{-1}\)) & \(1.5\times 10^{43}\) & \(1.3\times 10^{43}\) & \(1.1\times 10^{43}\) \\ \hline
\end{tabular}
\end{table}
Table 2: Same as in Table 1 for the case \(B=50\,{\rm G}\).
instruments like e-ASTROGAM (De Angelis et al., 2017) or the All-sky Medium Energy Gamma-ray Observatory eXplorer (AMEGO-X) (Caputo et al., 2022; Fleischhack & Amego X Team, 2022).
The steady-state jet power is estimated by \(L^{\prime}_{\rm jet,ss}=\pi R^{\prime 2}_{\rm em}\Gamma_{\rm j}^{2}\beta_{\rm j}c\sum_{i}u^{\prime}_{i}\), where the \(u^{\prime}_{i}\) are the energy densities of radiation, electrons, protons (\(u^{\prime}_{\rm part,ss}\)), and magnetic field (\(u^{\prime}_{\rm B}\)). We assume a neutral jet and hence account for cold protons to balance the electrical charge. In the case of \(B=25\,{\rm G}\), we find the jet to be slightly magnetically dominated, i.e. \(u^{\prime}_{\rm part,ss}/u^{\prime}_{\rm B}\approx 3.5\times 10^{-2}-0.1\). For \(B=50\,{\rm G}\), the jet composition is very close to equipartition, i.e. \(u^{\prime}_{\rm part,ss}/u^{\prime}_{\rm B}\approx 0.1-0.5\). The resulting jet power is in the range \((1.1-1.5)\times 10^{43}\,{\rm erg\,s^{-1}}\) for \(B=50\,{\rm G}\) and \((3.0-3.6)\times 10^{43}\,{\rm erg\,s^{-1}}\) for \(B=25\,{\rm G}\). Our
Figure 3: Same as Figure 2 but for LEDA 58287.
Figure 2: SEDs of LEDA 55267. The dotted line is the modified blackbody modelling the host galaxy emission, the dashed line is the emission coming from the ADAF, the dash-dotted line is the total jet emission, and the solid line represents the total emission of the source, i.e. the sum of the three components. The differential flux sensitivities for 50 hours of observation with MAGIC (Aleksić et al., 2016) and H.E.S.S. (Holler et al., 2015) are shown with the cyan and pink dashed lines, respectively. _Left:_ for a magnetic field strength of 25 G in the jet. _Right:_ same for a magnetic field strength of 50 G.
calculated neutrino output of the models predicts neutrino fluxes far below the current instruments' sensitivities (peak fluxes lie at \(\lesssim 10^{-13}\,\mathrm{GeV\,cm^{-2}\,s^{-1}}\) with a peak energy of \(E_{\mathrm{peak}}\sim 10^{17-18}\,\mathrm{eV}\)).
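As a rough consistency check of these numbers, the sketch below evaluates the jet-power formula with the \(B=50\,\)G parameters of LEDA 55267 from Table 2, taking \(\beta_{\rm j}=0.55\) as quoted in the conclusion and, as a simplifying assumption, neglecting the radiation energy density relative to the particle and magnetic terms.

```python
import numpy as np

# L'_jet,ss = pi * R'_em^2 * Gamma_j^2 * beta_j * c * sum_i u'_i, with the
# radiation energy density neglected (assumption); Table 2, LEDA 55267.
c = 2.998e10                        # speed of light [cm/s]
beta_j = 0.55                       # jet velocity (see Conclusion)
gamma_j = 1.0 / np.sqrt(1.0 - beta_j**2)
R_em = 1.2e15                       # emission-region size [cm]
B = 50.0                            # jet magnetic field strength [G]

u_B = B**2 / (8.0 * np.pi)          # magnetic energy density [erg/cm^3]
u_part = 4.8e-1 * u_B               # from the tabulated u'_part,ss / u'_B

L_jet = np.pi * R_em**2 * gamma_j**2 * beta_j * c * (u_B + u_part)
print(f"L'_jet,ss ~ {L_jet:.2e} erg/s  (Table 2: 1.5e43 erg/s)")
```

The output, \(\sim 1.6\times 10^{43}\,\mathrm{erg\,s^{-1}}\), agrees with the tabulated value to within the rounding of the quoted ratio and the neglected radiation term.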
## 5 Conclusion
Aiming to gain a deeper understanding of the dominating jet population in the local Universe, Fanaroff-Riley type 0 radio galaxies, we compared these to the more extended but comprehensively studied FR Is. We found that the broadband SED of FR 0s is extremely similar to that of the archetypal FR I, M87, during its quiet steady state (described in detail in EHT MWL Science Working Group et al., 2021). The similarity extends from the core radio emission to the X-ray band, and up to gamma rays for the two individual sources detected in the high-energy band.
This motivates considering an environment described by physical parameter values comparable to those of M87's quiet core. To test this, we applied a one-zone lepto-hadronic jet model, combined with the emission of an advection-dominated accretion flow, to the FR 0 population. Alternatively, two-zone models, such as a spine-sheath jet structure, are not rejected. Indeed, Cheng et al. (2021); Baldi et al. (2021); Giovannini et al. (2023) recently showed that FR 0s have a smaller jet-to-counterjet ratio than FR Is on parsec scales. This suggests that FR 0s' jets are at most mildly relativistic, which can also be interpreted as the presence of a faint relativistic spine and a dominant slow sheath structure in the jet. In this framework, if FR 0s' jets are seen at a large viewing angle, as indicated by observations, mainly the sheath emission would be observed, and our results can be interpreted, to first order, as the emission from this zone. In the one-zone model context, we found that a compact subparsec-scale jet-flow emission region (from a few to a thousand gravitational radii for the jet, up to \(5\times 10^{3}\,r_{g}\) for the ADAF, leading to a global region size of \(\sim 6\times 10^{-3}-0.3\,\mathrm{pc}\)) is able to explain the nuclear multiwavelength SED of FR 0s, provided that a magnetic field strength of \(25-50\,\mathrm{G}\) is reached in the core region. As reviewed by Baldi (2023), lower values of the magnetic field strength are expected to prevent the formation of large-scale jets and explain the lack of extended emission in
Figure 4: SEDs of the 112 sources that are not individually detected in the gamma-ray band. The faint purple lines are the individual fluxes of the 112 sources (see main text for details) and the solid blue line is the average of the 112 models. The differential flux sensitivities for 50 hours of observation with MAGIC and H.E.S.S. are shown with the cyan and pink dashed lines, respectively. _Left:_ for a magnetic field strength of 25 G in the jet. _Right:_ same for a magnetic field strength of 50 G.
FR 0s. Khatiya et al. (in prep. 2023) explore broadband modelling scenarios with such low field strengths, where the jet's composition is then strongly particle-dominated and leptons can account for the high-energy observations.
In this model, the jet of FR 0s is mildly relativistic, with a velocity of \(0.55\,c\), which is consistent with the value obtained by Giovannini et al. (2023) when observing the cores of FR 0s in comparison to FR Is. The jet contributes mainly to the radio and gamma-ray bands. The optical observations are dominated by the host galaxy. The jet and the ADAF both contribute to the X-ray band, and the model predicts a strong ADAF-dominated MeV flux component.
As protons are, in this framework, accelerated up to \(\sim 6\times 10^{18}\,\rm eV\), FR 0s are multi-messenger sources and could contribute to the cosmic-ray flux up to the ankle (\(E\approx 10^{18}\,\rm eV\), see also Merten et al., 2021; Lundquist et al., 2022).
In this view, we find that FR 0s, given their observed nuclear properties and their broadband SED, are of a similar nature to the naked quiet cores of FR Is, whose best-studied representative is M87.
This work acknowledges financial support from the Austrian Science Fund (FWF) under grant agreement number I 4144-N27. MB has received funding for this project from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 847476. The views and opinions expressed herein do not necessarily reflect those of the European Commission. MB wishes to thank Paolo Da Vela and Giacomo Bonnoli for the fruitful discussions and insightful comments on this paper. This work benefited from the following software: NumPy (van der Walt et al., 2011), Matplotlib (Hunter, 2007), pandas (Wes McKinney, 2010; pandas development team, 2023), and jupyter notebooks (Perez & Granger, 2007).
|
2303.00809 | Topological edge states in equidistant arrays of Lithium Niobate
nano-waveguides | We report that equidistant 1D arrays of thin-film Lithium Niobate
nano-waveguides generically support topological edge states. Unlike
conventional coupled-waveguide topological systems, the topological properties
of these arrays are dictated by the interplay between intra- and inter-modal
couplings of two families of guided modes with different parities. Exploiting
two modes within the same waveguide to design a topological invariant allows us
to decrease the system size by a factor of two and substantially simplify the
structure. We present two example geometries where topological edge states of
different types (based on either quasi-TE or quasi-TM modes) can be observed
within a wide range of wavelengths and array spacings. | Andrey V. Gorbach, Jesper Beer, Anton Souslov | 2023-03-01T20:22:23Z | http://arxiv.org/abs/2303.00809v1 | # Topological edge states in equidistant arrays of Lithium Niobate nano-waveguides
###### Abstract
We report that equidistant 1D arrays of thin-film Lithium Niobate nano-waveguides generically support topological edge states. Unlike conventional coupled-waveguide topological systems, the topological properties of these arrays are dictated by the interplay between intra- and inter-modal couplings of two families of guided modes with different parities. Exploiting two modes within the same waveguide to design a topological invariant allows us to decrease the system size by a factor of two and substantially simplify the structure. We present two example geometries where topological edge states of different types (based on either quasi-TE or quasi-TM modes) can be observed within a wide range of wavelengths and array spacings.
Topological photonic systems have recently attracted much attention, not only as a potential playground to explore fundamental physical effects associated with topological states, but also as a new platform to design structures for light manipulation [1; 2; 3]. Of particular interest are recently emerging all-dielectric structures, whereby topological properties are defined by the structure of a photonic crystal [4; 5]. Remarkably, non-trivial topological phases may exist even in simple 1D crystals [6]. A fundamental workhorse of topological physics is a 1D system called the Su-Schrieffer-Heeger (SSH) chain [7]. This model, which was originally proposed to describe excitations in polyacetylene molecules, represents a 1D dimer chain with alternating coupling (hopping) coefficients. The two topologically distinct phases of the chain correspond to two different configurations where either the stronger or the weaker coupling defines the unit cell [8]. The topological invariant that distinguishes these phases is known as either the winding number or the Zak phase [9]. One important manifestation of topological phases in 1D systems is the emergence of edge states [6]. Existence of such localized states is directly related to the topological properties of the bulk crystal through bulk-boundary correspondence [10; 11]. This correspondence dictates that, because the invariant has to abruptly change at the boundary of the topological material, this boundary is required to host protected edge states.
The vast majority of photonic topological structures explored so far impose the required crystal symmetry by spatially modulating the dielectric constant. This approach stems from a general analogy between condensed matter physics and photonics [12]. Particularly, the standard models describing light propagation in coupled waveguide systems directly map onto tight-binding models, such as the SSH chain [13]. Such models assume single-mode operation of the waveguides. Recently, an alternative approach has been proposed, whereby the multi-modeness of the coupled waveguides is exploited to expand the number of degrees of freedom per unit cell [14; 15; 16]. Here, the effective crystal structure and its topological properties are governed by the network of different intra- and inter-modal couplings, while the spatial arrangement of the waveguides can be entirely homogeneous. Thus, the complexity of topological photonic bands can be realised in much simpler and more compact structures.
In this work, we demonstrate that one-dimensional equidistant (homogeneous) arrays of Lithium Niobate on Insulator (LNOI) waveguides [17; 18; 19] can exhibit topologically distinct phases, leading to formation of topological edge states, see Fig. 1. Ridge waveguides are etched from a Lithium Niobate (LiNbO\({}_{3}\)) film of thickness \(h\) on a silica glass substrate. The waveguides are characterised by width \(w\) at the top, residual film thickness \(t\), and sidewall angle \(\varphi\) [Fig. 1(a)]. This angle typically varies from \(40^{\circ}\) to \(80^{\circ}\), depending on the particular etching process [19]. Combined with the anisotropic dispersion of bulk Lithium Niobate, the four geometrical parameters \((h,t,w,\varphi)\) represent a convenient toolbox for tuning the dispersion of an isolated waveguide. Particularly, for certain parameters one can observe a nearly degenerate behaviour of different pairs of guided modes within large spectral windows. One such example geometry is illustrated in Fig. 1(b), where two such modes, labelled quasi-TE\({}_{01}\) and quasi-TE\({}_{10}\), have similar effective indices within a wide wavelength interval \(1.0\mu\)m \(<\lambda<1.7\mu\)m (these modes would be completely degenerate in a perfect square waveguide). Adjusting the sidewall angle, a similar nearly-degenerate behaviour of quasi-TM\({}_{01}\) and quasi-TM\({}_{10}\) modes can be observed, see Fig. 1(c). This trend appears to be generic: similar nearly-degenerate behaviour of different pairs of modes can be observed in LNOI waveguides by varying the three geometrical parameters. The presence of two degenerate modes within each waveguide is the key ingredient that enables topological states within an equidistant array.
Arranging such waveguides in a regular 1D array, as in Fig. 1(a), we observe the formation of localized edge modes within a wide range of wavelengths and edge-to-edge separation distances \(s\) between the waveguides. Figure 1(g) shows one example of an edge mode in an array of waveguides with the same parameters as in Fig. 1(c). In this mode, the field intensity is exponentially localized within a few waveguides nearest to the edge of the array. This mode is doubly-degenerate: an equivalent "mirror" mode exists on the opposite edge. Similar edge modes are observed in the second geometry. Dashed lines in Fig. 1(b) and (c) show dispersions of the edge modes in finite-size arrays composed of waveguides having the two respective geometries. In both cases and for all wavelengths, the effective index of an edge mode appears to be in-between the indices of the two nearly degenerate modes of a single waveguide.
A close inspection of the field distribution of the edge mode in Fig. 1(g) within the area of the first waveguide, see Fig. 1(d), reveals that a superposition of the quasi-TM\({}_{01}\) and quasi-TM\({}_{10}\) modes is excited within this waveguide. These numerical results were obtained from COMSOL Multiphysics taking into account the material dispersion of both the Lithium Niobate and the silica glass substrate [20]. The corresponding mode profiles of an isolated waveguide are shown in Figs. 1(e) and 1(f). Thus, subtracting the fields of quasi-TM\({}_{01}\) and quasi-TM\({}_{10}\) modes results in the diagonal structure observed in Fig. 1(d). A similar structure is observed in other waveguides, see Fig. 1(g), and in edge modes supported by the second geometry in Fig. 1(b).
What is the origin of these edge states? To answer this question, we consider a simple coupled-mode model, which takes into account interactions between the two different types of modes for each waveguide:
\[-i\frac{dA_{n}}{dz}=n_{A}A_{n}+C_{A}\left(A_{n+1}+A_{n-1}\right)+C_{x}\left(B_{n+1}-B_{n-1}\right)\;, \tag{1}\]
\[-i\frac{dB_{n}}{dz}=n_{B}B_{n}+C_{B}\left(B_{n+1}+B_{n-1}\right)-C_{x}\left(A_{n+1}-A_{n-1}\right)\;. \tag{2}\]
Here \(A_{n}\) and \(B_{n}\) are amplitudes of the two modes [e.g., quasi-TE\({}_{01}\) and quasi-TE\({}_{10}\) for the geometry in Fig. 1(b)]
Figure 1: Edge states in LNOI waveguide arrays: (a) a schematic view of an array; (b) dispersion of different guided modes in a single waveguide with \(w=700\)nm, \(h=800\)nm, \(t=100\)nm, \(\varphi=75^{\circ}\), using x-cut LN film (the extraordinary axis of the crystal is oriented horizontally). The dashed line illustrates dispersion of the edge mode in an array of \(N=10\) waveguides, edge-to-edge separation \(s=100\)nm; (c) the same as (b) but for \(\varphi=85^{\circ}\), the dashed line illustrates dispersion of the edge mode with \(s=50\)nm; (d)-(f) field profiles (norm of the electric field) of guided modes in the same geometry as in panel (c), with \(\lambda=0.7\mu\)m. An edge mode in a waveguide array with \(N=10\) and \(s=50\)nm is shown in panel (g), and a zoom-in of the waveguide at the edge is displayed in (d). Parts (e) and (f) show profiles of quasi-TM\({}_{01}\) and quasi-TM\({}_{10}\) modes of an isolated waveguide, respectively. The arrows indicate the polarization of the local electric field. (Data from COMSOL Multiphysics).
Figure 2: Overlaps between quasi-TM\({}_{01}\) mode in waveguide \(n\) and quasi-TM\({}_{10}\) mode in waveguides \(n-1\) (a) and \(n+1\) (b) entering the calculation of the coupling constant in Eq. (3).
in the \(n\)-th waveguide, \(z\) is the dimensionless propagation length measured in the units of the wavelength in vacuum \(\lambda_{0}=2\pi c/\omega\), \(n_{A}\) and \(n_{B}\) are the effective indices of the two modes for an isolated waveguide, and \(C_{A}\), \(C_{B}\), and \(C_{x}\) are different intra- and inter-modal coupling coefficients. One important aspect of this model is the variation of signs of the inter-modal coupling coefficient \(C_{x}\) connecting mode \(A_{n}\) and mode \(B_{n+1}\) (\(C_{x}\)), and connecting mode \(A_{n}\) and mode \(B_{n-1}\) (\(-C_{x}\)). This is a direct consequence of the opposite parities of the two interacting modes [16]. Generally, the coupling coefficient between mode \(p\) in the waveguide \(n\) and mode \(q\) in the waveguide (\(n+1\)), with \(p\) and \(q\) each being either mode \(A\) or mode \(B\), is obtained via the overlap integral [21]:
\[C_{pq}=\omega\iint_{-\infty}^{+\infty}\vec{e}_{p}^{*}(x,y)\cdot\Delta\epsilon \vec{e}_{q}(x+T,y)dxdy\;, \tag{3}\]
where \(\vec{e}_{A,B}(x,y)\) are the modes of the isolated waveguide \(n\), \(T\) is the centre-to-centre distance between the waveguides, and \(\Delta\epsilon(x,y)\) is the difference between the permittivity tensor of the two-waveguide structure (waveguides \(n\) and \(n+1\)) and a single-waveguide structure (waveguide \(n\) only). Essentially, \(\Delta\epsilon\) is non-zero within the core area of the \((n+1)\)th waveguide only. For the modes of the same type, i.e., when \(p=q\), the coupling coefficients between pairs of waveguides \(n\) and \((n+1)\), and between waveguides \(n\) and \((n-1)\), will be the same. However, this is no longer the case if the modes are of different types. In particular, when the two modes have opposite symmetries with respect to \(x\rightarrow-x\), such as quasi-TM\({}_{01}\) and quasi-TM\({}_{10}\) modes, one obtains \(C_{pq}=-C_{qp}\), as illustrated in Fig. 2. Notably, this variation of signs preserves the Hermitian structure of the model, but induces an effective chirality in the array.
For an infinite array, the spectrum of the model in Eqs. (1,2) for plane waves \(A_{n},B_{n}\sim\exp(i\lambda z-iqn)\) consists of two bands:
\[\lambda_{1,2}=n_{+}+2C_{+}\cos(q)\pm\sqrt{\left(n_{-}+2C_{-}\cos(q)\right)^{2}+4C_{x}^{2}\sin^{2}q}\;, \tag{4}\]
where \(n_{\pm}=(n_{B}\pm n_{A})/2\) and \(C_{\pm}=(C_{B}\pm C_{A})/2\). Notably, the gap between the bands closes at \(q=0\) when \(n_{-}+2C_{-}=0\), or at \(q=\pi\) when \(n_{-}-2C_{-}=0\). This gap closure is accompanied by a qualitative change in the structure of the eigenvectors, as illustrated in Figs. 3(a) and (b). Here, for the geometry as in Fig. 1(c), by varying the separation distance between the waveguides at a fixed wavelength, the balance between \(|n_{-}|\) and \(2|C_{-}|\) is tipped over. When \(|n_{-}|>2|C_{-}|\) (\(s>s_{0}\approx 130\)nm), the amplitudes of either \(A\) or \(B\) modes dominate across the entire Brillouin zone \(0\leq|q|\leq\pi\) in each band, see the thin lines in Figs. 3(b). Thus, each band can be associated with a particular mode (\(A\) or \(B\)) in this case. On the contrary, when \(|n_{-}|<2|C_{-}|\) (\(s<s_{0}\)), the structure of the modes within each band switches between mode \(A\) and \(B\) as the wavenumber \(q\) sweeps the Brillouin zone, see the thick lines in Fig. 3(b). These results of the coupled-mode model are in agreement with the full solution of Maxwell's equations with periodic boundary conditions. In Fig. 3(c), a part of the spectrum of an infinite waveguide array (in the vicinity of quasi-TM\({}_{10}\) and quasi-TM\({}_{01}\) modes for an isolated waveguide) is shown as obtained using COMSOL Multiphysics (solid curves). The corresponding spectrum of the coupled-mode model is shown with dashed curves. For the latter, we used COMSOL data for isolated waveguides to calculate the coupling coefficients according to Eq. (3). The profiles of the modes of the top band at different wavenumbers \(q\) are shown in Figs. 3(d)-(f). As predicted by the coupled-modes model, we observe a transition from quasi-TM\({}_{01}\) at \(q=0\) to quasi-TM\({}_{10}\) at \(q=\pi\).
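To make the band-structure discussion concrete, the following minimal sketch diagonalizes the \(2\times 2\) Bloch Hamiltonian corresponding to Eqs. (1,2) for plane waves \(A_{n},B_{n}\sim\exp(i\lambda z-iqn)\) and checks the gap-closing condition \(n_{-}\pm 2C_{-}=0\). The parameter values are illustrative assumptions, not fitted LNOI numbers.

```python
import numpy as np

# 2x2 Bloch Hamiltonian of the coupled-mode model, Eqs. (1,2)
def bloch_hamiltonian(q, nA, nB, CA, CB, Cx):
    return np.array([[nA + 2 * CA * np.cos(q), -2j * Cx * np.sin(q)],
                     [2j * Cx * np.sin(q), nB + 2 * CB * np.cos(q)]])

nA, nB = 1.800, 1.810          # effective indices of the two isolated modes
CA, CB = 2e-3, -6e-3           # intra-modal couplings (opposite signs)
Cx = 4e-3                      # inter-modal coupling

qs = np.linspace(-np.pi, np.pi, 401)
bands = np.array([np.linalg.eigvalsh(bloch_hamiltonian(q, nA, nB, CA, CB, Cx))
                  for q in qs])          # reproduces the two bands of Eq. (4)

n_minus, C_minus = (nB - nA) / 2, (CB - CA) / 2
print("gap closes when n_- +/- 2C_- = 0:", n_minus + 2 * C_minus,
      n_minus - 2 * C_minus)
print("minimal gap over the Brillouin zone:", (bands[:, 1] - bands[:, 0]).min())
```

For these parameters \(|n_{-}|<2|C_{-}|\), so the band eigenvectors switch between \(A\)- and \(B\)-dominated character across the Brillouin zone, as in the thick curves of Fig. 3(b).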
For the model in Eqs. (1,2), it was demonstrated that, as the spectral gap closes and reopens again, the system undergoes a topological transition [16]. The same is true for the full model, as we confirm by calculating the Zak phase of the two bands using the Wilson loop approach [22]:
\[\theta\approx i\ln\Pi_{i=1}^{N}\left\langle\psi(k_{i}),\psi(k_{i+1})\right\rangle\;, \tag{5}\]
where \(\psi(k)\) are the normalized eigen-modes belonging to
Figure 3: Band structure of an infinite array with the same waveguide parameters as Fig. 1(b): (a) the balance between the modal detuning and coupling coefficients, \(n_{-}+2C_{-}\), as a function of the waveguide separation at \(\lambda=1.0\mu\)m. The gap closes at \(s\approx 130\)nm; (b) the structure of eigenvectors of the coupled-mode model in Eqs. (1,2) of the top (black) and bottom (red) bands for \(s=125\)nm (thick lines) and \(s=135\)nm (thin lines); (c) the band structure for \(s=125\)nm obtained using Comsol simulations (solid curves) and the coupled-mode model (dashed curves); (d)-(f) profiles of the modes of the top band for \(s=125\)nm at \(q/\pi=0\), \(0.1\), and \(1\), respectively.
a particular band, and the inner product of two modes is defined as
\[\langle a,b\rangle=\frac{1}{4}\iint\left[\vec{e}_{a}\times\vec{h}_{b}^{*}+\vec{e} _{b}^{*}\times\vec{h}_{a}\right]dxdy\;. \tag{6}\]
The evaluation in Eq. (5) is performed by discretizing the full Brillouin zone into \(N\) segments with \(k_{N+1}=k_{1}=-\pi\). As the separation between the waveguides crosses the transition point \(s=s_{0}\), we observe a jump from \(\theta=0\) (trivial phase corresponding to winding number 0) to \(\theta=\pi\) (non-trivial phase corresponding to winding number 1) in each of the two bands. In Figs. 4(a) and (b), the Zak phase of the top band is plotted for the same geometries as in Figs. 1(b) and (c), respectively. This topological phase transition is accompanied by the emergence of two degenerate edge modes (localized at either edge) in a finite-size array, as illustrated in Figs. 4(c)-(e). In Fig. 4(c), the spectrum of an infinite size coupled-modes model, Eq. (4), is shown with the shaded areas for the same geometry as in Fig. 1(c) at \(\lambda=1\mu\)m. The modes of a finite-size array (\(N=10\) waveguides) are shown with solid (Comsol simulations) and dashed (coupled-modes model) lines. As the gap of an infinite system closes and re-opens, the two modes corresponding to the bottom and top edges of the two bands at \(s>s_{0}\) merge together to form the two degenerate edge states at \(s<s_{0}\). In the coupled-modes model, these two states have a fixed effective index for any \(s\), see the dashed lines in Fig. 4(c). In the full system, the indices are no longer fixed due to the influence of a third band corresponding to quasi-TE\({}_{01}\) modes, c.f. Fig. 1(c). Nevertheless, the coupled-modes model gives a reasonably accurate prediction, not only for the effective index, but also for the detailed field profiles of the edge modes, as shown in Figs. 4(d) and (e).
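For the coupled-mode model, the Wilson-loop evaluation of Eq. (5) can be sketched in a few lines (same illustrative parameters as above; the repeated first vector closes the loop and makes the product gauge invariant, with \(\theta\) defined mod \(2\pi\)):

```python
import numpy as np

def zak_phase(nA, nB, CA, CB, Cx, N=400):
    # Wilson loop, Eq. (5), over the lower band of the 2x2 Bloch Hamiltonian
    qs = np.linspace(-np.pi, np.pi, N, endpoint=False)
    vecs = []
    for q in qs:
        H = np.array([[nA + 2 * CA * np.cos(q), -2j * Cx * np.sin(q)],
                      [2j * Cx * np.sin(q), nB + 2 * CB * np.cos(q)]])
        vecs.append(np.linalg.eigh(H)[1][:, 0])   # lower-band eigenvector
    vecs.append(vecs[0])                          # close the loop: k_{N+1} = k_1
    W = 1.0 + 0j
    for a, b in zip(vecs[:-1], vecs[1:]):
        W *= np.vdot(a, b)                        # <psi(k_i), psi(k_{i+1})>
    return -np.angle(W)                           # theta = i ln W (mod 2*pi)

print(zak_phase(1.800, 1.810, 2e-3, -6e-3, 4e-3))  # ~ +/- pi : non-trivial
print(zak_phase(1.800, 1.810, 2e-3, -1e-3, 4e-3))  # ~ 0      : trivial
```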
Similar behaviour is observed with quasi-TE modes in the second geometry, as in Fig. 1(b). In Fig. 5(a) the spectrum of the infinite periodic structure (in the vicinity of the TE\({}_{01}\) and TE\({}_{10}\) modes of an isolated waveguide) is shown. This was obtained from COMSOL simulations of the periodic structure. Fig. 5(b) zooms in on the region near the gap. In Fig. 5(c) we present mode profiles of a finite-size array with \(N=20\) waveguides. We picked the 10 modes with indices closest to the gap; the corresponding indices are indicated in Fig. 5(b) with black circles, blue crosses, and red diamonds. Most of the modes appear to be delocalized; the corresponding indices are within either the top (black circles) or bottom (blue crosses) band of the infinite structure. The two modes indicated by red diamonds fall within the gap. Notably, the two modes have nearly degenerate indices, and their profiles appear to be very similar, with the field intensity being localized at the edges. These are the symmetric and anti-symmetric combinations of the edge modes, as generated by the COMSOL solver due to the degeneracy. The edge modes can thus be reconstructed from these two degenerate modes, as shown in Fig. 5(c). The bottom panel in Fig. 5(c) shows the corresponding edge mode as obtained from the coupled-modes model. As before, the two models appear to be in excellent agreement.
We find topological edge modes when the condition
\[|n_{-}|<2|C_{-}| \tag{7}\]
is satisfied. This inequality compares the mismatch in the effective indices, on the left side, and in the intra-modal coupling coefficients, on the right side, of the two nearly-degenerate modes of an isolated waveguide. Interestingly, the inter-modal coupling coefficient \(C_{x}\) does not enter this condition explicitly, but an interaction between the two families of modes is required to form the edge states. Surprisingly, we discover that for LNOI waveguide arrays the condition in Eq. (7) is satisfied across large regions of the \((\lambda,s)\) parameter space, leading us to conclude that topological states are a generic feature of this system, see e.g., Fig. 4(a) and (b). There are two factors which contribute to having large regions of parameter space correspond to topological states. First,
Figure 4: (a) and (b) Zak phase of the top band (corresponding to TE\({}_{10}\)/TM\({}_{10}\) modes for sufficiently large \(s\)) of an infinite waveguide array, Eq. (5), evaluated for the geometries as in Fig. 1(b) and (c), respectively. The light shaded areas indicate the regions with \(\theta=\pi\), where we predict the existence of edge modes; (c) modes of a finite waveguide array with \(N=10\) for the geometry as in Fig. 1(c) and with \(\lambda=1\mu\)m, obtained from COMSOL simulations. The red dashed lines show two modes of the coupled-modes model, Eqs. (1,2), which correspond to edge states. The shaded areas indicate bandwidths of the two bands of the coupled-modes infinite system, Eq. (4); (d) and (e) profiles of edge modes in finite waveguide arrays with \(N=10\), \(s=100\)nm and \(s=50\)nm. The top panels are the results of COMSOL simulations, the corresponding effective indices are marked with the star and pentagon symbols in panel (c). The bottom panels are the eigenmodes of the model in Eqs. (1,2).
we find pairs of nearly degenerate modes (quantified by a small effective index mismatch \(n_{-}\)) across large frequency windows due to the combined effects of the material dispersion of Lithium Niobate and the geometric dispersion of the nano-waveguides. Here we presented two example geometries with different combinations of the first-order TE and TM modes, but we expect this small mismatch to also occur for pairs of other higher-order modes. Second, due to the different parities of the participating modes, the intra-modal coupling coefficients \(C_{A}\) and \(C_{B}\) generally appear to be of opposite signs, thus maximising \(2|C_{-}|=|C_{B}-C_{A}|\). As a result, we observe non-trivial topology even when considering pairs of modes with a large detuning \(|n_{-}|\).
Thus, LNOI waveguide arrays represent a convenient topological photonics platform, in which topology occurs within systems readily fabricated using standard techniques. Significantly, exploiting pairs of near-degenerate modes replaces a two-waveguide unit cell by a single waveguide, thereby decreasing the size of arrays in which topological effects are observed by a factor of two. Combined with the strong second-order optical nonlinearity of Lithium Niobate, we find this system especially promising for further studies of nonlinear topological phenomena, such as topological optical parametric oscillations [23] or dynamics of two-colour topological edge solitons [24].
|
2303.09193 | Molecular dynamics analysis of particle number fluctuations in the mixed
phase of a first-order phase transition | Molecular dynamics simulations are performed for a finite non-relativistic
system of particles with Lennard-Jones potential. We study the effect of
liquid-gas mixed phase on particle number fluctuations in coordinate subspace.
A metastable region of the mixed phase, the so-called nucleation region, is
analyzed in terms of a non-interacting cluster model. Large fluctuations due to
spinodal decomposition are observed. They arise due to the interplay between
the size of the acceptance region and that of the liquid phase. These effects
are studied with a simple geometric model. The model results for the scaled
variance of particle number distribution are compared with those obtained from
the direct molecular dynamic simulations. | Volodymyr A. Kuznietsov, Oleh Savchuk, Roman V. Poberezhnyuk, Volodymyr Vovchenko, Mark I. Gorenstein, Horst Stoecker | 2023-03-16T10:07:19Z | http://arxiv.org/abs/2303.09193v3 | Molecular dynamics analysis of particle number fluctuations in the mixed phase of a first-order phase transition
###### Abstract
Molecular dynamics simulations are performed for a finite non-relativistic system of particles with Lennard-Jones potential. We study the effect of liquid-gas mixed phase on particle number fluctuations in coordinate subspace. A metastable region of the mixed phase, the so-called nucleation region, is analyzed in terms of a non-interacting cluster model. Large fluctuations due to spinodal decomposition are observed. They arise due to the interplay between the size of the acceptance region and that of the liquid phase. These effects are studied with a simple geometric model. The model results for the scaled variance of particle number distribution are compared with those obtained from the direct molecular dynamic simulations.
mixed phase, fluctuations, molecular dynamics
## I Introduction
The endpoint of a first-order phase transition, known as the critical point (CP), occurs in a wide variety of physical systems, including most molecular and ferromagnetic substances [1; 2], nuclear matter [3], and potentially the hot QCD matter at nonzero baryon density [4; 5]. In the thermodynamic limit, particle number fluctuations exhibit singular behavior at the CP. These singularities are smeared out in finite-size systems. Nevertheless, small systems also demonstrate specific features of critical behavior such as enhancement of fluctuations [6; 7].
Event-by-event fluctuations in nucleus-nucleus collisions are used as an experimental tool to search for the QCD CP at finite baryon density [4; 5]. The presence of the QCD CP should manifest itself in the enhanced fluctuations of proton number [8] and possibly non-monotonic collision energy dependence of non-Gaussian fluctuation measures [9; 10]. Measurements of proton number fluctuations in nucleus-nucleus collisions have been performed by different experiments such as STAR [11; 12], HADES [13], and ALICE [14]. The measurements indicate a possible non-monotonic collision energy dependence of the kurtosis of proton number [11] as well as a possible enhancement of two-proton correlations over non-critical baselines [15], but conclusive evidence for the presence of the QCD CP is still lacking.
The grand canonical ensemble (GCE) of statistical mechanics is the most suitable framework to study statistical fluctuations. Within this formulation, the cumulants of particle number distribution are straightforwardly connected to the chemical potential derivatives of thermodynamic potential. However, the GCE cannot be directly used for the conditions realized in the experiment [16; 17]. Several essential restrictions should be taken into account: (i) finite size of systems created in the experiment [18; 19], (ii) influence of the global conservation laws, for instance, baryon number conservation [20; 21], and (iii) differences between coordinate and momentum space acceptances. Recently the subensemble acceptance method (SAM) to correct the fluctuation measurements for global conservation laws has been developed [22; 23; 24; 25]. This method is applicable for statistical systems in the presence of interactions. In the limit of ideal Maxwell-Boltzmann gas, it reduces to the binomial acceptance correction procedure [26; 20; 27].
In the present work we continue our studies [7] of particle number fluctuations within molecular dynamics (MD) simulations of the Lennard-Jones (LJ) fluid. The model considered here corresponds to an interacting system of non-relativistic particles. The presence of both attractive and repulsive interactions leads to a first-order liquid-gas phase transition (LGPT). The MD simulations of the LJ fluid provide a microscopic approach to fluctuations in a system with a phase transition. They also
allow one to study deviations from the baselines based on the GCE. This study thus complements earlier analyses of correlations and fluctuations in the first-order phase transition region performed using hadronic transport with mean fields [28, 29] or fluid dynamics with a finite-range term [30, 31]. With regard to mean quantities, the molecular dynamics of non-equilibrium finite systems was studied previously in the context of heavy-ion collisions in Refs. [32, 33, 34, 35, 36].
Our study is motivated by the measurements of baryon number fluctuations in heavy-ion collisions to probe the QCD phase structure. In particular, the LJ fluid can naturally model the nuclear liquid-gas transition between a dilute gas of nucleons and clusters and the dense nuclear liquid, if one regards the LJ particles as nucleon degrees of freedom. This nuclear LGPT is probed in nuclear collisions at low energies [37, 38, 39]. Experiments at higher collision energies, on the other hand, study the confinement-deconfinement transition, which may contain a critical point and a line of first-order phase transition at finite baryon density [4, 5]. The relevance of the LJ fluid to model the confinement-deconfinement transition may seem less evident, given that it does not describe the expected change of degrees of freedom from hadrons to quarks. Nevertheless, simulations of the LJ fluid do provide useful guidance to understand the behavior of baryon number fluctuations near the QCD CP, for two reasons: (i) the behavior of baryon number fluctuations is universal near the QCD CP and governed by the 3D-Ising universality class [5] - the same universality class that characterizes critical behavior in the LJ fluid [40]; (ii) the LJ fluid simulations can test the validity of the model-independent SAM procedure for subtracting the canonical ensemble effects on baryon number cumulants, this is particularly relevant given that the finite-size effects, that hinder the accuracy of the SAM, can be significant in the mixed phase region.
This work focuses on fluctuations in the mixed-phase region of a first-order phase transition. While significant attention has been given to higher-order measures of fluctuations of conserved charges at supercritical temperatures and in pure phases (see e.g. Refs. [41; 42; 43; 9; 10; 44; 45]), less attention has been paid to the mixed phase. However, it is possible for a system created in relativistic nucleus-nucleus collisions to enter the mixed phase of a first-order phase transition under certain conditions. This is particularly relevant because of the ongoing program of the HADES collaboration at the GSI Helmholtzzentrum für Schwerionenforschung GmbH to measure higher-order net-proton and net-charge fluctuations in central Au+Au collisions at collision energies of \(0.2A-1.0A\) GeV. The system created in these collisions may undergo freeze-out in the mixed phase of the nuclear LGPT.
In our previous work [7], we studied a supercritical isotherm, \(T=1.06\,T_{c}\), observing a sizable increase of particle number fluctuations near the critical particle number density \(n\approx n_{c}\). In the present work, we study particle number fluctuations along a subcritical isotherm, \(T=0.76\,T_{c}\), inside the liquid-gas mixed phase. First, we look at the metastable part of the mixed phase - the so-called nucleation region. The simulation results are compared to a simple model of non-interacting particle clusters. Another part of the liquid-gas mixed phase - the spinodal decomposition region - demonstrates anomalously large particle number fluctuations. This happens at temperatures \(T\) and particle number densities \(n\) far away from the CP. A simple analytical toy model is constructed to clarify these effects.
The paper is organised as follows. The details of MD with the LJ potential and the results of the simulations for particle number fluctuations are presented in Sec. II. A brief description of the mixed phase structure is given in Sec. III. A simple model of non-interacting clusters in Sec. IV and a geometrical toy model in Sec. V are developed to interpret the MD results in the nucleation and spinodal decomposition regions, respectively. A summary in Sec. VI closes the article.
Figure 1: The liquid-gas region of the Lennard-Jones fluid phase diagram. Horizontal dashed lines show the subcritical isotherm \(\tilde{T}=0.76\) studied in this work and the supercritical isotherm \(\tilde{T}=1.06\) explored in Ref. [7]. Solid and dashed lines show the binodal and spinodal lines, respectively. The blue and green regions correspond to the nucleation and cavitation metastable parts of the mixed phase, respectively. The grey area denotes the spinodal decomposition region. The black star represents the CP. The squares denote the \((\tilde{n},\tilde{T})\) points where the MD simulations in the mixed phase have been performed.
## II Molecular dynamics with Lennard-Jones potential
We use molecular dynamics simulations of the classical non-relativistic system of particles interacting via the Lennard-Jones (LJ) potential,
\[V_{\rm LJ}(r)=4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{ \sigma}{r}\right)^{6}\right]. \tag{1}\]
The first term in Eq. (1) corresponds to the repulsive forces at short distances whereas the second one describes the attractive interactions. The parameter \(\varepsilon\) describes the depth of the attractive well, and \(\sigma\) corresponds to the size of the particle, which also defines the distance scale. It is convenient to introduce dimensionless reduced variables,
\[V_{\rm LJ}^{*}(r^{*})=V_{\rm LJ}(r)/\varepsilon=4\left((r^{*})^{-12}-(r^{*})^{ -6}\right)\, \tag{2}\]
with \(r^{*}=r/\sigma\) being the reduced distance. The reduced thermodynamic variables are the temperature \(T^{*}=T/\varepsilon\), particle number density \(n^{*}=n\sigma^{3}\), and pressure \(p^{*}=p\sigma^{3}/\varepsilon\). The particle's mass can be utilized to define the dimensionless time variable, \(t^{*}=t\sqrt{\varepsilon/(m\sigma^{2})}\).
The LJ system possesses a rich phase diagram (see e.g. Ref. [46] for an overview). At present, there are no direct analytical tools to compute the phase diagram in the LJ system. Nevertheless, numerical methods (see, e.g. Ref. [47]) allow one to compute the approximate locations of the LGPT binodal and spinodal lines, as well as the CP location. This part of the phase diagram is of primary interest in the present work, and it is shown in Fig. 1 in terms of the reduced temperature and density. The CP location has been estimated from numerous MD simulations [48]
\[T_{c}^{*}=1.321\pm 0.007\,,\qquad n_{c}^{*}=0.316\pm 0.005\,,\qquad p_{c}^{*}=0.129\pm 0.005\,. \tag{3}\]
In what follows, we use a set of dimensionless variables scaled by the critical values
\[\tilde{T}\ \equiv\frac{T}{T_{c}}=\frac{T^{*}}{T_{\rm c}^{*}}\,\quad\tilde{n} \equiv\frac{n}{n_{c}}=\frac{n^{*}}{n_{\rm c}^{*}}\,\quad\tilde{p}\equiv\frac{p}{p_{c}}=\frac{p^{*}}{p_{ \rm c}^{*}}. \tag{4}\]
The quantities (3) correspond to the thermodynamic limit when the system's volume \(V\to\infty\). For finite systems, the physical meaning of the LGPT and its CP should be treated with caution, as they are only rigorously defined for infinite systems.
The MD simulations are performed by numerically integrating Newton's equations of motion using the Velocity Verlet integration method. The simulations are done for a system of \(N_{0}=400\) interacting particles in a cubic box of volume \(V_{0}\) with periodic boundary conditions with minimum image convention.
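As a minimal illustration of this setup, the sketch below implements the LJ forces with the minimum image convention and a single Velocity Verlet step, in reduced units (\(\sigma=\varepsilon=m=1\)); a production code would add an interaction cutoff and neighbor lists, which are omitted here for brevity.

```python
import numpy as np

def lj_forces(pos, box):
    """Pairwise LJ forces for positions pos (N x 3) in a periodic cubic box."""
    forces = np.zeros_like(pos)
    for i in range(len(pos) - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)           # minimum image convention
        r2 = np.sum(d * d, axis=1)
        inv6 = r2**-3                          # r^-6 in reduced units
        # F(r) = 24 (2 r^-12 - r^-6) / r^2 * d_vec, from V_LJ = 4 (r^-12 - r^-6)
        fij = (24.0 * (2.0 * inv6**2 - inv6) / r2)[:, None] * d
        forces[i] -= fij.sum(axis=0)           # reaction on particle i
        forces[i + 1:] += fij                  # forces on particles j > i
    return forces

def velocity_verlet_step(pos, vel, box, dt):
    vel = vel + 0.5 * dt * lj_forces(pos, box)   # half kick (m = 1)
    pos = (pos + dt * vel) % box                 # drift with periodic wrap
    vel = vel + 0.5 * dt * lj_forces(pos, box)   # second half kick
    return pos, vel
```

In practice the forces from the end of one step are reused at the start of the next, so only one force evaluation per step is needed.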
In the mixed phase, the time needed to reach thermal equilibrium can be rather long (see Ref. [49]). After the equilibration time, \(\tilde{t}_{\rm eq}=50\), the LJ system reaches a state with a stable temperature (see Ref. [7]). The duration of all simulations is \(\tau=10^{6}\). This long time interval guarantees small deviations (less than \(1\%\)) of the scaled variance between independent simulations.
Figure 2: The particle number distributions in a subvolume \(V=\alpha V_{0}\) for the system with \(N_{0}=400\) particles. The distributions \(P(N)\) obtained from the MD simulations at \(\tilde{T}=0.76\) and different values of \(\tilde{n}\) inside the mixed phase.
The total particle number \(N_{0}\) in the entire volume is fixed. To study the fluctuations of particle number one thus needs to choose a subvolume \(V=\alpha V_{0}\) (\(0<\alpha<1\)) of the whole volume. We choose a cubic subvolume placed in the geometrical center of the system. From the MD simulations, we obtain the normalized probability distribution \(P(N)\) to observe \(N\) particles in the subvolume \(V\).
A useful measure of particle number fluctuations is the scaled variance:
\[\omega=\frac{\left\langle N^{2}\right\rangle-\left\langle N\right\rangle^{2}}{ \left\langle N\right\rangle}. \tag{5}\]
In MD simulations, the values \(\left\langle N\right\rangle\) and \(\left\langle N^{2}\right\rangle\) can be calculated as time averages. In Fig. 2 we present the \(P(N)\) distribution at the subcritical temperature \(\tilde{T}=0.76\) for several different particle number densities \(\tilde{n}\) inside the mixed phase. The total number of particles is fixed as \(N_{0}=400\) and the subvolume fraction is taken as \(\alpha=0.2\). From Fig. 2, one observes substantial deviations of the resulting distributions from the Poisson distribution baseline. For \(\tilde{n}\approx 1\), a double-hump distribution is clearly observed.
Note that for any finite \(\alpha\), fluctuations of \(N\) in the subvolume \(V\) will be influenced by the exact global conservation of the total particle number \(N_{0}\) in the full volume \(V_{0}\). In the large volume limit, these effects can be taken into account analytically [21]. One can define a scaled variance \(\tilde{\omega}\) corrected for exact \(N_{0}\) conservation as
\[\tilde{\omega}=\frac{\omega}{1-\alpha}. \tag{6}\]
The results for the corrected scaled variance \(\tilde{\omega}\) as a function of \(\tilde{n}\) are presented in Fig. 3 for both (a) the subcritical and (b) the supercritical isotherms \(\tilde{T}=0.76\) and \(\tilde{T}=1.06\), respectively. All results are obtained for \(N_{0}=400\) and \(\alpha=0.2\), as in Fig. 2.
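For reference, the estimators behind these numbers are straightforward; a minimal sketch, with a Poisson toy series standing in for the MD time series of subvolume counts, reads:

```python
import numpy as np

def scaled_variances(counts, alpha):
    """Scaled variance, Eq. (5), and its corrected version, Eq. (6),
    from a time series of particle counts N(t) in the subvolume."""
    counts = np.asarray(counts, dtype=float)
    omega = counts.var() / counts.mean()
    return omega, omega / (1.0 - alpha)

rng = np.random.default_rng(0)
toy = rng.poisson(80.0, size=10**5)       # Poisson baseline: omega ~ 1
print(scaled_variances(toy, alpha=0.2))
```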
One can immediately observe that fluctuations are much larger in the mixed phase at \(\tilde{T}=0.76\) compared to those along the supercritical isotherm \(\tilde{T}=1.06\), slightly above the critical point. This indicates that, although the fluctuations exhibit singular behavior in the vicinity of the CP, they can be even larger in the mixed phase region away from the critical point.
In the following sections, we provide a brief overlook of the structure of the liquid-gas mixed phase and analyze the observed large values of \(\tilde{\omega}\) in the mixed phase in terms of simple analytical models.
## III Mixed phase structure
One can specify three different regions inside the mixed phase: nucleation, spinodal decomposition, and cavitation (see, e.g., Refs. [50; 51]). They are shown in Fig. 1 by blue, grey, and green colors, respectively. Their microscopic structures are symbolically illustrated in Fig. 4. The nucleation region includes a mixture of particles and small clusters (liquid droplets), whereas the cavitation region is represented by the liquid with small bubbles of the gaseous phase. In the context of heavy-ion collisions, clusters correspond to nuclear fragments, whose distributions were previously studied using MD for expanding systems in Refs. [32; 33; 34; 35; 52]. Experimental measurements of nuclear fragment mass distributions were used to probe the nuclear LGPT and the CP (see, e.g., Refs. [53; 54; 55; 37]). The nucleation and cavitation regions of the mixed phase correspond to the metastable states. In the MD simulations one expects to achieve
an equilibrated _steady state_ in these regions after a sufficiently long time. In most cases, however, the time to reach complete equilibrium appears very long. Note also that the precise physical meaning and the location of the boundaries of the different regions depend on the size of the system (see, e.g., Refs. [56; 57; 58]) and can be sensitive to the collective motion [59; 60; 61].
The spinodal decomposition region is fundamentally different from the metastable nucleation and cavitation ones (see, e.g., Refs. [62; 63]). The LGPT manifests itself here as a fast separation of the system into the gaseous and liquid phases. The equilibrium states in this region (see, e.g., Ref. [64]) are achievable in the MD simulations. The heterogeneous structure of the spinodal decomposition region is illustrated in Fig. 5; it strongly influences the particle number fluctuations obtained in the MD simulations. This is discussed in more detail in Sec. V. One can note a principal difference between the heterogeneous two-phase states in the spinodal decomposition region and the homogeneous mixtures of particles plus clusters in the nucleation region and liquid with gaseous bubbles in the cavitation region.
## IV Mixture of particles and clusters
To clarify some general features of the nucleation region, let us consider a non-interacting multi-component gas of \(k\)-particle clusters (\(k=1,2,\ldots\)). The GCE partition function reads
\[\begin{split}& Z_{\rm GCE}=\prod_{k\geq 1}\sum_{N_{k}=0}^{ \infty}\frac{\left(Vg(k)e^{\mu k/T}\right)^{N_{k}}}{N_{k}!}(2\pi kmT)^{3N_{k}/2 }\\ &=\prod_{k\geq 1}\exp\left[V(2\pi km\,T)^{3/2}\,g(k)\,\exp\left( \frac{\mu k}{T}\right)\right],\end{split} \tag{7}\]
where \(V\), \(T\), and \(\mu\) are, respectively, the system volume, temperature, and chemical potential that corresponds to the total conserved number \(N\) of particles over all clusters; \(g(k)\) is the "degeneracy" factor (number of internal states of the \(k\)-th cluster), and \(m\) is the mass of a single particle, such that the mass of a \(k\)-particle cluster equals \(M_{k}=km\). The system is considered to be in chemical equilibrium, thus \(\mu_{k}=k\mu\). The CE partition function \(Z_{\rm CE}(V,T,N)\) of the cluster model (7) is considered in Appendix A, where it is shown that the moments \(\langle k^{l}\rangle\) of
Figure 4: Different regions along a subcritical isotherm of the liquid-gas transition: (a) gaseous phase, (b) nucleation, (c) spinodal decomposition, and (d) cavitation.
Figure 5: Possible position of the liquid phase in the spinodal decomposition region relative to the acceptance subvolume (red dashed square).
the cluster distribution are identical between the CE and the GCE in the thermodynamic limit.
The cluster distribution (i.e., the normalized probability to find a cluster of size \(k\) in the system) can be written in the form
\[P_{k}(T,\mu)\equiv\frac{\langle N_{k}\rangle}{\sum\limits_{l\geq 1}\langle N_{l} \rangle}=\frac{k^{3/2}g(k)\exp\left(\frac{\mu k}{T}\right)}{\sum\limits_{l\geq 1 }l^{3/2}g(l)\exp\left(\frac{\mu l}{T}\right)}\, \tag{8}\]
where
\[k\langle N_{k}\rangle=\frac{\partial\ln\left[Z_{\rm GCE}(k)\right]}{\partial\left(\frac{\mu}{T}\right)} \tag{9}\]
is the GCE average number of \(k\)-particle clusters. The cluster pressure \(p\) and particle number density \(n\) can be found as
\[p =(2\pi m)^{3/2}T^{5/2}\sum\limits_{k\geq 1}k^{3/2}g(k)\exp\left( \frac{\mu k}{T}\right), \tag{10}\] \[n =(2\pi mT)^{3/2}\sum\limits_{k\geq 1}k^{5/2}g(k)\exp\left( \frac{\mu k}{T}\right). \tag{11}\]
Using Eqs. (8) and (10) one can rewrite the pressure as
\[p=\frac{nT}{\langle k\rangle}\, \tag{12}\]
and the scaled variance \(\omega_{\rm gce}\)
\[\omega_{\rm gce}=T\left[\frac{dp}{dn}\right]^{-1}=\frac{T}{n}\left(\frac{ \partial n}{\partial\mu}\right)_{T}=\frac{\langle k^{2}\rangle}{\langle k \rangle}\, \tag{13}\]
where we defined \(\langle k^{l}\rangle\equiv\sum_{k\geq 1}k^{l}P_{k}\). Therefore, the first two moments of the cluster probability distribution \(P_{k}\) define both the system pressure (12) and scaled variance (13). Due to the evident inequalities, \(\langle k\rangle\geq 1\) and \(\langle k^{2}\rangle\geq\langle k\rangle\), the results (12) and (13) demonstrate that in a mixture of noninteracting \(k\)-particle clusters (\(k=1,2,\ldots\)) the system pressure becomes smaller and the scaled variance larger than the corresponding ideal gas values \(p_{\rm id}=nT\) and \(\omega_{\rm id}=1\) with no cluster formation, i.e., when \(g(k=1)=1,\ g(k>1)=0\). A general expression for cumulants \(\kappa_{n}[N]\) of any order \(n\) can be obtained:
\[\kappa_{n}[N]=\frac{\partial^{n}\ln\left[Z_{\rm GCE}\right]}{\partial\left( \frac{\mu}{T}\right)^{n}}=\langle k^{n}\rangle\sum\limits_{k\geq 1}\langle N_{k}\rangle. \tag{14}\]
The model of noninteracting clusters discussed above can be considered as an approximation for the LJ fluid in the nucleation region. The attractive part of the LJ potential is responsible for the formation of \(k\)-particle clusters. On the other hand, the particle number density is still sufficiently small to justify neglecting the repulsive interactions between clusters.
By definition, a cluster is a bound system of particles. There are several ways to define clusters in molecular dynamics simulations [65, 66]. In the following, we will use the Hill algorithm [67]. A pair of particles \(i\) and \(j\) is assumed to be bound if their rest frame energy is
Figure 6: Cluster probability distributions \(P_{k}\) extracted from the MD of Lennard-Jones fluid for \(N_{0}=400\) and \(\tilde{T}=0.76\) at gaseous binodal \(\tilde{n}=0.16\) (a) and gaseous spinodal \(\tilde{n}=0.35\) (b). For comparison, \(P_{k}\) distributions are also presented for supercritical temperature \(\tilde{T}=1.06\).
negative,
\[(\tilde{v}_{i}-\tilde{v}_{j})^{2}+\tilde{V}_{LJ}(|\tilde{r}_{i}-\tilde{r}_{j}|)<0. \tag{15}\]
A given particle is assumed to belong to a cluster if it is bound to at least one other particle in that cluster. Finding clusters is thus equivalent to finding connected components in an undirected graph, whose vertices correspond to particles and where all bound pairs of particles [i.e. the condition (15) is satisfied] are connected by edges. We use depth-first search (DFS) to find the connected components of the graph and thus identify all the clusters.
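A compact sketch of this procedure might look as follows, taking the pair criterion (15) at face value in reduced units (the kinetic-energy prefactor convention is an assumption here) and using an iterative DFS; the \(O(N^{2})\) pair loop is adequate for \(N_{0}=400\). The last function evaluates \(\omega_{\rm gce}=\langle k^{2}\rangle/\langle k\rangle\) of Eq. (13) from the resulting cluster sizes.

```python
import numpy as np

def cluster_sizes(pos, vel, box):
    """Hill clusters: bond particles with negative pair energy, Eq. (15),
    then collect connected components with an iterative depth-first search."""
    n = len(pos)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            d -= box * np.round(d / box)            # minimum image
            r2 = np.dot(d, d)
            v2 = np.sum((vel[i] - vel[j])**2)
            if v2 + 4.0 * (r2**-6 - r2**-3) < 0.0:  # Eq. (15) with V_LJ of Eq. (2)
                adj[i].append(j)
                adj[j].append(i)
    seen, sizes = [False] * n, []
    for start in range(n):
        if seen[start]:
            continue
        stack, size = [start], 0
        seen[start] = True
        while stack:                                # iterative DFS
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if not seen[w]:
                    seen[w] = True
                    stack.append(w)
        sizes.append(size)
    return sizes

def omega_gce(sizes):
    k = np.asarray(sizes, dtype=float)
    return np.mean(k**2) / np.mean(k)    # <k^2>/<k> over P_k, Eq. (13)
```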
Utilizing the above procedure, one obtains the probability distribution \(P_{k}\) in a Lennard-Jones fluid for given \(\tilde{n}\) and \(\tilde{T}\) from MD simulations. Examples of the extracted \(P_{k}\) distributions for \(\tilde{T}=0.76\) and \(\tilde{T}=1.06\) are shown in Figs. 6 (a) and (b) for \(\tilde{n}=0.16\) and \(\tilde{n}=0.35\), respectively. The results indicate that cluster formation becomes more significant when either temperature \(\tilde{T}\) is decreased or particle number density \(\tilde{n}\) is increased.
We then use the extracted \(P_{k}\) distributions to evaluate \(\langle k\rangle\) and \(\langle k^{2}\rangle\), which we then plug into Eq. (13) to estimate the GCE scaled variance in the cluster model. These results are compared with \(\tilde{\omega}\) calculated in a subvolume \(V=\alpha V_{0}\) directly from MD simulations. The cluster model results for \(\tilde{T}=0.76\) are shown in Fig. 7 by the orange line. These results agree qualitatively with the direct MD simulation data (black line) in the range of densities \(0.16\lesssim\tilde{n}\lesssim 0.35\) corresponding to the nucleation region. In particular, cluster formation explains the strong rise (approximately by a factor of 20) of the scaled variance with \(\tilde{n}\) in the nucleation region. The cluster model, however, fails to describe the peak in \(\tilde{\omega}\) seen in MD simulations at higher densities, indicating its breakdown in the spinodal region.
## V Fluctuations in the spinodal region
In Ref. [68], the GCE particle number fluctuations were calculated in the mixed phase region. It was assumed that both the liquid and gas phases are entirely inside the system volume \(V_{0}\) that tends to infinity. In MD simulations here, we instead study fluctuations in a subvolume \(V=\alpha V_{0}\), which corresponds to a different scenario. We thus develop new models to understand qualitative features of the behavior observed in MD simulations.
In the spinodal region, one assumes that the volume \(V\) is partitioned into volumes \(V_{l}=xV\) and \(V_{g}=yV\) occupied by the liquid and gaseous phases, respectively (\(0<x<1\), \(y\equiv 1-x\)). The corresponding particle number densities in the liquid and gaseous phases are \(\rho_{l}\equiv N_{l}/V_{l}\) and \(\rho_{g}\equiv N_{g}/V_{g}\). The \(r\)th moment of the particle number distribution in the subvolume \(V=\alpha V_{0}\) can then be presented as follows:
\[\langle N^{r}\rangle=\langle(N_{l}+N_{g})^{r}\rangle=V^{r}\left\langle(x\rho _{l}+y\rho_{g})^{r}\right\rangle\,. \tag{16}\]
The fluctuating quantities are the densities \(\rho_{l}\), \(\rho_{g}\), and the volume fraction \(x\), whereas the volume \(V\) is fixed. Following Refs. [69; 70] we assume that the fluctuations of all these quantities are independent in the thermodynamic limit, i.e., \(\langle x^{k}\rho_{l}^{m}\rho_{g}^{n}\rangle=\langle x^{k}\rangle\langle\rho_{l}^{m}\rangle\langle\rho_{g}^{n}\rangle\) for any non-negative integers \(k\), \(m\), and \(n\).
The first moment (\(r=1\)) reduces via Eq. (16) to
\[\langle N\rangle=x_{0}Vn_{l}+y_{0}Vn_{g}=Vn\,\,, \tag{17}\]
where \(x_{0}=\langle x\rangle\) is the mean volume fraction occupied by the liquid phase, \(y_{0}\equiv 1-x_{0}\), and \(n_{l}=\langle\rho_{l}\rangle\) and \(n_{g}=\langle\rho_{g}\rangle\) are the mean densities in the liquid and gaseous phases, respectively. The particle number density is equal to \(n\equiv\langle N\rangle/V=N_{0}/V_{0}\). Equation (17) defines \(x_{0}\) in terms of the mean densities:
\[x_{0}\equiv\langle x\rangle=\frac{n-n_{g}}{n_{l}-n_{g}}\,\,. \tag{18}\]
At fixed temperature \(T<T_{c}\), the mean densities of the liquid \(n_{l}\) and gaseous \(n_{g}\) phases are assumed to remain constant with respect to the system's particle number density \(n\) in the spinodal region in the thermodynamic limit.
Figure 7: The points connected by the solid line correspond to the MD results for \(N_{0}=400\) and \(\alpha=0.2\) at \(\tilde{T}=0.76\). The orange line demonstrates the cluster model results in the nucleation region \(0.16\leq\tilde{n}\leq 0.35\). The dashed line shows the results of the Minecraft model in the spinodal region \(0.35\leq\tilde{n}\leq 1.75\), and the dash-dotted line is its extension to the nucleation region.
These quantities coincide with the corresponding values on the liquid (right) and gaseous (left) binodals.
Using Eq. (16) one finds the variance of particle number distribution (see Ref. [68] for details):
\[\begin{split}\text{Var}[N]&\equiv\langle N^{2}\rangle-\langle N\rangle^{2}\\ &=\text{Var}_{x}[N_{l}]\left(1+\frac{\text{Var}[x]}{x_{0}^{2}}\right)+\text{Var}_{x}[N_{g}]\left(1+\frac{\text{Var}[x]}{y_{0}^{2}}\right)\\ &\quad+V^{2}(n_{l}-n_{g})^{2}\text{Var}[x]\,.\end{split} \tag{19}\]
Here \(\text{Var}_{x}[N_{l,g}]\) is the variance of \(N_{l,g}\) at fixed volume fraction \(x\) and \(\text{Var}[x]\) is the variance of the \(x\) distribution.
Suppose that there are several blobs of the liquid and gaseous phases, and all of them are much smaller than the subvolume \(V\). This would correspond to a spatially homogeneous mixed phase. In this case, \(\text{Var}[x]\) is expressed in terms of cumulants of the \(V_{l}\) distribution as \(\text{Var}[x]\equiv V^{-2}\text{Var}[V_{l}]\). In the thermodynamic limit, \(V\to\infty\), all cumulants of extensive quantities are proportional to the system volume, i.e., \(\text{Var}_{x}[N_{l,g}]\sim V\) and \(\text{Var}[V_{l}]\sim V\). Equation (19) then reduces to
\[\text{Var}[N] =\text{Var}_{x}[N_{l}]+\text{Var}_{x}[N_{g}]\] \[+V^{2}(n_{l}-n_{g})^{2}\text{Var}[x]\, \tag{20}\]
where all terms are linear in \(V\). The result (20) coincides with that obtained for the GCE in Ref. [68], and it corresponds to finite values of the scaled variance at \(T<T_{c}\) in the thermodynamic limit.
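The decomposition (20) is just the law of total variance and can be checked with a small Monte Carlo experiment; the Poisson statistics within each phase and the Beta-distributed volume fraction below are our own illustrative assumptions, not taken from the simulations.

```python
import numpy as np

rng = np.random.default_rng(2)
V, nl, ng = 1000.0, 0.8, 0.1                 # toy subvolume and binodal densities
x = rng.beta(50.0, 25.0, size=10**6)         # fluctuating liquid volume fraction
N = rng.poisson(nl * x * V) + rng.poisson(ng * (1.0 - x) * V)

# law of total variance: E[Var(N|x)] + Var(E[N|x])
x0 = x.mean()
rhs = V * (nl * x0 + ng * (1.0 - x0)) + V**2 * (nl - ng) ** 2 * x.var()
print(N.var(), rhs)                          # agree within Monte Carlo error
```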
Note that the above derivation is based on the assumption of homogeneity. This assumption is valid for pure phases. In the mixed-phase region, however, it may only be reasonable when applied to long-lived metastable phases. Such a configuration of the system cannot be viewed as an equilibrium configuration in a region of spinodal decomposition. There, the sizes of the liquid and gaseous blobs are both of the order of the total volume \(V_{0}\), and their volumes are comparable to the subvolume \(V\). Thus, the whole picture is strongly heterogeneous (see Fig. 4 (c) and Fig. 5). As a consequence, \(\text{Var}[x]\) becomes volume independent; thus, the last term in Eq. (19) is quadratic in \(V\) and makes the dominant contribution to the fluctuations. Keeping only this last term, one obtains:
\[\text{Var}[N]=V^{2}(n_{l}-n_{g})^{2}\text{Var}[x]\, \tag{21}\]
and
\[\begin{split}\tilde{\omega}[N]&=\frac{\text{Var}[N] }{(1-\alpha)\langle N\rangle}\\ &=\alpha(1-\alpha)N_{0}\frac{(n_{l}-n_{g})^{2}}{n^{2}}\text{Var }[x]\.\end{split} \tag{22}\]
This result indicates that \(\tilde{\omega}[N]\) scales with \(N_{0}\), i.e., the scaled variance diverges in the thermodynamic limit. We checked that for \(N_{0}\gg 400\) a substantial increase of \(\tilde{\omega}\) is indeed observed in MD simulations; a systematic study of this scaling behaviour is, however, beyond the scope of the present paper. In the following, we present estimates for \(\text{Var}[x]\).
**Small \(\alpha\) limit.** At \(\alpha\ll 1\) one has \(V_{\text{l}}\gg V\) and \(V_{\text{g}}\gg V\). This means that one can neglect the events when both phases are simultaneously present inside the subvolume \(V\), and the whole subvolume is entirely inside either the gaseous or liquid phase. The probability distribution \(P[x]\) thus reads
\[P[x]=x_{0}\,\delta(1-x)+y_{0}\,\delta(x). \tag{23}\]
From Eq. (23) one finds
\[\text{Var}[x]=x_{0}y_{0}. \tag{24}\]
The maximal value \(\text{Var}[x]=0.25\) is reached at \(x_{0}=0.5\). Using Eqs. (18), (21), and (24) one obtains:
\[\text{Var}[N]=V^{2}(n-n_{g})(n_{l}-n). \tag{25}\]
One sees that the scaled variance of the particle number distribution is indeed divergent inside the mixed phase in the thermodynamic limit, scaling with the subvolume as \(\tilde{\omega}\sim V\).
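Equation (24) follows from the two-point distribution (23) and is trivial to verify numerically; the value of \(x_{0}\) below is an arbitrary toy choice.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = 0.3                                        # toy mean liquid fraction
x = (rng.random(10**6) < x0).astype(float)      # samples of Eq. (23)
print(x.var(), x0 * (1.0 - x0))                 # both ~ Var[x] = x0*y0, Eq. (24)
```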
**Minecraft model.2** Now let us calculate \(\text{Var}[x]\) when the sizes of the volume, subvolume, and blobs are all comparable. For that we consider a simple "geometric" toy model of a cubic system with unit volume which contains both the liquid and the gaseous phase (see Fig. 8). The cubic subvolume \(V=\alpha\) is located in the center of the system with coordinates \((a_{\text{x}},a_{\text{y}},a_{\text{z}})=(0,0,0)\). The edge length of the subvolume is \(a=\sqrt[3]{\alpha}\). All liquid is condensed into a single blob which moves freely within the system. We neglect the effects of the blob's geometric form and assume that it has the shape of a perfect cube. The volume of the cube of liquid is \(V_{l}=x_{0}\). Correspondingly, its edge length is \(b=\sqrt[3]{x_{0}}\). The system has periodic boundary conditions; therefore, the coordinates \((b_{x},b_{y},b_{z})\) of the center of the cube of liquid are limited by \(-\frac{1}{2}<b_{\text{x}},b_{\text{y}},b_{\text{z}}<\frac{1}{2}\). The fraction \(x\) of the subvolume occupied by the liquid phase is the overlap volume between the cubic subvolume and the liquid cube divided by the subvolume \(V=\alpha\).
Footnote 2: This name is inspired by the popular 3D video game.
The system has three degrees of freedom - the coordinates of the liquid cube \((b_{x},b_{y},b_{z})\). Since the cube
center is uniformly distributed over \(-\frac{1}{2}<b_{\rm x},b_{\rm y},b_{\rm z}<\frac{1}{2}\), the three coordinates are independent. The fraction \(x\) is a function of these three coordinates and can be written as
\[x=\frac{f(b_{\rm x})f(b_{\rm y})f(b_{\rm z})}{\alpha}. \tag{26}\]
Here \(f(b_{\rm i})\) is the overlap length of the liquid blob with the subvolume along the coordinate \(i\), as a function of \(b_{\rm i}\).
The mean value \(\langle x\rangle\) can be found as
\[\langle x\rangle=\frac{1}{\alpha v}\left(\int_{-1/2}^{1/2}f(b_{\rm i}){\rm d }b_{\rm i}\right)^{3}=x_{0} \tag{27}\]
where \(v=1\) is the volume of the system. Similarly, one can calculate the variance of \(x\):
\[\mathrm{Var}[x]=-b^{6}+\left[\frac{b^{2}(3a-b)+\Theta_{a+b-1}(a+b-1)^{3}+\Theta_{b-a}(b-a)^{3}}{3a^{2}}\right]^{3}, \tag{28}\]
where \(\Theta_{...}\equiv\Theta[...]\) is the step function and, as before, \(a=\sqrt[3]{\alpha}\) and \(b=\sqrt[3]{x_{0}}\). One sees that Eq. (28) reduces to Eq. (24) when \(\alpha\to 0\). In other limiting cases \(\mathrm{Var}[x]\to 0\) when \(\alpha\to 1\) or \(x_{0}\to 1\) or \(x_{0}\to 0\). \(\mathrm{Var}[x]\) as a function of \(x_{0}\) and \(\alpha\) is shown in Fig. 9.
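Equation (28) can be cross-checked against a direct Monte Carlo evaluation of the model. The sketch below implements the periodic one-dimensional overlap \(f(b_{i})\) as the overlap of two arcs of lengths \(a\) and \(b\) on the unit circle (our own parametrization of \(f\)) and compares the sampled variance of \(x\) with the closed form.

```python
import numpy as np

def overlap_1d(d, a, b):
    # overlap of two arcs of lengths a and b on the unit circle whose
    # centres are a (signed) distance d apart; handles the wrap-around
    d = np.minimum(np.abs(d), 1.0 - np.abs(d))
    near = np.clip((a + b) / 2 - d, 0.0, min(a, b))
    far = np.clip((a + b) / 2 - (1.0 - d), 0.0, min(a, b))
    return near + far

def var_x_mc(alpha, x0, n=10**6, seed=1):
    a, b = alpha ** (1 / 3), x0 ** (1 / 3)      # edge lengths of subvolume and blob
    bc = np.random.default_rng(seed).uniform(-0.5, 0.5, size=(n, 3))
    x = np.prod(overlap_1d(bc, a, b), axis=1) / alpha
    return x.mean(), x.var()

def var_x_exact(alpha, x0):
    a, b = alpha ** (1 / 3), x0 ** (1 / 3)      # closed form, Eq. (28)
    t = b**2 * (3 * a - b)
    t += max(a + b - 1, 0.0) ** 3 + max(b - a, 0.0) ** 3
    return (t / (3 * a**2)) ** 3 - b**6

print(var_x_mc(0.2, 0.3))        # (~0.3, sampled Var[x]); mean recovers Eq. (27)
print(var_x_exact(0.2, 0.3))     # Var[x] from Eq. (28)
```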
The scaled variance \(\tilde{\omega}[N]\) given by Eq. (22), with \(\mathrm{Var}[x]\) estimated using the Minecraft model, Eq. (28), is shown in Fig. 7 in the spinodal and nucleation regions by the dashed and dash-dotted lines, respectively.
## VI Summary
We studied particle number fluctuations inside the mixed phase of a liquid-gas phase transition by utilizing molecular dynamics simulations of the Lennard-Jones fluid. The simulations were performed for \(N_{0}=400\) particles in a cubic box with periodic boundary conditions. The fluctuations are studied inside a cubic subvolume \(V=0.2\,V_{0}\) located in the geometrical center of the system.
First, we briefly explore the supercritical temperature, where one observes the approximate Gaussian shape of the \(P(N)\) distribution. The scaled variance \(\tilde{\omega}\) characterizes the width of the \(P(N)\) distribution. It first increases with density from \(\tilde{\omega}\approx 1\) at small \(\tilde{n}\) to its maximum above unity around the critical density \(\tilde{n}=1\), and then it decreases with \(\tilde{n}\) to small values \(\tilde{\omega}<1\). This is illustrated in Fig. 3 (b).
The situation differs in the mixed phase, \(\tilde{T}<1\). The structure of the \(P(N)\) distribution is significantly more intricate. For \(\tilde{n}\approx 1\), the distribution is bi-modal, as shown in Fig. 2. The corresponding variance of the particle number is much larger than in the pure phases [Fig. 3 (a)].
To understand the qualitative features of the observed behavior, we formulate two phenomenological toy models.
The first model describes the system as a non-interacting multi-component gas of \(k\)-particle clusters, taking the cluster probability distribution \(P_{k}\) directly from the MD simulations as input. The model describes semi-
Figure 8: The illustration of the Minecraft toy model of an equilibrium system in the unstable region of the mixed phase. The subsystem is shown by the grey cube in the center while the green cube represents the liquid. The remaining space of the system is occupied by gas.
Figure 9: The variance of the volume fraction occupied by the liquid phase, \(\mathrm{Var}[x]\), as a function of \(\langle x\rangle\equiv x_{0}\) and \(\alpha\) calculated in the Minecraft model [Eq. (28)]. The dashed line corresponds to the maximum value of \(\mathrm{Var}[x]\) at fixed \(\alpha\).
quantitatively the rapid increase of \(\tilde{\omega}\) with density in the nucleation region of the mixed phase, i.e., the region between the gaseous binodal and spinodal [Fig. 7].
The second model - the Minecraft model - is formulated for the spinodal region of the mixed phase. The particles are separated into two phases, namely, the liquid blob with volume \(V_{\rm l}\) surrounded by gas. The size of the blob \(V_{\rm l}\) can be expressed through the total density of the system \(\tilde{n}\) and densities on the binodals. The Minecraft model considers the geometrical effects that become important when the volumes \(V_{\rm l,g}\) and \(V\) are of comparable size. With this consideration, the model indicates that \(\tilde{\omega}\sim N_{0}\rightarrow\infty\), thus the variance is divergent in the thermodynamic limit inside the spinodal region.
The present work is motivated by the study of event-by-event fluctuations in nucleus-nucleus collisions to probe the phase structure of QCD. Our MD simulations inside the mixed phase were performed for 400 particles, while the fluctuations were studied in the subvolume \(V=0.2\,V_{0}\). These two parameters correspond to typical total numbers of nucleons and the percentage of accepted final particles in heavy-ion collisions. The results indicate that large fluctuations of particle number in coordinate space can be interpreted as a signal of the spinodal region of the first-order phase transition. However, there are significant differences between our calculations and heavy-ion collisions. One difference is that in heavy-ion collisions, particles are not detected during the equilibrium phase of the collision but only after they fly away to the detector. Another difference is that particle momenta, not the coordinates, are measured in the experiment. We plan to address these issues by performing MD simulations of expanding systems.
Our simulations provide a first microscopic model test of the subensemble acceptance method (SAM) [21; 22] in the mixed phase region of a first-order phase transition. The SAM is a method for correcting the baryon number cumulants in heavy-ion collisions, which is model-independent in the thermodynamic limit, and it was previously tested in the crossover region at supercritical temperatures in Ref. [7]. Our simulations reveal that the SAM remains accurate in metastable regions of the phase diagram but breaks down in the spinodal decomposition region. The reason is that the finite-size effects remain sizable even in large systems in this region of the phase diagram. The treatment of the canonical effects in the spinodal region is thus more complex. It will require appropriate generalizations of the SAM, such as including macroscopic geometrical effects encompassed by the Minecraft model introduced here.
Another future avenue is generalizing the presented analysis to higher-order moments of particle number distributions, such as skewness and kurtosis.
## Acknowledgements
The authors are thankful to Jeroen van den Brink, Volker Koch, Flavio Nogueira, Scott Pratt and Jan Steinheimer for fruitful comments and discussions. O.S. acknowledges the scholarship grant from the GET_INvolved Programme of FAIR/GSI and support by the Department of Energy Office of Science through grant no. DE-FG02-03ER41259. This work is supported by the National Academy of Sciences of Ukraine, Grant No. 0122U200259. M.I.G. and R.V.P. acknowledge the support from the Alexander von Humboldt Foundation. This work was supported by a grant from the Simons Foundation (Grant Number 1039151). H.St. appreciates the Judah M. Eisenberg Professor Laureatus of the Walter Greiner Gesellschaft/Forderverein fur physikalische Grundlagenforschung Frankfurt, and the Fachbereich Physik at Goethe Universitat.
## Appendix A Cluster partition function in the CE
For a non-interacting multi-component gas of \(k\)-particle clusters, the canonical ensemble (CE) partition function reads
\[Z_{\rm CE}(V,T,N) = \prod_{k=1}^{N}\sum_{N_{k}\geq 0}\frac{(Vg(k))^{N_{k}}}{N_{k}!}( 2\pi kmT)^{3N_{k}/2} \tag{A1}\] \[\times \delta\left[N-\sum_{k=1}^{N}kN_{k}\right]\,.\]
Applying the integral form of the Kronecker symbol,
\[\delta\left[N-\sum_{k=1}^{N}kN_{k}\right]=\int\limits_{0}^{2\pi}\frac{d\varphi}{2\pi}\exp\left[i\varphi\left(N-\sum_{k=1}^{N}kN_{k}\right)\right]\,, \tag{A2}\]
to Eq. (A1), one obtains
\[Z_{\rm CE}(V,T,N)=\int\limits_{0}^{2\pi}\frac{d\varphi}{2\pi}\ e^{-i\varphi N}\exp\left[\sum_{k\geq 1}r(k)e^{i\varphi k}\right]. \tag{A3}\]
Here \(r(k)\equiv Vg(k)(2\pi kmT)^{3/2}\).
Using the Maclaurin expansion, one has
\[\exp\left[\sum_{k\geq 1}r(k)e^{i\varphi k}\right]=\sum_{l=0}^{\infty}\frac{B_{l}(r(1),\ldots,l!\,r(l))}{l!}e^{i\varphi l}, \tag{A4}\]
where \(B_{l}\) are Bell polynomials [71].
Substituting (A4) into (A3) gives
\[Z_{\rm CE}(V,T,N)=\frac{B_{N}(r(1),\ldots,N!\,r(N))}{N!}. \tag{A5}\]
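Equation (A5) is directly computable with the standard recurrence for complete Bell polynomials, \(B_{n+1}=\sum_{k=0}^{n}\binom{n}{k}B_{n-k}\,x_{k+1}\). The sketch below assumes this recurrence and a toy cluster weight \(g(k)\) of our own choosing.

```python
from math import comb, exp, factorial, pi

def z_ce(r, N):
    """Z_CE via the complete Bell polynomial, Eq. (A5).

    Uses the recurrence B_{n+1} = sum_{k=0}^{n} C(n,k) B_{n-k} x_{k+1}
    with arguments x_j = j! * r(j), so that Z_CE = B_N / N!.
    """
    x = [factorial(j) * r(j) for j in range(1, N + 1)]
    B = [1.0]                                   # B_0 = 1
    for n in range(N):
        B.append(sum(comb(n, k) * B[n - k] * x[k] for k in range(n + 1)))
    return B[N] / factorial(N)

# toy cluster weight r(k) = V g(k) (2 pi k m T)^{3/2}; g(k) is our own choice
r = lambda k, V=1.0, m=1.0, T=1.0: V * exp(-0.5 * k) / k * (2 * pi * k * m * T) ** 1.5
print(z_ce(r, 10))
```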
From the above equations one finds the GCE partition function
\[Z_{\rm GCE}=\sum_{N=0}^{\infty}Z_{\rm CE}\exp\left(\frac{\mu N}{T}\right)=\prod_{k\geq 1}\exp\left(r(k)e^{\mu k/T}\right)\,, \tag{A6}\]
which coincides with Eq. (7). \(Z_{\rm CE}\) can be expressed in terms of \(Z_{\rm GCE}\) through the Mellin transformation
\[Z_{\rm CE}=\int\limits_{c-i\infty}^{c+i\infty}Z_{\rm GCE}\,e^{-\mu N/T}d\mu. \tag{A7}\]
The integral (A7) can be evaluated in the large \(N\) limit using the steepest descent method [72]. Therefore,
\[Z_{\rm CE}(V,T,N) \approx \sqrt{\frac{2\pi T^{2}}{\sum_{k=1}^{N}k^{2}r(k)e^{\mu_{0}k/T}}} \tag{A8}\] \[\times \exp\left(\sum_{k=1}^{N}r(k)e^{\mu_{0}k/T}-\frac{\mu_{0}}{T}N \right)\,,\]
where \(\mu_{0}(T,N)\) can be found from the saddle point equation
\[\sum_{k=1}^{N}kr(k)e^{\mu_{0}k/T}-N=0. \tag{A9}\]
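Numerically, \(\mu_{0}(T,N)\) is a root of a monotonically increasing function of \(\mu\) and can be found by bracketing; the bracket below is an assumption suited to the toy \(r(k)\) of the previous sketch and should be widened if the sign change falls outside it.

```python
import numpy as np
from scipy.optimize import brentq

def saddle_mu0(r, N, T=1.0):
    # solve Eq. (A9): sum_{k=1}^{N} k r(k) exp(mu0 k / T) = N
    k = np.arange(1, N + 1)
    rk = np.array([r(int(i)) for i in k])
    g = lambda mu: float(np.sum(k * rk * np.exp(mu * k / T))) - N
    return brentq(g, -50.0 * T, 2.0 * T)     # assumed bracket; widen if needed

print(saddle_mu0(r, 10))                     # reuses the toy r(k) defined above
```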
Equation (A8) indicates that the \(j\)-th moment of the cluster-size distribution in the large \(N\) limit reads
\[\left\langle k^{j}\right\rangle_{\rm CE}\ =\ \left\langle k^{j}\right\rangle_{\rm GCE}\ +\ O(N^{-1}). \tag{A10}\]
This result shows that all moments \(j=1,2,\ldots\) of the cluster-size distribution (\(k=1,\ldots,N\)) are the same in the CE and GCE in the thermodynamic limit \(N\to\infty\).
If \(N\) is large, this justifies the use of \(P_{k}\) probabilities from MD simulations as input into the calculations of fluctuations in the GCE using formulas (12) and (13).
|
2302.06189 | Infinitely many periodic solutions to a Lorentz force equation with
singular electromagnetic potential | We consider the Lorentz force equation $$
\frac{d}{dt}\left(\frac{m\dot{x}}{\sqrt{1-|\dot{x}|^{2}/c^{2}}}\right) = q
\left(E(t,x) + \dot x \times B(t,x)\right), \qquad x \in \mathbb{R}^3, $$ in
the physically relevant case of a singular electric field $E$. Assuming that
$E$ and $B$ are $T$-periodic in time and satisfy suitable further conditions,
we prove the existence of infinitely many $T$-periodic solutions. The proof is
based on a min-max principle of Lusternik-Schnirelmann type, in the framework of
non-smooth critical point theory. Applications are given to the problem of the
motion of a charged particle under the action of a Li\'enard-Wiechert potential
and to the relativistic forced Kepler problem. | Alberto Boscaggin, Walter Dambrosio, Duccio Papini | 2023-02-13T08:56:48Z | http://arxiv.org/abs/2302.06189v1 | # Infinitely many periodic solutions to a Lorentz force equation
###### Abstract
We consider the Lorentz force equation
\[\frac{d}{dt}\left(\frac{m\dot{x}}{\sqrt{1-|\dot{x}|^{2}/c^{2}}}\right)=q\left( E(t,x)+\dot{x}\times B(t,x)\right),\qquad x\in\mathbb{R}^{3},\]
in the physically relevant case of a singular electric field \(E\). Assuming that \(E\) and \(B\) are \(T\)-periodic in time and satisfy suitable further conditions, we prove the existence of infinitely many \(T\)-periodic solutions. The proof is based on a min-max principle of Lusternik-Schnirelmann type, in the framework of non-smooth critical point theory. Applications are given to the problem of the motion of a charged particle under the action of a Lienard-Wiechert potential and to the relativistic forced Kepler problem.
**Keywords:** Lorentz force equation, periodic solutions, non-smooth critical point theory, Lusternik-Schnirelmann category, Lienard-Wiechert potential, relativistic Kepler problem.
**AMS Subject Classification:** 34C25, 58E05, 58E30, 70H40, 78A35.
## 1 Introduction
According to the principles of electrodynamics [16], the motion of a slowly accelerated charged particle under the influence of an electromagnetic field is ruled by the Lorentz force equation
\[\frac{d}{dt}\left(\frac{m\dot{x}}{\sqrt{1-|\dot{x}|^{2}/c^{2}}}\right)=q\left( E(t,x)+\dot{x}\times B(t,x)\right),\qquad x\in\mathbb{R}^{3}, \tag{1.1}\]
where \(m\) is the mass of the particle, \(q\) is its charge and \(c\) is the speed of light; moreover, the electric and magnetic fields \(E\) and \(B\) are provided by the potentials \(V\) and \(A\) via the usual relations
\[E(t,x)=-\nabla_{x}V(t,x)-\partial_{t}A(t,x),\qquad B(t,x)=\mathrm{curl}_{x}A(t,x). \tag{1.2}\]
As well known (see, for instance, [13]) equation (1.1) is formally the Euler-Lagrange equation of the action functional
\[\int_{0}^{T}mc^{2}\left(1-\sqrt{1-\frac{|\dot{x}(t)|^{2}}{c^{2}}}\right)\,dt+ \int_{0}^{T}q\left(-V(t,x(t))+A(t,x(t))\cdot\dot{x}(t)\right)\,dt.\]
In spite of this, and probably due to the lack of smoothness of the kinetic part of the above functional, a systematic investigation of equation (1.1) with the tools of critical point theory has been initiated only very recently. More precisely, in [3, 4] a rigorous variational formulation in the space \(W^{1,\infty}\) is introduced, allowing for the use of non-smooth critical point theory in the version developed by Szulkin [21], and, as a consequence, several existence and multiplicity results are given for solutions of equation (1.1) with either Dirichlet or periodic boundary conditions (see also [14] for the use of topological techniques). However, in both papers the physically relevant case of singular electric and magnetic fields is not taken into account.
The aim of the present paper is to provide a contribution in this direction. To this end, we take advantage of a recent research [6] dealing with the equation
\[\frac{d}{dt}\left(\frac{m\dot{x}}{\sqrt{1-|\dot{x}|^{2}/c^{2}}}\right)=-\nabla _{x}V(t,x),\qquad x\in\mathbb{R}^{2}, \tag{1.3}\]
which is a version of (1.1) in the plane with \(A\equiv 0\) and \(q=1\). More precisely, in [6] equation (1.3) with a singular potential \(V\) given by \(V(t,x)=-\alpha/|x|-U(t,x)\) (with \(\alpha>0\)), is considered, namely
\[\frac{d}{dt}\left(\frac{m\dot{x}}{\sqrt{1-|\dot{x}|^{2}/c^{2}}}\right)=-\alpha \frac{x}{|x|^{3}}+\nabla_{x}U(t,x),\qquad x\in\mathbb{R}^{2}. \tag{1.4}\]
Let us point out that the motivation given in [6] for the above equation was not coming from electrodynamics, but rather from relativistic celestial mechanics: indeed, equation (1.4) is interpreted as a simple model, in special relativity, for the motion of a particle in a forced Kepler potential (see, for instance, [2] as well as the references in [5]). Of course, however, this can be of interest also in the context of electromagnetism and, actually, this interpretation is even more natural, since a rigorous treatment of the theory of gravitation should require the framework of general relativity: we refer to [15, Problem 34.3] for an interesting discussion and comparison about Kepler and Coulomb problems from the relativistic point of view.
By using minimization and min-max arguments in the framework of non-smooth critical point theory, it is proved in [6] that, for any external perturbation \(U\), non singular and \(T\)-periodic in time, equation (1.4) has infinitely many \(T\)-periodic solutions and, in particular, at least two \(T\)-periodic solutions of winding number \(k\) around the origin, for any integer \(k\neq 0\). Of course, such a result deeply relies on the presence of the singularity \(x=0\) for the potential \(V\), which produces a non-trivial topology for the domain of the action functional: the set of \(T\)-periodic paths winding \(k\) times around the origin is nothing but a connected component of the domain, and each of them (but the one with \(k=0\)) carries at least two periodic solutions of (1.4). Let us emphasize the universal character of this result, meaning that no assumptions on \(U\) (besides its smoothness) are needed: this is ultimately a consequence of the fact that a periodic path \(x\) winding around the origin with bounded velocity (since \(|\dot{x}|<c\)) is a priori bounded.
In this paper, we provide a sort of generalization of the result in [6] applying to the Lorentz force equation (1.1). More precisely, we consider an electrostatic potential \(V<0\) defined in a set \(\Omega\) of the form
\[\Omega=\{(t,x)\in\mathbb{R}\times\mathbb{R}^{3}\,:\,x\neq r_{j}(t),\,\forall \,\,j=1,\ldots,N\}, \tag{1.5}\]
where the functions \(r_{1},\ldots,r_{N}:\mathbb{R}\to\mathbb{R}^{3}\) are \(T\)-periodic (for some \(T>0\)), of class \(C^{1}\) with \(\|\dot{r}_{i}\|_{\infty}<c\) and such that \(r_{i}(t)\neq r_{j}(t)\) for every \(t\in[0,T]\) and \(i\neq j\). Moreover, we assume that \(V\) has a Keplerian blow-up at the boundary of \(\Omega\) (cf. assumption (V) in Section 3) and that the magnetic potential \(A\) satisfies the global condition
\[|A(t,x)|\leq-\frac{\kappa^{\prime}}{c}\,V(t,x),\quad\forall\,\,(t,x)\in\Omega, \tag{1.6}\]
for some \(\kappa^{\prime}\in(0,1)\). Under these conditions, if both \(A\), \(V\) and their derivatives tend to zero at infinity, we prove that (1.1) has infinitely many \(T\)-periodic solutions (cf. Theorem 3.1).
Let us point out that the structure of the singularities of \(V\), described via the set \(\Omega\), is modeled on the relevant case of Lienard-Wiechert potentials (cf. [16] and Section 4), corresponding to the motion of a charged particle under the effect of \(N\) moving charged particles \(q_{1},\ldots,q_{N}\). In this situation, the functions \(r_{1},\ldots,r_{N}\) are the motion laws of the particles generating the potentials, and \(V\) and \(A\) are given by
\[V(t,x)=\sum_{i=1}^{N}\frac{q_{i}}{4\pi\varepsilon_{0}}\,\frac{1}{1-\eta_{i}(t _{i},x)\cdot\beta_{i}(t_{i})}\,\frac{1}{|x-r_{i}(t_{i})|},\qquad\beta_{i}(t)= \frac{\dot{r}_{i}(t)}{c},\ \eta_{i}(t,x)=\frac{x-r_{i}(t)}{|x-r_{i}(t)|},\]
and
\[A(t,x)=\sum_{i=1}^{N}\frac{\beta_{i}(t_{i})}{c}\,V_{i}(t,x),\]
where \(t_{i}=t_{i}(t,x)\) is the so-called retarded time (see (4.2) in Section 4). In particular, let us notice that condition (1.6) is satisfied since \(\|\dot{r}_{i}\|_{\infty}<c\), for every \(i=1,\ldots,N\).
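For concreteness, the retarded time solves the standard implicit condition \(c\,(t-t_{i})=|x-r_{i}(t_{i})|\), which we take to be the content of (4.2); since \(\|\dot{r}_{i}\|_{\infty}<c\), the fixed-point map below is a contraction. This solver is our own sketch, with a toy source motion.

```python
import numpy as np

def retarded_time(t, x, r, c=1.0, tol=1e-12, max_iter=200):
    # fixed-point iteration for the retarded-time condition
    #   c * (t - tr) = |x - r(tr)|
    # which is a contraction because |r'| < c
    tr = t
    for _ in range(max_iter):
        tr_new = t - np.linalg.norm(x - r(tr)) / c
        if abs(tr_new - tr) < tol:
            return tr_new
        tr = tr_new
    raise RuntimeError("retarded-time iteration did not converge")

# toy source: circular motion of radius 1 at speed 0.5 c
r = lambda s: np.array([np.cos(0.5 * s), np.sin(0.5 * s), 0.0])
print(retarded_time(0.0, np.array([3.0, 0.0, 0.0]), r))
```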
As a second application of our main result, going back to the relativistic celestial mechanics framework, we can prove the existence of infinitely many \(T\)-periodic solutions for the relativistic forced Kepler problem in the space
\[\frac{d}{dt}\left(\frac{m\dot{x}}{\sqrt{1-|\dot{x}|^{2}/c^{2}}}\right)=-\alpha \frac{x}{|x|^{3}}+\nabla_{x}U(t,x),\qquad x\in\mathbb{R}^{3},\]
when \(U>0\) and \(U\to 0\) for \(|x|\to+\infty\) together with its gradient (cf. Theorem 4.2 in Section 4). In particular, this provides a partial generalization of the result given in [6] for the planar case mentioned above.
For the proof of Theorem 3.1 we use a variational approach, combining arguments from both [3, 4] and [6]. More precisely, we consider the functional \(I:W^{1,\infty}_{T}\to(-\infty,+\infty]\) defined as
\[I(x)=\int_{0}^{T}c^{2}\left(1-\sqrt{1-\frac{|\dot{x}(t)|^{2}}{c^{2}}}\right) \,dt+\int_{0}^{T}\left(-V(t,x(t))+A(t,x(t))\cdot\dot{x}(t)\right)\,dt,\]
whenever \(x\) belongs to the subset \(\Lambda\subset W^{1,\infty}_{T}\) of paths without collisions (that is, \((t,x(t))\in\Omega\) for every \(t\in[0,T]\), where \(\Omega\) is as in (1.5)) and \(\|\dot{x}\|_{\infty}\leq c\), and extended to \(+\infty\) otherwise. This functional satisfies the structural assumptions of Szulkin's non-smooth critical point theory and its critical points give rise to \(T\)-periodic solutions of equation (1.1); moreover, it is well-behaved near collisions, in the sense that if \(x_{n}\) approaches the boundary of \(\Lambda\), then \(I(x_{n})\to+\infty\). These properties can be proved by using arguments already developed in [3, 4, 6] and are collected in Lemma 3.4.
On the other hand, however, due to the three-dimensional setting, the approach of [6] based on the winding number cannot be used and a different strategy to achieve both existence and multiplicity has to be developed. In particular, inspired by classical results available in the setting of classical mechanics [1], we detect periodic solutions via a min-max principle of Lusternik-Schnirelmann type. For this, two main issues have to be faced. On one hand, we prove that the functional \(I\) satisfies a weak form of the Palais-Smale condition at any level \(c>\inf I=0\), cf. Lemma 3.5. On the other hand, we show that the proper domain of the action functional \(I\) contains compact subsets of arbitrarily large category, allowing us to define the min-max levels
\[c_{j}=\inf_{A\in\mathcal{F}_{j}}\sup_{x\in A}I(x),\qquad j\in\mathbb{N},\]
where \(\mathcal{F}_{j}\) is the family of compact subsets of the domain of \(I\) having category at least \(j\), cf. Lemma 3.6; moreover, \(c_{j}>0\) for any \(j\geq 3\), cf. Lemma 3.7. Then, taking advantage of the general min-max principle for non-smooth functionals proved in [6] (cf. Theorem 2.5 in Section 2), we can prove that for \(j\geq 3\) the number \(c_{j}\) is a critical level for the action functional. This would ensure the existence of infinitely many periodic solutions to equation (1.1) provided a sequence of distinct critical levels \(c_{j}\) exists, a fact which, however, seems hard to establish in general. Thus, adapting the arguments in the proof of [4, Th. 1] we prove that whenever two critical levels coincide, the corresponding critical level carries infinitely many critical points. From this, we deduce that the functional \(I\) has infinitely many critical points.
To the best of our knowledge, a technique of this type seems to be completely new in a non-smooth setting and we think that the general Lusternik-Schnirelmann min-max principle we introduce can be of independent interest.
The plan of the paper is the following. In Section 2, we describe the abstract variational setting and we provide the non-smooth min-max principle of Lusternik-Schnirelmann type (Theorem 2.6). In Section 3 we state and prove our main result (Theorem 3.1). Finally, in Section 4 we provide the above mentioned applications: the motion of a charged particle under the influence of periodic Lienard-Wiechert potentials (Theorem 4.1) and the perturbed relativistic Kepler problem (Theorem 4.2).
## 2 An abstract result
In this section, we present a result on the existence of infinitely many critical points for non-smooth functionals with singularities. More precisely, as in [6] we are concerned with functionals of the form described in the following assumption.
**Assumption 2.1**.: \(I:X\to(-\infty,+\infty]\) is a functional which can be decomposed as
\[I(x)=\psi(x)+\Phi(x),\quad\forall\ x\in X,\]
where, denoting by \(D_{\psi}=\{x\in X:\psi(x)<+\infty\}\) and \(D_{\Phi}=\{x\in X:\Phi(x)<+\infty\}\),
1. \(D_{\Phi}\) is open in \(X\) and \(D_{I}=D_{\psi}\cap D_{\Phi}\neq\emptyset\);
2. \(\psi:X\to\mathbb{R}\cup\{+\infty\}\) is convex and lower semi-continuous; moreover, \(\psi\) is continuous on any nonempty compact set \(A\subset X\) such that \(\sup_{A}\psi\) is finite;
3. \(\Phi:X\to\mathbb{R}\cup\{+\infty\}\) is locally Lipschitz continuous in \(D_{\Phi}\), i.e. every \(x\in D_{\Phi}\) has a neighborhood in which \(\Phi\) is Lipschitz continuous;
4. for any sequence \(\{x_{n}\}\) in \(D_{I}\) such that \(\operatorname{dist}(x_{n},\partial D_{\Phi})\to 0\), it holds that \(I(x_{n})\to+\infty\).
We now recall some basic definitions from [19, §3.2].
**Definition 2.2**.: Let \(I:X\to(-\infty,+\infty]\) satisfy Assumption 2.1.
1. A point \(x\in D_{I}\) is a _critical point_ of \(I\) if \[\Phi^{0}(x;z-x)+\psi(z)-\psi(x)\geq 0,\quad\forall\,z\in X,\] where \[\Phi^{0}(x;u)\coloneqq\limsup_{w\to x,t\to 0^{+}}\frac{\Phi(w+tu)-\Phi(w)}{t}.\]
2. A _Palais-Smale_ (abbreviated _PS-_) _sequence for_ \(I\) _at level_ \(c\) is a sequence \(\{x_{n}\}\) in \(X\) such that \(I(x_{n})\to c\) and \[\Phi^{0}(x_{n};z-x_{n})+\psi(z)-\psi(x_{n})\geq-\epsilon_{n}\|z-x_{n}\|,\quad \forall\,n\in\mathbb{N},\,z\in X,\] (2.1) for some sequence \(\epsilon_{n}\to 0^{+}\).
_Remark 2.3_.: The functional \(\Phi\) in Section 3 is actually of class \(C^{1}\) in its domain \(D_{\Phi}\) and, thus,
\[\Phi^{0}(x;u)=d\Phi(x)[u].\]
We decided to present this more abstract section in the setting of nonsmooth calculus since, on one hand, the assumption \(\Phi\in C^{1}(D_{\Phi})\) doesn't really simplify the argument and, on the other, locally Lipschitz functionals immediately appear as soon as one considers some truncation of a \(C^{1}\) functional.
We also need to consider the following weak form of the Palais-Smale condition, as introduced in [3].
**Definition 2.4**.: Let \(I:X\to(-\infty,+\infty]\) satisfy Assumption 2.1 and assume that there exists a Banach space \(Y\) such that \(X\subset Y\) with continuous embedding. The functional \(I\) is said to satisfy the _weak Palais-Smale condition at level \(c\)_ if for every PS-sequence \(\{x_{n}\}\) in \(X\) such that \(I(x_{n})\to c\), there exist \(x\in X\) and a subsequence \(\{x_{n_{k}}\}\) such that \(x\) is a critical point of \(I\) with \(I(x)=c\) and \(x_{n_{k}}\to x\) in the \(Y\)-topology.
The existence of infinitely many critical points for a functional of the form \(I\) is obtained using a general non-smooth min-max principle, together with the Lusternik-Schnirelmann category. For the reader's convenience, we recall here the definition and basic properties of the category (cf. [1]) and the min-max principle we use (established in [6] as a generalization of [17, Theorem 3.1]).
Given \(M\subset X\), the category of \(A\subset M\) relative to \(M\), denoted by \(\operatorname{cat}_{X}(A,M)\), is the least integer \(k\), if it exists, such that
\[A\subset A_{1}\cup\ldots\cup A_{k},\]
where \(A_{i}\subset M\) is closed and contractible in \(M\), for every \(i=1,\ldots,k\). The category is infinite if such a least integer does not exist. The category satisfies the following properties, which will be used in many situations:
1. if \(A\subset B\subset M\), then \(\operatorname{cat}_{X}(A,M)\leq\operatorname{cat}_{X}(B,M)\);
2. if \(A\subset M\subset N\), then \(\operatorname{cat}_{X}(A,N)\leq\operatorname{cat}_{X}(A,M)\);
3. if \(A,B\subset M\), then \(\operatorname{cat}_{X}(A\cup B,M)\leq\operatorname{cat}_{X}(A,M)+ \operatorname{cat}_{X}(B,M)\);
4. if \(A\subset M\) is closed and \(\varphi\in C(A,M)\) is a deformation (i.e. it is homotopic to the inclusion \(\iota_{A}:A\to M\)), then \(\operatorname{cat}_{X}(A,M)\leq\operatorname{cat}_{X}(\varphi(A),M)\).
**Theorem 2.5**.: _[_6_, Th. 2.4]_ _Let \(I=\psi+\Phi\) be a functional satisfying Assumption 2.1, let \(B\) be a closed set in \(X\) and \(\mathcal{F}\) be a family of compact sets in \(X\) such that:_
1. \(\mathcal{F}\) _is_ homotopy stable with extended boundary_ \(B\)_, that is, for each_ \(A\in\mathcal{F}\) _and each continuous deformation_ \(\eta\in C^{0}([0,1]\times X,X)\) _such that_ \[\eta(t,x)=x,\quad\forall\ (t,x)\in(\{0\}\times X)\cup([0,1]\times B)\qquad \text{ and }\qquad\eta([0,1]\times A)\subset D_{\Phi},\] _one has that_ \(\eta(\{1\}\times A)\in\mathcal{F}\)
_._
2. \(c:=\inf\limits_{A\in\mathcal{F}}\sup\limits_{x\in A}I(x)<+\infty\)_;_
3. _there exists a closed set_ \(F\) _in_ \(X\) _such that_ \[(A\cap F)\setminus B\neq\emptyset,\quad\forall\ A\in\mathcal{F}\qquad\text{and} \qquad\sup\limits_{B}I\leq\inf\limits_{F}I.\]
_Then, for any sequence \(\{A_{n}\}\) in \(\mathcal{F}\) such that \(\lim\limits_{n\to\infty}\sup\limits_{A_{n}}I=c\), there exists a PS-sequence \(\{x_{n}\}\subset X\) at level \(c\) such that \(\operatorname{dist}(x_{n},A_{n})\to 0\). If moreover \(\inf_{F}I=c\), then also \(\operatorname{dist}(x_{n},F)\to 0\)._
We are now in a position to state our result. For every integer \(j\in\mathbb{N}\), let us define
\[\mathcal{F}_{j}=\{A\subset D_{\Phi}\,:\,A\ \text{compact},\ \operatorname{cat}_{X}(A,D_{\Phi})\geq j\}. \tag{2.2}\]
Moreover, let
\[c_{j}=\inf\limits_{A\in\mathcal{F}_{j}}\sup\limits_{x\in A}I(x), \tag{2.3}\]
for every \(j\in\mathbb{N}\) such that \(\mathcal{F}_{j}\) is non-empty. Then, we are able to prove the following result.
**Theorem 2.6**.: _Let \(I=\psi+\Phi\) be a functional satisfying Assumption 2.1 and the weak Palais-Smale condition at each level \(c>\inf I\). Moreover, let us assume that there exists \(j_{0}\in\mathbb{N}\) such that_
1. \(\mathcal{F}_{j}\neq\emptyset\)_, for every_ \(j\geq j_{0}\)__
2. \(c_{j}<+\infty\) _for every_ \(j\geq j_{0}\)__
3. \(c_{j_{0}}>\inf I\)_._
_Then, the functional \(I\) has infinitely many critical points. More precisely:_
1. \(c_{j}\) _is a critical level of_ \(I\)_, for every_ \(j\geq j_{0}\)__
2. _whenever_ \(c_{j_{1}}=c_{j_{2}}\) _for some_ \(j_{2}>j_{1}\geq j_{0}\)_, then the functional_ \(I\) _has infinitely many critical points at level_ \(c_{j_{1}}\)_._
Proof.: _(a1)_ Let us fix \(j\geq j_{0}\). We claim that the assumptions of Theorem 2.5 with \(B=\emptyset\), \(F=X\) and \(\mathcal{F}=\mathcal{F}_{j}\) are satisfied. Indeed, assumption (1) is a consequence of property (P4) of the category. Moreover, assumption (2) is guaranteed by (ii) and assumption (3) is trivially fulfilled since
\[(A\cap F)\setminus B=A\neq\emptyset,\quad\forall\ A\in\mathcal{F}_{j}\]
and
\[B=\emptyset\quad\Longrightarrow\quad\sup\limits_{B}I=-\infty.\]
Hence, we can apply Theorem 2.5 to obtain the existence of a PS-sequence at the level \(c_{j}\geq c_{j_{0}}>\inf I\). Since \(I\) satisfies the weak Palais-Smale condition at levels greater than \(\inf I\), we deduce that there exists a critical point at level \(c_{j}\).
_(a2)_ The argument here follows closely the one in [4, Theorem 1], with three main changes. The first one is that our functional \(\Phi\) is singular and locally Lipschitz continuous, instead of being even and in \(C^{1}(X)\). The second is that we use the Lusternik-Schnirelmann category instead of the Krasnoselskii
genus, a fact which is, however, linked to the first difference. Finally, we use item 2 in Assumption 2.1, which is weaker than the continuity of \(\psi\) on its proper domain \(D_{\psi}\) required in [4].
Concerning the weak PS-condition, we denote \(\|\cdot\|_{Y}\) the norm in \(Y\) (see Definition 2.4) and we set \(B_{Y}(x,r)=\{u\in X:\|u-x\|_{Y}<r\}\) which is open in \(X\) also w.r.t. the stronger topology induced by \(\|\cdot\|\).
By contradiction, let us assume that \(I\) has only \(n\in\mathbb{N}\) critical points at level \(c\coloneqq c_{j_{1}}=c_{j_{2}}\), which we label \(x_{1},\ldots,x_{n}\), and let \(r>0\) be such that the sets \(\overline{B_{Y}(x_{m},2r)}\) are pairwise disjoint and contained in \(D_{\Phi}\) (the closure is taken w.r.t. the norm \(\|\cdot\|\) of \(X\), if not otherwise specified). We define
\[N_{\rho}=B_{Y}(x_{1},\rho)\cup\cdots\cup B_{Y}(x_{n},\rho),\quad\forall\,\rho>0,\]
and observe that, arguing by contradiction and using the weak PS-condition, there exists \(\epsilon\in(0,r^{2})\) such that, for each \(x\in I^{-1}([c-\epsilon,c+\epsilon])\setminus N_{r}\), there is \(\xi_{x}\neq x\) such that
\[\psi(\xi_{x})-\psi(x)+\Phi^{0}(x,\xi_{x}-x)<-\sqrt{\epsilon}\|\xi_{x}-x\|. \tag{2.4}\]
Let \(A\in\mathcal{F}_{j_{2}}\) be chosen in such a way that
\[\sup_{A}I\leq c+\epsilon.\]
In particular, \(A\subset D_{\Phi}\) and \(\sup_{A}I=\max_{A}I\) since \(\psi\) is bounded on \(A\) and, thus, continuous on \(A\) by Assumption 2.1. The set \(B=A\setminus N_{2r}\) is compact in \(X\) and \(B\in\mathcal{F}_{j_{1}}\), since
\[j_{1}<j_{2}\leq\operatorname{cat}_{X}(A,D_{\Phi})\leq\operatorname{cat}_{X}(B, D_{\Phi})+\operatorname{cat}_{X}\left(\bigcup_{m=1}^{n}B_{Y}(x_{m},2r),D_{\Phi} \right)=\operatorname{cat}_{X}(B,D_{\Phi})+1\]
by (P3). As a consequence we have
\[c\leq\max_{B}I\leq\max_{A}I\leq c+\epsilon.\]
We apply Ekeland's variational principle (see also [4, Lemma 1(iii)]) to the map \(\Pi:\mathcal{F}_{j_{1}}\to(-\infty,+\infty]\) such that \(\Pi(A)=\sup_{A}I\), since \(\mathcal{F}_{j_{1}}\) is complete w.r.t. the Hausdorff metric
\[\operatorname{d}_{\mathrm{H}}(A,B)=\max\{\sup_{a\in A}\operatorname{dist}(a, B);\sup_{b\in B}\operatorname{dist}(b,A)\}\]
and \(\Pi\) is lower semi-continuous w.r.t. the same metric. Then, we obtain \(C\in\mathcal{F}_{j_{1}}\) such that
\[\max_{C}I \leq\max_{B}I \tag{2.5}\] \[\operatorname{d}_{\mathrm{H}}(B,C) \leq\sqrt{\epsilon}<r\] \[\sup_{D}I \geq\max_{C}I-\sqrt{\epsilon}\,\operatorname{d}_{\mathrm{H}}(C,D),\quad\forall\ D\in \mathcal{F}_{j_{1}},\]
where \(\operatorname{d}_{\mathrm{H}}\) is the Hausdorff distance for compact sets of a metric space. In particular, \(C\cap N_{r}=\emptyset\) and the set \(S=\{x\in C:c-\epsilon\leq I(x)\}\) is contained in \(I^{-1}([c-\epsilon,c+\epsilon])\setminus N_{r}\) and is compact in \(X\).
Since the mapping
\[(x_{1},x_{2})\mapsto\psi(\xi_{x})-\psi(x_{1})+\Phi^{0}(x_{2};\xi_{x}-x_{2})+\sqrt{\epsilon}\|\xi_{x}-x_{1}\|\]
is upper semi-continuous in \(X\times D_{\Phi}\) (by [9, Proposition 2.1.1]) and negative for \(x_{1}=x_{2}=x\in S\) by (2.4), for each \(x\in S\) there is a positive \(\delta_{x}<\|\xi_{x}-x\|\) such that \(\overline{B_{X}(x,\delta_{x})}\subset D_{\Phi}\) and
\[\psi(\xi_{x})-\psi(u)+\Phi^{0}(x+h;\xi_{x}-u)<-\sqrt{\epsilon}\|\xi_{x}-u\|,\quad\forall\ u\in\overline{B_{X}(x,\delta_{x})},\,h\in\overline{B_{X}(0,\delta_{x})}.\]
Since \(S\) is compact, there exist \(y_{1},\ldots,y_{\ell}\in S\) such that \(S\subset B_{1}\cup\cdots\cup B_{\ell}\), where \(B_{k}=B_{X}(y_{k},\delta_{y_{k}})\), \(1\leq k\leq\ell\). We observe that \(\xi_{y_{k}}\not\in B_{k}\), by construction, and, thus, we can fix some positive \(\delta\leq\min\{\delta_{C}/2,\delta_{y_{k}},\mathrm{dist}(\xi_{y_{k}},\overline{B}_{k}\cap C):1\leq k\leq\ell\}\), where \(\delta_{C}\coloneqq\min\{\mathrm{dist}(x,\partial D_{\Phi}):x\in C\}>0\) since \(C\subset D_{\Phi}\) by (2.5).
Let us denote by \(\eta,\eta_{k}:C\to[0,1]\) (\(1\leq k\leq\ell\)) continuous functions such that
\[\eta(x)=\begin{cases}1&\text{if }I(x)\geq c\\ 0&\text{if }I(x)\leq c-\epsilon\end{cases}\quad\text{and}\quad\eta_{k}(x)= \begin{cases}\frac{\mathrm{dist}(x,C\setminus B_{k})}{\sum_{m=1}^{\ell}\mathrm{ dist}(x,C\setminus B_{m})}&\text{if }x\in B_{k}\cap C\\ 0&\text{if }x\in C\setminus B_{k}\end{cases}\]
so that \(\sum_{k=1}^{\ell}\eta_{k}=1\) on \(S\). Let us consider the function \(\beta:[0,1]\times C\to X\) defined by
\[\beta(t,x)=\beta_{t}(x) \coloneqq x+t\delta\eta(x)\sum_{k=1}^{\ell}\frac{\eta_{k}(x)}{\| \xi_{y_{k}}-x\|}(\xi_{y_{k}}-x)\] \[=\left[1-t\delta\eta(x)\sum_{k=1}^{\ell}\frac{\eta_{k}(x)}{\|\xi_ {y_{k}}-x\|}\right]x+t\delta\eta(x)\sum_{k=1}^{\ell}\frac{\eta_{k}(x)}{\|\xi_{ y_{k}}-x\|}\xi_{y_{k}}\]
which is continuous and satisfies \(\|\beta_{t}(x)-x\|\leq\delta<\delta_{C}\) for all \((t,x)\in[0,1]\times C\) by construction. As a consequence \(\beta_{1}\) is a deformation of \(C\) in \(D_{\Phi}\) (observe that \(\beta_{0}\) is the identity on \(C\)) and \(D\coloneqq\beta_{1}(C)\) belongs to \(\mathcal{F}_{j_{1}}\) by property (P4).
From the estimate
\[t\delta\eta(x)\sum_{k=1}^{\ell}\frac{\eta_{k}(x)}{\|\xi_{y_{k}}-x\|}\leq\delta \sum_{k=1}^{\ell}\frac{\eta_{k}(x)}{\mathrm{dist}(\xi_{y_{k}},\overline{B}_{k} \cap C)}\leq\sum_{k=1}^{\ell}\eta_{k}(x)\leq 1,\]
we deduce that \(\beta_{t}(x)\) is a convex combination of \(x,\xi_{y_{1}},\ldots,\xi_{y_{\ell}}\) and, hence,
\[\psi(\beta_{t}(x))\leq\left[1-t\delta\eta(x)\sum_{k=1}^{\ell}\frac{\eta_{k}(x) }{\|\xi_{y_{k}}-x\|}\right]\psi(x)+t\delta\eta(x)\sum_{k=1}^{\ell}\frac{\eta_{ k}(x)}{\|\xi_{y_{k}}-x\|}\psi(\xi_{y_{k}}).\]
On the side of \(\Phi\), by Lebourg's theorem [9, Theorem 2.3.7] for each \(x\in C\) there exists \(\tau=\tau(x)\in(0,1)\) and \(\zeta\in\partial\Phi(\beta_{\tau}(x))\) such that \(\Phi(\beta_{1}(x))-\Phi(x)=\langle\zeta,\beta_{1}(x)-x\rangle\), where \(\partial\Phi(x)\) is the generalized gradient of \(\Phi\) at \(x\) (see [9, §2.1]). Hence, we have
\[\Phi(\beta_{1}(x))-\Phi(x)\leq\Phi^{0}(\beta_{\tau}(x);\beta_{1}(x)-x)\leq \delta\eta(x)\sum_{k=1}^{\ell}\frac{\eta_{k}(x)}{\|\xi_{y_{k}}-x\|}\Phi^{0}( \beta_{\tau}(x);\xi_{y_{k}}-x)\]
by [9, Propositions 2.1.1-2]. As a consequence we can estimate
\[I(\beta_{1}(x)) \leq\left[1-\delta\eta(x)\sum_{k=1}^{\ell}\frac{\eta_{k}(x)}{\|\xi_ {y_{k}}-x\|}\right]\psi(x)+\delta\eta(x)\sum_{k=1}^{\ell}\frac{\eta_{k}(x)}{\| \xi_{y_{k}}-x\|}\psi(\xi_{y_{k}})\] \[\quad+\Phi(x)+\delta\eta(x)\sum_{k=1}^{\ell}\frac{\eta_{k}(x)}{\| \xi_{y_{k}}-x\|}\Phi^{0}(\beta_{\tau}(x);\xi_{y_{k}}-x)\] \[=I(x)+\delta\eta(x)\sum_{k=1}^{\ell}\frac{\eta_{k}(x)}{\|\xi_{y_{ k}}-x\|}\left[\psi(\xi_{y_{k}})-\psi(x)+\Phi^{0}(\beta_{\tau}(x);\xi_{y_{k}}-x) \right].\]
Now, by construction, \(\|\beta_{\tau}(x)-x\|\leq\delta\leq\delta_{y_{k}}\) for all \(k=1,\ldots,\ell\), which implies that
\[\psi(\xi_{y_{k}})-\psi(x)+\Phi^{0}(\beta_{\tau}(x);\xi_{y_{k}}-x)<-\sqrt{\epsilon }\|\xi_{y_{k}}-x\|,\quad\forall\ x\in\overline{B}_{k},\ \ k=1,\ldots,\ell.\]
Therefore, we have
\[I(\beta_{1}(x))<I(x)-\delta\eta(x)\sqrt{\epsilon}\sum_{k=1}^{\ell}\eta_{k}(x)=I(x)-\delta\eta(x)\sqrt{\epsilon}\quad\forall\ x\in S,\]
since \(\sum_{k=1}^{\ell}\eta_{k}=1\) on \(S\).
On the other hand, if \(x\in C\setminus S\), we have \(\eta(x)=0\) and \(I(\beta_{1}(x))=I(x)<c-\epsilon\).
We can choose \(x_{0}\in C\) such that \(I(\beta_{1}(x_{0}))=\max I(\beta_{1}(C))=\max I(D)\geq c\) since \(D\in\mathcal{F}_{j_{1}}\). As a consequence, we have that \(x_{0}\in S\) and
\[c\leq\max_{D}I=I(\beta_{1}(x_{0}))<I(x_{0})-\delta\eta(x_{0})\sqrt{\epsilon} \leq I(x_{0}),\]
and, thus, \(\eta(x_{0})=1\) and
\[\max_{D}I<I(x_{0})-\delta\sqrt{\epsilon}\leq\max_{C}I-\delta\sqrt{\epsilon} \leq\max_{C}I-\mathrm{d}_{\mathrm{H}}(C,D)\sqrt{\epsilon},\]
which is a contradiction with (2.5).
## 3 The main result
In this section we state and prove our main result for the Lorentz force equation
\[\frac{d}{dt}\left(\frac{\dot{x}}{\sqrt{1-|\dot{x}|^{2}/c^{2}}}\right)=E(t,x)+ \dot{x}\times B(t,x),\qquad x\in\mathbb{R}^{3}, \tag{3.1}\]
where, as usual,
\[E(t,x)=-\nabla_{x}V(t,x)-\partial_{t}A(t,x),\qquad B(t,x)=\mathrm{curl}_{x}A(t,x). \tag{3.2}\]
Notice that, without loss of generality, we have normalized the charge-to-mass ratio to \(1\) (while, on the other hand, we prefer to keep track of the value \(c\) of the speed of light).
As already mentioned in the Introduction, our main interest is in covering the case when the potential \(V\) is singular. More precisely, we assume that the singularities of \(V\) are described by \(N\) functions \(r_{1},\ldots,r_{N}:\mathbb{R}\to\mathbb{R}^{3}\) which are \(T\)-periodic (for some \(T>0\)), of class \(C^{1}\), with \(\|\dot{r}_{i}\|_{\infty}<c\), and such that \(r_{i}(t)\neq r_{j}(t)\) for every \(t\in[0,T]\) and \(i\neq j\). Accordingly, we settle equation (3.1) on the open domain
\[\Omega=\{(t,x)\in\mathbb{R}\times\mathbb{R}^{3}\,:\,x\neq r_{j}(t),\,\forall\ j =1,\ldots,N\}.\]
The following result holds true.
**Theorem 3.1**.: _Let us assume that \(V:\Omega\to\mathbb{R}\) and \(A:\Omega\to\mathbb{R}^{3}\) are of class \(C^{1}\), \(T\)-periodic in the first variable, and satisfy the following conditions:_
* \(V(t,x)<0\) _for every_ \((t,x)\in\Omega\) _and there exist_ \(\kappa>0\) _and_ \(\delta>0\) _such that, for every_ \(i=1,\ldots,N\)_,_ \[V(t,x)\leq-\frac{\kappa}{|x-r_{i}(t)|},\quad\forall\ (t,x)\in\Omega\ \text{such that}\ |x-r_{i}(t)|<\delta;\] (3.3)
* _there exists_ \(\kappa^{\prime}\in(0,1)\) _such that_ \[|A(t,x)|\leq-\frac{\kappa^{\prime}}{c}V(t,x),\quad\forall\ (t,x)\in\Omega;\]
* _it holds that_ \[\lim_{|x|\to\infty}\left(|V(t,x)|+|\nabla_{x}V(t,x)+\partial_{t}A(t,x)|+| \mathrm{curl}_{x}A(t,x)|\right)=0,\] _uniformly in_ \(t\in\mathbb{R}\)_._
_Then, equation (3.1) has infinitely many \(T\)-periodic solutions._
_Remark 3.2_.: Notice that the potential \(A\) can be either regular or singular: however, in this last case, the behavior of \(A\) near the singularities has to be consistent with assumption (AV1). Let us observe that the case \(A\equiv 0\) is allowed.
_Remark 3.3_.: Let us point out that assumption (AV2) can be replaced by
\[\lim_{|x|\to\infty}\left(|V(t,x)|+|\nabla_{x}V(t,x)|+|D_{x}A(t,x)|\right)=0,\]
uniformly in \(t\in\mathbb{R}\). Indeed, the condition
\[\lim_{|x|\to\infty}\left(|\nabla_{x}V(t,x)+\partial_{t}A(t,x)|+|\mathrm{curl}_ {x}A(t,x)|\right)=0, \tag{3.4}\]
uniformly in \(t\in\mathbb{R}\), is used to prove the validity of the weak Palais-Smale condition (cf. the proof Lemma 3.5 and, in particular, formula (3.12), which in turn is obtained from (3.11) using the expression for \(d\Phi\) given by (3.9)). When assuming, instead of (3.4), the condition
\[\lim_{|x|\to\infty}(|\nabla_{x}V(t,x)|+|D_{x}A(t,x)|)=0,\]
uniformly in \(t\in\mathbb{R}\), then the same conclusion can be obtained using (3.8) instead of (3.9) in formula (3.11). We prefer to suppose (AV2) because it can be verified in a more direct way in the application to Lienard-Wiechert potentials (cf. (4.11) and (4.12)).
The rest of the section is devoted to the proof of Theorem 3.1, which follows from the abstract result Theorem 2.6.
So, let us first describe the variational setting; in what follows, we take advantage of results given both in [3] (where, however, \(V\) and \(A\) are not allowed to be singular) and in [6] (where \(A=0\), but \(V\) is singular). Let us consider the Banach space
\[X=\left\{x\in W^{1,\infty}(0,T;\mathbb{R}^{3}):x(0)=x(T)\right\},\]
endowed with its usual norm \(\|x\|=\|x\|_{\infty}+\|\dot{x}\|_{\infty}\). We define the functional \(\psi:X\to(-\infty,+\infty]\) as
\[\psi(x)=\begin{cases}\int_{0}^{T}c^{2}\left(1-\sqrt{1-\frac{|\dot{x}(t)|^{2}}{c^ {2}}}\right)\,dt&\text{if }\|\dot{x}\|_{\infty}\leq c;\\ +\infty&\text{otherwise}.\end{cases}\]
According to the notation of Section 2, we thus have
\[D_{\psi}=\{x\in X:\|\dot{x}\|_{\infty}\leq c\}.\]
Moreover, we consider the open subset of \(X\)
\[\Lambda=\{x\in X:(t,x(t))\in\Omega,\quad\forall\ t\in[0,T]\}\]
and we define \(\Phi:X\to(-\infty,+\infty]\) as
\[\Phi(x)=\begin{cases}\int_{0}^{T}\left(-V(t,x(t))+A(t,x(t))\cdot \dot{x}(t)\right)\,dt&\text{if }x\in\Lambda;\\ +\infty&\text{otherwise},\end{cases}\]
so that \(D_{\Phi}=\Lambda\). Finally, we define the action functional \(I:X\to(-\infty,+\infty]\) as
\[I(x)=\psi(x)+\Phi(x),\quad\forall\ x\in X,\]
and we recall the notation \(D_{I}=D_{\psi}\cap D_{\Phi}\).
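For intuition, the value of \(I\) on a given admissible path can be approximated by a simple quadrature; the discretization below is our own sketch (not part of the variational argument), with user-supplied \(V\) and \(A\) and illustrative toy inputs.

```python
import numpy as np

def action(t, x, V, A, c=1.0):
    """Discretised I(x) = psi(x) + Phi(x) for a closed path.

    t : (n,) times; x : (n, 3) positions with x[0] == x[-1];
    V, A : callables V(s, p) -> float and A(s, p) -> (3,).
    """
    dt = np.diff(t)
    xdot = np.diff(x, axis=0) / dt[:, None]
    v2 = np.einsum('ij,ij->i', xdot, xdot)
    assert np.all(v2 <= c**2), "admissible paths satisfy |xdot| <= c"
    tm, xm = 0.5 * (t[:-1] + t[1:]), 0.5 * (x[:-1] + x[1:])
    psi = np.sum(c**2 * (1.0 - np.sqrt(1.0 - v2 / c**2)) * dt)
    Vm = np.array([V(s, p) for s, p in zip(tm, xm)])
    Am = np.array([A(s, p) for s, p in zip(tm, xm)])
    phi = np.sum((-Vm + np.einsum('ij,ij->i', Am, xdot)) * dt)
    return psi + phi

# toy loop of radius 0.5 around a single fixed singularity at the origin,
# with V = -1/|x| and A = 0 (illustrative choices only)
t = np.linspace(0.0, 2 * np.pi, 400)
x = np.stack([0.5 * np.cos(t), 0.5 * np.sin(t), 0.0 * t], axis=1)
print(action(t, x, lambda s, p: -1.0 / np.linalg.norm(p), lambda s, p: np.zeros(3)))
```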
For further convenience, we observe that assumption (AV1) implies that
\[-V(t,x(t))+\dot{x}(t)\cdot A(t,x(t))\geq(1-\kappa^{\prime})\,\left(-V(t,x(t)) \right),\quad\forall\ x\in D_{I},\,t\in[0,T], \tag{3.5}\]
and then, by assumption (V),
\[\Phi(x)=\int_{0}^{T}\left(-V(t,x(t))+\dot{x}(t)\cdot A(t,x(t))\right)\,dt>0, \quad\forall\ x\in D_{I}. \tag{3.6}\]
Taking into account that \(\psi\geq 0\) and that \(I=+\infty\) outside \(D_{I}\), we deduce that
\[I(x)>0,\quad\forall\ x\in X. \tag{3.7}\]
In the next Lemma, we show that this functional satisfies the structural Assumption 2.1 of Section 2 and that, moreover, its critical points correspond to classical \(T\)-periodic solutions of the Lorentz force equation (3.1).
**Lemma 3.4**.: _The functional \(I\) satisfies Assumption 2.1. Moreover, the functional \(\psi\) is lower semicontinuous with respect to uniform convergence, namely: if \(x\in X\) and \(\{x_{n}\}\) is a sequence in \(D_{\psi}\) such that \(x_{n}\to x\) uniformly on \([0,T]\), then \(x\in D_{\psi}\) and_
\[\psi(x)\leq\liminf_{n\to+\infty}\psi(x_{n}).\]
_Moreover, each critical point \(x\in D_{I}\) of \(I\) satisfies \(|\dot{x}(t)|<c\) for every \(t\in[0,T]\) and corresponds to a classical \(T\)-periodic solution of equation (3.1)._
Proof.: Most of the above statement has been already proved in [6, Proposition 3.2] (and, in turn, in corresponding results in [3]); notice indeed that the functional \(\psi\) is the same as the one considered therein, while \(\Phi\), despite the presence of the magnetic term, is still of class \(C^{1}\) on the open set \(D_{\Phi}=\Lambda\), with
\[d\Phi(x)[y]=\int_{0}^{T}\left(-\nabla_{x}V(t,x(t))\cdot y(t)+A(t,x(t))\cdot \dot{y}(t)+\left((D_{x}A(t,x(t))^{T}\dot{x}(t)\right)\cdot y(t)\right)\,dt, \tag{3.8}\]
cf. [3, Lemma 1] (in the above formula, the term \((D_{x}A)^{T}\dot{x}\) is meant as the product of the transpose of the Jacobian matrix \(D_{x}A\) with the (column) vector \(\dot{x}\)). Notice that, by integrating by parts,
\[\int_{0}^{T}A(t,x(t))\cdot\dot{y}(t)\,dt=-\int_{0}^{T}\partial_{t}A(t,x(t)) \cdot y(t)\,dt-\int_{0}^{T}(D_{x}A(t,x(t))\dot{x}(t))\cdot y(t)\,dt\]
From this, together with the identity
\[((D_{x}A(t,x(t))^{T}\dot{x}(t))\cdot y(t)-(D_{x}A(t,x(t))\dot{x}(t))\cdot y(t )=(\dot{x}(t)\times\mathrm{curl}_{x}A(t,x(t)))\cdot y(t),\]
we can rewrite \(d\Phi\) in the equivalent form
\[d\Phi(x)[y]=\int_{0}^{T}\left((-\nabla_{x}V(t,x(t))-\partial_{t}A(t,x(t))) \cdot y(t)+(\dot{x}(t)\times\mathrm{curl}_{x}A(t,x(t)))\cdot y(t)\right)\,dt. \tag{3.9}\]
Using this formula, and recalling (3.2), the fact that critical points of \(I\) give rise to classical \(T\)-periodic solutions of equation (3.1) can be proved with the very same arguments of [6, Proposition 3.3] (see also [3, Theorem 2]).
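The vector identity used above can be verified symbolically; the following sympy check is a sketch of that computation.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = sp.Matrix([x1, x2, x3])
v = sp.Matrix(sp.symbols('v1 v2 v3'))      # plays the role of \dot{x}
y = sp.Matrix(sp.symbols('y1 y2 y3'))
A = sp.Matrix([sp.Function('A%d' % i)(x1, x2, x3) for i in (1, 2, 3)])

J = A.jacobian(X)                          # D_x A, with J[i, j] = dA_i/dx_j
curlA = sp.Matrix([
    sp.diff(A[2], x2) - sp.diff(A[1], x3),
    sp.diff(A[0], x3) - sp.diff(A[2], x1),
    sp.diff(A[1], x1) - sp.diff(A[0], x2),
])
lhs = (J.T * v - J * v).dot(y)             # ((D_x A)^T xdot - (D_x A) xdot) . y
rhs = (v.cross(curlA)).dot(y)              # (xdot x curl_x A) . y
print(sp.simplify(lhs - rhs))              # prints 0
```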
The only point which requires a bit of care is the proof of the property of blow-up on the boundary (that is, item 4 of Assumption 2.1), for which we give the complete details. At first, we notice that
\[\partial D_{\Phi}=X\setminus\Lambda=\{x\in X:\ \exists\,i\in\{1,\ldots,N\}\ \exists\,t_{0}\in[0,T]:\ x(t_{0})=r_{i}(t_{0})\}.\]
So, let us consider a sequence \(\{x_{n}\}\) in \(D_{I}\) such that \(d_{n}:=\mathrm{dist}(x_{n},\partial D_{\Phi})\to 0\) and, accordingly, let \(y_{n}\in\partial D_{\Phi}\) be such that \(\|x_{n}-y_{n}\|\leq 2d_{n}\). Since \(\|\dot{x}_{n}\|_{\infty}\leq c\) for any \(n\), we find that
\[\|\dot{y}_{n}\|_{\infty}\leq c+\|\dot{y}_{n}-\dot{x}_{n}\|_{\infty}\leq c+\|y _{n}-x_{n}\|\leq c+d_{n}\leq c+1\]
for \(n\) large enough. Moreover, since \(y_{n}(t_{n})=r_{i_{n}}(t_{n})\) for some \(t_{n}\in[0,T]\) and \(i_{n}\in\{1,\ldots,N\}\), we have
\[\|y_{n}-r_{i_{n}}\|_{\infty}\leq(2c+1)T\]
and thus the sequence \(\{y_{n}\}\) is bounded in \(X\). Since \(\|x_{n}-y_{n}\|\leq 2d_{n}\), the sequence \(\{x_{n}\}\) is bounded in \(X\) as well. Therefore, the Ascoli-Arzela theorem yields the existence of a continuous function \(z\) such that, up to subsequence, \(x_{n}\to z\) and \(y_{n}\to z\) uniformly on \([0,T]\). Hence, \(z(0)=z(T)\) and \(z(t_{0})=r_{i_{0}}(t_{0})\) for some \(t_{0}\in[0,T]\), limit point of the sequence \(t_{n}\), and \(i_{0}\in\{1,\ldots,N\}\), limit point of the sequence \(i_{n}\). Moreover, passing to the limit in the Lipschitz-continuity condition
\[|x_{n}(t_{2})-x_{n}(t_{1})|\leq c|t_{2}-t_{1}|,\quad\text{ for every }t_{1},t_{2}\in[0,T],\]
we easily see that \(z\in D_{\psi}\subset X\). Hence, the function \(z-r_{i_{0}}\) is Lipschitz continuous and so
\[\int_{0}^{T}\frac{1}{|z(t)-r_{i_{0}}(t)|}\,dt=+\infty.\]
Therefore, by (V) and (AV1) and using Fatou's lemma we obtain
\[\liminf_{n\to+\infty}\int_{0}^{T}\left(-V(t,x_{n}(t))+A(t,x_{n}(t)) \cdot\dot{x}_{n}(t)\right)\,dt\geq\liminf_{n\to+\infty}\int_{0}^{T}\left(-(1- \kappa^{\prime})\,V(t,x_{n}(t))\right)\,dt\] \[\geq(1-\kappa^{\prime})\int_{0}^{T}\liminf_{n\to+\infty}\left(-V( t,x_{n}(t))\right)\,dt\geq\kappa\left(1-\kappa^{\prime}\right)\int_{0}^{T} \liminf_{n\to+\infty}\frac{1}{|x_{n}(t)-r_{i_{0}}(t)|}\,dt\] \[=\kappa\left(1-\kappa^{\prime}\right)\int_{0}^{T}\frac{1}{|z(t)- r_{i_{0}}(t)|}\,dt=+\infty.\]
Since \(0\leq\psi(x_{n})\leq c^{2}T\), we finally conclude that \(I(x_{n})\to+\infty\), as desired.
Let us now notice that
\[\inf_{X}I=0.\]
Indeed, we have already observed that \(I>0\), cf. (3.7). Moreover, for a sequence \(x_{n}(t)\equiv\xi_{n}\) with \(|\xi_{n}|\to+\infty\) we readily see, by assumption (AV2), that \(I(x_{n})=\Phi(x_{n})\to 0\). With this in mind, the next result ensures that the functional \(I\) satisfies, at each level \(c>\inf I=0\), the weak Palais-Smale condition, according to Definition 2.4 with \(Y=L^{\infty}(0,T)\).
**Lemma 3.5**.: _The functional \(I\) satisfies the weak Palais-Smale condition at each level \(c>0\)._
Proof.: Let \(\{x_{n}\}\subset X\) be a Palais-Smale sequence at level \(c>0\); incidentally, let us notice that \(\{x_{n}\}\subset D_{I}\), since otherwise \(I(x_{n})=+\infty\). The proof will be divided in two steps.
At first, we show that the sequence \(\{x_{n}\}\) is bounded in \(L^{\infty}\) (and, thus, in \(X\)). To see this, let us write \(x_{n}=\tilde{x}_{n}+\bar{x}_{n}\), where \(\bar{x}_{n}=\frac{1}{T}\int_{0}^{T}x_{n}\) and \(\int_{0}^{T}\tilde{x}_{n}\,dt=0\). Since \(\|\dot{\tilde{x}}_{n}\|_{\infty}=\|\dot{x}_{n}\|_{\infty}\leq c\), we have that \(\|\tilde{x}_{n}\|_{\infty}\) is bounded. So, assuming by contradiction that \(\|x_{n}\|_{\infty}\) is not bounded yields, up to subsequences, \(|\bar{x}_{n}|\to+\infty\). Then \(|x_{n}(t)|\geq|\bar{x}_{n}|-\|\tilde{x}_{n}\|_{\infty}\) and so
\[\min_{t}|x_{n}(t)|\to+\infty. \tag{3.10}\]
Choosing \(z=\bar{x}_{n}\) in (2.1), we obtain
\[d\Phi(x_{n})[-\tilde{x}_{n}]+\psi(\bar{x}_{n})-\psi(x_{n})\geq-\epsilon_{n}\| \tilde{x}_{n}\|,\quad\forall\ n\in\mathbb{N}, \tag{3.11}\]
that is, using (3.9),
\[\psi(x_{n})\leq\epsilon_{n}\|\tilde{x}_{n}\|+\int_{0}^{T}\left((\nabla_{x}V(t,x_{n}(t))+\partial_{t}A(t,x_{n}(t)))\cdot\tilde{x}_{n}(t)-(\dot{x}_{n}(t)\times\operatorname{curl}_{x}A(t,x_{n}(t)))\cdot\tilde{x}_{n}(t)\right)\,dt. \tag{3.12}\]
Therefore, recalling the boundedness of \(\|\tilde{x}_{n}\|_{\infty}\), (3.10) and assumption (AV2) we obtain \(\psi(x_{n})\to 0\). On the other hand, for the same reasons \(\Phi(x_{n})\to 0\) and so
\[\psi(x_{n})=I(x_{n})-\Phi(x_{n})\to c>0,\]
a contradiction.
As a second step, we show that the boundedness of \(\{x_{n}\}\) implies the existence of a subsequence \(\{x_{n_{k}}\}\) converging in \(L^{\infty}(0,T)\) to a critical point \(x\) of the functional \(I\) at level \(c\) (that is, the condition required in the definition of weak Palais-Smale condition at level \(c\)). For this, we combine the arguments used in the proof of [6, Proposition 3.5] with the ones in the proof of [3, Lemma 5].
We now consider the sets \(\mathcal{F}_{j}\) and the min-max levels \(c_{j}\) defined respectively in (2.2) and (2.3) and we turn to the proof of the validity of assumptions (i)-(ii)-(iii) of Theorem 2.6.
At first, we deal with (i)-(ii).
**Lemma 3.6**.: _For every integer \(j\geq 1\), it holds that:_
1. \(\mathcal{F}_{j}\neq\emptyset\)_,_
2. \(c_{j}<+\infty.\)__
Proof.: We first prove that (i) holds when there is only one curve \(r_{1}\) of singularities. To this aim, we make use of the following auxiliary open sets in \(C_{T}\coloneqq\{x\in C([0,T],\mathbb{R}^{3}):x(0)=x(T)\}\), endowed with the topology of uniform convergence:
\[\Lambda_{1} =\{x\in C_{T}:x(t)\neq r_{1}(t),\quad\forall\ t\in[0,T]\}\] \[\Lambda_{0} =\{x\in C_{T}:x(t)\neq 0,\quad\forall\ t\in[0,T]\}\]
and of the continuous and dense immersion \(\iota:X\to C_{T}\). We have that \(X\cap\Lambda_{1}=\iota^{-1}(\Lambda_{1})\) and that \(\iota|_{X\cap\Lambda_{1}}:X\cap\Lambda_{1}\to\Lambda_{1}\) is a homotopy equivalence by [20, Theorem 16]. Since the affine isometry \(x\mapsto x-r_{1}\) maps \(\Lambda_{1}\) onto \(\Lambda_{0}\), we have that its composition with \(\iota|_{X\cap\Lambda_{1}}\) provides a homotopy equivalence between \(X\cap\Lambda_{1}\) and \(\Lambda_{0}\). Using [12, Corollary 2.8], we deduce that the cup length in \(\mathbb{Z}_{2}\) of \(\Lambda_{0}\) is infinite. Since the cup length is a homotopy invariant, we infer that \(X\cap\Lambda_{1}\) contains compact sets with arbitrarily large category by [10, Lemma 2.9].
Now, we are going to show that, for each \(j\geq 1\), there exists a compact \(A\subset\Lambda\cap D_{\psi}\) such that \(\operatorname{cat}_{X}(A,\Lambda)\geq j\); this will imply both (i) and (ii), since our functional \(\psi\) is bounded in \(D_{\psi}\). We just showed that there exists a compact \(A_{1}\subset\Lambda_{1}\cap X\) such that \(\operatorname{cat}_{X}(A_{1},\Lambda_{1}\cap X)\geq j\). For each \(\lambda>0\) and \(x\in X\) we define \(x_{\lambda}=r_{1}+\lambda(x-r_{1})\) and observe that \(x_{\lambda}\in\Lambda_{1}\cap X\) if and only if \(x\in\Lambda_{1}\cap X\). We set \(A_{\lambda}=\{x_{\lambda}:x\in A_{1}\}\), which is compact and homeomorphic to \(A_{1}\), so that
\[\operatorname{cat}_{X}(A_{\lambda},\Lambda_{1}\cap X)=\operatorname{cat}_{X}( A_{1},\Lambda_{1}\cap X)\geq j\quad\forall\ \lambda>0.\]
Now, let \(\delta\coloneqq\min\{|r_{1}(t)-r_{j}(t)|:t\in[0,T],\,j=2,\ldots,N\}>0\). Since
\[\|x_{\lambda}-r_{1}\|=\lambda\|x-r_{1}\|\leq\lambda\max_{x\in A_{1}}\|x-r_{1}\|<+\infty\quad\forall\ x\in A_{1}\ \text{and}\ \forall\ \lambda>0,\]
for \(\lambda<\delta/\max_{x\in A_{1}}\|x-r_{1}\|\) we have that \(A_{\lambda}\subset\Lambda\) and
\[\operatorname{cat}_{X}(A_{\lambda},\Lambda)\geq\operatorname{cat}_{X}(A_{1}, \Lambda_{1}\cap X)\geq j\]
by property (P2). On the other hand, we have that
\[\|\dot{x}_{\lambda}\|_{\infty}\leq\|\dot{r}_{1}\|_{\infty}+\lambda\|\dot{x}- \dot{r}_{1}\|_{\infty}\leq c\quad\forall\ x\in A_{1}\quad\text{if}\ \lambda\leq\frac{c-\|\dot{r}_{1}\|_{\infty}}{\max_{x\in A_{1}}\|\dot{x}-\dot{r}_ {1}\|_{\infty}}.\]
Hence, if \(\lambda>0\) is small enough we have that \(A_{\lambda}\subset D_{\psi}\cap\Lambda\) and \(A_{\lambda}\in\mathcal{F}_{j}\).
Finally, we prove that (iii) of Theorem 2.6 is satisfied with \(j_{0}=3\) (while it can be shown that \(c_{1}=c_{2}=0\)).
**Lemma 3.7**.: _It holds that \(c_{3}>0\)._
Proof.: Suppose by contradiction that \(c_{3}=0\), that is
\[\inf_{A\in{\cal F}_{3}}\sup_{x\in A}I(x)=0.\]
Then, for every \(n\in\mathbb{N}\) there exists \(A_{n}\in{\cal F}_{3}\) such that
\[0\leq\sup_{x\in A_{n}}I(x)<\frac{1}{n}.\]
Of course, \(A_{n}\subset D_{I}\). Hence, taking into account that \(I=\psi+\Phi\), with \(\psi\geq 0\) and \(\Phi>0\) in \(D_{I}\) (cf. (3.6)), we get
\[0\leq\psi(x)<\frac{1}{n}\quad\mbox{ and }\quad 0\leq\Phi(x)<\frac{1}{n},\quad \forall\ x\in A_{n}. \tag{3.13}\]
In particular, noticing that
\[\psi(x)\geq\frac{1}{2}\,\int_{0}^{T}|\dot{x}(t)|^{2}\,dt,\quad\forall\ x\in A_ {n},\]
we obtain
\[||\dot{x}||_{L^{2}}<\sqrt{\frac{2}{n}},\quad\forall\ x\in A_{n}, \tag{3.14}\]
and so, from the Sobolev inequality (see, for instance, [18, Proposition 1.3]),
\[|\tilde{x}(t)|\leq\sqrt{\frac{T}{6n}},\quad\forall\ x\in A_{n},\ t\in[0,T], \tag{3.15}\]
where we have written as usual \(x(t)=\bar{x}+\tilde{x}(t)\), with \(\bar{x}=\frac{1}{T}\int_{0}^{T}x(t)\,dt\).
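For the reader's convenience, (3.15) follows by combining (3.14) with a Sobolev-type inequality for zero-mean \(T\)-periodic functions, \(\|\tilde{x}\|_{\infty}\leq\sqrt{T/12}\,\|\dot{x}\|_{L^{2}}\) (we assume this is the form of the constant in [18, Proposition 1.3]); indeed,
\[|\tilde{x}(t)|\leq\sqrt{\frac{T}{12}}\,\|\dot{x}\|_{L^{2}}<\sqrt{\frac{T}{12}}\cdot\sqrt{\frac{2}{n}}=\sqrt{\frac{T}{6n}}.\]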
Now, we claim that, for an arbitrary fixed constant \(R>0\) with the property that
\[\max\{|r_{i}(t)|:\ t\in[0,T],\ i=1,\ldots,N\}\leq\frac{R}{2}, \tag{3.16}\]
there exists \(n^{*}\in\mathbb{N}\) such that, for every \(n\geq n^{*}\),
\[x\in A_{n}\quad\Longrightarrow\quad|x(t)|\geq 2R,\quad\forall\ t\in[0,T]. \tag{3.17}\]
Indeed, let \(\iota^{*}\) be defined by
\[\iota^{*}=\inf\{-V(t,x):\ t\in[0,T],\ |x|<3R,\ (t,x)\in\Omega\}\]
and observe that \(\iota^{*}>0\) by assumption (V); moreover, let \(n^{*}\in\mathbb{N}\) be such that
\[n^{*}\geq\max\left(\frac{1}{(1-\kappa^{\prime})\,\iota^{*}\,T},\frac{2T}{R^{2 }}\right). \tag{3.18}\]
Assume now by contradiction that there exist \(n\geq n^{*}\), \(x\in A_{n}\) and \(t_{0}\in[0,T]\) such that \(|x(t_{0})|<2R\). Then, from (3.14) we infer that
\[|x(t)|\leq|x(t_{0})|+\int_{0}^{T}|\dot{x}(t)|\,dt\leq|x(t_{0})|+\sqrt{\frac{2 T}{n}},\quad\forall\ t\in[0,T],\]
and hence from (3.18) we deduce that \(|x(t)|<3R\), for every \(t\in[0,T]\). Therefore, recalling (3.5) and (3.13), we have
\[\frac{1}{n^{*}}>\Phi(x)\geq(1-\kappa^{\prime})\,\int_{0}^{T}-V(t,x(t))\,dt>(1- \kappa^{\prime})\,\iota^{*}\,T,\]
which contradicts (3.18).
At this point, we notice that from (3.15) and (3.18) it follows that
\[|\tilde{x}(t)|\leq R,\quad\forall\ x\in A_{n},\ t\in[0,T].\]
Hence, taking into account (3.17) we deduce that, for every \(n\geq n^{*}\),
\[x\in A_{n}\quad\Longrightarrow\quad|\bar{x}+(1-\lambda)\tilde{x}(t)|\geq R, \quad\forall\ t\in[0,T],\ \forall\lambda\in[0,1]. \tag{3.19}\]
In particular, recalling (3.16), \(\bar{x}+(1-\lambda)\tilde{x}\in\Lambda\) for every \(x\in A_{n}\) and \(\lambda\in[0,1]\). Hence, the map \(H:[0,1]\times A_{n}\to\Lambda\) given by
\[H(\lambda,x)=\bar{x}+(1-\lambda)\tilde{x}\]
provides a deformation in \(\Lambda\) of \(A_{n}\) into \(A_{n}^{\prime}=H(1,A_{n})\). Hence, by property (P4) of the category,
\[\text{cat}_{X}(A_{n},\Lambda)\leq\text{cat}_{X}(A_{n}^{\prime},\Lambda). \tag{3.20}\]
On the other hand, we observe that, setting
\[\Xi=\{x\in\Lambda\,:\,x(t)\equiv c\text{ with }|c|\geq R\},\]
from (3.19) we have that \(A_{n}^{\prime}\subset\Xi\subset\Lambda\). Hence, from properties (P1) and (P2) of the category,
\[\text{cat}_{X}(A_{n}^{\prime},\Lambda)\leq\text{cat}_{X}(\Xi,\Lambda)\leq\text {cat}_{X}(\Xi,\Xi). \tag{3.21}\]
The set \(\Xi\) is clearly homeomorphic to \(\mathbb{R}^{3}\setminus B_{R}(0)\) (with \(B_{R}(0)\) the open ball of radius \(R\)) and so \(\text{cat}_{X}(\Xi,\Xi)=2\). Hence, (3.20) and (3.21) yield
\[\text{cat}_{X}(A_{n},\Lambda)\leq 2,\]
contradicting the fact that \(A_{n}\in\mathcal{F}_{3}\).
From Lemmas 3.4, 3.5, 3.6 and 3.7 we deduce that all the assumptions of Theorem 2.6 are satisfied and then Theorem 3.1 is proved.
## 4 Applications
In this section, we give some applications of our main result.
The first one deals with the motion of a charge under the effect of the electric and magnetic field generated by \(N\) moving charges.
For the second one, we move to the interpretation of equation (3.1) in relativistic celestial mechanics, dealing with the motion of a particle in a perturbed Kepler potential.
### The Lienard-Wiechert potentials
Let us consider the motion of a charged particle with \(m/q=1\) under the effect of \(N\) moving electric point charges.
We denote by \(q_{1},\ldots,q_{N}\) the moving charges and by \(r_{1},\ldots,r_{N}\) their trajectories, which we assume to be \(C^{2}\) functions \(r_{j}:\mathbb{R}\to\mathbb{R}^{3}\), \(T\)-periodic and such that \(|\dot{r}_{j}(t)|<c\) for every \(t\in[0,T]\) and \(r_{i}(t)\neq r_{j}(t)\) for every \(t\in[0,T]\) and \(i\neq j\) (cf. Section 3). Let us now set, for \(i=1,\ldots,N\),
\[\beta_{i}(t)=\frac{\dot{r_{i}}(t)}{c},\quad\forall\ t\in[0,T],\ i=1,\ldots,N,\]
and observe that
\[||\beta_{i}||_{\infty}<1. \tag{4.1}\]
Moreover, we define \(\eta_{i}:\Omega\to\mathbb{R}^{3}\) by
\[\eta_{i}(t,x)=\frac{x-r_{i}(t)}{|x-r_{i}(t)|},\quad\forall\ (t,x)\in\Omega\]
and \(t_{i}:\Omega\to\mathbb{R}\) by the implicit relation
\[t_{i}=t-\frac{1}{c}\,|x-r_{i}(t_{i})|. \tag{4.2}\]
It is well-known that, for every \(i=1,\ldots,N\), the number \(t_{i}\) is the _retarded_ time. The existence and uniqueness of a solution of (4.2) for a fixed \((t,x)\in\Omega\) is a standard fact in special relativity and it can be proved by means of a plain implicit function argument, which also implies that \(t_{i}\) is a function of class \(C^{1}\). Moreover, the periodicity of \(r_{i}\) implies that \(t_{i}\) is \(T\)-periodic as a function of the time variable \(t\).
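Although not needed for the proofs, the contraction structure underlying the implicit function argument also yields a practical way of evaluating \(t_{i}\): the map \(s\mapsto t-|x-r_{i}(s)|/c\) is a contraction with constant \(\|\beta_{i}\|_{\infty}<1\), so plain fixed-point iteration converges to the unique solution of (4.2). A minimal numerical sketch follows (in units with \(c=1\), using a hypothetical circular source trajectory for illustration):

```python
import numpy as np

def retarded_time(t, x, r, c=1.0, tol=1e-12, max_iter=200):
    """Solve t_i = t - |x - r(t_i)| / c by fixed-point iteration; the map
    is a contraction since sup|r'| / c < 1, so the iteration converges."""
    s = t  # initial guess
    for _ in range(max_iter):
        s_new = t - np.linalg.norm(x - r(s)) / c
        if abs(s_new - s) < tol:
            break
        s = s_new
    return s_new

# Example: a T-periodic circular orbit with speed 0.5 c (hypothetical data).
r = lambda s: 0.5 * np.array([np.cos(s), np.sin(s), 0.0])
t_i = retarded_time(t=0.0, x=np.array([3.0, 0.0, 0.0]), r=r)
```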
The Lienard-Wiechert scalar and vector potentials generated by the point charge source \(q_{i}\), \(i=1,\ldots,N\), acting on a charge at the point \((t,x)\), are given, respectively, by
\[V_{i}(t,x)=\frac{q_{i}}{4\pi\varepsilon_{0}}\,\frac{1}{1-\eta_{i}(t_{i},x) \cdot\beta_{i}(t_{i})}\,\frac{1}{|x-r_{i}(t_{i})|} \tag{4.3}\]
and
\[A_{i}(t,x)=\frac{\beta_{i}(t_{i})}{c}\,V_{i}(t,x), \tag{4.4}\]
where \(t_{i}=t_{i}(t,x)\) and \(\varepsilon_{0}\) is the vacuum permittivity. For future reference, let us recall that the corresponding electric and magnetic fields are given by
\[E_{i}(t,x)= \frac{q_{i}}{4\pi\varepsilon_{0}}\,\left(\frac{\eta_{i}(t_{i},x) -\beta_{i}(t_{i})}{\gamma_{i}^{2}\,(1-\eta_{i}(t_{i},x)\cdot\beta_{i}(t_{i})) ^{3}}\,\frac{1}{|x-r_{i}(t_{i})|^{2}}\right.\] \[\left.+\frac{\eta_{i}(t_{i},x)\times((\eta_{i}(t_{i},x)-\beta_{i} (t_{i}))\times\dot{\beta}_{i}(t_{i}))}{c\,(1-\eta_{i}(t_{i},x)\cdot\beta_{i}(t _{i}))^{3}}\,\frac{1}{|x-r_{i}(t_{i})|}\right),\]
where \(\gamma_{i}=1/\sqrt{1-|\beta_{i}|^{2}}\) is the Lorentz factor, and
\[B_{i}(t,x)=\frac{\eta_{i}(t_{i},x)}{c}\times E_{i}(t,x), \tag{4.5}\]
respectively (cf. [16]).
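As a continuation of the sketch above (and again purely illustrative), the potentials (4.3)-(4.4) can be evaluated pointwise once the retarded time is known. Here `rdot` denotes the velocity of the given source trajectory, and `retarded_time` is the routine sketched after (4.2):

```python
import numpy as np

def lienard_wiechert(t, x, r, rdot, q, c=1.0, eps0=1.0):
    """Evaluate the Lienard-Wiechert potentials (4.3)-(4.4) at (t, x)."""
    ti = retarded_time(t, x, r, c=c)   # retarded time, solution of (4.2)
    d = x - r(ti)
    eta = d / np.linalg.norm(d)        # eta_i(t_i, x)
    beta = rdot(ti) / c                # beta_i(t_i)
    V = q / (4 * np.pi * eps0) / ((1.0 - eta @ beta) * np.linalg.norm(d))
    A = beta / c * V
    return V, A
```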
Let us notice that \(V_{i}\) and \(A_{i}\) (and hence \(E_{i}\) and \(B_{i}\)) are well-defined in \(\Omega\): indeed, from (4.2) we first deduce that
\[x-r_{i}(t_{i})=0\quad\Longleftrightarrow\quad t=t_{i},\]
thus implying that \((t,x)=(t_{i},r_{i}(t_{i}))\), which is impossible if \((t,x)\in\Omega\). On the other hand, if \((t,x)\in\Omega\) we have
\[|\eta_{i}(t_{i},x)\cdot\beta_{i}(t_{i})|\leq||\beta_{i}||_{\infty}\]
and then, by (4.1),
\[1-\eta_{i}(t_{i},x)\cdot\beta_{i}(t_{i})\geq 1-||\beta_{i}||_{\infty}>0. \tag{4.6}\]
We are now in a position to state our result on periodic motions under Lienard-Wiechert potentials.
**Theorem 4.1**.: _In the above setting, let us assume that \(q_{i}<0\), for every \(i=1,\ldots,N\). Let_
\[V(t,x)=\sum_{i=1}^{N}V_{i}(t,x),\quad A(t,x)=\sum_{i=1}^{N}A_{i}(t,x), \tag{4.7}\]
_for every \((t,x)\in\Omega\), where \(V_{i}\) and \(A_{i}\), \(i=1,\ldots,N\), are given in (4.3) and (4.4), respectively. Then, the corresponding Lorentz force equation (3.1) has infinitely many \(T\)-periodic solutions._
Proof.: The result follows from Theorem 3.1. We need to show that \(V\) and \(A\) satisfy assumptions (V), (AV1) and (AV2).
As far as (V) is concerned, the assumption \(q_{i}<0\), for every \(i=1,\ldots,N\), implies that \(V(t,x)<0\), for every \((t,x)\in\Omega\). Moreover, from (4.2) and the definition of \(\beta_{i}\) we deduce that
\[c(t-t_{i})=|x-r_{i}(t_{i})|\leq|x-r_{i}(t)|+|r_{i}(t)-r_{i}(t_{i})|\leq|x-r_{ i}(t)|+c||\beta_{i}||_{\infty}(t-t_{i}), \tag{4.8}\]
thus implying
\[c(t-t_{i})\leq\frac{|x-r_{i}(t)|}{1-||\beta_{i}||_{\infty}}.\]
Using this estimate in (4.8), we infer
\[|x-r_{i}(t_{i})|\leq\frac{1}{1-||\beta_{i}||_{\infty}}\,|x-r_{i}(t)|.\]
This relation, together with (4.6) and the sign assumption on the charges, implies that
\[V_{i}(t,x)\leq\frac{q_{i}}{4\pi\varepsilon_{0}}\,\frac{1}{|x-r_{i}(t)|},\]
for every \((t,x)\in\Omega\). Recalling that \(V_{j}(t,x)<0\), for every \((t,x)\in\Omega\) and \(j=1,\ldots,N\), we conclude that
\[V(t,x)=V_{i}(t,x)+\sum_{j\neq i}V_{j}(t,x)\leq\frac{q_{i}}{4\pi\varepsilon_{0} }\,\frac{1}{|x-r_{i}(t)|},\]
for every \((t,x)\in\Omega\). This proves the validity of (3.3) with \(\kappa=\max\{\kappa_{i}:\,i=1,\ldots,N\}\), where \(\kappa_{i}=-q_{i}/(4\pi\varepsilon_{0})\), and \(\delta>0\) arbitrary.
The validity of (AV1) is an immediate consequence of the definition of \(A_{i}\) given in (4.4). Indeed, (AV1) is satisfied with
\[\kappa^{\prime}=\max\{||\beta_{i}||_{\infty}:\,i=1,\ldots,N\}\]
(observe that \(\kappa^{\prime}<1\) by (4.1)).
Finally, we pass to the proof of the validity of (AV2), first observing that (1.2) implies that (AV2) can be written as
\[\lim_{|x|\to+\infty}\left(|V(t,x)|+|E(t,x)|+|B(t,x)|\right)=0, \tag{4.9}\]
uniformly in \(t\in\mathbb{R}\). For every \(i=1,\ldots,N\), from (4.6) we infer that
\[|V_{i}(t,x)|\leq\frac{-q_{i}}{4\pi\varepsilon_{0}}\;\frac{1}{1-||\beta_{i}||_ {\infty}}\,\frac{1}{|x-r_{i}(t_{i})|},\]
for every \((t,x)\in\Omega\). Now, defining
\[\Theta=\max\{|r_{i}(t)|:\ t\in\mathbb{R},\ i=1,\ldots,N\},\]
it is immediate to see that the set \(E=\{(t,x)\in\mathbb{R}\times\mathbb{R}^{3}:\ |x|>\Theta+1\}\) satisfies \(E\subset\Omega\) and that
\[|V_{i}(t,x)|\leq\frac{-q_{i}}{4\pi\varepsilon_{0}}\;\frac{1}{1-||\beta_{i}|| _{\infty}}\,\frac{1}{|x|-\Theta},\quad\forall\ (t,x)\in\Omega,\ |x|>\Theta+1,\]
thus implying
\[\lim_{|x|\to+\infty}|V_{i}(t,x)|=0,\quad\forall\ i=1,\ldots,N, \tag{4.10}\]
uniformly in \(t\in\mathbb{R}\). Taking again into account (4.6), the fact that \(\eta_{i}\), \(\beta_{i}\) and \(\dot{\beta_{i}}\) are bounded and the definition of \(\Theta\), we deduce that there exists \(Z^{\prime}>0\) such that
\[|E_{i}(t,x)|\leq\frac{-q_{i}}{4\pi\varepsilon_{0}}\;\frac{Z^{\prime}}{(1-||\beta_{i}||_{\infty})^{3}}\;\bigg(\frac{1}{|x|-\Theta}+\frac{1}{(|x|-\Theta)^{2}}\bigg)\,,\quad\forall\ (t,x)\in\Omega,\ |x|>\Theta+1.\]
This proves that
\[\lim_{|x|\to+\infty}|E_{i}(t,x)|=0,\quad\forall\ i=1,\ldots,N, \tag{4.11}\]
uniformly in \(t\in\mathbb{R}\). Finally, from (4.5) and (4.11), recalling that \(\eta_{i}\) is bounded, we infer
\[\lim_{|x|\to+\infty}|B_{i}(t,x)|=0,\quad\forall\ i=1,\ldots,N, \tag{4.12}\]
uniformly in \(t\in\mathbb{R}\). From the fact that (4.10), (4.11) and (4.12) hold for every \(i=1,\ldots,N\), recalling (4.7), we can conclude that (4.9) is fulfilled.
### The forced relativistic Kepler problem
Let us consider the equation
\[\frac{d}{dt}\left(\frac{m\dot{x}}{\sqrt{1-|\dot{x}|^{2}/c^{2}}}\right)=- \alpha\frac{x}{|x|^{3}}+\nabla_{x}U(t,x),\qquad x\in\mathbb{R}^{3}, \tag{4.13}\]
interpreted as the relativistic Kepler problem (\(m,\alpha>0\)), perturbed by an external force.
The following result holds true.
**Theorem 4.2**.: _Let \(U:\mathbb{R}\times\mathbb{R}^{3}\rightarrow\mathbb{R}\) be a \(C^{1}\) function, \(T\)-periodic in the first variable, satisfying \(U(t,x)>0\) for every \((t,x)\in\mathbb{R}\times\mathbb{R}^{3}\) and_
\[\lim_{|x|\rightarrow+\infty}(|U(t,x)|+|\nabla_{x}U(t,x)|)=0, \tag{4.14}\]
_uniformly in \(t\in\mathbb{R}\). Then, equation (4.13) has infinitely many \(T\)-periodic solutions._
Proof.: The result follows from Theorem 3.1. Indeed, let us first observe that here \(\Omega=\{(t,x)\in\mathbb{R}\times\mathbb{R}^{3}:x\neq 0\}\),
\[V(t,x)=-\frac{\alpha}{m|x|}-\frac{1}{m}U(t,x),\quad A(t,x)=0,\quad\forall\ (t,x )\in\Omega.\]
Then, from the sign condition on \(U\) we plainly deduce that (V) (with \(\kappa=1\) and arbitrary \(\delta>0\)) is satisfied. Moreover, assumption (AV1) is trivially fulfilled since \(A\equiv 0\). Finally, from assumption (4.14), we infer that
\[\lim_{|x|\rightarrow+\infty}(|V(t,x)|+|\nabla_{x}V(t,x)|)=0,\]
uniformly in \(t\in\mathbb{R}\). Recalling again that \(A\equiv 0\), this proves the validity of (AV2).
|
2304.02061 | Generating Continual Human Motion in Diverse 3D Scenes | We introduce a method to synthesize animator guided human motion across 3D
scenes. Given a set of sparse (3 or 4) joint locations (such as the location of
a person's hand and two feet) and a seed motion sequence in a 3D scene, our
method generates a plausible motion sequence starting from the seed motion
while satisfying the constraints imposed by the provided keypoints. We
decompose the continual motion synthesis problem into walking along paths and
transitioning in and out of the actions specified by the keypoints, which
enables long generation of motions that satisfy scene constraints without
explicitly incorporating scene information. Our method is trained only using
scene agnostic mocap data. As a result, our approach is deployable across 3D
scenes with various geometries. For achieving plausible continual motion
synthesis without drift, our key contribution is to generate motion in a
goal-centric canonical coordinate frame where the next immediate target is
situated at the origin. Our model can generate long sequences of diverse
actions such as grabbing, sitting and leaning chained together in arbitrary
order, demonstrated on scenes of varying geometry: HPS, Replica, Matterport,
ScanNet and scenes represented using NeRFs. Several experiments demonstrate
that our method outperforms existing methods that navigate paths in 3D scenes. | Aymen Mir, Xavier Puig, Angjoo Kanazawa, Gerard Pons-Moll | 2023-04-04T18:24:22Z | http://arxiv.org/abs/2304.02061v3 | # Generating Continual Human Motion in Diverse 3D Scenes
###### Abstract
We introduce a method to synthesize animator guided human motion across 3D scenes. Given a set of sparse (3 or 4) joint locations (such as the location of a person's hand and two feet) and a seed motion sequence in a 3D scene, our method generates a plausible motion sequence starting from the seed motion while satisfying the constraints imposed by the provided keypoints. We decompose the continual motion synthesis problem into walking along paths and transitioning in and out of the actions specified by the keypoints, which enables long generation of motions that satisfy scene constraints without explicitly incorporating scene information. Our method is trained only using scene agnostic mocap data. As a result, our approach is deployable across 3D scenes with various geometries. For achieving plausible continual motion synthesis without drift, our key contribution is to generate motion in a goal-centric canonical coordinate frame where the next immediate target is situated at the origin. Our model can generate long sequences of diverse actions such as grabbing, sitting and leaning chained together in arbitrary order, demonstrated on scenes of varying geometry: HPS, Replica, Matterport, ScanNet and scenes represented using NeRFs. Several experiments demonstrate that our method outperforms existing methods that navigate paths in 3D scenes.
## 1 Introduction
Our goal is to generate rich, animator-guided, long-term human behavior in arbitrary 3D scenes, including a variety of actions and transitions between them. Such a system should allow for goal-directed generation of humans moving from one place to another, for example, walking towards the couch to sit on it, and then standing up and approaching the shelf to grab something from it, as illustrated in Figure 1. It should allow users to specify with minimal interaction what kinds of actions to perform, while keeping the realism and expressivity required for applications such as synthetic data generation, robotics, VR/AR, gaming, etc.
While the community has seen promising progress in animator-guided motion synthesis in 3D scenes, most works are restricted to a single action and do not handle transitions [73, 69, 62], preventing them from producing long-range, diverse motion. They are also not deployable in a wide variety of real scenes [58, 65, 66, 26]. The reason for this is that they synthesize motion by conditioning on scene geometry and require training on a dataset featuring 3D humans interacting with 3D scenes and objects [27, 26, 73]. Generalizing these methods to arbitrary 3D scenes would require collecting motion data registered to a myriad of possible 3D scenes and objects, which is not scalable.
In contrast, humans can navigate cluttered scenes, pick objects from a shelf they have never seen before, and sit on novel furniture and surfaces. Most of the clutter in the scene is often ignored, and what matters most are not the exact details of the object/scene geometry but whether they afford each action. Our hypothesis is that motion, to a large extent, is driven to avoid obstacles and focused on reaching the next immediate goal or target contacts in the environment. Thus, it should be possible to generate human motion without accounting for all the details in the 3D scene.
Based on this insight, we propose a novel framework for animator-guided motion synthesis in 3D scenes without relying on scene-registered motion data. As such, our method can be trained on regular mocap data, which is relatively easily captured and abundantly available [46]. Since our method does not explicitly condition on the geometry of the scene, it can be deployed across 3D scenes with varied geometry.
Our method relies on two key observations: first, we can represent actions in a 3D scene as a set of sparse desired target contacts (we use 3 or 4 contacts, such as the location of the two feet and a hand, or the location of two feet and the root) to be reached, which we refer to as _action keypoints_. These keypoints can be provided by an animator using an interface or generated by automated heuristics, allowing animators to trade off speed against control over the generated motion. An interesting finding in this paper is that _action keypoints_ are a powerful abstraction of several actions in 3D scenes, and can be used to execute instructions such as "sit there" or "grab at this height". Second, avoiding obstacles in 3D scenes can be achieved by path following. The challenge is to follow arbitrarily long paths, smoothly making the human transition into and out of the action, and then walk towards the next target. For this, we break down motion into three pieces: walking, transitioning into an action, and transitioning out of it. For path following and transitions, we introduce the idea of training a motion synthesis model entirely with _scene-agnostic motion data_ to reach the origin of a _canonical coordinate frame_. For navigating paths, this model is sampled iteratively to converge at the origin of the _canonical coordinate frame_ defined using waypoints and tangents on the path. For transitions in and out of actions, motion is synthesized by placing target poses at the origin of the canonical coordinate frame. By iteratively synthesizing motion in the _canonical coordinate frame_, our method allows for long-range motion synthesis that transitions between walks and various actions in a 3D scene.
Unlike existing methods for motion synthesis [26, 58], our method allows for synthesizing motion without requiring any manual phase or action annotation.
For the first time, we demonstrate long-range human motion synthesis on a wide range of scene datasets: Replica [61], Matterport [8], HPS [22] and ScanNet [10]. Furthermore, we show that our model can perform actions at different places, such as grabbing from any shelf, table or cabinet at any height, or sitting on any surface that affords sitting. We will make our code and models publicly available, so that animators can use them to synthesize goal-directed human motion across 3D scenes.
To summarize, our contributions are as follows:
* We present a method that departs from existing methods for motion synthesis in 3D scenes by only using regular motion capture data and that is deployable across varied 3D scenes.
* We introduce a novel idea of iteratively converging motion at the origin of a canonical coordinate frame, which allows to synthesize long-range motion in 3D scenes.
## 2 Related Work
**Human Motion Prediction without the 3D scene.** Predicting the dynamics of human motion is a long-studied problem in computer vision and graphics. Classic works explored using Hidden Markov Chains [5], Gaussian Processes [64], and physics-based models [44] for predicting future motion. Recently, recurrent neural networks [19, 31] have been used for motion prediction [17, 48, 3], also in combination with Graph Neural Networks [36, 47, 40, 11] and variational auto-encoders [35] to add diversity [23, 74] (see also Yuan et al. [71]). An intrinsic problem of recurrent methods is that they drift over time [1].
More recent approaches employ transformers to generate unconditional or text- and music-conditioned motion sequences [1, 41, 39, 51, 52]. We also build on transformer architectures, but aim to generate motion in real 3D scenes.
Motion Inbetweening [14, 25, 49, 70, 2, 34] is another classic paradigm for motion synthesis where the task is to fill in frames between animator-provided keyframes.
Our approach builds on recent progress in transformer architectures [41], and classical ideas such as motion inbetweening, combined with the novel idea of a canonical coordinate frame and action keypoint representation in order to generate motion in 3D scenes.
**Character Control in Video Games.** Motion matching [54], its learnt variant [9, 32] and motion graphs [38, 15, 37, 56, 55] are classical methods often employed in the industry for generating kinematic motion sequences, controlled by environment and user-specified constraints. Similar to our goal, some works [53, 7] use a combination of these approaches and IK to generate human behaviors in synthetic scenes. However, these approaches require significant human effort to author realistic animations, and IK approaches easily produce non-realistic animations.
Deep learning variants such as Holden _et al_. [33] introduce phase conditioning in an RNN to model the periodic nature of walking motion. In several works by Starke _et al_. [58, 60, 59], the idea of local phases is extended to synthesize scene-aware motion, basketball motion and martial arts motion. All these methods generate convincing motion, but phases are non-intuitive for non-periodic motion and often require manual labelling.
**Static Human Pose Conditioned on Scenes.** The relationship between humans, scenes, and objects is another recurrent subject of study in computer vision and graphics. Classical works include methods for 3D object detection [20, 21] and affordance prediction using human poses [12, 18, 16].
Several recent works generate plausible static poses conditioned on a 3D scene [42, 74, 68, 72, 28, 76] using recently captured human interaction datasets [27, 22, 57, 4, 63, 6]. Instead of static poses, we generate _motion_ coherent with the scene, which is significantly harder.
**Scene Aware Motion Synthesis.** Some works leverage reinforcement learning to synthesize navigation in _synthetic_ 3D scenes [43, 75]. Other works focus on a single action, such as grabbing [62, 69], but do not demonstrate transitions to new motions. These methods are not demonstrated in real 3D scenes with multiple objects and clutter. Recent real interaction datasets [27, 22, 57, 4, 63, 6] have powered methods to synthesize 3D scene-aware motion [66, 65, 6, 67]. These datasets are crucial to drive progress, but do not capture the richness and variety of real-world scenes. Hence, these methods are often demonstrated only on small scenes from PROX [27] and Matterport [8].
We draw inspiration from Hassan et al. [26] which combine path planning with neural motion synthesis, and from Zhang et al. [73] which synthesize contact controlled human chair interaction. These methods require the geometry of the isolated interacting object as input, which make them hard to generalize to real 3D scenes. Unlike these methods, we demonstrate _long chained sequences of actions_ in _generic real 3D scenes_, which is enabled with our origin
Figure 2: Overview of our method. We generate human motion satisfying keypoint constraints by diving it into 3 stages: a _Walk Motion_, which animates the human as it walks between keypoints, a _Transition-In_, which blends the walking motion with the pose specified by the keypoints and a _Transition-Out_, which animates the human back to the walking pose. We use an autoregressive transformer, _WalkNet_, to synthesize the walking motion, and a masked-autoencoder transformer to generate the blending motion. By moving the motion into a Goal-Centric Canonical Coordinate Frame our method can generalize to a wide set of 3D scenes.
canonicalization and action keypoints.
## 3 Method
Our method takes as input a seed motion sequence and a list of action keypoints \(\{\mathbf{a}_{1},\dots,\mathbf{a}_{n}\}\) specifying interactions at different locations in the scene. These keypoints can be specified by users or generated using language commands and scene segmentations (Sec. 3.2). Our goal is to synthesize motion that starts at the seed motion and transitions in and out of each of the action keypoints in the input list.
The first step is to optimize for a pose that fits the action keypoints at target locations using Inverse Kinematics and a pose prior (Sec. 3.3). These poses, along with the starting seed motion, act as anchors to guide the motion synthesis process.
Using scene-agnostic motion capture data placed in a goal-centric canonical coordinate frame (Sec. 3.4), we train _WalkNet_ (Sec. 3.5) to synthesize walking motion that converges at the origin of a canonical coordinate frame, and _TransNet_ (Sec. 3.6), which synthesizes motion in between a seed motion sequence and a target pose, also at the origin. At test time (see Fig. 2), _WalkNet_ is used to reach canonicalized intermediate goals along a path computed with a path planning algorithm, thus creating long motion by successively reaching the origin. Once the walking motion reaches the vicinity of an anchor pose, _TransNet_ synthesizes the transition from the walking motion to the anchor pose and vice versa. This allows us to synthesize motion in 3D scenes without the need for motion data coupled with 3D scenes. Our framework is general and highly modular, which allows it to be updated with novel methods for motion synthesis.
### SMPL Body Model
We use the SMPL body model [45] to represent the human subject. SMPL is a differentiable function \(M(\mathbf{\phi},\mathbf{\theta},\mathbf{t},\mathbf{\beta})\) that maps global body orientation \(\mathbf{\phi}\), pose \(\mathbf{\theta}\), translation \(\mathbf{t}\) and shape \(\mathbf{\beta}\) parameters to the vertices of a human mesh along with the 3D joint locations of the SMPL skeleton. We assume that \(\mathbf{\beta}\) remains static throughout our method. We denote motion sequences as an ordered list of SMPL parameter tuples. For example \(\mathcal{C}=[(\mathbf{r},\mathbf{\phi},\mathbf{\theta})_{j}]_{j=1:D}\) denotes a motion sequence of \(D\) frames.
### Generating Keypoints in a Scene
Keypoints can be efficiently collected using a 3D user interface, as described in the supp. mat., or inferred from the geometry of the scene; they can therefore be generated via action labels or language. An example of automatic KP generation can be seen in Fig. 4. Given a point cloud of the scene with semantic labels and a language description of a task, we can use simple heuristics to generate keypoints that can synthesize the described motion. More details can be found in the supp. mat.
### From Action Keypoints to an Anchor Pose
The first step is to infer a pose from the action keypoints at a target location \(\mathbf{a}=\left\{\mathbf{k}_{i}\right\}_{i=1}^{P}\), where \(\mathbf{k}_{i}\in\mathbb{R}^{3}\) indicates the desired locations for corresponding SMPL joints denoted as \(m_{i}(\cdot)\). We find that as few as three or four joints (\(P=3,4\)) are usually sufficient. Since the problem is heavily under-constrained, we optimize in the latent space of VPoser [50], denoted \(\mathbf{z}\). Denoting \(f(\mathbf{z})\mapsto(\phi,\theta)\) as the mapping from the latent space \(\mathbf{z}\) to the SMPL pose parameters, we minimize the following objective
\[\mathbf{z},\mathbf{t}=\arg\min_{\mathbf{z},\mathbf{t}}\sum_{i=1}^{P}||m_{i}(f(\mathbf{z}),\mathbf{t})- \mathbf{k}_{i}||_{2} \tag{1}\]
Figure 4: Using language instruction and semantic segmentation, keypoints can be automatically placed in a 3D scene.
Figure 3: a) Using keypoints and tangents along a path, we move motion from the scene coordinate frame into b) the goal-centric canonical coordinate frame, where c) _WalkNet_ synthesizes motion that converges at the origin of the coordinate frame. d) Once the synthesized motion reaches the origin, we move it back to the scene coordinate frame.
Please see the supplementary material for further details on making the optimization well behaved. We repeat this step for each target action \(\mathbf{a}_{1}\dots\mathbf{a}_{N}\), obtaining \(N\) pose-anchors \(\mathcal{A}=\{\mathbf{t}_{i}^{A},\mathbf{\phi}_{i}^{A},\mathbf{\theta}_{i}^{A}\}_{i=1:N}\).
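To make Eq. (1) concrete, the optimization can be run with any first-order optimizer over \((\mathbf{z},\mathbf{t})\). The sketch below assumes hypothetical wrappers `decode` (VPoser latent \(\to\) SMPL pose) and `joints` (pose, translation \(\to\) 3D joint locations); the latent dimension, step count, and learning rate are illustrative choices rather than values from the paper:

```python
import torch

def fit_anchor_pose(keypoints, joint_ids, decode, joints, steps=500, lr=0.05):
    """Minimize Eq. (1) over the VPoser latent z and root translation t."""
    z = torch.zeros(32, requires_grad=True)   # VPoser-style latent (dim assumed)
    t = torch.zeros(3, requires_grad=True)    # root translation
    opt = torch.optim.Adam([z, t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        j = joints(decode(z), t)              # (J, 3) SMPL joint locations
        # sum of Euclidean distances between selected joints and keypoints
        loss = (j[joint_ids] - keypoints).norm(dim=-1).sum()
        loss.backward()
        opt.step()
    return z.detach(), t.detach()
```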
### Canonical Coordinate Frame
One of our key ideas for synthesizing motion in 3D scenes is to make transformers synthesize motion that always converges at the origin of a canonical coordinate frame. In this way, long motion is composed at test time by consecutively going to the next goal placed at the origin. Thus, we canonicalize the training sequence clips using the planar translation \(\mathbf{t}_{C}\) and rotation \(\mathbf{R}_{\mathbf{C}}\) of the last frame in each sequence clip as follows
\[\mathbf{\phi}_{j}^{C}=\mathbf{R}_{C}^{-1}\mathbf{\phi}_{j}\ \,\mathbf{r}_{j}^{C}=\mathbf{R}_{C} ^{-1}(\mathbf{r}_{j}-\mathbf{t}_{C})\ . \tag{2}\]
By construction, this transformation outputs a new set of \(L\) frames \([(\mathbf{r}^{C},\mathbf{\phi}^{C},\mathbf{\theta})_{j}]_{j=1:L}\), where the last pose is at the origin and oriented towards a canonical axis of orientation \(\gamma\) (an arbitrary fixed axis). Let \(\mathbf{X}\) denote a matrix whose rows are vectorized motion parameters (pose and translation combined). We will denote the canonicalization in Eq. (2) applied to a full sequence as
\[\mathbf{X}^{C}=C(\mathbf{X};\mathbf{R}_{C},\mathbf{t}_{C}) \tag{3}\]
Synthesizing motion in the goal-centric canonical coordinate frame allows us to synthesize walking motion along paths in a 3D scene (Sec. 3.5) and transitions in and out of actions (Sec. 3.6) without the need for scene-registered data.
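As an illustration, the canonicalization (2)-(3) is a per-frame rigid transform; a minimal sketch (treating the global orientation \(\mathbf{\phi}\) as a rotation matrix and using \(\mathbf{R}_{C}^{-1}=\mathbf{R}_{C}^{\top}\)):

```python
import numpy as np

def canonicalize(Phi, r, R_C, t_C):
    """Eq. (2): express a clip's global orientations Phi (n, 3, 3) and root
    translations r (n, 3) in the goal-centric frame defined by (R_C, t_C)."""
    Phi_c = np.einsum('ji,njk->nik', R_C, Phi)   # R_C^{-1} @ Phi_j per frame
    r_c = (r - t_C) @ R_C                        # R_C^{-1} @ (r_j - t_C), row form
    return Phi_c, r_c
```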
### WalkNet
**Training.** Using walking sequence clips of variable length \(L\), canonicalized (last pose at the origin), we train _WalkNet_. _WalkNet_ takes \(K\) motion frames as input \(\mathcal{W}_{inp}=[(\mathbf{r}^{W},\mathbf{\phi}^{W},\mathbf{\theta}^{W})_{j}]_{j=1:K}\) and predicts the next \(K\) frames in the sequence \(\mathcal{W}_{out}=[(\mathbf{r}^{W},\mathbf{\phi}^{W},\mathbf{\theta}^{W})_{j}]_{j=K:2K}\). The training sub-clips of size \(2K<L\) are randomly sampled from the training walking sequences.
Expressing sequences as matrices (rows are the vectorized translations and poses) as explained in the previous section, the transformer takes as input a matrix \(\mathbf{X}_{in}\in\mathbb{R}^{K\times 219}\) and outputs a matrix \(\mathbf{X}_{out}\in\mathbb{R}^{K\times 219}\). We denote the learned mapping as \(T:\mathbf{X}_{in}\mapsto\mathbf{X}_{out}\). Note that we input the pose as vectorized joint rotation matrices, which makes learning more stable compared to using joint angles.
**Test time.** We use _WalkNet_ to follow long paths by breaking the path into intermediate goals that are canonicalized to the origin (Fig. 3). To traverse scenes while avoiding obstacles, we compute the path between the seed motion \(\mathcal{I}\) and the first anchor pose \(\mathcal{A}_{1}\) using A* [24]. Along the path, we sample \(P\) goals and compute tangents to the path: \(\{\mathbf{q}_{p},\mathbf{l}_{p}\in\mathbb{R}^{3}\}_{p=1\dots P}\). Then we recursively canonicalize such that the tangents \(\mathbf{l}_{p}\) align with the canonical axis \(\gamma\). Hence, the canonical translation and rotation are computed as follows
\[\mathbf{t}_{C}=\mathbf{q}_{p},\qquad\mathbf{R}_{C}=\exp(\widehat{\mathbf{l}_{p}\times \mathbf{\gamma}}) \tag{4}\]
where \(\exp(\cdot)\) is the exponential map recovering the rotation from the skew-symmetric matrix \(\widehat{\mathbf{l}_{p}\times \mathbf{\gamma}}\). With this, the motion sequence from goal \(p-1\) to goal \(p\) is obtained by canonicalizing, predicting future motion with the learned mapping \(T\), and uncanonicalizing
\[\mathbf{X}_{in}\xrightarrow{C(\cdot,\mathbf{R}_{C},\mathbf{t}_{C})}\mathbf{ X}_{in}^{C}\xrightarrow{T}\mathbf{X}_{out}^{C}\xrightarrow{C(\cdot, \mathbf{R}_{C}^{T},-\mathbf{t}_{C})}\mathbf{X}_{out}. \tag{5}\]
Although the transformer outputs K future frames, at test time we use it recursively with a stride of 1 for better performance. That means we effectively predict one pose at a time, keeping only the first of the \(K\) predicted frames and discarding the rest. In this manner, the motion always goes to the origin, we never have to explicitly send the goal coordinates as input to the network, and we do not drift. When we are sufficiently close to an anchor pose, we predict the transition with TransNet.
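To make the test-time loop (5) concrete, here is a runnable toy in 2D: the "walker" is reduced to a planar position, and a stand-in for _WalkNet_ simply steps towards the origin of the canonical frame. The geometry (Eq. (4): align the path tangent with \(\gamma\), canonicalize, predict, un-canonicalize with stride 1) mirrors the full method; everything else is simplified for illustration:

```python
import numpy as np

def rot2d(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def toy_step(p, stride=0.3):
    """Stand-in for WalkNet: in the canonical frame the goal is the origin,
    so one predicted frame just moves a fixed stride towards it."""
    return p - stride * p / max(np.linalg.norm(p), stride)

def follow_path(p, goals, tangents, gamma=np.array([0.0, 1.0]), eps=0.15):
    """Eq. (4)-(5): canonicalize so the next goal sits at the origin with its
    tangent aligned to gamma, advance one frame, un-canonicalize, repeat."""
    track = [p.copy()]
    for q, tang in zip(goals, tangents):
        ang = np.arctan2(tang[1], tang[0]) - np.arctan2(gamma[1], gamma[0])
        R_C = rot2d(ang)                              # rotates gamma onto tang
        while np.linalg.norm(p - q) > eps:
            p = R_C @ toy_step(R_C.T @ (p - q)) + q   # stride-1 rollout
            track.append(p.copy())
    return np.array(track)

track = follow_path(np.zeros(2),
                    goals=[np.array([1.0, 0.0]), np.array([2.0, 1.0])],
                    tangents=[np.array([1.0, 0.0]), np.array([1.0, 1.0])])
```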
### TransNet
We synthesize transitions between walks and actions, again in a canonicalized frame. To do so, we train _TransNet_ - a transformer-based motion inbetweener - using AMASS sequences placed in the canonical coordinate frame. The task of _TransNet_ is to fill in the motion from a seed sequence \(\mathbf{X}_{in}\) to a target _anchor pose_.
**Training.** We train _TransNet_ by asking it to recover training clips from masked-out ones. We observe that directly asking it to infill many frames does not work reliably. Inspired by the training of language models [13], we progressively grow the mask during training until the desired length. Formally, let \(\mathbf{X}\) be a training clip of length \(M\), and let \(\mathbf{V}\in[0,1]^{M\times 219}\) be a matrix mask with zero rows for the frames that need to be infilled. The network is tasked to recover \(\mathbf{X}\) from the masked-out matrix \(\mathbf{X}\odot\mathbf{V}\). The mask \(\mathbf{V}\) is progressively grown to mask all the motion frames between \(\frac{M}{2}\) and \(M-1\) - everything except the seed motion and the last anchor pose. For more details, please see the supp. mat.
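One plausible reading of this curriculum, sketched below for a clip of length \(M\) (the linear growth schedule and the exact gap placement are our assumptions, not stated in the paper):

```python
import numpy as np

def masked_clip(X, step, total_steps, M=120):
    """Curriculum masking sketch for TransNet training: zero out a gap of
    frames ending just before the final (anchor) frame, growing the gap
    from 1 up to M/2 - 1 frames as training progresses."""
    frac = min(step / total_steps, 1.0)
    gap = max(1, int(round(frac * (M // 2 - 1))))
    V = np.ones_like(X)                # X has shape (M, 219), rows are frames
    V[M - 1 - gap : M - 1] = 0.0       # seed frames and last frame stay visible
    return X * V, V
```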
**Test time.** We use _TransNet_ to synthesize transitions in 3D scenes by moving \(\frac{M}{2}\) frames of a motion sequence into the canonical coordinate frame using the orientation and position of the motion-anchor pose - the motion-anchor pose is thus placed at the origin of the canonical coordinate frame. _TransNet_ is then tasked to infill the missing frames (Fig. 5).
### Chained actions
With our models and representations we can chain actions trivially. At run time, we have to satisfy an arbitrary number of action keypoints \(\{\mathbf{a}_{1},\dots\mathbf{a}_{N}\}\) at different locations. First, we compute anchor poses as explained in Sec. 3.3. Obstacle-free paths connecting the locations of actions are computed with A*. We rely on _WalkNet_ to follow paths until we are sufficiently close to the first anchor pose. Feeding TransNet with the last \(M/2\) predicted frames of _WalkNet_ and the anchor pose, we predict the transition into the first anchor pose. To transition out, we also use TransNet with no modification. We sample a location along the path from \(\mathbf{a}_{1}\) to \(\mathbf{a}_{2}\) at a fixed distance \(\delta\) and place a walking pose from our database. TransNet can then transition into this walking pose (Fig. 5). Then we activate _WalkNet_ and the process is repeated until all actions are executed. In addition, we can repeatedly use TransNet to execute several actions at the same location, like grabbing at different heights.
## 4 Experiments
In this section, we present implementation details of our method and compare our approach with existing methods; our experiments show that we clearly outperform existing baselines. We then ablate our design choices and finally present qualitative results of our method.
### Implementation Details
_WalkNet_ and _TransNet_ are BERT-style [13] full-attention transformers. Both consist of 3 attention layers, each composed of 8 attention heads. We use an embedding size of 512 for both transformers. For more details, please see the supplementary material. For training both transformers, we set the learning rate to \(1e^{-5}\). Both networks are trained using an \(L2\) loss. We set \(M=120\) and \(K=30\). We experimented with three different values of \(M\) and found that \(M=120\) produces the least foot skating. Please see the supplementary material for these experiments.
### Datasets
**Motion Data:** To train _TransNet_ and _WalkNet_ we use the large mocap dataset **AMASS** [46]. For exact details on how this is done, please see the supplementary material.
**Scene Datasets:** We demonstrate that our method is able to generate realistic human motion in scenes from the **Matterport3D**, **HPS**, **Replica** and **ScanNet** datasets. All these datasets have been reconstructed using RGB-D or LIDAR scanners and contain scans with sizes ranging from 20 \(m^{2}\) to 1000 \(m^{2}\). While Replica and Matterport scenes contain near-perfect geometry, ScanNet scenes do not. Our method is able to generalize across all these scenes.
### Evaluation Metrics:
We compare our method with existing baselines using perceptual studies and a foot-skate metric. Additionally, we ablate various components of our method with the same foot-skate metric.
**Perceptual Study:** We synthesize two motion sequences - one using our method and another using a baseline method - and show the two synthesized sequences to participants in our perceptual study. The participant is asked to answer "Which motion looks most realistic?" and "Which motion satisfies scene constraints best?". The study is conducted in such a manner that the participant is forced to choose one of the two motions presented.
**Foot Skating (FS):** The foot-skate metric measures how much foot skating occurs during a synthesized motion, in cm/frame. For \(N\) frames, it is defined as:
\[s=\sum_{p=1}^{N}\left[v_{p}\left(2-2^{h_{p}/H}\right)\mathbf{1}_{h_{p}\leq H}\right]\]
where \(h_{p}\) is the height and \(v_{p}\) the velocity of a foot vertex on the right toe in frame \(p\), and \(H=2.5\) cm.
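In code, the metric reads as follows; dividing by \(N\) to obtain the per-frame units quoted in Tabs. 2 and 3 is our assumption, since the displayed formula is an unnormalized sum:

```python
import numpy as np

def foot_skate(v, h, H=2.5):
    """Foot-skate metric: toe-vertex speed v_p (cm/frame) weighted by
    2 - 2^(h_p / H) whenever the vertex height h_p (cm) is within H of
    the floor, averaged over the N frames (per-frame normalization assumed)."""
    w = np.where(h <= H, 2.0 - 2.0 ** (h / H), 0.0)
    return float(np.sum(v * w) / len(v))
```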
### Comparison with Baselines
As aforementioned, no method addresses the task of continual motion synthesis in arbitrary 3D scenes. For completeness we do our best to compare our approach with three existing methods: SAMP [26], GAMMA [75], Wang et al. [66] which all generate animator guided motion by navigating A* paths in 3D scenes. Though, these methods use different forms of animator guidance - such as action labels,
Figure 5: Using a) the motion-anchor pose in the 3D scene (purple), b) we move the motion sequence into the canonical coordinate frame. c) There _TransNet_ synthesizes transitions (blue) between the input motion and the pose placed at the origin (purple). d) Once the motion is synthesized, we move it back to the scene coordinate frame.
we modify them by incorporating the KP information used by our method. Note that except GAMMA, none of these baselines can be deployed in arbitrary 3D scenes without significant modifications, as described below.
**SAMP:** SAMP is written entirely in Unity and can only synthesize sitting and lying actions in synthetic scenes. Unlike SAMP, our method requires no manual action annotation. The object of interaction and the action to perform are the animator guidance provided as input to SAMP. SAMP synthesizes motion by explicitly conditioning on the geometry of the object of interaction, and by navigating A* paths. For comparison with SAMP, we represent the object of interaction in one of our test scenes with a synthetic object in Unity. Using KPs, we represent the orientation of the action in our test scene and use this orientation to port the A* paths used in our test 3D scene into Unity, where we run the publicly available code of SAMP. For exact details, please see the supp. mat. Note that SAMP cannot synthesize chained actions, nor can it be deployed in arbitrary 3D scenes. For instance, it cannot sit on stairs, nor can it perform a grabbing action near a bookshelf. The comparison is included for completeness, as SAMP also navigates A* paths.
**Wang et al.:** We run the pre-trained code of Wang et al. on scenes from the HPS, Replica and Matterport datasets. Instead of using action labels to generate anchor poses as done in the original paper, we replace this step with the motion anchors generated using our inverse kinematics step. Since Wang et al. [66] is trained using the PROX dataset and synthesizes navigational motion across A* paths by explicitly conditioning on scene geometry, it does not generalize at all to 3D scenes beyond this dataset.
**GAMMA:** GAMMA only navigates 3D scenes and is unable to synthesize human-scene interaction. Similar to the navigation part of our method, it uses the start and end of a path as animator guidance. For the purpose of this comparison, we generate a set of paths in 3D scenes using A* and synthesize walking motion along these paths using GAMMA and our method. GAMMA is unable to follow the exact waypoints of the path and as such produces significant interpenetrations with the 3D scene.
For visualizations of motion synthesized by these baselines, please see the supplementary video. We synthesize 5 motion sequences with a total duration of 300 seconds using each method in 5 different scenes for our perceptual study. In Tab. 1, we report the results of our perceptual study with 50 participants (see Sec. 4.3). Each column corresponds to the percentage of users who chose the method corresponding to the column heading. Our results are preferred by a vast majority of the participants. In Tab. 3, we report the numbers corresponding to the foot-skate metric.
### Ablation Studies
**Can _TransNet_ be replaced with other inbetweeners?** We compare _TransNet_ with the SoTA inbetweening method NeMF [29] for the task of transitioning in and out of actions. For our task of infilling \(\frac{M}{2}-1\) frames in the canonical coordinate frame, _TransNet_ produces more natural motion and less foot skating. We hypothesize that this occurs because NeMF is a general-purpose inbetweener that can infill an arbitrary number of frames, whereas _TransNet_ is a motion inbetweener custom-designed for the purpose of infilling \(\frac{M}{2}-1\) motion frames in the canonical coordinate frame. We conduct a new user study with 36 participants, asking users to rate the naturalness of 20 motion sequences by NeMF and _TransNet_. Results are reported in Tab. 2.
**Can _WalkNet_ be replaced with other path following methods?** We provide comparisons with SAMP, Wang et al., and GAMMA, which all navigate A* paths. As our experiments illustrate, our method outperforms these existing methods for navigation. For further completeness, we trained the SoTA walking method, MoGlow [30], on our walking data. When deployed on 150-200 meter-long A* paths, it produces significant foot skating after about 30 secs. We hypothesize that this occurs because MoGlow synthesizes motion in an egocentric coordinate frame; hence, the control signal provided by A* changes rapidly, which leads MoGlow to synthesize motion with significant drift. We compare our method to MoGlow on these paths
\begin{table}
\begin{tabular}{l|c c|c c c|c c} \hline & & Ours & SAMP & Ours & GAMMA & Ours & Wang et al. \\ \hline Which motion is most realistic (\%) \(\uparrow\) & **71.8** & 28.2 & **95.6** & 4.4 & **100** & 0 \\ Which motion satisfies scene constraints best (\%) \(\uparrow\) & **76.8** & 23.2 & **100** & 0 & **100** & 0 \\ \hline \end{tabular}
\end{table}
Table 1: Comparisons between our method and existing baselines using a perceptual study.
\begin{table}
\begin{tabular}{l c c c|c c} \hline & Language & Manual & _WalkNet_ & MoGlow & _TransNet_ & NeMF \\ \hline Foot Skate (cm/f) \(\downarrow\) & 0.93 & **0.92** & **0.91** & 1.88 & **1.1** & 1.54 \\ User Study (\%) \(\uparrow\) & **53.8** & 46.2 & **75.7** & 24.3 & **66.8** & 33.2 \\ \hline \end{tabular}
\end{table}
Table 2: Analysis of different components in our method. We compare our method with different baselines across three design components: using language based or manually specified keypoints, the walking motion and the transition motion.
\begin{table}
\begin{tabular}{l c c c} \hline & Ours & SAMP & GAMMA & Wang et al. \\ \hline Foot-skate \(\downarrow\) & **0.91** & 1.34 & 0.94 & 4.53 \\ \hline \end{tabular}
\end{table}
Table 3: Comparisons between our method and existing baselines using the foot-skate metric.
using a user study with 36 participants in Tab. 2, where our approach outperforms MoGlow.
**How well does language-based keypoint placement work?** In this experiment, we compare motion synthesized using manual keypoint placement with language-based keypoint placement. We synthesize 5 motion sequences using keypoints generated by these two approaches and compare the synthesized sequences using a user study with 36 participants. When used for motion synthesis, these KPs produce similar quality to manual KP placement (Tab. 2).
**How long does it take for a user to provide keypoints manually?** We develop a user interface which allows users to navigate 3D scenes and to click on locations of interaction. We instruct 7 participants on how to navigate 3D scenes with our user interface. On average, it takes 245 seconds for users to learn the interface. We then ask each user to provide 5 sets of 3 action keypoints (the location of the root and the two feet, or the location of one hand and two feet) for a total of 15 keypoints per scene in 5 different 3D scenes. On average, it takes 125 seconds to select these points per scene.
### Qualitative Results
Please watch the supp. video for a qualitative evaluation. In Figure 6, we demonstrate examples of motion generated in scenes from 4 different datasets: Replica [61], Matterport [8], HPS [22] and ScanNet [10]. Moreover, representing the motion as Action Keypoints allows for a high degree of control over, and diversity in, the generated motions. In Figure 7 we show how this representation allows us to sit or pick objects at different heights (left column), or generate actions such as grabbing with two hands or stretching.
## 5 Limitations and Conclusions
We presented the first method to synthesize continual human motion in scenes from the HPS, Matterport, ScanNet, and Replica datasets. Our core contribution is a novel method for long-range motion synthesis via iterative canonicalization and the use of keypoints, which decouples scene reasoning from motion synthesis and provides a flexible interface to synthesize motion. We demonstrated that our method works better than existing solutions that generate motion in 3D scenes.
Figure 6: Our method allows to generate motion that generalizes across different scenes. Here we show motion generation in scenes from 4 different datasets: Replica [61], Matterport [8], HPS [22] and Scannet [10].
Figure 7: The keypoint representation allows us to generate diverse and highly controllable motion. We show here examples of different grabbing, sitting and newly defined motions.
While our approach presents an important step towards long-range motion synthesis in 3D scenes, it also has limitations: It assumes a horizontal floor and thus does not support scenes with uneven floors. It also assumes valid keypoint configurations: if the keypoints provided by the user do not conform to a valid pose, the pose produced by the IK step will not look realistic, producing unnatural motion. In the future we hope to remove this limitation by reducing the number of required keypoint inputs. We hope that the proposed approach drives new research towards continual human motion in arbitrary 3D scenes.
|
2301.09932 | The photometric periods of rapidly rotating field ultra-cool dwarfs | We use 1-m class telescopes and the Transiting Exoplanet Survey Satellite
(TESS) to explore the photometric variability of all known rapidly rotating
($v\sin{i}\gtrsim30$ km\,s$^{-1}$) ultra-cool ($\geq$M7) dwarfs brighter than
$I\approx17.5$ mag. For a sample of 13 M7--L1.5 dwarfs without prior
photometric periods, we obtained $I$-band light curves with the SMARTS 1.3m and
WIYN 0.9m telescopes and detected rotation-modulated photometric variability in
three of them. Seven of our targets were also observed by TESS and six of them
show significant periodicities compatible with the estimated rotation periods
of the targets. We investigate the potential of TESS to search for
rotation-modulated photometric variability in ultra-cool dwarfs and find that
its long stare enables $<$80~h periodic variations to be retrieved with
$\leq$1\% amplitudes for ultra-cool dwarfs up to a TESS magnitude of 16.5. We
combine these results with the periods of all other known
photometrically-periodic ultra-cool dwarfs from the literature, and find that
the periods of ultra-cool dwarfs range between 1 and 24 h, although the upper
limit is likely an observational bias. We also observe that the minimum
rotation periods follow a lower envelope that runs from $\approx$2 h at
spectral type $\approx$M8 to $\approx$1 h at spectral type T. | Paulo A. Miles-Páez, Stanimir A. Metchev, Benjamin George | 2023-01-24T11:28:05Z | http://arxiv.org/abs/2301.09932v1 | # The photometric periods of rapidly rotating field ultra-cool dwarfs
###### Abstract
We use 1-m class telescopes and the Transiting Exoplanet Survey Satellite (TESS) to explore the photometric variability of all known rapidly rotating (\(v\sin i\gtrsim 30\) km s\({}^{-1}\)) ultra-cool (\(\geq\)M7) dwarfs brighter than \(I\approx 17.5\) mag. For a sample of 13 M7-L1.5 dwarfs without prior photometric periods, we obtained \(I\)-band light curves with the SMARTS 1.3m and WIYN 0.9m telescopes and detected rotation-modulated photometric variability in three of them. Seven of our targets were also observed by TESS and six of them show significant periodicities compatible with the estimated rotation periods of the targets. We investigate the potential of TESS to search for rotation-modulated photometric variability in ultra-cool dwarfs and find that its long stare enables \(<\)80 h periodic variations to be retrieved with \(\leq\)1% amplitudes for ultra-cool dwarfs up to a TESS magnitude of 16.5. We combine these results with the periods of all other known photometrically-periodic ultra-cool dwarfs from the literature, and find that the periods of ultra-cool dwarfs range between 1 and 24 h, although the upper limit is likely an observational bias. We also observe that the minimum rotation periods follow a lower envelope that runs from \(\approx\)2 h at spectral type \(\approx\)M8 to \(\approx\)1 h at spectral type T.
keywords: stars: rotation - stars: low-mass- stars: activity - surveys
## 1 Introduction
Stellar spectro-photometric variability provides valuable information on the physical processes that take place at different atmospheric heights of a star. In the case of very low-mass stars and brown dwarfs (usually referred to as ultra-cool dwarfs; spectral types later than M7), spectro-photometric variability can have multiple origins, such as magnetic processes (e.g., flares or Sun-like spots; Hooten and Hall, 1990), condensate particles that form atmospheric cloud-like structures (Tsuji et al., 1996), or even brightness changes due to an atmosphere out of chemical equilibrium (Tremblin et al., 2015).
Independently of the nature of the variability, the monitoring of these brightness changes (usually at red-optical and infrared wavelengths) allows us to derive the rotation periods of ultra-cool dwarfs. Numerous searches for photometric variability have been carried out from both the ground (e.g., Tinney and Tolley, 1999; Koen, 2005; Harding et al., 2013; Radigan et al., 2014; Miles-Paez et al., 2017) and space (e.g., Buenzli et al., 2014; Metchev et al., 2015; Cushing et al., 2016; Miles-Paez et al., 2019; Tannock et al., 2021; Miles-Paez, 2021; Vos et al., 2022), covering from late-M to Y dwarfs of different ages. Understanding the distribution of rotation periods in ultra-cool dwarfs is crucial for investigating their angular momentum evolution, which seems different from the well-known spin-down for higher-mass stars caused by angular momentum loss via magnetized stellar winds (Skumanich, 1972). For example, Tannock et al. (2021) reported the discovery of three L and T brown dwarfs with rotation periods close to 1 hour, which rather than a spin-down have likely experienced an increase in spin rate: the likely result of evolutionary contraction without disk- or wind-induced angular momentum loss. When comparing to all 78 other L, T, and Y dwarfs with measured rotation periods, Tannock et al. (2021) concluded that the 1 hour period marks an empirical lower limit to the spin period of brown dwarfs. This is a factor of two to three slower than the rotational stability limit dictated by angular momentum conservation at substellar masses. The reason for the discrepancy likely lies in stability considerations related to the high oblateness of such rapid rotators (James, 1964). Tannock et al. (2021) further invoke the unknown role of the magnetic dynamo of the metallic hydrogen interior, which may be an important contributor to the angular momentum budget at high spin rates. The situation for low-mass stars just above the hydrogen burning limit may be different still, as
thermonuclear reactions are an important contributor to the energy balance.
To augment our understanding of rapidly rotating ultra-cool dwarfs, we seek to expand their sample with objects that are already known to have high projected rotational velocities, but lack photometric period measurements. Specifically, we focus on field M/L transition dwarfs with \(v\sin i\gtrsim 30\) km s\({}^{-1}\) that are bright enough to be observed with 1 m-class telescopes at red optical wavelengths. We conducted a ground-based photometric monitoring campaign of 13 such targets with 1 m-class telescopes in 2018. The sample selection, ground-based observing campaign, and data reduction are presented in Sections 2 and 3. With the release of data from the Transiting Exoplanet Survey Satellite (TESS; Ricker et al. 2014), we complemented these with TESS light curves where available. The analysis of the ground-based and TESS observations is explained in Sections 4 and 5, respectively. The TESS data refined periods for three of our ground-based targets, and revealed periods for three more. We further combine the results from our variability survey with published photometric periods of 15 additional ultra-cool dwarfs, which we also refine with data from TESS, and discuss the ensemble findings in Section 6. Overall, the TESS data yield significantly more precise periods, and hence the Sections of greatest interest are 5 and 6. Nevertheless, the ground-based observations do allow us to lift the ambiguity on the component periods of one triple system that is unresolved in TESS.
## 2 Target Sample
We aimed to monitor all \(\geq\)M7 ultra-cool dwarfs with measured projected rotation velocities of \(v\sin i>30\) km s\({}^{-1}\) and without reported rotation periods. We limited our sample to dwarfs brighter than 17.5 mag in the \(I\) band, which generally allowed us to attain \(<\)1.5% precision in a few minutes on 1 m-class telescopes by means of differential photometry. Thus, our survey sample comprised 13 ultra-cool dwarfs: nine M7-M9.5 dwarfs and four L0.5-L1.5 dwarfs, which are listed in Table 1.
Ultra-cool dwarfs with ages greater than 500 Myr are predicted to have radii of about 1 \(R_{\rm Jup}\)(Chabrier & Baraffe, 2000), which combined with \(v\sin i\geq 30\) km s\({}^{-1}\) constrain the rotational periods to \(\lesssim 4\) hr. Thus, our selection allows us to detect rotation-modulated variability within a single night of observations.
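This limit follows directly from the measured projected velocity: for a rigid rotator the equatorial velocity \(v_{\rm eq}\) is at least \(v\sin i\), so with \(R\simeq 1\,R_{\rm Jup}=71\,492\) km,

\[P=\frac{2\pi R}{v_{\rm eq}}\leq\frac{2\pi R}{v\sin i}\approx\frac{2\pi\times 71\,492\ {\rm km}}{30\ {\rm km\,s^{-1}}}\approx 1.5\times 10^{4}\ {\rm s}\approx 4.2\ {\rm h},\]

and faster measured rotation or a smaller radius only tightens the constraint.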
_Ages._ We checked whether any of our targets are known or suspected to be young based on probable membership in nearby young stellar moving groups, or on signatures of low surface gravity. Using the BANYAN \(\Sigma\) tool (Gagne et al., 2018), we found that most of our targets are not members of young associations, with the exception of 2MASS J16192988-2440469, which is an Upper Sco member (11 \(\pm\) 2 Myr; Martin et al. 2017). In addition, 2MASS J14112131-2119503 was reported as an intermediate-gravity dwarf by Liu et al. (2016) and has a radius estimate of 1.98 \(\pm\) 0.44 \(R_{\rm Jup}\)(Filippazzo et al., 2015): larger than the \(\approx\) 1 \(R_{\rm Jup}\) radius of \(>\)500 Myr-old higher-gravity ultra-cool dwarfs. We assume that the remaining 11 rapidly rotating targets in our sample are \(>\)500 Myr-old field ultra-cool dwarfs.
_Binarity._ Our sample contains several binary or multiple systems. LP213-67 and LP213-68 were identified by Gizis et al. (2000) as a common proper motion pair separated by about 14.4''. Using adaptive optics observations, Close et al. (2003) further resolved LP213-68 into an M8+L0 binary with the components separated by 0.12''. Dupuy & Liu (2017) found that the components of LP213-68AB have an orbital period of nearly 6.6 years and masses of 97\({}^{+6}_{-7}M_{\rm Jup}\) and 80 \(\pm\) 6 \(M_{\rm Jup}\). We estimate a magnitude difference of 1.2 mag in the \(I\)-band for the components of LP213-68AB by means of theoretical spectra. Additionally, Bartlett et al. (2017) report strong perturbations in their 8-year astrometric data set for 2MASS J23515044\(-\)2537367, which suggests the presence of a stellar companion with an orbital period longer than a decade. Reid et al. (2006) observed this object using _HST/NICMOS_, but did not resolve any
Figure 1: Photometric light curve of 2MASS J14112131-2119503 from the WIYN 0.9 m telescope (top panels). The differential photometry in the left panels exhibits strong linear correlations between the normalized flux and the x and y pixel positions of the target on the detector (left, second and third panels from top). We de-trend the normalized flux for any target that shows an \(|r|\geq 0.4\) correlation with any of x or y pixel position, airmass, or PSF FWHM (Section 4.1). The resulting corrected data (panels on the right) do not show any variability once these systematic effects have been removed.
\begin{table}
\begin{tabular}{l c c c c} \hline Target Name & SpT & TESS & \(v\sin i\) & Ref. \\ & & (mag) & (km s\({}^{-1}\)) & \\ \hline
2MASS J00242463-0158201 & M9.5 & 15.053 & 33\(\pm\)3 & 1, 2 \\ LP213-67 & M7 & 13.549 & 35\(\pm\)5 & 2 \\ LP213-68AB & M8+L0 & 15.069 & 35\(\pm\)5 & 2 \\
2MASS J11593850+0057268 & L0 & 17.491 & 74.5\({}^{+9}_{-5.4}\) & 3 \\
2MASS J13365044+4751321 & M7 & 14.987 & 30\(\pm\)5 & 2 \\
2MASS J14112131-2119503 & M7 & 14.905 & 44.0\(\pm\)4.0 & 1 \\
2MASS J14310126-1953489 & M9 & 18.024 & 47.4\({}^{+5.2}_{-3.3}\) & 4 \\ LP859-1 & M7 & 14.540 & 30\(\pm\)5 & 2 \\
2MASS J16192988-2440469 & M8 & 17.044 & 47.2 & 5 \\
2MASS J17054834-0516462 & L1 & 16.688 & 26\(\pm\)2.6 & 6 \\
2MASS J20575409-0252302 & L1.5 & 16.606 & 62\(\pm\)6.2 & 3 \\
2MASS J23515044-2537367 & M9 & 15.608 & 36\(\pm\)4 & 1, 3 \\ \hline \end{tabular} References: 1 – Reiners et al. (2010) ; 2 – Mohanty & Basri (2003) ; 3 – Reiners & Basri (2008) ; 4 – Jenkins et al. (2009) ; 5 – Rice et al. (2010) ; 6 – Blake et al. (2010).
\end{table}
Table 1: Target sample characteristics.
companion. At the time of this work, the physical properties of the unresolved companion to 2MASS J23515044-2537367 are unknown. Finally, two of our targets--2MASS J13365044+4751321 and 2MASS J17054834-0516462 (hereafter J13365044+4751321 and J17054834-0516462; in general we omit the 2MASS prefix)--have been observed at high angular resolution, but have not shown any companions (Siegler et al., 2005; Reid et al., 2006). We are not aware of any surveys that have found companions to any of our other targets.
## 3 Observations and Data Reduction
### Ground-based observations
Five of our targets were monitored with the Half Degree Imager (HDI) mounted on the WIYN 0.9 m telescope at Kitt Peak observatory (USA), and the other eight targets with the ANDICAM optical imager on the SMARTS 1.3 m telescope at Cerro Tololo Inter-American observatory (Chile). Observations at Kitt Peak were carried out in visitor mode from February 22\({}^{\rm nd}\)-25\({}^{\rm th}\) 2018, while observations at Cerro Tololo consisted of 65 hours in service mode during semesters 2018A and 2018B. The observing log of our campaigns is shown in Table 2.
HDI has a plate scale of 0.419'' pixel\({}^{-1}\) and a field size of \(29\arcmin\times 29\arcmin\). Observations were carried out using a Harris-\(i\) filter, which has an effective wavelength of 7999 Å with a pass band of 1368 Å. Typical observations of a target covered 4-6 hours of continuous imaging. We used individual exposure times of 60-180 seconds, depending on the target's brightness. Typical seeing conditions were within 1.5''-2.5''. The sky conditions were mostly clear with very light cirrus cloud cover.
We used the Astropy package Photutils (Bradley et al., 2019) to perform circular aperture photometry on all our targets and sources in their fields. The aperture radius was chosen to maximize the SNR of the main target, and was then used for all other field stars. Typical radius values ranged between 3 and 6 pixels depending on target brightness, roughly corresponding to 1-1.5 times the typical full-width-at-half-maximum (FWHM) of the images. The photometry was sky-subtracted by computing the median value of the sky in an annulus with an inner radius of typically 12-20 pixels and a width of 8 pixels, depending on field density. Finally, the differential light curve of each target was constructed by dividing their fluxes by the sum of those of a set of comparison stars. These comparison stars were chosen among those stars slightly brighter than our targets, but far from the non-linear regime of the detector, that showed the smallest standard deviation in their fluxes. Comparison star light curves were visually inspected for variability to ensure that any target variability detections are not spuriously created. We normalized the differential photometry of each target by dividing the fluxes by their average value.
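As an illustration, the per-frame photometry described above can be sketched with Photutils; the aperture and annulus radii below are placeholders within the quoted ranges, not the exact per-target values:

```python
import numpy as np
from photutils.aperture import (ApertureStats, CircularAnnulus,
                                CircularAperture, aperture_photometry)

def differential_light_curve(images, positions_per_frame,
                             r=4.5, r_in=16.0, r_out=24.0):
    """Sky-subtracted differential photometry for a stack of frames.

    positions_per_frame: one list of (x, y) per image, with the target
    first and the comparison stars after it. Radii are in pixels.
    """
    rel_flux = []
    for img, positions in zip(images, positions_per_frame):
        aper = CircularAperture(positions, r=r)
        annu = CircularAnnulus(positions, r_in=r_in, r_out=r_out)
        # Median sky per pixel in each annulus, scaled to the aperture area
        sky = ApertureStats(img, annu).median * aper.area
        net = aperture_photometry(img, aper)["aperture_sum"] - sky
        # Target flux divided by the summed flux of the comparison ensemble
        rel_flux.append(net[0] / net[1:].sum())
    rel_flux = np.asarray(rel_flux)
    return rel_flux / rel_flux.mean()   # normalize to unit mean
```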
An example of these light curves is shown in the top panels of Figure 1, in which we also plot the differential photometry as a function of the pixel position in both the x- and y-axes, the airmass, and the FWHM to search for the presence of residual systematics that can mimic photometric variability. We used Pearson's r correlation coefficient to test for linear correlations attributable to any residual systematic (Taylor, 1990). We found that only \(|r|\) values \(\geq\) 0.4 can produce non-negligible variability in our data. For those cases, we determined the best fit straight line between the differential flux and the parameter involved in the correlation. Then, we detrended our data by dividing the differential photometry by the best fit. Figure 1 shows an example of spurious photometric variability in one of our targets (panels on the left) and the detrending process described here (panels on the right). In total, the photometric data for 10 out of 23 campaigns were detrended. These are indicated in Table 2, and their corresponding light curves are shown in Figure A1 of Appendix A. Most of the targets with detrended light curves did not show real photometric variability, with the exception of J13365044+4751321, for which
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline Object & SpT & Observing & Filter\({}^{\star}\) & Observation & N\({}^{\flat}\) & \(\sigma_{\rm target}\)\({}^{\circ}\) & \(\sigma_{\rm err}\)\({}^{\circ}\) & Airmass \\ & & Date & & Time (h) & & (mmag) & (mmag) & \\ \hline
2MASS J00242463-0158201 & M9.5 & 10/08/18\({}^{a}\) & KPNO I & 3.66 & 3 & 12 & 11\(\pm\)4 & 1.14-1.68 \\ & & 19/08/18\({}^{a}\) & KPNO I & 3.66 & 3 & 12 & 13\(\pm\)4 & 1.17-1.48 \\ & & Sector 42 & TESS\({}^{e}\) & 551 & – & – & – & – \\ & & Sector 43 & TESS\({}^{e}\) & 581 & – & – & – & – \\ LP213-67 & M7 & 22/02/18 & Harris i & 5.05 & 6 & 7 & 4\(\pm\)1 & 1.02-1.28 \\ & & 24/02/18 & Harris i & 5.09 & 6 & 6 & 5\(\pm\)1 & 1.01-1.47 \\ & & Sector 21 & TESS\({}^{e}\) & 648 & – & – & – & – \\ LP213-68AB & M8+L0 & 22/02/18 & Harris i & 5.05 & 5 & 8 & 7\(\pm\)1 & 1.05-1.42 \\ & & 24/02/18 & Harris i & 5.09 & 5 & 7 & 7\(\pm\)1 & 1.02-1.47 \\ & & Sector 21 & TESS\({}^{e}\) & 648 & – & – & – & – \\
2MASS J11593850+0057268 & L0 & 23/02/18 & Harris i & 4.98 & 7 & 18 & 17\(\pm\)2 & 1.19-2.00 \\
2MASS J13365044+4751321 & M7 & 25/02/18\({}^{a}\) & Harris i & 6.22 & 7 & 6 & 5\(\pm\)1 & 1.10-1.49 \\ & & Sector 16 & TESS\({}^{e}\) & 530 & – & – & – & – \\ & & Sector 22 & TESS\({}^{e}\) & 631 & – & – & – & – \\ & & Sector 23 & TESS\({}^{e}\) & 577 & – & – & – & – \\
2MASS J14112131-2119503 & M7.5 & 24/02/18\({}^{a}\) & Harris i & 3.87 & 5 & 11 & 7\(\pm\)1 & 2.01-2.14 \\ & & 10/05/18\({}^{a}\) & KPNO I & 3.25 & 5 & 7 & 8\(\pm\)3 & 1.04-1.18 \\ & & 15/05/18\({}^{a}\) & KPNO I & 2.73 & 5 & 9 & 9\(\pm\)3 & 1.03-1.09 \\ & & Sector 11 & TESS\({}^{e}\) & 568 & – & – & – & – \\
2MASS J14310126-1953489 & M9 & 12/05/18 & KPNO I & 2.73 & 4 & 26 & 28\(\pm\)2 & 1.02-1.10 \\ & & 16/05/18\({}^{a}\) & KPNO I & 2.68 & 4 & 22 & 21\(\pm\)1 & 1.01-1.10 \\ & & 23/05/18 & KPNO I & 2.91 & 4 & 7 & 7\(\pm\)3 & 1.01-1.13 \\ & & Sector 11 & TESS\({}^{e}\) & – & – & – & – & – \\
2MASS J16192988-2440469 & M8 & 17/05/18 & KPNO I & 2.73 & 4 & 86 & 84\(\pm\)3 & 1.01-1.31 \\ & & 14/06/18 & KPNO I & 2.73 & 4 & 46 & 47\(\pm\)7 & 1.00-1.08 \\
2MASS J17054834-0516462 & L1 & 24/05/18 & KPNO I & 2.73 & 4 & 19 & 15\(\pm\)2 & 1.10-1.41 \\ & & 26/05/18\({}^{a}\) & KPNO I & 2.73 & 4 & 52 & 40\(\pm\)6 & 1.10-1.50 \\
2MASS J20575409-0252302 & L1.5 & 20/06/18\({}^{a}\) & KPNO I & 2.73 & 4 & 29 & 25\(\pm\)1 & 1.13-1.67 \\ & & 22/06/18\({}^{a}\) & KPNO I & 2.73 & 4 & 15 & 15\(\pm\)1 & 1.13-1.36 \\
2MASS J23515044-2537367 & M9 & 25/06/18\({}^{a}\) & KPNO I & 2.73 & 4 & 22 & 16\(\pm\)2 & 1.03-1.46 \\ & & 24/07/18\({}^{a}\) & KPNO I & 2.73 & 4 & 12 & 13\(\pm\)2 & 1.00-1.15 \\ & & Sector 2 & TESS\({}^{e}\) & 658 & – & – & – & – \\ & & Sector 29 & TESS\({}^{e}\) & 572 & – & – & – & – \\ \hline \end{tabular}
Notes: \({}^{\star}\)The Harris filters are associated with the Kitt Peak WIYN 0.9 m telescope, while the KPNO I filters are associated with the SMARTS 1.3 m telescope. \({}^{\flat}\)N is the number of comparison stars used per field for the differential photometry described in Section 4. \({}^{\circ}\)\(\sigma_{\rm target}\) indicates the scatter of the target light curve, while \(\sigma_{\rm err}\) is the mean photometric uncertainty of the individual data points. \({}^{a}\)Detrended light curve (Section 4.1). \({}^{e}\)For TESS, the observation-time column gives the total time monitored in each sector, in hours.
\end{table}
Table 2: Observing log of the ground-based and TESS observations.
we corrected a linear trend with FWHM. We discuss this object further in Sections 4.2 and 5.2.
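This detrending step can be expressed compactly as follows; the 0.4 threshold is the one adopted above, while the regressor names are hypothetical placeholders:

```python
import numpy as np
from scipy.stats import pearsonr

def detrend(flux, regressors, threshold=0.4):
    """Divide out a linear trend against any regressor whose Pearson
    correlation with the flux satisfies |r| >= threshold.

    regressors: dict of arrays, e.g. {"x_pix": x, "y_pix": y,
                "airmass": z, "fwhm": s} (hypothetical names).
    """
    for values in regressors.values():
        r, _ = pearsonr(values, flux)
        if abs(r) >= threshold:
            # Best-fit straight line of flux vs. the correlated parameter
            slope, intercept = np.polyfit(values, flux, 1)
            flux = flux / (slope * values + intercept)
    return flux / np.mean(flux)
```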
### Search for photometric variability
We search for potential periodicities in our data by applying a Lomb-Scargle (LS) periodogram (Lomb, 1976; Scargle, 1982) to all light curves shown in Figure A1. For each LS periodogram we sample \(10^{4}\) frequencies corresponding to periods between 0.5-8 hours, and also compute the window function (e.g., VanderPlas, 2018) and a 0.1% False-Alarm-Probability (FAP), obtained from \(10^{4}\) randomizations of the data (e.g., Miles-Paez et al., 2017).
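This search can be sketched with Astropy's LombScargle; the period grid, number of randomizations, and FAP level below are the values quoted above (the randomization loop is the expensive step):

```python
import numpy as np
from astropy.timeseries import LombScargle

def ls_search(time_h, flux, period_range=(0.5, 8.0), n_periods=10_000,
              n_rand=10_000, fap=0.001):
    """LS periodogram over a grid of periods plus a randomization-based
    false-alarm threshold. Times are in hours."""
    freq = 1.0 / np.linspace(*period_range, n_periods)   # cycles per hour
    power = LombScargle(time_h, flux).power(freq)

    # FAP level: distribution of the highest peak among shuffled light curves
    rng = np.random.default_rng(0)
    peaks = [LombScargle(time_h, rng.permutation(flux)).power(freq).max()
             for _ in range(n_rand)]    # reduce n_rand for a quick look
    threshold = np.quantile(peaks, 1.0 - fap)

    best_period = 1.0 / freq[np.argmax(power)]
    return best_period, power, threshold
```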
Three out of our 13 targets (LP 213-67, LP 213-68AB, and J13365044+4751321) showed significant peaks in the 2-3 h range, in agreement with expectations from their \(v\sin i\) and radii. We fit a sine function (i.e., \(y(t)=A\sin\left(2\pi t/P+\phi\right)+\mu\)) to each light curve, and estimate variability properties by means of a \(10^{6}\)-step Markov Chain Monte Carlo (MCMC) process. We used flat priors for the amplitude (\(A\)), rotation period (\(P\)), phase (\(\phi\)), and mean (\(\mu\)), and allowed them to vary over the ranges: 0-1%, 0.5-40 h, 0-2\(\pi\), and 0.9-1.1, respectively. All MCMC fits for the variable targets and the corresponding rotation periods each night are shown in Figure 3; the best-fit rotation periods are also listed in Table 3. We find amplitudes of variability in the range 0.4%-0.8%, and rotation periods of \(2.7\pm 0.3\) h, \(2.3\pm 0.3\) h, and \(2.6\pm 0.2\) h for LP 213-67, LP 213-68AB, and J13365044+4751321, respectively. For LP 213-67 and LP 213-68AB we used a single sine function to model the variability seen in both nights. While LP 213-68AB does exhibit the same variability amplitude on both nights (separated by \(\sim\)20 rotation cycles), LP 213-67 shows some flattening in the observed variability from one epoch to the next, which suggests the need for multi-epoch monitoring of these objects.
The remaining targets of our ground-based observations (J00242463-0158201, J11593850+0057268, J14112131-2119503, J14310126-1953489, LP859-1, J16192988-2440469, J17054834-0516462, J20575409-0252302, and J23515044-2537367) do not show any photometric variability larger than that seen in other field stars of similar brightness. This might be due to a lack of photospheric heterogeneities, or, if present, these may be too small to produce measurable variability in our data sets.
## 5 TESS data analysis
The \(\sim\)27 days of continuous monitoring provided by TESS for each of its sectors has no parallel from the ground. For a typical rotation period of 4 h, as expected for our rapidly rotating targets, TESS would observe 162 continuous rotation cycles in a single sector, allowing us to search for periodicities in the data with amplitudes of variability much smaller than those measurable from the ground (e.g., Miles-Paez, 2021). At the time of this work, 2-minute-cadence TESS data were available for J00242463-0158201, LP213-67 and LP213-68AB (spatially unresolved triple), J13365044+4751321, J23515044-2537367, and LP859-1 (see Table 2). In our analysis we use the Pre-search Data Conditioning Simple Aperture Photometry (PDCSAP) light curves extracted via the TESS pipeline. We also found 30-minute-cadence observations for J14112131-2119503 in the full-frame images (FFIs) delivered by TESS, and extracted the light curve using eleanor (Feinstein et al., 2019) in a similar way as done by the TESS pipeline. We removed any data points that the pipeline had flagged as having lower quality. This was the case for less than 1% of the data for each target. Panels on the left of Figures 4 and 5 show the extracted light curves for J00242463-0158201, J13365044+4751321, J14112131-2119503, J23515044-2537367, and LP859-1. In the case of LP213-67 and LP213-68AB, their small (14.4'') separation places the combined light of the triple system in the same \(21\arcsec\times 21\arcsec\) pixel of TESS. We show the compound light curve of LP213-67 and LP213-68AB in the top panel of Figure 7, and discuss this system in more detail in Section 5.2.2.
### Approximate light curve analysis with detrending and an LS periodogram
We first conduct an approximate analysis of the light curves to test for periodicities in the TESS data. Targets that do display such periodicities are then analyzed with a more robust Gaussian processes-based approach (Section 5.2).
The TESS light curves shown in Figures 4 (and 5, left) and 7 (top) exhibit some temporal structure on time scales of \(>\)1 day, which are likely associated with spacecraft momentum dumps, changes in the camera temperature, or Moon-Earth scattered light that is not fully removed by the pipeline (Vanderspek, 2019). We do a first-order correction of these trends by removing from the data a 24-h median filter, which is also shown in the figures (green lines). Then, we search for periodicity in the data by computing the LS periodogram of the raw and detrended data. These periodograms
Figure 3: Best fit sinusoid variations overlaid onto the calibrated data for LP213-68AB (top), LP213-67 (middle), and J13365044+4751321 (bottom). The best-fit rotation period for each target is indicated in the panels.
are plotted in the central panels of Figures 4 (and 5) and 7 together with their associated window functions and 0.1% FAP, computed as in Section 4.2.
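In code, this first-order correction amounts to dividing each light curve by a running 24-h median; a minimal sketch assuming near-uniform 2-min sampling:

```python
from scipy.ndimage import median_filter

def remove_24h_trend(flux, cadence_min=2.0):
    """Divide out a 24-hour running median (first-order systematics fix)."""
    window = int(24 * 60 / cadence_min)              # samples per 24 h
    trend = median_filter(flux, size=window, mode="nearest")
    return flux / trend, trend
```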
We find significant periodicities for J13365044+4751321, J14112131-2119503, LP859-1, and J23515044-2537367 (central panels of Fig. 4 and 5), and LP213-67 and LP213-68AB (central panel of Fig. 7), but no significant periodicity in any of the sectors of J00242463-0158201. In the case of J13365044+4751321, LP213-67, and LP213-68AB the periodicities fall in the same range as the periods measured in our ground-based data, while for J14112131-2119503, LP859-1, and J23515044-2537367 the detected periodicities were not seen in our campaign from the ground. In all cases the significant periods fall within the range of rotation periods expected for our targets.
The first-order correction with a median filter is useful to quickly identify potential significant periodicity in the data by using an LS periodogram, especially in the case of J14112131-2119503, for which only the detrended data show a significant period in the LS periodogram. However, our choice of the 24-h median filter width is somewhat arbitrary. What is more, the filter could remove some real periodicity. Thus, in Section 5.2 we perform a more sophisticated modelling of the variability and the noise in the data by using Gaussian process regression, as described in Littlefair et al. (2017) and as done for TESS data and ultra-cool dwarfs in Miles-Paez (2021).
### Robust light curve modelling using Gaussian process regression
#### 5.2.1 Single targets
Gaussian processes (GP) are a class of Bayesian non-parametric models that allow simultaneous capture of the expected behaviour of data in a time series and their associated noise due to systematics (Roberts et al., 2012; Ivezic et al., 2014). The part related to the general trend of the data is usually called the _mean_ function, while the part related to the noise is usually referred to as the _kernel_ function
Figure 4: _Left_: TESS data for two of the 13 targets in our ground-based sample. Each sector observation is shown in a different row, so some targets are shown on more than one row. Most of the light curves have a 2-minute cadence, except for J1411-2119 (Fig. 5), which is at a 30-minute cadence. A 24-hour-long median filter is shown in green, which we use to trace potential residual systematics uncorrected by the pipeline. _Middle_: LS periodogram for the data before (red) and after (blue) removing the 1-day-long median filter. The window function and a 0.1% FAP are also shown in cyan and green, respectively. _Right_: Phase-folded light curve using the rotation period derived with the Gaussian process approach in Section 5.2.1. The best fit sine curve is shown in red. A 300-point bin has been applied to the phase-folded data. For targets observed in multiple sectors, the light curves have been phase-folded using the same initial reference time.
Figure 5: Same as Figure 4.
Figure 6: Field of view of the system LP 213-67 and LP 213-68AB as seen from TESS (left; dashed green square) and from the ground (right; WIYN/HDI on 22nd February 2018). Both images are centered on the position of LP 213-67 and their fields of view have the same size (\(3.9\arcmin\times 3.9\arcmin\)).
and deals with the covariance of the data. The parameters associated with the kernel function are called hyperparameters. Following Miles-Paez (2021), we account for red noise in the TESS data, i.e., the low-frequency trends removed by the median filter in Figures 4, 5 (left) and 7 (top), by using a Matern-3/2 kernel (Rasmussen and Williams, 2006) for each component (\(k(t_{i},t_{j})\)) of the covariance matrix of the data:
\[k\left(t_{i},t_{j}\right)=a^{2}\left(1+\sqrt{\frac{3r^{2}}{\tau^{2}}}\right)\exp\left(-\sqrt{\frac{3r^{2}}{\tau^{2}}}\right), \tag{1}\]
where \(r^{2}=\left(t_{i}-t_{j}\right)^{2}\), \(t_{i}\) is the time for data point \(i\), and \(a\) and \(\tau\) are the typical amplitude and time scale of the red noise. We also add a white noise component (\(\sigma\)) that only contributes to the diagonal elements of the covariance matrix:
\[K_{ij}=\sigma^{2}\delta_{ij}+k\left(t_{i},t_{j}\right) \tag{2}\]
Equation 2 thus defines the covariance matrix of our kernel function. For the mean function of our model we adopt a sum of two functions. The first one is a time-dependent sine function, as is appropriate for most optical rotation-modulated light curves of late-M and early-L dwarfs (e.g., see Figure 3 above or analyses in Martin and Zapatero-Osorio, 1997; Harding et al., 2013; Miles-Paez et al., 2017). The second one is a time-invariant \(\mu\) parameter that describes a flat light curve. The mean function \(f(t)\) is thus:
\[f(t)=\mu+A\sin\left[2\pi\left(\frac{t}{P}\right)+\phi\right] \tag{3}\]
We used the celerite package (Foreman-Mackey et al., 2017) to compute the GP (assuming flat and periodic light curves), and emcee (Goodman and Weare, 2010; Foreman-Mackey et al., 2013) to run the MCMC process that fits our models to the data of each sector for J13365044+4751321 (3 sectors), J14112131-2119503, J23515044-2537367 (2 sectors), and LP859-1 (2 sectors). We analyze the combined light curve of LP213-67 and LP213-68AB separately in Section 5.2.2. For the MCMC we used 32 walkers with 500 iterations for the burn-in stage and 5000 iterations for the full process. We used log-uniform priors with values between 0.01 and 10 times the standard deviation of the light curve for the hyper-parameter \(a\), 2 minutes and 27 days for \(\tau\), and a broad range of (10\({}^{-20}\),10) for \(\sigma\), while for the parameters in eq. 3 we adopted the following ranges: 0.9-1.1 (\(\mu\)), 10\({}^{-5}\)-10\({}^{-1}\) (\(A\)), 0-2\(\pi\) (\(\phi\)), and 0.5-40 h for \(P\) (as usually seen in the field for other ultra-cool dwarfs, e.g., Tannock et al., 2021).
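As an illustration, the noise model and sampler can be assembled with celerite and emcee roughly as follows. This is a minimal sketch with our own variable names: the hyper-parameter priors are omitted and the amplitude prior is simplified to a bounded-flat range, so it is not a drop-in reproduction of the fit described above.

```python
import celerite
import emcee
import numpy as np
from celerite import terms

def make_log_prob(t, y):
    """Log-posterior: sinusoid mean (eq. 3) plus Matern-3/2 GP noise (eqs. 1-2)."""
    def log_prob(p):
        log_a, log_tau, log_white, mu, A, phi, P = p
        if not (0.9 < mu < 1.1 and 1e-5 < A < 1e-1
                and 0.0 < phi < 2 * np.pi and 0.5 < P < 40.0):
            return -np.inf                      # flat priors (abbreviated)
        gp = celerite.GP(terms.Matern32Term(log_sigma=log_a, log_rho=log_tau))
        gp.compute(t, yerr=np.exp(log_white))   # white noise on the diagonal
        mean = mu + A * np.sin(2 * np.pi * t / P + phi)
        return gp.log_likelihood(y - mean)
    return log_prob

def fit_light_curve(t, y, p0_center, nwalkers=32, n_burn=500, n_steps=5000):
    """32 walkers, 500 burn-in + 5000 iterations, as in the text."""
    ndim = len(p0_center)
    p0 = np.asarray(p0_center) + 1e-4 * np.random.randn(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, make_log_prob(t, y))
    state = sampler.run_mcmc(p0, n_burn)
    sampler.reset()
    sampler.run_mcmc(state, n_steps)
    return sampler
```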
We investigated whether a flat (Model 1) or periodic (Model 2) light curve is more favoured by the data in each sector by evaluating the Bayesian Information Criterion (BIC, Schwarz, 1978) for each model. In general, \(\Delta\mathrm{BIC}=\mathrm{BIC}_{\mathrm{Model\,1}}-\mathrm{BIC}_{\mathrm{Model\,2}}<2\) indicates no significant preference of the data for either model, \(2<\Delta\mathrm{BIC}<6\) suggests a preference for Model 2, \(6<\Delta\mathrm{BIC}<10\) points to further increasing support for Model 2, and \(\Delta\mathrm{BIC}>10\) indicates that Model 2 is strongly favoured by the data.
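For reference, the BIC of each model is computed from its maximum log-likelihood \(\hat{L}\), its number of free parameters \(k\), and the number of data points \(n\) as \(\mathrm{BIC}=k\ln n-2\ln\hat{L}\); a minimal helper:

```python
import numpy as np

def bic(max_log_likelihood, n_params, n_points):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L_max); lower is better."""
    return n_params * np.log(n_points) - 2.0 * max_log_likelihood

# Model 1: GP noise + flat mean; Model 2: GP noise + sinusoid mean.
# delta_bic = bic(lnL1, k1, n) - bic(lnL2, k2, n); delta_bic > 10 strongly
# favours the periodic model, following the thresholds quoted above.
```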
The MCMC processes quickly converged to a solution for J13365044+4751321 (3 sectors), J14112131-2119503, J23515044-2537367 (2 sectors), and LP859-1 (2 sectors), and their \(\Delta\)BIC strongly favours the periodic models. The results from the GP fitting and MCMC parameter estimation processes for all unresolved TESS targets are included in Table 3. We find rotation periods very similar to those shown in the periodograms of Figures 4 and 5 for all of these targets. Moreover, the periods retrieved for targets with data in multiple sectors are identical down to \(\sim 10^{-3}\) h, which supports the evidence for real periodicities in the targets. Once we determined the best-fit noise and variability models, we subtracted from our data the best-fit noise model, and phase-folded the corrected data on the best-fit rotation period (for objects with several sectors, we used the weighted average to compute the final rotation period). We show these light curves in the right-hand panels of Figures 4 and 5.
The data shown in Figure 4 (right) for the three sectors of J13365044+4751321 were phase-folded using the beginning of sector 16 as the reference time "zero", which allows us to investigate the stability of the photometric variability over multiple months. The photometric variability over the consecutive sectors 22 and 23 stays in phase, which indicates that its origin is likely stable on time scales of more than a month (a few hundred rotation cycles). On the other hand, observations for sector 16 started \(\sim\)162 days earlier than those for sector 22, and the phase-folded data for J13365044+4751321 in sector 16 show a phase offset of \(0.28\pm 0.04\) compared to the phase-folded data in sectors 22 and 23. At first glance, this offset might suggest that the light curve is not stable on time scales longer than six months. However, between the beginning of observations in sectors 16 and 22 there are nearly 1600 rotation cycles. When combined with the 0.0003 h uncertainty on the period, we obtain an uncertainty on the phase-folding of the data: 1600\(\times\)0.0003 = 0.48 h, or 0.20 in phase. This is comparable to the observed phase offset in the light curves of J13365044+4751321 between sectors 16 and
Figure 7: _Top:_ TESS light curve for the combined light of the triple system LP213-67 and LP213-68AB. Vertical lines denote individual uncertainties, and the green line is a 1-day-long median filter. _Middle:_ LS periodogram for the TESS light curve shown in the top panel (red) and after detrending with a median filter (blue). The window function (cyan) and associated 0.1% FAP (green) are also shown. _Bottom:_ LS periodograms for each of the four quarters of the data, zoomed in on the significant peaks at \(\sim\)2.5 h and \(\sim\)3.1 h present in all quarters.
22-23. Hence, it is likely that this light curve is stable on time scales of years, similarly to other fast rotators, such as the M8.5 dwarf TVLM 513-46 (Wolszczan & Route, 2014) or the L1 dwarf WISEP J190648.47+401106.8 (Gizis et al., 2013).
We can repeat the previous analysis for LP859-1 and J23515044-2537367, which have data in two sectors separated by about two years. However, despite the fact that their period uncertainties are only a few seconds, the propagated phase uncertainties over nearly two years are too large to assess whether the light curves remain in phase or not. It is still remarkable that the retrieved rotation periods for LP859-1 and J23515044-2537367 are very similar at each of the respective observing epochs, and also in good agreement with the expected rotation periods from their observed \(v\sin i\) and radii. Nonetheless, the variability amplitude of LP859-1 is significantly different between the two epochs two years apart, which could be evidence for time-dependence of the spot pattern on the stellar surface.
#### 5.2.2 LP 213-67 and LP 213-68AB: spatially unresolved ultra-cool dwarfs produce a beat pattern in the TESS light curve
LP 213-67 (M7) and LP 213-68AB (M8+L0) form a triple system of ultra-cool dwarfs. The M7 component and the (unresolved) M8+L0 pair are separated by only 14.4'', meaning that light from the entire system is contained in a single pixel of TESS as seen in Figure 6. Because of this peculiarity we discuss this system separately.
The 2-min TESS light curve of the combined fluxes of this system is shown in the top panel of Figure 7. The light curve also exhibits some trends on time scales of a day (green line), similarly to the TESS light curves of the other single targets (Fig. 4, 5). The middle panel of Figure 7 shows the LS periodogram for the raw (red) and detrended (see Section 5.1; blue) light curve as well as the associated window function (cyan) and 0.1% FAP level (green dashed line). This figure shows two significant peaks at \(\approx\)2.5 h and \(\approx\)3.1 h, which are close to the rotation periods detected in our ground-based data for LP 213-68AB (2.3 h) and LP 213-67 (2.7 h). We check that these periodicities are present in all the TESS data by splitting it into four quarters and computing their associated LS periodograms, which we show in the bottom panel of Figure 7.
A closer look into this data set (Figure 8) reveals that some epochs show a clear quasi-sinusoidal modulation, while at other epochs the variability resembles the beat pattern expected from two closely spaced periods. We therefore model the combined light curve with the same Matern-3/2 GP noise model as in Section 5.2.1, but with a mean function given by the sum of two sine waves:

\[f(t)=\mu+A_{1}\sin\left[2\pi\left(\frac{t}{P_{1}}\right)+\phi_{1}\right]+A_{2}\sin\left[2\pi\left(\frac{t}{P_{2}}\right)+\phi_{2}\right] \tag{4}\]

and fit it to the data with an
MCMC. Unsurprisingly the best solution contains the two rotation periods identified in the LS periodogram of Fig. 7: 2.5267 \(\pm\) 0.0002 h and 3.0889\(\pm\) 0.0002 h. Interestingly, both amplitudes of variability converge to the same value: 0.44\(\pm\)0.01%. There is no astrophysical reason to assume that both targets will have the same amplitude of variability. Thus, this result likely reflects the fact that TESS provides enough sensitivity to discern periodicities, but the photometry is not precise enough to distinguish different amplitudes of variability in our (faint) targets. The TESS light curve does not allow us to assign which of the periods corresponds to each component of the triple system. However, we use the results from our ground-based photometric analysis (Fig. 3) to assign 2.5267 \(\pm\) 0.0002 h to LP 213-68A and 3.0889 \(\pm\) 0.0002 h to LP 213-67. In Figure 8 we also plot the best fit for the two-period model (eq. 4).
## 6 Discussion
Sections 5.2.1 and 5.2.2 show the synergy between TESS, which is able to unveil small periodicities that would be missed from the ground (e.g., J14112131-2119503, LP859-1, or J23515044-2537367), and ground-based monitoring to safely discern the source of variability when several objects fall inside the same pixel of TESS (e.g., LP 213-67 and LP 213-68AB). Given that TESS, with its throughput peaking near 900 nm, is optimized to observe red stars, it allows us to explore the distribution of rotation periods of the brightest ultra-cool dwarfs.
### Sensitivity of TESS to ultra-cool dwarf photometric periodicities and amplitudes
To explore the range of amplitudes and rotation periods that can be detected by TESS we perform Monte Carlo simulations. First, we simulate a 27-day-long TESS time series (e.g., like the data shown in Figs. 4, 5, left) with a cadence of 2 min and a gap of 3 days in the middle of the light curve to simulate the time in which data are sent to the ground. From each of the two halves of the resulting time series we further remove some contiguous data to simulate bad-quality data, which would be removed by the pipeline. These additional gaps are randomly located in each half of the light curve, with a duration randomly sampled between 5-72 h. Second, we inject into our time series the expected white noise from the TESS model presented in Stassun et al. (2018) and Barclay (2017) for TESS magnitudes from 13 to 19 in steps of 0.5 mag. Third, we add red noise by using the Matern-3/2 kernel of eq. 1. We randomly sample the kernel hyper-parameters from a log-uniform distribution with limits 0.5-5 times the expected white noise for \(a\) and 0.5-13 days for \(\tau\), which matches the typical values found in our analysis in Section 5.2. Finally, we inject a sine wave with a mean value of 1 and with amplitude, rotation period, and zero phase randomly sampled from [0.01-5]% (log-uniform distribution), [0.3-312] h (log-uniform distribution), and [0-2\(\pi\)] (uniform distribution), respectively.
We simulated 200,000 light curves at each half-magnitude step between 13.0 and 19.0. In addition to simulating the 2-min cadence, we also binned the light curves every 10 and 30 min to simulate the FFIs delivered by TESS. Fitting a GP to these light curves would be extremely expensive in terms of computing time. Instead, we repeated our approximate analysis from Section 5.1 (left and middle panels of Figures 4 and 5). Specifically, we: i) removed a 1-day-long median filter from the data, ii) computed the associated LS periodogram and 0.1% FAP level, and iii) searched for significant peaks in the periodogram. We considered the injected periodicity as successfully recovered if the strongest peak in the LS periodogram was above the 0.1% FAP and if the relative error between the period marked by this peak and the injected period was \(\leq\)5%. Finally, we used these recovery rates to build sensitivity maps for amplitude and period as a function of magnitude and cadence. Example sensitivity maps are shown in Figure 9. Our simulations show that the minimum detectable variability amplitude at a fixed stellar magnitude is independent of period for periods in the 0.5-80 h range and cadences of either 2 min or 10 min (left and middle panels). The finding mostly holds for 30-min cadences too, except that the recovery rate deteriorates for periodicities shorter than about 1 h (right panels), especially for periods comparable to the 0.5 h sampling rate.
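A single trial of this injection-recovery exercise might look as follows, reusing the ls_search sketch from above; the white-noise level is an input taken from the adopted TESS noise model, and the red-noise draw is omitted for brevity:

```python
import numpy as np

def simulate_and_recover(white_noise_ppm, rng, cadence_min=2.0):
    """One injection-recovery trial following the recipe in the text."""
    dt = cadence_min / 60.0                        # hours
    t = np.arange(0.0, 27 * 24.0, dt)
    keep = (t < 12 * 24) | (t > 15 * 24)           # 3-day downlink gap
    for lo, hi in [(0.0, 12 * 24.0), (15 * 24.0, 27 * 24.0)]:
        start = rng.uniform(lo, hi)                # one extra gap per half
        keep &= ~((t > start) & (t < start + rng.uniform(5.0, 72.0)))
    t = t[keep]

    amp = 10 ** rng.uniform(-4.0, np.log10(5e-2))          # 0.01%-5%
    period = 10 ** rng.uniform(np.log10(0.3), np.log10(312.0))
    flux = 1.0 + amp * np.sin(2 * np.pi * t / period
                              + rng.uniform(0.0, 2 * np.pi))
    flux += rng.normal(0.0, white_noise_ppm * 1e-6, t.size)
    # (the Matern-3/2 red-noise draw of eq. 1 would be added here)

    best_p, power, thresh = ls_search(t, flux, period_range=(0.3, 312.0),
                                      n_rand=1000)  # fewer trials for speed
    detected = (power.max() > thresh) and (abs(best_p - period) / period <= 0.05)
    return detected, amp, period
```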
We use the detection rates in our simulated sensitivity maps of Figure 9 to further assess the minimum variability amplitude that we can detect for 0.5-80 h variability periods, defined as the lowest amplitude at which the recovery rate is still 95%. We plot these amplitudes as a function of stellar magnitude in the top panel of Figure 10. Our simulations show that at TESS magnitudes fainter than \(\simeq\)16.5 we lose sensitivity to the typical \(\leq\)1% visible-wavelength variability of late-M to mid-L dwarfs (Miles-Paez et al., 2017, 2017). Figure 10 also compares these limits to the actual measured amplitudes for the variable targets in our survey (Table 1). Four of our targets have amplitudes near the sensitivity limits for their magnitude.
The bottom panel of Figure 10 shows the ratio between the minimum detectable amplitude (for 0.3-80 h periods) for a cadence of 10 min vs. 2 min, and similarly for 30 min vs. 2 min. The amplitude ratio between the 10 min and 2 min cadences is \(\leq\)1.05 for the range 13-17.5 mag, but the ratio for 30 min and 2 min is significantly larger. The poorer sensitivity for data with cadences of 30 min is caused by the less frequent sampling, which blurs the photometric variability when stacking data every 30 min for short rotation periods. Given the \(>\)1 h rotation periods of ultra-cool dwarfs (Section 6.2; see also Tannock et al. 2021), future searches for periodicities in ultra-cool dwarfs can be carried out by using the 10-min FFIs from the extended mission of TESS without any additional need to obtain dedicated 2-min light curves.
### The period distribution of rapidly rotating ultra-cool dwarfs
Our survey targeted ultra-cool dwarfs with the fastest a priori known projected rotational velocities, but without prior determinations of photometric periods. By comparing with other photometric periods in the published literature, we are in a position to assess the range of rotation periods in ultra-cool dwarfs. It is likely that our analysis is incomplete for \(>\)24 h periods since, outside of the TESS and Kepler missions that are mostly sensitive to \(<\)L0 dwarfs, long-duration monitoring of \(\geq\)L0 dwarfs has generally been limited to \(<\)24 h.
Our literature sample includes all previously confirmed photometric periods for dwarfs with spectral types later than M7, as of this writing. Tannock et al. (2021) recently analyzed a sample of 78 L, T, and Y dwarfs with known rotation periods and found that \(\sim\)1 h is a likely lower limit on the period for Jupiter-sized objects. Our expanded sample encompasses 128 ultra-cool dwarfs, including new L- and T-dwarf rotation periods presented in Vos et al. (2022) and 38 rotation period measurements for M7-M9.5 dwarfs presented in Newton et al. (2017); Miles-Paez et al. (2017); Miles-Paez (2021); Andersson et al. (2022), and this work. We verified the published late-M dwarf rotation periods using their TESS light curves, and analyzed them in a similar way as already presented in Sections 5.2.1 and 5.2.2. While the expanded sample
of rotation periods no longer focuses only on the high-\(v\sin i\) fast rotators targeted in our own observations, the overall distribution of periods is still biased to short periods. That is because most of the published periods of ultra-cool dwarfs are based on either \(<\)8 h ground-based or \(\leq\)24 h space-based monitoring campaigns.
In total we found TESS FFI data for 15 late-M dwarfs. We list their known rotation periods and our updated determinations from TESS data in Table 3. We plot all confirmed periods as a function of spectral type in Figure 11. In general, the TESS rotation periods are in good agreement with those reported in the literature. In most cases the TESS data refine the known rotation period, with the exception of TVLM 513-46, which is known to have a highly stable light curve from optical data covering \(\sim\)7 years. An interesting case is the binary LP 415-20 AB (M7+M9.5), which has a \(4.4\pm 1.6\) h photometric period from a ground-based measurement (Miles-Paez et al., 2017). However, an LS periodogram of the TESS data reveals two peaks at 3.6 h and 4.9 h (Fig. A3). Using our beat pattern analysis from Section 5.2.2, we arrive at accurate rotation periods for both components, which are in good agreement with the expected periods from combining radii and observed \(v\sin i\). Four objects (LSR J0539+4038, LP 423-14, LSPM J1200+2048, LHS 2924) do not show any significant periodicity in the 27-day-long TESS data, which might indicate that either the targets are too faint for TESS to measure their photometric variability (we provide upper
Figure 9: Sensitivity maps for sinusoidal variability of amplitude A (\(y\) axis) and period P (\(x\) axis) from 200,000 simulated TESS light curves for different TESS magnitudes (top to bottom) and cadences (left to right). The amplitude is determined as a fraction of the mean flux level. The recovery rate is colour-coded as in the legends on the right. An object’s variation parameters are considered recovered if they are within 5% of their input values for the simulation. We set a minimum recovery rate of 95% at any of the simulated periods as a requirement for periodicity to be detected.
limits in Table 3 using the limits in Fig. 10) or the ground-based periodicities are spurious.
The rotation periods of M7-Y0 dwarfs in Figure 11 show a large scatter between \(\approx\)1-24 h, with the greatest concentration in the 2-4 h range. The median rotation period is 3.9 hr, although this is likely skewed by the lack of sensitivity to potential \(>\)24-hour periods in photometric surveys, and by our focus on fast rotators within our own sample of 13 ultra-cool dwarf targets. All six of our periodically variable targets have periods between 2.4 h and 4.0 h. However, even if they were to be excluded, a significant enhancement of 2-4 h periods persists in Figure 11 (right panel).
A lower envelope to the period distribution is visible in the left panel of Figure 11 that runs from \(\approx\)2 h at spectral type M7.5-M8.5 (2MASS J01483864\(-\)3024396, TVLM 513\(-\)46; Koen, 2013; Wolszczan & Route, 2014) down to \(\approx\)1 h at spectral type T7 (2MASS J03480772-6022270, Tannock et al., 2021). The part of the envelope between M7-M9.5 effectively continues toward warmer spectral types the trend observed in L-T dwarfs by Tannock et al. (2021). We believe that the effect is real, and better sampled than the overall period distribution of M7-M9.5 dwarfs in Figure 11, because of the specific focus of our photometric variability survey on ultra-cool dwarfs with high projected rotational velocities.
It is possible that the trend of an increasing minimum period toward warmer spectral subtypes in the M7-M9.5 range is the result of structural stability considerations, as speculated by Tannock et al. (2021) for lower-mass brown dwarfs. The trend runs counter to the expected _decrease_ in the minimum rotation period with increasing stellar mass at a constant final radius (as appropriate for these degenerate objects) from conservation of angular momentum. Thermonuclear energy generation, and correspondingly steeper temperature gradients and more vigorous convection in the interiors of very low-mass stars compared to substellar brown dwarfs, may drive the trend for longer minimum periods at higher masses.
## 7 Conclusions
We combined ground-based \(I\)-band photometry on 1 m-class telescopes and TESS data to search for photometric variability in a sample of 13 M7-L1.5 dwarfs with \(v\sin i>\)30 km s\({}^{-1}\). The ground-based data revealed periodicities in the 2 h to 3 h range for three targets (LP213-68AB, LP213-67, and J13365044+4751321), which we attribute to rotation, as they are compatible with our estimates of the rotation periods based on the \(v\sin i\) and the expected radii of our targets. Seven of our ground-based targets were also observed by TESS in either 2-min or 30-min cadence, and six of them (LP213-68AB, LP213-67, J13365044+4751321, J14112131-2119503, J23515044-2537367, and LP859-1) show photometric variability compatible with rotation. While our ground-based data provide a better spatial resolution of the targets than TESS and are sensitive to \(>\)0.5% \(I\)-band variability amplitudes, the long stare time provided by TESS allows exploration of a few hundred consecutive rotation periods, increasing the sensitivity to much smaller periodic photometric variability.
We quantify the sensitivity of TESS to ultra-cool dwarf variability by simulating light curves with different amplitudes, periods and noise for TESS magnitudes in the range 13-19 mag. We find that TESS data (in either 2-min or 10-min cadence) can detect \(\leq\)1% photometric variability in ultra-cool dwarfs (such as typically seen in the red optical) and \(\leq\)80 h rotation periods for TESS magnitudes brighter than 16.5 with 95% reliability.
Figure 11: _Left_: Rotation period as a function of spectral type for all 128 periodically variable M7–Y0 dwarfs known as of this writing. Data are taken from compilations listed in Newton et al. (2017); Miles-Páez et al. (2017, 2017, 2018); Tannock et al. (2021) and the new detections reported in Miles-Páez (2021); Vos et al. (2022); Andersson et al. (2022). New photometrically variable targets reported in this work are indicated by red circles, while refined rotation periods for known variable ultra-cool dwarfs are indicated with green circles. We use crosses for targets reported to be photometrically variable in the literature that we could not confirm with TESS data. _Right_: Distribution of all known photometric periods of field M7-Y0 dwarfs. The median rotation period for field ultra-cool dwarfs is 3.9 hr (blue dashed line).
Figure 10: _Top_: Minimum photometric amplitude for sinusoidal variability detectable by TESS (with a 95% recovery rate) for a cadence of 2 min, 10 min, and 30 min as a function of stellar magnitude from the amplitude-period sensitivity maps in Figure 9. As in Figure 9, the amplitude is determined as a fraction of the mean flux level. _Bottom_: Ratio between the minimum amplitude of variability detected in light curves with a cadence of 10 min and 2 min (blue) and 30 min and 2 min (magenta). The most recent TESS FFIs with \(\sim\)10-min cadence allow for (\(>\)0.5-hour period) variability searches in ultra-cool dwarfs comparable to those with 2-min light curves.
Finally, we compiled photometric periods for all \(\geq\)M7 dwarfs known to be photometrically variable from previous ground-based observations, and refined the rotation periods for 11 that have been observed by TESS. The entirety of the ultra-cool dwarf period distribution reveals a lower envelope on the rotation periods from \(\approx\)2 h for M7.5-M8.5 dwarfs to \(\approx\)1 h for late-T dwarfs. A larger, unbiased survey of photometric periods would be needed to confirm the trend, and its implications for the structural stability of rapidly rotating ultra-cool dwarfs.
## Acknowledgements
TESS data was obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX13AC07G and by other grants and contracts. IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
## Data Availability
The TESS data underlying this article were obtained from the MAST data archive at the Space Telescope Science Institute (STScI). The ground-based data can be shared on reasonable request to the corresponding author.
|
2306.04269 | ColNav: Real-Time Colon Navigation for Colonoscopy | Colorectal cancer screening through colonoscopy continues to be the dominant
global standard, as it allows identifying pre-cancerous or adenomatous lesions
and provides the ability to remove them during the procedure itself.
Nevertheless, failure by the endoscopist to identify such lesions increases the
likelihood of lesion progression to subsequent colorectal cancer. Ultimately,
colonoscopy remains operator-dependent, and the wide range of quality in
colonoscopy examinations among endoscopists is influenced by variations in
their technique, training, and diligence. This paper presents a novel real-time
navigation guidance system for Optical Colonoscopy (OC). Our proposed system
employs a real-time approach that displays both an unfolded representation of
the colon and a local indicator directing to un-inspected areas. These
visualizations are presented to the physician during the procedure, providing
actionable and comprehensible guidance to un-surveyed areas in real-time, while
seamlessly integrating into the physician's workflow. Through coverage
experimental evaluation, we demonstrated that our system resulted in a higher
polyp recall (PR) and high inter-rater reliability with physicians for coverage
prediction. These results suggest that our real-time navigation guidance system
has the potential to improve the quality and effectiveness of Optical
Colonoscopy and ultimately benefit patient outcomes. | Netanel Frank, Erez Posner, Emmanuelle Muhlethaler, Adi Zholkover, Moshe Bouhnik | 2023-06-07T09:09:35Z | http://arxiv.org/abs/2306.04269v1 | # ColNav: Real-Time Colon Navigation for Colonoscopy
###### Abstract
Colorectal cancer screening through colonoscopy continues to be the dominant global standard, as it allows identifying pre-cancerous or adenomatous lesions and provides the ability to remove them during the procedure itself. Nevertheless, failure by the endoscopist to identify such lesions increases the likelihood of lesion progression to subsequent colorectal cancer. Ultimately, colonoscopy remains operator-dependent, and the wide range of quality in colonoscopy examinations among endoscopists is influenced by variations in their technique, training, and diligence. This paper presents a novel real-time navigation guidance system for Optical Colonoscopy (OC). Our proposed system employs a real-time approach that displays both an unfolded representation of the colon and a local indicator directing to un-inspected areas. These visualizations are presented to the physician during the procedure, providing actionable and comprehensible guidance to un-surveyed areas in real-time, while seamlessly integrating into the physician's workflow. Through coverage experimental evaluation, we demonstrated that our system resulted in a higher polyp recall (PR) and high inter-rater reliability with physicians for coverage prediction. These results suggest that our real-time navigation guidance system has the potential to improve the quality and effectiveness of Optical Colonoscopy and ultimately benefit patient outcomes.
Keywords:Colonoscopy, Coverage Real-time systems
## 1 Introduction
Colorectal cancer (CRC) is a significant public health issue, with over 1.9 million new cases diagnosed globally in 2020 [2]. It is one of the most preventable types of cancer [11], and early detection is crucial for preventing its progression [12, 10]. The most commonly used screening method is optical colonoscopy (OC) [7], which visually inspects the mucosal surface of the colon for abnormalities such
as colorectal lesions. However, the process of detecting CRC in its early stages can be difficult, since performing a comprehensive examination of the colon using OC alone can be challenging, resulting in certain regions of the colon not being fully examined and potentially reducing the rate of polyp detection.
To address this problem, researchers have conducted extensive studies to propose assistive technologies that set out to provide clinicians with a better understanding of the procedure quality. Most existing methods focus on estimating measures of quality, such as the withdrawal time, or on reconstructing a 3D model of the colon from a video sequence of the procedure. Despite the advancements in technology that allow for the prediction of 3D structures from images, there is still a significant gap in providing useful and actionable information to clinicians during the procedure in real-time. Current methods for detecting un-surveyed regions, which usually show a 3D visualization of the colon, are not designed to be easily understood, or interacted with, during the procedure. They may not align with the camera view, making it difficult for physicians to understand where they need to move the endoscope to survey missing regions. Other measures of quality, such as coverage per frame, or withdrawal time, do not provide clear, usable information to assist during the procedure in capturing un-surveyed regions. In this paper, we present ColNav, a novel real-time solution that (i) utilizes an unfolded representation of the colon to localize the endoscope within the colon, (ii) introduces a local indicator that directs the physician to un-surveyed areas, and (iii) is robust to real-life issues such as tracking loss. Our approach estimates the centerline and unfolds the scanned colon from a 3D structure to a 2D image in a way that not only calculates the coverage, but also provides augmented guidance to un-surveyed areas without disrupting the physician's workflow. To the best of our knowledge, this is the first coverage-based, real-time guidance system for colonoscopies.
## 2 Related Work
In recent years, there has been an abundance of papers exploring various aspects of quality measures for colonoscopy, with the goal of assisting clinicians and improving the overall quality of care.
**SLAM for colonoscopy** approaches usually rely on estimating a 3D reconstruction of the colon and post-processing it in order to estimate the un-surveyed regions (holes). Posner et al. [13] utilized deep features to better track the camera position, and presented tracking-loss recovery and loop closure capabilities to create a consistent 3D model. Ma et al. [8] reconstructed fragments of the colon using Direct Sparse Odometry (DSO) [4] and a Recurrent Neural Network (RNN) for depth estimation. However, their output is not easily understood nor meant to be interacted with during the procedure, making them less likely to be adopted by physicians or impact the clinical outcome.
**Direct coverage estimation** methods [5, 1] aim to predict the coverage on a segment-by-segment basis by estimating what fraction of the colon has been viewed in any given segment. Freedman et al. [5] used a CNN to perform depth
estimation for each frame followed by coverage estimation. As it was trained in a supervised manner using synthetic ground-truth coverage, it cannot be easily generalized to real data. Blau et al. [1] proposed an unsupervised learning technique for detecting deficient colon coverage segments modeled as curved cylinders. However, their method does not run in real-time.
**Indirect objective quality measurements** Objective measurements of quality in colonoscopy are important for minimizing subjective biases and variations among endoscopists. Yao et al. [19] proposed an auto-detection of non-informative frames, and Zhou et al. [20] predicted the bowel preparation scores every 30 seconds during the withdrawal phase of the procedure. However, these techniques lack the ability to provide real-time information to the physician during the procedure.
**Virtual colon unfolding** is a well known visualisation technique for virtual colonoscopy (VC), where the colon is inspected by analysing the output of a CT scan. In such cases, a 3D mesh of the entire colon can be extracted from the CT image volume and mapped onto a 2D grid, providing the physician with a fast and convenient way to inspect the colon mucosa and find polyps. A number of solutions have been proposed to perform this mapping [18, 6, 16, 15]. In many cases, the solution uses, as an intermediate step, the computation of the _centerline_, a single continuous line spanning the colon. Although most of these methods tend to be computationally expensive, Sudarsky et al. [15] proposed a fast unfolding method based on the straightening of the colon mesh using the centerline. Using colon unfolding to visualize missed areas in optical colonoscopy (OC) has been proposed by Ma et al. [8]. However, it was done offline, for validation purposes, on a few disconnected colon segments, using a single straight line as the centerline.
Figure 1: Our novel, real-time colonoscopy navigation. Flattened image of the colon (right): (1) un-surveyed areas as black pixels, (2) camera location, (3) coverage percentage and length covered. Endoscope view (left): (4) the local compass indicator directing the physician to look up (ticks highlighted in red).
## 3 Method Overview
Our pipeline, ColNav, provides actionable and comprehensible guidance to un-surveyed areas in real-time and is seamlessly integrated into the physician's workflow. While scanning the colon, the physician is presented with two screens, as can be seen in Fig. 1. The colon's unfolded image, presented on the right, shows the three-dimensional (3D) colon flattened into a two-dimensional (2D) image. Black pixels in the image indicate unseen areas, which were missed during the scan. The location of the camera in the unfolded colon is visualized as the green camera frustum marker. When the physician withdraws the endoscope, the green marker moves down, and vice versa. This enables the physician to know whether to move the endoscope forwards or backwards to reach the missed regions (holes). Coverage percentage and the overall length, computed on the scanned portion of the colon, are displayed as well. On the left is the main endoscope view with a local compass indicator directing to un-inspected areas. Once the endoscope is near an un-inspected area (hole), the compass indicator directs the physician towards the area that needs to be examined.
The ColNav algorithm consists of three major parts: (\(i\)) centerline estimation, (\(ii\)) multi-segment 3D to 2D unfolding, and (\(iii\)) local indicator (navigation compass). To estimate the depth map and pose of each new frame, we employ C\({}^{3}\)Fusion [13] as our SLAM module. ColNav's first component is a robust method for centerline estimation. In the second component, the overall 2D scene representation is obtained by merging the depth, pose, and RGB of all frames into a single flattened representation of the colon. In real-life scenarios, C\({}^{3}\)Fusion may lose track, resulting in the creation of a new segment when the last frame cannot be connected to any previous frame. Alternatively, loop closure may occur, where two disjoint segments are merged, and their poses are subsequently updated. Our system accommodates these scenarios by (a) adjusting the previous centerline approximation based on updated poses and (b) de-integrating frames that have changed location in the flattened image and re-integrating them with their new pose. In cases of tracking loss, the flattened image shows separate segments with red lines that can be merged if tracking recovery occurs.
### Centerline & Colon Unfolding
In our proposed solution, the 3D representation of each frame is obtained by back-projecting the depth map into a point cloud. It is then mapped onto a 2D unfolded representation, using an algorithm analogous to that described in [15]. In particular, the centerline is used for straightening the reconstructed colon and dividing it into cross-sections perpendicular to the centerline. Each cross-sectional slice corresponds to a row within the two-dimensional flattened image, see Fig. 2. The colon centerline, sometimes also referred to as the medial axis, is usually defined as a single connected line, spanning the colon and situated at its center, away from the colon walls [17]. In our case, the centrality requirement is partly relaxed, as we observed that a shift of the centerline away from the center of the colon has little effect on the unfolding. Unlike prior works, our
centerline and the flattened image are updated in real time. This creates new requirements for the centerline: (1) Fast computation. (2) Consistency over time relatively to the camera trajectory. To support these requirements, the centerline is estimated from the camera trajectory poses.
The centerline algorithm contains the following steps: (1) Filtering outlier poses from the trajectory. (2) Constructing or updating a graph \(G\) of the trajectory, with camera positions as nodes and edges connecting nodes within a threshold distance. (3) Calculating or updating the shortest path length \(l\) between each node \(n\in G\). (4) Binning of the trajectory points according to \(l\). (5) Fitting a B-spline [3] to an aggregate of the trajectory points in each bin. Each time the trajectory poses are updated, steps (1) - (5) are computed and a new centerline is re-calculated.
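A minimal sketch of steps (1)-(5) is given below; it relies on SciPy's KD-tree, sparse-graph, and spline routines, and all thresholds and function names are illustrative rather than taken from the ColNav implementation.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
from scipy.interpolate import splprep, splev

def estimate_centerline(positions, edge_thresh=0.05, n_bins=64, smooth=1e-3):
    # (1) Filter outlier poses: drop positions far from their nearest neighbor.
    d_nn, _ = cKDTree(positions).query(positions, k=2)
    pts = positions[d_nn[:, 1] < 3 * np.median(d_nn[:, 1])]

    # (2) Graph G: camera positions as nodes, edges within a threshold distance.
    pairs = cKDTree(pts).query_pairs(edge_thresh, output_type='ndarray')
    w = np.linalg.norm(pts[pairs[:, 0]] - pts[pairs[:, 1]], axis=1)
    G = csr_matrix((w, (pairs[:, 0], pairs[:, 1])), shape=(len(pts), len(pts)))

    # (3) Shortest-path length l from the first pose to every node.
    l = dijkstra(G, directed=False, indices=0)

    # (4) Bin the trajectory points according to l.
    bins = np.linspace(0, l[np.isfinite(l)].max(), n_bins + 1)
    idx = np.digitize(l, bins)
    agg = np.array([pts[idx == b].mean(axis=0)
                    for b in range(1, n_bins + 1) if np.any(idx == b)])

    # (5) Fit a B-spline through the per-bin aggregates and sample it densely.
    tck, _ = splprep(agg.T, s=smooth)
    return np.array(splev(np.linspace(0, 1, 256), tck)).T  # K x 3 centerline
```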
**Camera Position Indicator:** The camera position for each frame is given by the SLAM module and is noted by \(T_{i}=\{(R_{i},t_{i})|R_{i}\in SO(3),t_{i}\in\mathbb{R}^{3}\}_{i=1}^{N}\) with \(N\) the number of frames in the sequence. To represent the endoscope's current location \(s_{e}\) on the centerline of size \(K\), the endoscope position \(t_{i}\) is projected on the centerline \(C=\{c_{k}\in\mathbb{R}^{3}\}_{k=1}^{K}\) by querying the centerline KD-tree.
\[s_{e}=\arg\min_{c_{k}\in C}||t_{i}-c_{k}||^{2},t_{i}\in\mathbb{R}^{3} \tag{1}\]
### Navigation Compass
The navigation compass serves as a local indicator that visually guides the physician to areas that have been missed. Specifically, the compass ticks are highlighted in red to indicate which specific sections of the colon require further inspection. Based on the camera position along the centerline \(s_{e}\), the coverage information is extracted from the unfolded image \(F\), where each column represents \(b_{\theta}\), the rotation angle bin around the centerline axis at the endoscope location \(s_{e}\), with \(\theta\in[0,2\pi)\).
Figure 2: On the left: 3D point cloud of a single frame (blue), centerline (green), vertices associated with a specific cross-section on the centerline (red) and the camera pose indicated by the 3 axis vectors. On the right: the flattened image with corresponding cross-section (red). Note that the holes in the cross-section match the black pixels in the corresponding row.
When \(F(s_{e},b_{\theta})\), the corresponding pixel in the extracted row, is black (meaning the area was not covered), the navigation compass tick is highlighted in red; otherwise it remains dark. To make the navigation compass invariant to camera roll, the camera orientation is projected on the centerline and the relative angle offset is computed to compensate for misalignment between the centerline and the camera pose. Fig. 2 depicts the extracted row, selected from the flattened image according to the camera location.
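Putting Eq. (1) and the tick highlighting together, a simplified version of the compass update might look as follows; array and parameter names such as `F` and `n_ticks` are illustrative, and black pixels are assumed to be encoded as 0.

```python
import numpy as np
from scipy.spatial import cKDTree

def compass_ticks(t_i, centerline, F, n_ticks=12):
    # Eq. (1): project the endoscope position t_i onto the centerline
    # by querying a KD-tree built over the centerline samples.
    _, s_e = cKDTree(centerline).query(t_i)

    # Each column of the flattened image F is an angular bin b_theta
    # around the centerline axis at location s_e.
    row = F[s_e]
    # A tick turns red when its angular sector is mostly black (uncovered).
    return [np.mean(sector == 0) > 0.5
            for sector in np.array_split(row, n_ticks)]
```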
### Unfolding real-time dynamic update
To achieve real-time and consistent unfolding of the colon, it is crucial to update the flattened image \(F\) whenever new information becomes available. This need arises as the SLAM pipeline continually refines frame poses, updates frame-to-segment assignments, and copes with real-life issues such as tracking loss. To accomplish this, we closely monitor the continuous changes in frames' poses and their assignment to segments, updating the flattened image through the integration and de-integration of frames. By adopting this strategy, we can rectify errors resulting from registration drift or tracking loss.
**Managing Unfolding Updates:** When an input frame arrives, we seek to integrate it into the flattened image as quickly as possible, to give the physician instantaneous feedback on the colon coverage. Since the segment assignments or poses of previous frames may be updated by [13], we de-integrate and re-integrate all frames whose segment assignment changes. In addition, we sort all frames within each segment in descending order, based on the difference between their previous and updated poses. After sorting, we select and re-integrate the top 10 frames from the list. This allows us to dynamically update the unfolded image to produce a globally-consistent representation of the unfolded colon.
**Integration and De-integration:** Integration of an RGBD frame \(f_{i}\) is defined as a weighted average of previous mapped samples. For each pixel \(p\) in the flattened image, let \(F(p)\) denote its color, \(W(p)\) the pixel weight, \(d_{i}(p)\) the frame's sample color to be integrated, and \(w_{i}(p)\) the integration weight for a sample of \(f_{i}\). Each pixel is then updated by:
\[F^{{}^{\prime}}(p)=\frac{F(p)W(p)\pm w_{i}(p)d_{i}(p)}{W(p)\pm w_{i}(p)},W^{{} ^{\prime}}(p)=W(p)\pm w_{i}(p) \tag{2}\]
where the \(+\) sign is used for integrating and the \(-\) sign for de-integrating a frame. A frame in the flattened image can thus be updated by de-integrating it from its original pose and/or segment and re-integrating it with a new pose into its updated segment.
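A direct sketch of Eq. (2) in code (the array names mirror the notation above; updating a frame then amounts to de-integrating it with its old mapping and integrating it with the new one):

```python
import numpy as np

def integrate(F, W, d_i, w_i, sign=+1):
    """Apply Eq. (2); sign=+1 integrates frame f_i, sign=-1 de-integrates it."""
    W_new = W + sign * w_i
    # Guard against division by zero where a pixel ends up with no samples.
    F_new = np.where(W_new > 0,
                     (F * W + sign * w_i * d_i) / np.maximum(W_new, 1e-8),
                     0.0)
    return F_new, W_new

# Moving a frame to an updated pose/segment:
# F, W = integrate(F, W, d_old, w_old, sign=-1)   # de-integrate old mapping
# F, W = integrate(F, W, d_new, w_new, sign=+1)   # integrate new mapping
```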
## 4 Experiments
This section presents the validation of our solution through multiple tests. The first test, named 'Colon unfolding verification', demonstrates that our 2D flattened visualization is a valid representation of the scanned colon. It also showcases our ability to detect and localize 'holes' in the colon using this visualization.
To carry out this test, we used coverage annotations of short colonoscopy clips. Each clip was divided into four quadrants (see Fig. 3), and two experienced physicians tagged each quadrant based on its coverage level ('mostly not covered', 'partially covered', 'mostly covered'). We then used ColNav to estimate the coverage of each quadrant and compared it to the physicians' annotations. The second test focuses on the clinical impacts of using our tool during procedures. We estimate coverage and Polyp Recall (PR) with and without the real-time navigation guidance during the scan to demonstrate the possible benefits of our tool. All datasets used are proprietary to the group.
We conducted all of the tests using a calibrated Olympus CF-H185L/I colonoscope on a 3D printed colon model. The colon model was manufactured by segmenting a CT colon scan from [14] and post-processing it to recover the 3D structure of the colon. The model was fabricated from the final mesh using a 3D printer. ColNav was run on a high-performance computer equipped with an AMD Ryzen 3960x processor, 128 GB of RAM, and an NVIDIA A6000 GPU. The algorithm ran at a speed of 20 FPS while the live endoscope stream was in its native frequency, enabling real-time usage and guidance during the scans.
The annotations for the first test and the scans in the second test were performed by physicians who, on average, had 6.5 years of experience and had conducted 5,000 colonoscopies. We also used the baseline PR experiment (without ColNav) as a standard, and only included physicians with a recall of over 50%.
**Colon unfolding verification:** Two physicians were asked to annotate the coverage level of 83 short clips captured using our colon model. To assess the annotators' agreement, we used the weighted Cohen's kappa coefficient [9], due to the ordinal nature of the coverage categories. The resulting weighted kappa score of approximately 60% indicated a "moderate" level of agreement.
Figure 3: Left: Representative frame from an annotated clip with the annotated quadrant numbers. Below, ColNav’s flattened image of the same clip with the corresponding quadrant numbers. Note that areas that are occluded or aren’t visible in the frame are mostly dark in the flattened image. Right: Our complete 3D model with the external magnets that hold in place the small magnetic balls.
However, the absolute coverage value given by a single physician might be subjective, making calibration and comparison difficult. To overcome this issue, we therefore tested for agreement on the relative score of the four quadrants. We used Cohen's kappa to measure the agreement between the physicians on the order of the quadrants sorted by their coverage score (the most covered quadrant first, the least covered last). Cohen's kappa using the relative coverage scores between the two annotators is 84.7%, meaning 'almost perfect agreement', which showcases that this comparison method is better suited for the task.
Based on this approach, we applied ColNav to compute a flattened image for each short clip. Each flattened image was partitioned into four quadrants, and the coverage percentage was calculated for each quadrant. To evaluate our predictions, we mapped the coverage percentages to three categories using a simple threshold: (\(coverage\leq 60\%\): 'mostly not covered', \(60\%<coverage\leq 80\%\): 'partially covered', \(80\%<coverage\leq 100\%\): 'mostly covered'). The results in Table 1 show ColNav's high levels of agreement with the two physicians, with agreement rates of 88.4% and 85.9%, respectively. These results demonstrate that ColNav accurately represents the scanned colon and has high inter-rater reliability with physicians for predicting coverage levels.
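For reference, the threshold mapping above can be written as a small helper; the function name is ours, not from the paper.

```python
def coverage_category(coverage_pct):
    # Thresholds taken directly from the mapping described in the text.
    if coverage_pct <= 60:
        return 'mostly not covered'
    if coverage_pct <= 80:
        return 'partially covered'
    return 'mostly covered'
```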
**Polyp Recall Impact:** Real-time estimation of coverage offers the crucial advantage of guiding physicians to potentially missed areas and enhancing the detection of polyps in these regions. To evaluate this capability, we conducted a simulation study by concealing 18 small magnetic balls (diameter \(=5mm\)) within our colon model to simulate polyps (see Fig. 3, Supplementary Fig. 5). Three trained physicians were recruited to perform an optical colonoscopy on the model, with and without ColNav, while recording their coverage and recall, i.e. the ratio between the number of balls detected during each test and the total number of balls. Scans were conducted in the same manner: starting from the end of the model (the cecum), the colon was examined while the endoscope was withdrawn. To prevent location bias, we used balls of multiple colors and assigned each physician a different combination of colors in each test phase (with/without ColNav). The results, as presented in Table 2, reveal that physicians using ColNav achieved 11.1% higher polyp recall (PR) and 4.8% better coverage, demonstrating the effectiveness of our solution and supporting our belief that ColNav could improve PDR in clinical scenarios.
**Real colonoscopy videos:** ColNav was also used in an offline manner on short clips of recorded procedures. See Supplementary Fig. 4 for an example of the output.
\begin{table}
\begin{tabular}{|l|c|} \hline & Cohen’s Kappa [\%] \\ \hline Annotators A, B & 84.7 \\ \hline Anno. A, **ColNav** & **88.4** \\ \hline Anno. B, **ColNav** & **85.9** \\ \hline \end{tabular}
\end{table}
Table 1: Weighted Cohen’s Kappa over the relative coverage scores.
\begin{table}
\begin{tabular}{|l|c|c|} \hline & PR [\%] & Cov. [\%] \\ \hline Without ColNav & \(77.8\pm 3.9\) & \(91.6\pm 1.5\) \\ \hline With **ColNav** & **88.9**\(\pm 3.9\) & **96.4**\(\pm 1.0\) \\ \hline \end{tabular}
\end{table}
Table 2: Polyp recall (PR) and coverage, with and without ColNav guidance.
## 5 Conclusion
We have presented ColNav, the first real-time colon navigation system of its kind, which not only calculates coverage but also provides augmented guidance to un-surveyed areas without disrupting the procedure. The coverage estimation has been shown to have high correlation with experts. Using the system, physicians were able to improve their coverage and recall in detecting findings within the colon. The system was qualitatively evaluated offline on real-life procedures. Further research will focus on improving real-time performance and robustness to extreme colon deformation using non-rigid SLAM.
|
2304.12272 | AMR Parsing with Instruction Fine-tuned Pre-trained Language Models | Instruction fine-tuned language models on a collection of instruction
annotated datasets (FLAN) have shown highly effective to improve model
performance and generalization to unseen tasks. However, a majority of standard
parsing tasks including abstract meaning representation (AMR), universal
dependency (UD), semantic role labeling (SRL) has been excluded from the FLAN
collections for both model training and evaluations. In this paper, we take one
of such instruction fine-tuned pre-trained language models, i.e. FLAN-T5, and
fine-tune them for AMR parsing. Our extensive experiments on various AMR
parsing tasks including AMR2.0, AMR3.0 and BioAMR indicate that FLAN-T5
fine-tuned models out-perform previous state-of-the-art models across all
tasks. In addition, full fine-tuning followed by the parameter efficient
fine-tuning, LoRA, further improves the model performances, setting new
state-of-the-arts in Smatch on AMR2.0 (86.4), AMR3.0 (84.9) and BioAMR (82.3). | Young-Suk Lee, Ramón Fernandez Astudillo, Radu Florian, Tahira Naseem, Salim Roukos | 2023-04-24T17:12:17Z | http://arxiv.org/abs/2304.12272v1 | # AMR Parsing with Instruction Fine-tuned Pre-trained Language Models
###### Abstract
Instruction fine-tuning language models on a collection of instruction-annotated datasets (FLAN) has proven highly effective at improving model performance and generalization to unseen tasks. However, a majority of standard parsing tasks, including abstract meaning representation (AMR), universal dependency (UD), and semantic role labeling (SRL), have been excluded from the FLAN collections for both model training and evaluation. In this paper, we take one such instruction fine-tuned pre-trained language model, i.e. FLAN-T5, and fine-tune it for AMR parsing. Our extensive experiments on various AMR parsing tasks including AMR2.0, AMR3.0 and BioAMR indicate that FLAN-T5 fine-tuned models out-perform previous state-of-the-art models across all tasks. In addition, full fine-tuning followed by the parameter efficient fine-tuning, LoRA, further improves the model performances, setting new state-of-the-arts in Smatch on AMR2.0 (86.4), AMR3.0 (84.9) and BioAMR (82.3).
## 1 Introduction
Instruction fine-tuning language models on a collection of annotated datasets has proven highly effective to improve model performance and generalization to unseen tasks both in general purpose open domain setup, as in Chung et al. (2022); Wei et al. (2021); Longpre et al. (2023); Ouyang et al. (2022); Mishra et al. (2022); Wang et al. (2022); Honovich et al. (2022) and specialized tasks such as conversational dialogs in Gupta et al. (2022).
Despite its great success in the majority of natural language processing tasks, however, standard parsing tasks such as abstract meaning representation (AMR), Banarescu et al. (2013); Bevilacqua et al. (2021); Zhou et al. (2021); Bai et al. (2022), universal dependency (UD), Nivre et al. (2017), semantic role labeling (SRL), Gildea and Jurafsky (2002); Palmer et al. (2005), etc. have been largely excluded from the fine-tuned language net (FLAN) collections either for model training or evaluations. And therefore it still remains to be seen whether or not instruction fine-tuned language models are as effective for standard parsing tasks as for other NLP tasks.
In this paper, we fine-tune the FLAN-T5 models of Chung et al. (2022) (FLAN-T5-Large and FLAN-T5-XL) on a wide range of AMR parsing tasks including AMR2.0, AMR3.0 and BioAMR. We show that fine-tuning FLAN-T5 models on AMR parsing leads to a significant improvement over the previous BART fine-tuned SoTA models by Zhou et al. (2021); Bai et al. (2022). We further explore a parameter efficient fine-tuning technique, LoRA (Low Rank Adaptation), Hu et al. (2021). While LoRA-only fine-tuned models do not out-perform full fine-tuned models, full fine-tuning followed by LoRA fine-tuning significantly improves the full fine-tuned models, setting new state-of-the-arts across all AMR parsing tasks.
Our main contributions are as follows:
* We apply instruction fine-tuned FLAN-T5 models to AMR parsing for the first time. We show that FLAN-T5 fine-tuned AMR parsing models significantly out-perform previous BART fine-tuned SoTA models.
* We explore the parameter efficient fine-tuning technique LoRA for sequence-to-sequence tasks. Although fine-tuning FLAN-T5 models with LoRA alone does not out-perform full fine-tuning, LoRA fine-tuning of full fine-tuned models further improves model performance.
* We push the envelope of AMR parsing, by setting new SoTA in Smatch on AMR2.0 (86.4), AMR3.0 (84.9) and BioAMR (82.3).
## 2 AMR Parsing with FLAN-T5 Models
Flan-T5 models, Chung et al. (2022), are obtained by instruction fine-tuning T5-LM adapted models, Lester et al. (2021), on a collection of 1.8K instruction annotated tasks.1 They are prefix language models2 and achieve strong few-shot performance even compared to much larger models, such as PaLM 62B.
Footnote 1: [https://github.com/google-research/FLAN/blob/main/flan/v2/flan_collection_info.csv](https://github.com/google-research/FLAN/blob/main/flan/v2/flan_collection_info.csv)
Footnote 2: Given natural text prefix as input, the model must produce the natural text continuation as output.
Like all models derived from T5 models, Raffel et al. (2020), we pose AMR parsing as a text-to-text problem and train models to transfer a text to a linearized AMR graph with the task prefix **amr generation**. The FLAN-T5 model size variants, all of which use a vocabulary of 32,128 unique tokens, are shown in Table 1.
### Pre- and Post-processing
We first remove wiki tags from the raw AMR graphs. We then serialize the AMR graph and transfer the node variable information of concepts to the concepts themselves.3 If a graph includes the same concept more than once, unique indices are appended to each concept, e.g. _thing_1, thing_2_, etc. Finally, we add the task prefix **amr generation** to each input text. A sample input text and the corresponding serialized AMR graph after preprocessing is shown in Figure 1.
Footnote 3: [https://github.com/IBM/graph_ensemble_learning](https://github.com/IBM/graph_ensemble_learning)
For testing, the decoder first generates serialized graphs, which are then de-serialized to restore the concept variables, including reifications. We finally restore wiki tags using deterministic algorithms.
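A rough, regex-based sketch of this pre-processing is shown below; it handles the example in Figure 1 but is illustrative only and ignores corner cases (e.g., variables inside quoted strings), unlike the graph_ensemble_learning code referenced in the footnote.

```python
import re
from collections import Counter

def serialize(amr: str) -> str:
    # Remove wiki tags such as :wiki "Taiwan" or :wiki -.
    s = re.sub(r'\s*:wiki\s+("[^"]*"|-)', '', amr)
    # Map each variable to its concept, e.g. {"t2": "thing", ...}.
    var2con = dict(re.findall(r'\(([^\s()]+)\s*/\s*([^\s()]+)', s))
    # Append unique indices to concepts occurring more than once.
    counts, seen = Counter(var2con.values()), Counter()
    for v, c in list(var2con.items()):
        if counts[c] > 1:
            seen[c] += 1
            var2con[v] = f"{c}_{seen[c]}"
    # Drop "var /" after each "(" ...
    s = re.sub(r'\(([^\s()]+)\s*/\s*[^\s()]+',
               lambda m: '( ' + var2con[m.group(1)], s)
    # ... and replace bare re-entrant variable mentions by their concept.
    s = re.sub(r'(?<=\s)([a-z]\w*)(?=[\s)])',
               lambda m: var2con.get(m.group(1), m.group(1)), s)
    return ' '.join(s.replace(')', ' ) ').split())

def add_prefix(text: str) -> str:
    return 'amr generation ; ' + text
```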
### Full Fine-tuning
For full fine-tuning, we call the huggingface transformers4 classes T5ForConditionalGeneration and T5Tokenizer. We train all models on 2 NVIDIA A100 80GB machines for 24 hours.
Footnote 4: [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers)
For both FLAN-T5-large and FLAN-T5-XL models, we set the maximum source and target lengths to 512. Learning rate is set to 5e-5. We utilize sentence-based batching for mini batches. Batch size is 8 for FLAN-T5-large and 4 for FLAN-T5-XL, distributed over 2 GPUs.
We run the validation data set after each epoch and choose the model with the highest validation set Smatch score as the final best model for testing.
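As a point of reference, a minimal version of this fine-tuning setup in huggingface transformers looks roughly as follows; the checkpoint name and the toy batch are illustrative, and the authors' actual training loop is not published in the paper.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL = "google/flan-t5-large"            # assumed checkpoint name
tok = T5Tokenizer.from_pretrained(MODEL)
model = T5ForConditionalGeneration.from_pretrained(MODEL)
optim = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One toy example: prefixed input text -> serialized AMR graph.
src = ["amr generation ; Statistics also revealed that ..."]
tgt = ["( reveal-01 :ARG0 ( statistic ) ... )"]

enc = tok(src, max_length=512, truncation=True,
          padding=True, return_tensors="pt")
lab = tok(tgt, max_length=512, truncation=True,
          padding=True, return_tensors="pt").input_ids
lab[lab == tok.pad_token_id] = -100       # ignore padding in the loss

loss = model(**enc, labels=lab).loss
loss.backward()
optim.step()
```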
Input: Statistics also revealed that Taiwanese business investments in the mainland is tending to increase
AMR graph:
(r / reveal-01 :ARG0 (s / statistic) :ARG1 (t / tend-02 :ARG1 (t2 / thing :ARG1-of (i / invest-01 :ARG0 (c / country :wiki "Taiwan" :name (n / name :op1 "Taiwan")) :ARG2 (m / mainland) :mod (b / business))) :ARG2 (i2 / increase-01 :ARG1 t2)) :mod (a / also))
Serialized AMR graph:
( reveal-01 :ARG0 ( statistic ) :ARG1 ( tend-02 :ARG1 ( thing :ARG1-of ( invest-01 :ARG0 ( country :name ( name :op1 "Taiwan" ) ) :ARG2 ( mainland ) :mod ( business ) ) ) :ARG2 ( increase-01 :ARG1 thing ) ) :mod ( also ) )
Input text with the task prefix: amr generation ; Statistics also revealed that Taiwanese business investments in the mainland is tending to increase
### LoRA Fine-tuning
We experiment with the low rank adaptation LoRA5 for AMR parsing, a sequence-to-sequence task using an encoder-decoder architecture.
Footnote 5: [https://github.com/huggingface/peft](https://github.com/huggingface/peft)
Largely following the recommended setup of adapting only the \(q\) (query) and \(v\) (value) projections in the transformer, we explore the two LoRA configurations rank=8, alpha=32 and rank=16, alpha=64, while fixing task_type to SEQ_2_SEQ_LM. We call model.eval() to merge the LoRA parameters with the corresponding pre-trained ones after each parameter update and for inferencing, and model.train() to split the LoRA parameters from the pre-trained ones so that only the LoRA parameters are updated. Unlike full fine-tuning, for which the learning rate is 5e-5, we use learning rate 4e-1 for LoRA fine-tuning. We have found that LoRA fine-tuning requires a higher learning rate than full
Figure 1: Serialization of AMR graphs and addition of the task prefix **amr generation** to the input text. In the serialized graph, the variables corresponding to each concept in the original AMR graph is removed while the parentheses indicating the concept spans are retained. The input text with the task prefix and the serialized AMR graphs are used for model training.
fine-tuning for optimal performances, which is not surprising given the much fewer number of parameters to update with LoRA fine-tuning compared with full fine-tuning.
We experiment with two different modes of LoRA fine-tuning. First, apply LoRA fine-tuning to the pretrained language model (PLM) directly. Second, apply full fine-tuning to the PLM, and then apply LoRA fine-tuning to the full fine-tuned models. LoRA fine-tuning on the PLM directly does not seem to improve the performances over full fine-tuned models.6 However, LoRA fine-tuning on full fine-tuned models improves the full fine-tuned models across various AMR parsing tasks and model configurations.
Footnote 6: We have not explored the full range of adjustable parameters of LoRA, e.g. \(k\) and \(o\) projections in addition to \(q\) and \(v\) projections, or values other than 8 and 16 for LoRA rank and 32 and 64 for LoRA alpha.
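For concreteness, the LoRA configuration described above can be expressed with the peft package roughly as follows; arguments beyond the rank/alpha/task_type and q/v targets stated in the text are defaults or assumptions.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import T5ForConditionalGeneration

base = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large")
config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8, lora_alpha=32,                # or r=16, lora_alpha=64
    target_modules=["q", "v"],         # adapt only query/value projections
)
model = get_peft_model(base, config)
model.print_trainable_parameters()     # only the LoRA weights are trainable
# As described in Section 2.3, the paper uses model.train() so that only
# the LoRA parameters are updated, and model.eval() for inference with
# the merged (pre-trained + LoRA) parameters.
```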
## 3 Experimental Results
We experiment on 3 AMR tasks, AMR2.0, AMR3.0 and BioAMR, and 2 model training configurations, FLAN-T5-large and FLAN-T5-XL. Training and test corpora statistics are shown in Table 2. Silver training corpus is annotated with the MBSE ensemble distillation technique presented in Lee et al. (2022).
Experimental results are shown in Table 3. As a point of reference, we include the highest Structured BART scores from Lee et al. (2022). FFT denotes full fine-tuning and LoRA, LoRA fine-tuning. All FLAN-T5-Large-FFT-LoRA and FLAN-T5-XL-FFT-LoRA scores are an average of two distinct model scores, one trained with lora_rank=8, lora_alpha=32 and the other with lora_rank=16, lora_alpha=64.
Across all tasks and training corpus sizes, full fine-tuned (FFT) FLAN-T5-Large models outperform Structured BART, except for AMR2.0 with silver training data for which FLAN-T5-Large is 0.1 Smatch lower than Structured BART. Full fine-tuned FLAN-T5-XL models out-perform all corresponding Structured BART models. LoRA fine-tuning lags behind full fine-tuning in performance when comparing FLAN-T5-Large-LoRA with FLAN-T5-Large-FFT. Full fine-tuning followed by LoRA fine-tuning (FFT-LoRA), however, always out-performs full fine-tuning only except for BioAMR with silver training corpus, for which full fine-tuned model score is the same as that of full fine-tuning followed by LoRA fine-tuning.
The fact that full fine-tuning followed by LoRA fine-tuning improves the scores of full fine-tuned models is somewhat unexpected, and this training setup does not seem to have been explored elsewhere, Xu et al. (2023); Valipour et al. (2022); Lialin et al. (2023). We conjecture that the low rank adaptation by LoRA prevents the model from over-fitting the training data especially when the
\begin{table}
\begin{tabular}{l|c c c c c c} \hline \hline Models & \# parameters & \# layers & d\({}_{model}\) & d\({}_{ff}\) & d\({}_{kv}\) & \# heads \\ \hline Flan-T5-Small & 77M & 8 & 512 & 1024 & 64 & 6 \\ Flan-T5-Base & 250M & 12 & 768 & 2048 & 64 & 12 \\
**Flan-T5-Large** & 780M & 24 & 1024 & 2816 & 64 & 16 \\
**Flan-T5-XL** & 3B & 24 & 2048 & 5120 & 64 & 32 \\ Flan-T5-XXL & 11B & 24 & 4096 & 10240 & 64 & 64 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Flan-T5 model size variants obtained from each model configuration file of [https://huggingface.co/models](https://huggingface.co/models). We fine-tune Flan-T5-Large and Flan-T5-XL for our AMR experiments.
\begin{table}
\begin{tabular}{l|l|c c} \hline \hline
**Dataset** & **Split** & **Sents** & **Tokens** \\ \hline AMR2.0 & Train\({}^{h}\) & 36,521 & 653K \\ & Test & 1,371 & 30K \\ \hline AMR3.0 & Train\({}^{h}\) & 55,635 & 1M \\ & Test & 1,898 & 39K \\ \hline Bio AMR & Train\({}^{h}\) & 5,452 & 231K \\ & Test & 500 & 22K \\ \hline PropBank & Silver\({}^{std}\) & 20K & 386K \\ SQuAD2.0-C & Silver\({}^{std}\) & 70K & 2M \\ Ontonotes5.0 & Silver\({}^{std}\) & 59K & 1.1M \\ WikiText-103 & Silver\({}^{std}\) & 70K & 2M \\ \hline BioNLP-ST-2011 & Silver\({}^{bio}\) & 15K & 460K \\ CRAFT & Silver\({}^{bio}\) & 27K & 740K \\ PubMed & Silver\({}^{bio}\) & 26K & 750K \\ \hline \hline \end{tabular}
\end{table}
Table 2: Corpus statistics for the standard benchmark experiments on AMR2.0, AMR3.0 and BioAMR test sets. In corpus split, Train\({}^{h}\) indicates human annotated treebank. Silver\({}^{std}\) indicates the unlabeled data for silver training of AMR2.0 and AMR3.0. Silver\({}^{bio}\) indicates the unlabeled data for silver training of BioAMR. Silver training corpus is annotated with the MBSE ensemble distillation technique in Lee et al. (2022).
supervised training corpus size is large as in our AMR parsing setup, which in turn leads to better generalization capabilities on unseen test sets. We leave this topic for future research for now.
We compare the performances of our best models with previous SoTA models in Table 4. We restrict the comparisons only to models that fine-tune PLMs, i.e. BART-large. The model scores with \(\ddagger\) indicate that the models are significantly better than all of the previous models at p=0.05 according to randomized bootstrap statistical significance tests.7 We see that FLAN-T5 fine-tuned models out-perform the previous SoTA models across all training conditions and model configurations.
Footnote 7: [https://github.com/IBM/transition-amr-parser/blob/master/scripts/smatch_aligner.py](https://github.com/IBM/transition-amr-parser/blob/master/scripts/smatch_aligner.py)
We show the detailed scores of our best models both with and without silver training corpus in Table 5. BioAMR wiki scores are 0.0 because we do not apply wikification to BioAMR parsing outputs. Overall, concept scores are the highest and negation/re-entrancy are the lowest among all categories. Negation scores are lower than re-entrancy scores for AMR3.0-mbse, AMR3.0-base and AMR2.0-base, whereas for all others, re-entrancy scores are lower than negation scores. Named entity recognition (NER) does not seem to be an issue even with BioAMR, which should be attributed to the fact that the training corpus includes 5K human annotated BioAMR graphs.
## 4 Conclusion and Future Work
We presented AMR parsing with instruction fine-tuned FLAN-T5 models, the first parsing results with FLAN-T5 models to the best of our knowledge. The experimental results indicate that FLAN-T5 fine-tuned AMR parsing models significantly out-perform previous SoTA models, which were also fine-tuned with another PLM, BART-large. We also explore the parameter efficient fine-tuning technique LoRA. While LoRA fine-tuned models under-perform full fine-tuned models, LoRA tuning applied to full fine-tuned models further improves the Smatch scores of fine-tuned models across all training conditions. We push the envelope of AMR parsing, by setting new SoTA in Smatch on AMR2.0 (86.4), AMR3.0 (84.9) and BioAMR (82.3), which is even higher than the 7-model Graphene ensemble results presented in Tables 2 and 3 of (Hoang et al., 2021).
While full fine-tuning followed by LoRA fine-
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline
**Models** & **PLM** & **Silver** & **AMR2.0** & **AMR3.0** & **BioAMR** \\ \hline SPRING (Bevilacqua et al., 2021) & BART-large & - & 84.5 & 83.0 & 79.9 \\ SPRING (Bevilacqua et al., 2021) & BART-large & 200K & 84.3 & 83.0 & 59.5 \\ StructBART-vanilla (Zhou et al., 2021) & BART-large & 90K & 84.7 & 82.7 & - \\ BARTAMR (Bai et al., 2022) & BART-large & 200K & 85.4 & 84.2 & 63.2 \\ StructBART-MBSE (Lee et al., 2022) & BART-large & 219K & 85.9 & 84.3 & 81.3 \\ FLAN-T5-Large-FFT-LoRA (Ours) & FLAN-T5-Large & 219K & 86.1 & 84.7\(\ddagger\) & 82.2\(\ddagger\) \\ FLAN-T5-XL-FFT-LoRA (Ours) & FLAN-T5-XL & 219K & **86.4\(\ddagger\)** & **84.9\(\ddagger\)** & **82.3\(\ddagger\)** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of AMR parsing models for AMR2.0, AMR3.0 and BioAMR test sets. We compare the current FLAN-T5 fine-tuned models (Ours) against those BART-large fine-tuned models. Boldface indicates the best model scores. \(\ddagger\) indicates that the model is statistically significantly better than all of the previous models at p=0.05 according to randomized bootstrap statistical significance tests.
\begin{table}
\begin{tabular}{l|c c c||c c c} \hline \hline
**Training Corpora** & \multicolumn{3}{c||}{**Human Annotations**} & \multicolumn{3}{c}{**Human \& Silver Annotations**} \\ \hline
**Models** & **AMR2.0** & **AMR3.0** & **BioAMR** & **AMR2.0** & **AMR3.0** & **BioAMR** \\ \hline (Lee et al., 2022) & 84.2 & 82.3 & 79.8 & 85.9 & 84.3 & 81.3 \\ \hline FLAN-T5-Large-LoRA & 82.3 & 81.7 & 79.1 & 84.6 & 83.0 & 80.4 \\ FLAN-T5-Large-FFT & 84.6 & 83.2 & 81.0 & 85.8 & 84.6 & 82.1 \\ FLAN-T5-Large-FFT-LoRA & 84.8\(\pm\)0.1 & 83.3\(\pm\)0.0 & 81.2\(\pm\)0.1 & 86.1\(\pm\)0.0 & 84.7\(\pm\)0.1 & 82.2\(\pm\)0.1 \\ \hline FLAN-T5-XL-FFT & 84.6 & 83.4 & 80.9 & 86.1 & 84.6 & 82.3 \\ FLAN-T5-XL-FFT-LoRA & 84.8\(\pm\)0.0 & 83.9\(\pm\)0.1 & 81.6\(\pm\)0.0 & 86.4\(\pm\)0.1 & 84.9\(\pm\)0.0 & 82.3\(\pm\)0.2 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance of FLAN-T5 fine-tuned models trained on human annotations only (left) and human and silver annotations (right). FFT denotes full fine-tuning. The numbers prefixed by \(\pm\) indicate the standard deviation of Smatch scores across 2 seeds with different LoRA configurations.
tuning improves the model performances significantly compared with models trained with full fine-tuning only, we do not yet understand exactly why this should be the case; we leave this for future research.
With the advent of very powerful instruction fine-tuned language models with human feedback such as ChatGPT and GPT-4, many natural language processing tasks, including classification and detection, achieve very high zero-shot performances. Nonetheless, given the unique label vocabulary and the hidden structure present in most parsing representations, zero-shot parsing on a new parsing task does not seem easily achievable. Instruction fine-tuning on a collection of annotated natural language parsing tasks, along the lines of what has been done for dialog tasks in Gupta et al. (2022), might lead to high-performing few-shot or zero-shot learning of new parsing tasks.
|
2306.13073 | Unitary Complexity and the Uhlmann Transformation Problem | State transformation problems such as compressing quantum information or
breaking quantum commitments are fundamental quantum tasks. However, their
computational difficulty cannot easily be characterized using traditional
complexity theory, which focuses on tasks with classical inputs and outputs.
To study the complexity of such state transformation tasks, we introduce a
framework for unitary synthesis problems, including notions of reductions and
unitary complexity classes. We use this framework to study the complexity of
transforming one entangled state into another via local operations. We
formalize this as the Uhlmann Transformation Problem, an algorithmic version of
Uhlmann's theorem. Then, we prove structural results relating the complexity of
the Uhlmann Transformation Problem, polynomial space quantum computation, and
zero knowledge protocols.
The Uhlmann Transformation Problem allows us to characterize the complexity
of a variety of tasks in quantum information processing, including decoding
noisy quantum channels, breaking falsifiable quantum cryptographic assumptions,
implementing optimal prover strategies in quantum interactive proofs, and
decoding the Hawking radiation of black holes. Our framework for unitary
complexity thus provides new avenues for studying the computational complexity
of many natural quantum information processing tasks. | John Bostanci, Yuval Efron, Tony Metger, Alexander Poremba, Luowen Qian, Henry Yuen | 2023-06-22T17:46:39Z | http://arxiv.org/abs/2306.13073v2 | # Unitary Complexity and the Uhlmann Transformation Problem
###### Abstract
State transformation problems such as compressing quantum information or breaking quantum commitments are fundamental quantum tasks. However, their computational difficulty cannot easily be characterized using traditional complexity theory, which focuses on tasks with classical inputs and outputs.
To study the complexity of such state transformation tasks, we introduce a framework for _unitary synthesis problems_, including notions of reductions and unitary complexity classes. We use this framework to study the complexity of transforming one entangled state into another via local operations. We formalize this as the _Uhlmann Transformation Problem_, an algorithmic version of Uhlmann's theorem. Then, we prove structural results relating the complexity of the Uhlmann Transformation Problem, polynomial space quantum computation, and zero knowledge protocols.
The Uhlmann Transformation Problem allows us to characterize the complexity of a variety of tasks in quantum information processing, including decoding noisy quantum channels, breaking falsifiable quantum cryptographic assumptions, implementing optimal prover strategies in quantum interactive proofs, and decoding the Hawking radiation of black holes. Our framework for unitary complexity thus provides new avenues for studying the computational complexity of many natural quantum information processing tasks.
###### Contents
* 1 Introduction
* 1.1 A fully quantum complexity theory
* 1.2 Structural results about the Uhlmann Transformation Problem
* 1.3 Centrality of the Uhlmann Transformation Problem
* 1.4 Summary and future directions
* 2 Preliminaries
* 2.1 Notation
* 2.2 Partial isometries and channel completions
* 2.3 Quantum circuits
* 2.4 Quantum state complexity classes
* I Unitary Complexity Theory
* 3 Unitary Synthesis Problems and Unitary Complexity Classes
* 3.1 Unitary synthesis problems
* 3.2 Unitary complexity classes
* 3.3 Reductions
* 3.4 Discussion and open problems
* 4 Interactive Proofs for Unitary Synthesis
* 4.1 Quantum interactive protocols
* 4.2 Interactive proofs for unitary synthesis
* 4.3 Zero-knowledge protocols for state and unitary synthesis
* II Uhlmann Transformation Problem: Definitions and Structural Results
* 5 Definition of the Uhlmann Transformation Problem
* 5.1 Uhlmann's theorem and canonical isometries
* 5.2 Worst-case Uhlmann transformation problem
* 5.3 Distributional Uhlmann transformation problem
* 6 Structural Results about the Uhlmann Transformation Problem
* 6.1 Completeness for unitary zero knowledge
* 6.2 Hardness amplification
* 6.3 The padding trick
* 6.4 A polarization lemma for unitary zero knowledge?
* 7 Structural Results about the Succinct Uhlmann Transformation Problem
* 7.1 Completeness for avgUnitaryQIP
* 7.2 Completeness for avgUnitaryPSPACE
* 7.3 Completeness for worst-case unitaryPSPACE
* 7.4 Relationship between avgUnitaryPSPACE and PSPACE
## 1 Introduction
Uhlmann's theorem [14] is a fundamental result in quantum information theory that quantifies how well a bipartite pure state \(\ket{C}\) can be mapped to another bipartite pure state \(\ket{D}\) by only acting on a subsystem: letting \(\rho\) and \(\sigma\) denote the reduced density matrices on the first subsystem of \(\ket{C}\) and \(\ket{D}\), respectively, Uhlmann's theorem states that
\[\mathrm{F}(\rho,\sigma)=\max_{U}\,|\bra{D}\mathrm{id}\otimes U\ket{C}|^{2}\,, \tag{1.1}\]
where \(\mathrm{F}(\rho,\sigma)\) denotes the fidelity function and the maximization is over all unitary transformations acting on the second subsystem. We call a unitary \(U\) achieving equality in Equation (1.1) an _Uhlmann transformation_.1
Footnote 1: Such Uhlmann transformations are unique only if \(\ket{C},\ket{D}\) have full Schmidt rank.
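As a concrete illustration (ours, not from the paper), Uhlmann's theorem can be checked numerically: reshaping \(\ket{C}=\sum_{ij}(M_{C})_{ij}\ket{i}\ket{j}\) into a matrix \(M_{C}\) turns the local unitary into right multiplication, so \(\bra{D}\mathrm{id}\otimes U\ket{C}=\operatorname{Tr}(M_{D}^{\dagger}M_{C}U^{T})\), whose maximum over unitaries is the trace norm of \(A=M_{D}^{\dagger}M_{C}\), attained at an Uhlmann transformation read off from the singular value decomposition of \(A\).

```python
import numpy as np
from scipy.linalg import sqrtm

d = 4
rng = np.random.default_rng(0)

def rand_state(dim):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

# Reshape |C>, |D> into d x d matrices: |C> = sum_ij (M_C)_ij |i>|j>.
MC = rand_state(d * d).reshape(d, d)
MD = rand_state(d * d).reshape(d, d)

# Reduced density matrices on the first subsystem.
rho, sigma = MC @ MC.conj().T, MD @ MD.conj().T

# Fidelity F(rho, sigma) = (Tr |sqrt(rho) sqrt(sigma)|)^2.
F = np.sum(np.linalg.svd(sqrtm(rho) @ sqrtm(sigma), compute_uv=False)) ** 2

# Optimum over unitaries: ||A||_1, attained at U^T = V W^† for A = W S V^†.
A = MD.conj().T @ MC
W, S, Vh = np.linalg.svd(A)
U = (Vh.conj().T @ W.conj().T).T          # an Uhlmann transformation
overlap = abs(np.trace(A @ U.T)) ** 2

assert np.allclose(overlap, np.sum(S) ** 2)
assert np.allclose(F, overlap)            # Uhlmann's theorem, Eq. (1.1)
```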
Transforming entangled states via local operations is a ubiquitous task in quantum information processing. Some examples include:
**Quantum Shannon theory.**: Quantum Shannon theory is the study of the fundamental limits of quantum communication over noisy and noiseless channels. Protocols for a myriad of tasks such as state redistribution, entanglement distillation, and quantum communication over a noisy quantum channel all require performing Uhlmann transformations [15, 1, 1, 2].
**Quantum cryptography.**: While it is known that quantum commitment schemes with information-theoretic security are impossible [16, 2], they are possible under computational assumptions, and recent oracle separations suggest that their security can be based on weaker assumptions than what is needed classically (i.e., one-way functions) [17, 1, 2, 3]. It can be seen from the impossibility results of Mayers-Lo-Chau [16, 2] that the security of a quantum commitment scheme relies on the hardness of performing certain Uhlmann transformations.
**Quantum gravity.**: Attempts to unite quantum mechanics with general relativity have given rise to apparent paradoxes of whether black holes preserve information or not [14]. Recently, physicists have provided intriguing arguments based on _computational complexity_ as possible resolutions to these paradoxes [15]. These arguments claim that distilling entanglement from the emitted Hawking radiation of a black hole is computationally infeasible -- this can be equivalently phrased as a statement about the hardness of an Uhlmann transformation [15, 16].
**Quantum complexity theory.**: The \(\mathsf{QIP}=\mathsf{PSPACE}\) theorem [12] gives a characterization of the power of (single-prover) quantum interactive proofs. Kitaev and Watrous [19] showed that optimal prover strategies in these interactive proofs involve applying Uhlmann transformations at each round.
These examples motivate investigating a computational task we call the _Uhlmann Transformation Problem_ (denoted by the shorthand Uhlmann): given the classical description of quantum circuits \(C,D\) acting on \(2n\) qubits and an \(n\)-qubit quantum system (in some unknown state), apply
the Uhlmann transformation \(U\) for the state pair \(\left(\left|C\right\rangle,\left|D\right\rangle\right)\) to the given quantum system, where \(\left|C\right\rangle=C\left|0^{2n}\right\rangle\) and \(\left|D\right\rangle=D\left|0^{2n}\right\rangle\).
What is the complexity of Uhlmann? What are the implications for the complexity of the tasks mentioned in the examples above? For instance, many protocols developed in quantum Shannon theory achieve asymptotically optimal communication rates, but are not known to be computationally efficient due to the use of Uhlmann transformations for decoding. Could solving Uhlmann be _necessary_ for these protocols? What would that mean for quantum cryptography or quantum gravity? Despite the prevalence of Uhlmann transformations in quantum information processing, these questions have not been studied systematically yet.
The goal of this paper is to study these questions formally. Since Uhlmann transformations are inherently quantum operations and cannot meaningfully be phrased as decision or function problems, we need to extend the language of complexity theory to _unitary synthesis problems_, i.e. computational problems that involve implementing a unitary operation on a quantum system in an unknown state. The first main contribution of this paper is to provide a general formal framework for reasoning about unitary complexity (Part I). This involves extending many of the traditional notions of complexity theory, such as reductions, complexity classes, complete problems, etc. to the setting of unitary synthesis problems. Our second main contribution is to analyze the complexity of the Uhlmann Transformation Problem within this framework (Part II). As we will see, this will also naturally lead us to more general statements about the relationships between unitary complexity classes. Finally, we show how the Uhlmann transformation problem plays a central role in connecting the complexity of many natural tasks in quantum information processing (Part III).
### A fully quantum complexity theory
The complexity of Uhlmann transformations deals with the hardness of implementing a unitary _transformation_, where the inputs and outputs of the task are quantum states. Traditional complexity classes deal with tasks with classical inputs and outputs (e.g., solving a decision problem or computing a Boolean function). As a consequence, traditional complexity theory cannot capture the complexity of Uhlmann transformations, or more generally of implementing unitaries on unknown input states. To study the hardness of problems with quantum inputs or outputs, we need a new framework.
The idea that the complexity of _inherently quantum_ problems cannot easily be reduced to the complexity of classical problems has already been explored in prior works [11, 12, 13]. Indeed, the oracle separations mentioned above [11, 12] can be rephrased as oracle separations that demonstrate that the complexity of some quantum cryptographic problems is independent of the complexity of the decisional complexity classes NP or QMA.
Recently, Rosenthal and Yuen initiated the study of complexity classes for _state synthesis_ and _unitary synthesis_ problems [10]. A state synthesis problem is a sequence \((\rho_{x})_{x\in\{0,1\}^{*}}\) of quantum states. A _state complexity class_ is a collection of state synthesis problems that captures the computational resources needed to synthesize (i.e., generate) the states. For example, [10] defined the class statePSPACE as the set of all state sequences \((\rho_{x})_{x\in\{0,1\}^{*}}\) for which there is a polynomial-space (but possibly exponential-time) quantum algorithm \(A\) that, on input \(x\), outputs an approximation to the state \(\rho_{x}\).
_Unitary complexity classes_, which are the focus of this work, describe the computational resources
needed to perform state _transformations_. A unitary synthesis problem is a sequence of unitary2 operators \((U_{x})_{x\in\{0,1\}^{*}}\) and a unitary complexity class is a collection of unitary synthesis problems. For example the class unitaryBQP is the set of all sequences of unitary operators \((U_{x})_{x\in\{0,1\}^{*}}\) where there is a polynomial-time quantum algorithm \(A\) that, given an _instance_\(x\in\{0,1\}^{*}\) and a quantum system \(\mathsf{B}\) as input, (approximately) applies \(U_{x}\) to system \(\mathsf{B}\). As a simple example, any sequence of unitaries \((U_{x})\) where \(x\) is simply (an explicit encoding of) a sequence of quantum gates that implement the unitary is obviously in unitaryBQP, since given \(x\), the algorithm \(A\) can just execute the circuit specified by \(x\) in time polynomial in the length of \(x\). On the other hand, \(x\) could also specify a unitary in a sequence in a more implicit way (e.g. by circuits for two quantum states between which \(U_{x}\) is meant to be the Uhlmann transformation), in which case the sequence \((U_{x})_{x}\) could be harder to implement.
Footnote 2: In our formal definition of unitary synthesis problems (see Section 3), the \(U_{x}\)’s are technically partial isometries, which is a promise version of unitaries, but we gloss over the distinction for now.
The reason we say that the algorithm \(A\) is given a _system_ instead of a _state_ is to emphasize that the state of the system is not known to the algorithm ahead of time, and in fact the system may be part of a larger entangled state. Thus the algorithm has to coherently apply the transformation \(U_{x}\) to the given system, maintaining any entanglement with an external system. This makes unitary synthesis problems fundamentally different, and in many cases harder to analyse, than state synthesis problems.
Traditional complexity classes like \(\mathsf{P}\), \(\mathsf{NP}\), and \(\mathsf{BQP}\) have proven to be powerful ways of organizing and comparing the difficulty of different decision problems. In a similar way, state and unitary complexity classes are useful for studying the complexity of quantum states and of quantum state transformations. We can then ask about the existence of complete problems, reductions, inclusions, separations, closure properties, and more. Importantly, state and unitary complexity classes provide a useful language to formulate questions and conjectures about the computational hardness of inherently quantum problems. For example, we can ask whether unitaryPSPACE is contained in unitaryBQP\({}^{\mathsf{PSPACE}}\) - in other words, can polynomial-space-computable unitary transformations also be computed by a polynomial-time quantum computer that is given oracle access to a PSPACE decision oracle?3
Footnote 3: We remark that this question is open — this is related to the “Unitary Synthesis Problem” raised by Aaronson and Kuperberg [1] – whereas an analogue of this question in traditional complexity theory (e.g. whether \(\mathsf{BQPSPACE}\) is contained in \(\mathsf{BQP}^{\mathsf{PSPACE}}\)) has a positive answer [20].
Unitary synthesis problems, classes, and reductions.We begin by giving general definitions for unitary synthesis problems and a number of useful unitary complexity classes, e.g. unitaryBQP and unitaryPSPACE. We then define a notion of _reductions_ between unitary synthesis problems. Roughly speaking, we say that a unitary synthesis problem \(\mathscr{U}=(U_{x})_{x}\) polynomial-time reduces to \(\mathscr{V}=(V_{x})_{x}\) if an efficient algorithm for implementing \(\mathscr{V}\) implies an efficient algorithm for implementing \(\mathscr{U}\).
Next, we define _distributional_ unitary complexity classes that capture the _average case complexity_ of solving a unitary synthesis problem. Here, the unitary only needs to be implemented on an input state _randomly chosen_ from some distribution \(\mathcal{D}\) which is known ahead of time. The motivation behind this notion of average case complexity is that it is equivalent to implementing the unitary on one half of a larger entangled state \(|\psi\rangle\). This is natural in the context of entanglement transformation problems.
The notion of average case complexity turns out to be central to our paper: nearly all of our
results are about average-case unitary complexity classes and the average-case complexity of the Uhlmann Transformation Problem. Thus the unitary complexity classes we mainly deal with will be \(\mathsf{avgUnitaryBQP}\) and \(\mathsf{avgUnitaryPSPACE}\), which informally mean sequences of unitaries that can be implemented by time-efficient and space-efficient quantum algorithms, respectively, and where the implementation error is measured with respect to inputs drawn from a fixed distribution over quantum states. We forgo a formal definition in this introduction and refer to Section 3 for more details.
Interactive proofs for unitary synthesis.We then explore models of _interactive proofs_ for unitary synthesis problems. Roughly speaking, in an interactive proof for a unitary synthesis problem \(\mathscr{U}=(U_{x})_{x}\), a polynomial-time verifier receives an instance \(x\) and a quantum system \(\mathsf{B}\) as input, and interacts with an all-powerful but untrusted prover to try to apply \(U_{x}\) to system \(\mathsf{B}\). As usual in interactive proofs, the main challenge is that the verifier does not trust the prover, so the protocol has to test whether the prover actually behaves as intended. We formalize this with the complexity classes \(\mathsf{unitaryQIP}\) and \(\mathsf{avgUnitaryQIP}\), which capture unitary synthesis problems that can be verifiably implemented in this interactive model. This generalizes the interactive state synthesis model studied by [14, 15].4 The primary difference between the state synthesis and unitary synthesis models is that in the former, the verifier starts with a fixed input state (say, the all zeroes state), while in the latter the verifier receives a quantum system \(\mathsf{B}\) in an unknown state that has to be transformed by \(U_{x}\). We refer to Section 4 for more details.
Footnote 4: The class \(\mathsf{unitaryQIP}\) was also briefly studied by Rosenthal and Yuen in [14], but the model and class were not formally defined.
Zero-knowledge unitary synthesis.In the context of interactive protocols, we also introduce a notion of _zero-knowledge protocols_ for unitary synthesis problems. Roughly speaking, a protocol is zero-knowledge if the interaction between the verifier and prover can be efficiently reproduced by an algorithm (called the _simulator_) that does not interact with the prover at all. This way, the verifier can be thought of as having learned no additional knowledge from the interaction aside from the fact that the task was solved. This model gives rise to the unitary complexity class \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\),5 which is a unitary synthesis analogue of the decision classes \(\mathsf{QSZK}\) and \(\mathsf{SZK}\) in traditional complexity theory. Interestingly, for reasons that we explain in more detail in Section4.3, the average-case aspect of \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\) appears to be necessary to obtain a nontrivial definition of zero-knowledge in the unitary synthesis setting.
Footnote 5: The “HV” modifier signifies that the zero-knowledge property is only required to hold with respect to verifiers that honestly follow the protocol.
Just like there is a zoo of traditional complexity classes [1], we expect that many unitary complexity classes can also be meaningfully defined and explored. In this paper we focus on the ones that turn out to be tightly related to the Uhlmann Transformation Problem. We discuss these relationships next.
_Remark 1.1_.: For simplicity's sake, in the introduction we present informal statements of our results that gloss over some technical details that would otherwise complicate the result statement. For example, we do not distinguish between unitary synthesis problems and distributional versions of them, nor do we distinguish between uniform and non-uniform unitary complexity classes. After each informal result statement we point the reader to where the formal result is stated and proved.
### Structural results about the Uhlmann Transformation Problem
Equipped with the proper language to talk about unitary synthesis problems, we now turn to the Uhlmann Transformation Problem. We define the unitary synthesis problem Uhlmann to be the sequence \((U_{x})_{x\in\{0,1\}^{*}}\) where we interpret an instance \(x\) as an explicit encoding (as a list of gates) of a pair of quantum circuits \((C,D)\) such that \(C\) and \(D\), on the all-zeroes input, output pure bipartite states \(\ket{C},\ket{D}\) on the same number of qubits, and \(U_{x}\) is an associated Uhlmann transformation mapping \(\ket{C}\) to \(\ket{D}\) by acting on a local system. Usually, we will assume that \(C\) and \(D\) output \(2n\) qubits (for some \(n\) specified as part of \(x\)) and the Uhlmann transformation acts on the last \(n\) qubits. If \(x\) does not specify such a pair, then an algorithm implementing the unitary synthesis problem is allowed to behave arbitrarily on such \(x\); this is formally captured by allowing partial isometries as part of unitary synthesis problems in Definition 3.1.
Furthermore, for a parameter \(0\leq\kappa\leq 1\) we define the problem Uhlmann\({}_{\kappa}\), which is the same as Uhlmann, except that it is restricted to instances corresponding to states \(\ket{C},\ket{D}\) where the fidelity between the reduced density matrices \(\rho,\sigma\) of \(\ket{C},\ket{D}\) respectively on the first subsystem is at least \(\kappa\); recall by Uhlmann's theorem that \(\kappa\) lower bounds how much overlap \(\ket{C}\) can achieve with \(\ket{D}\) by a local transformation. By definition, Uhlmann\({}_{\kappa}\) instances are at least as hard as Uhlmann\({}_{\kappa^{\prime}}\) instances when \(\kappa\leq\kappa^{\prime}\). The formal definitions of Uhlmann, Uhlmann\({}_{\kappa}\), and their distributional versions can be found in Section 5.
Zero-knowledge and the Uhlmann Transformation Problem.We show that the Uhlmann Transformation Problem (with fidelity parameter \(\kappa=1-\epsilon\) for negligibly small \(\epsilon\) as a function of the length of the string specifying an instance \(x\)) _exactly characterizes_ the complexity of the unitary complexity class avgUnitarySZK\({}_{\text{HV}}\), which is the unitary synthesis version of \(\mathsf{QSZK}_{\text{HV}}\)[20].
**Theorem 1.2** (Informal).: _Uhlmann\({}_{1-\epsilon}\) for negligibly small \(\epsilon\) is complete for avgUnitarySZK\({}_{\text{HV}}\) under polynomial-time reductions._
This is formally stated and proved in Section 6.1. To show completeness we have to prove two directions. The first direction is to show that if one can efficiently solve Uhlmann\({}_{1-\epsilon}\) in the average case (meaning that one can approximately map the input state \(\ket{C}\) to \(\ket{D}\) by acting locally), then one can efficiently solve any other distributional unitary synthesis problem in avgUnitarySZK\({}_{\text{HV}}\). This uses a characterization of quantum interactive protocols due to Kitaev and Watrous [21].
The second direction is to show that Uhlmann\({}_{1-\epsilon}\) is in avgUnitarySZK\({}_{\text{HV}}\) by exhibiting an (honest-verifier) zero-knowledge protocol to solve the Uhlmann Transformation Problem. Our protocol is rather simple: in the average case setting, we assume that the verifier receives the last \(n\) qubits of the state \(\ket{C}=C\ket{0^{2n}}\), and the other half is inaccessible. Its goal is to transform, with the help of a prover, the global state \(\ket{C}\) to \(\ket{D}\) by only acting on the last \(n\) qubits that it received as input. To this end, the verifier generates a "test" copy of \(\ket{C}\) on its own, which it can do because \(C\) is a polynomial-size circuit. The verifier then sends to the prover two registers of \(n\) qubits; one of them is the first half of the test copy and one of them (call it A) holds the "true" input state. The two registers are randomly shuffled. The prover is supposed to apply the Uhlmann transformation \(U\) to both registers and send them back. The verifier checks whether the "test" copy of \(\ket{C}\) has been transformed to \(\ket{D}\) by applying the inverse circuit \(D^{\dagger}\) to the test copy and checking if all qubits are zero. If so, it accepts and outputs the register A, otherwise the verifier rejects.
If the prover is behaving as intended, then both the test copy and the "true" copy of \(\ket{C}\) are transformed to \(\ket{D}\). Furthermore, the prover cannot tell which of its two registers corresponds to
the test copy, and thus if it wants to pass the verification with high probability, it has to apply the correct Uhlmann transformation on both registers. This shows that the protocol satisfies the completeness and soundness properties of an interactive proof. The zero-knowledge property is also straightforward: if both the verifier and prover are acting according to the protocol, then before the verifier's first message to the prover, the reduced state of the verifier is \(|C\rangle\!\langle C|\otimes\rho\) (where \(\rho\) is the reduced density matrix of \(|C\rangle\)), and at the end of the protocol, the verifier's state is \(|D\rangle\!\langle D|\otimes U\rho U^{\dagger}\). Both states can be produced in polynomial time.
One may ask: if the simulator can efficiently compute the state \(U\rho U^{\dagger}\) without the help of the prover, does that mean the Uhlmann transformation \(U\) can be implemented in polynomial time? The answer is no, since the simulator only has to prepare the appropriate reduced state (i.e. essentially solve a state synthesis task), which is easy since the starting and ending states of the protocol are efficiently computable; in particular, \(U\rho U^{\dagger}\) is (approximately) the reduced state of \(|D\rangle\), which is easy to prepare. In contrast, the verifier has to implement the Uhlmann transformation on a _specific_ set of qubits that are entangled with a _specific_ external register, i.e. it has to perform a state transformation task that preserves coherence with the purifying register. This again highlights the distinction between state and unitary synthesis tasks.
Hardness amplification for Uhlmann.We also prove a _hardness amplification_ result for the Uhlmann Transformation Problem. The unitary complexity classes we define can be parameterized by an error parameter \(\delta\). For example, \(\mathsf{unitaryBQP}_{\delta}\) denotes the set of unitary synthesis problems where the transformation can be implemented up to error \(\delta\) on all input states; when the error parameter is not specified we assume that the transformation can be implemented with any error that is an arbitrarily small inverse polynomial. It is clear that if \(\mathscr{U}\in\mathsf{unitaryBQP}_{\delta}\), then \(\mathscr{U}\in\mathsf{unitaryBQP}_{\eta}\) for all \(\eta\geq\delta\) (i.e., problems cannot get any harder if we are allowed to incur larger error).
Although we do not expect Uhlmann to be solvable in polynomial time, we show that _if_ Uhlmann transformations can generally be efficiently implemented with large error (even with error approaching 1), then they can be efficiently implemented with arbitrarily small inverse polynomial error. Written in terms of unitary classes, we have:
**Theorem 1.3** (Informal).: _Let \(\epsilon\) be negligibly small. Then \(\textsc{Uhlmann}_{1-\epsilon}\in\mathsf{avgUnitaryBQP}\) if and only if \(\textsc{Uhlmann}_{1-\epsilon}\in\mathsf{avgUnitaryBQP}_{1-\xi}\) where \(\xi(n)=n^{-1/16}\). Here, \(n\) refers to the number of qubits of the Uhlmann instance._
This is formally stated and proved as Theorem 6.8. In other words, being able to solve Uhlmann (with the guarantee that the fidelity between the reduced states is negligibly close to 1) with very large error is no easier (up to polynomial factors) than solving it with very small error. We prove the more interesting direction (that \(\textsc{Uhlmann}\in\mathsf{avgUnitaryBQP}_{1-\xi}\) implies that \(\textsc{Uhlmann}\in\mathsf{avgUnitaryBQP}\)) as follows: given an instance \(\left(\left|C\right\rangle,\left|D\right\rangle\right)\) of Uhlmann for which it is hard to implement the corresponding Uhlmann transformation \(U\) with error \(\delta\), we show that it is hard to implement the Uhlmann transformation \(U^{\otimes k}\) for the _\(k\)-fold repetition_ \(\left(\left|C\right\rangle^{\otimes k},\left|D\right\rangle^{\otimes k}\right)\) even with error \(1-\frac{1}{\delta k^{2}}\). The proof uses ideas inspired by "quantum rewinding" techniques from quantum cryptography [20].
A natural question is whether one can prove a _strong_ hardness amplification result, where one shows that hardness of an Uhlmann transformation \(U\) implies the hardness of implementing the repeated transformation \(U^{\otimes k}\) with error \(1-\exp(-\Omega(k))\). Strong hardness amplification results are
known in classical complexity theory and cryptography [11, 12, 13]; we conjecture that an analogous result holds for the Uhlmann Transformation Problem.
**Conjecture 1.4**.: _Let \(\epsilon\) be negligibly small. Then \(\textsc{Uhlmann}_{1-\epsilon}\in\mathsf{avgUnitaryBQP}\) if and only if \(\textsc{Uhlmann}_{1-\epsilon}\in\mathsf{avgUnitaryBQP}_{1-\exp(-\Omega(n))}\)._
Our hardness amplification result for the Uhlmann Transformation Problem immediately implies hardness amplification for quantum commitments, which answers an open question of Yan [10]. We give more details when discussing the applications to quantum cryptography later in this introduction and in Section 8.
The succinct Uhlmann Transformation Problem.We also define a _succinct_ version of the Uhlmann Transformation Problem (denoted by \(\textsc{SuccinctUhlmann}\)), where the string \(x\) encodes a pair \((\hat{C},\hat{D})\) of _succinct descriptions_ of quantum circuits \(C,D\). By this we mean that \(\hat{C}\) (resp. \(\hat{D}\)) is a classical circuit that, given a number \(i\in\mathbb{N}\) written in binary, outputs the \(i\)'th gate in the quantum circuit \(C\) (resp. \(D\)). Thus the circuits \(C\), \(D\) in general can have _exponential_ depth (in the length of the instance string \(x\)) and generate states \(\ket{C},\ket{D}\) that are unlikely to be synthesizable in polynomial time. The task of synthesizing the Uhlmann transformation \(U\) that maps \(\ket{C}\) to a state with maximum overlap with \(\ket{D}\) should therefore, intuitively, be much harder than in the non-succinct version. We confirm this intuition with the following result:
**Theorem 1.5** (Informal).: _SuccinctUhlmann is complete for \(\mathsf{avgUnitaryPSPACE}\) under polynomial-time reductions._
This is formally stated and proved as Theorem 7.12. The class \(\mathsf{avgUnitaryPSPACE}\) corresponds to distributional unitary synthesis problems that can be solved using a polynomial-space (but potentially exponential-depth) quantum algorithm. The fact that \(\textsc{SuccinctUhlmann}\in\mathsf{avgUnitaryPSPACE}\) was already proved by Metger and Yuen [14], who used this to show that optimal prover strategies for quantum interactive proofs can be implemented in \(\mathsf{avgUnitaryPSPACE}\).6 Conversely, \(\mathsf{avgUnitaryPSPACE}\) reduces to \(\textsc{SuccinctUhlmann}\) because solving a distributional unitary synthesis problem \((U_{x})_{x}\) in \(\mathsf{avgUnitaryPSPACE}\) is equivalent to applying a local unitary that transforms an entangled state \(\ket{\omega_{x}}\) representing the distribution to \((\mathrm{id}\otimes U_{x})\ket{\omega_{x}}\); this is nothing but an instance of \(\textsc{SuccinctUhlmann}\). We refer to the proof of Theorem 7.12 for details.
Footnote 6: This was phrased in a different way in their paper, as \(\mathsf{avgUnitaryPSPACE}\) was not yet defined.
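To illustrate what a succinct description can look like, the following hypothetical sketch encodes an exponentially long circuit by a small classical function that returns the \(i\)'th gate on demand; the gate pattern is made up purely for illustration.

```python
# A hypothetical succinct description: a small classical function that, given
# an index i (possibly exponentially large), returns the i'th gate of the
# quantum circuit it describes.
def gate(i: int, n: int = 50):
    if i % 2 == 0:
        return ("H", i % n)                  # Hadamard on qubit i mod n
    return ("CNOT", i % n, (i + 1) % n)      # CNOT on a pair of adjacent qubits

print(gate(2**40))  # random access into a 2^40-gate circuit: ('H', 26)
```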
We show another completeness result for \(\textsc{SuccinctUhlmann}\):
**Theorem 1.6** (Informal).: _SuccinctUhlmann is complete for \(\mathsf{avgUnitaryQIP}\) under polynomial-time reductions._
This is formally stated and proved as Theorem 7.6. Here, the class \(\mathsf{avgUnitaryQIP}\) is like \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\) except there is no requirement that the protocol between the honest verifier and prover can be efficiently simulated. The proof starts similarly to the proof of the \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\)-completeness of Uhlmann, but requires additional ingredients, such as the state synthesis protocol of [13] and the density matrix exponentiation algorithm of [12]. Putting together Theorems 1.5 and 1.6 we get the following unitary complexity analogue of the \(\mathsf{QIP}=\mathsf{PSPACE}\) theorem [14] and the \(\mathsf{stateQIP}=\mathsf{statePSPACE}\) theorem [13, 14]:
**Corollary 1.7**.: \(\mathsf{avgUnitaryQIP=avgUnitaryPSPACE}\)_._
This partially answers an open question of [14, 15], who asked whether \(\mathsf{unitaryQIP=unitaryPSPACE}\) (although they did not formalize this question to the same level as we do here). We resolve this question in the average case, and leave it as an interesting open question to prove the same statement for the worst-case complexity classes \(\mathsf{unitaryPSPACE}\) and \(\mathsf{unitaryQIP}\). The proof of the equivalence of these unitary complexity classes via the Uhlmann Transformation Problem highlights the problem's usefulness and centrality.
We can also relate the traditional decision complexity class \(\mathsf{PSPACE}\) to the unitary synthesis problem \(\mathsf{SuccinctUhlmann}\) with the following theorem.
**Theorem 1.8**.: \(\mathsf{PSPACE}\subseteq\mathsf{BQP}^{\textsc{SuccinctUhlmann}}\)_._
This is formally proved as Theorem 7.15. In other words, all languages in \(\mathsf{PSPACE}\) can be decided by a quantum polynomial time algorithm that can query an oracle that solves \(\mathsf{SuccinctUhlmann}\). Since it is believed that \(\mathsf{PSPACE}\not\subseteq\mathsf{BQP}\), this gives evidence from "traditional" complexity theory that \(\mathsf{SuccinctUhlmann}\) is a very difficult unitary synthesis problem. Our proof of this relies on the random self-reducibility of \(\mathsf{PSPACE}\)-complete languages [13].
We note that it is an interesting question whether the "converse" direction holds: can \(\mathsf{SuccinctUhlmann}\) be synthesized in polynomial time given oracle access to the decision class \(\mathsf{PSPACE}\)? We conjecture that the answer is "no", and that in general a given unitary complexity class is much harder than its corresponding decision class.
### 1.3 Centrality of the Uhlmann Transformation Problem
We now relate the Uhlmann Transformation Problem to quantum information processing tasks in a variety of areas: quantum cryptography, quantum Shannon theory, and high energy physics. We show that the computational complexity of a number of these tasks is in fact _equivalent_ to the hardness of Uhlmann. For some other problems we show that they are efficiently reducible to Uhlmann or \(\mathsf{SuccinctUhlmann}\). Although some of these connections have been already observed in prior work, we believe that the framework of unitary complexity theory formalizes and clarifies the relationships between these different problems.
#### 1.3.1 Quantum cryptography applications
We begin by exploring connections between the Uhlmann Transformation Problem and several concepts in quantum cryptography.
Quantum commitments.A bit commitment scheme is a fundamental cryptographic primitive that allows two parties (called a _sender_ and _receiver_) to engage in a two-phase communication protocol with the following properties: in the first phase (the "commit phase"), the sender sends a commitment (i.e. some string) to a bit \(b\) to the receiver; the _hiding_ property of a bit commitment scheme ensures that the receiver cannot decide the value of \(b\) from this commitment string alone. In the second phase (the "reveal phase"), the sender sends another string to the receiver that allows the receiver to compute the value of \(b\); the _binding_ property of commitments ensures that the sender can only reveal the correct value of \(b\), i.e. if the sender sent a reveal string that was meant to convince the receiver it had committed to a different value of \(b\), the receiver would detect this.
Commitment schemes -- even quantum ones -- require computational assumptions [13, 14]; at least one of the hiding or binding properties must rely on computational assumptions. In classical cryptography, commitment schemes can be constructed from one-way functions [12], but recent works suggest the possibility of basing quantum commitment schemes on weaker, inherently quantum assumptions such as the existence of pseudorandom states [15, 1, 16, 17, 18, 19] or EFI pairs [1].
In an in-depth study of the properties of quantum commitment schemes, Yan [16] suggested connecting the hardness of Uhlmann transformations to the existence of quantum commitments. We formalize this connection within the unitary complexity framework and show the following:
**Theorem 1.9** (Informal).: _If \(\textsc{Uhlmann}_{1-\epsilon}\in\mathsf{avgUnitaryBQP}\) for all negligible \(\epsilon\), then quantum commitments do not exist. On the other hand, if \(\textsc{Uhlmann}_{1-\epsilon}\not\in\mathsf{avgUnitaryBQP}\) for some negligible \(\epsilon\), and furthermore hard instances of \(\textsc{Uhlmann}_{1-\epsilon}\) can be uniformly and efficiently generated, then quantum commitments with strong statistical hiding and weak computational binding exist._
Here, _strong statistical hiding_ means that no adversary (even a computationally unbounded one) can distinguish commitments to \(b=0\) from commitments to \(b=1\) with more than negligible advantage, and _weak computational binding_ means that no computationally bounded adversary can swap the committed bit with fidelity greater than \(1-1/p(\lambda)\) for some polynomial \(p(\lambda)\) in the security parameter. This theorem is formally stated and proved as Theorem 8.10.
We also show that the proof of hardness amplification for the Uhlmann Transformation Problem (Theorem 1.3) can be used to amplify the security of the binding property of quantum commitments: roughly speaking, if there is a commitment scheme where it is hard for a malicious sender to transform the \(0\)-commitment to have fidelity more than \(1-1/p(\lambda)\) with the \(1\)-commitment for some polynomial \(p(\lambda)\), then there exists another commitment scheme where it is hard for an adversary to transform the \(0\)-commitment to have more than \(\frac{1}{q(\lambda)}\) overlap with the \(1\)-commitment for all polynomials \(q(\lambda)\). This answers an open question of Yan [16], who asked whether hardness amplification for commitments is possible.
**Theorem 1.10** (Informal).: _Quantum commitments with strong statistical hiding and weak computational binding exist if and only if quantum commitments with strong statistical hiding and \(1/q(\lambda)\)-computational binding exist for all \(q(\lambda)\)._
This theorem is formally stated and proved as Theorem 8.8. Furthermore, since we can generically perform _flavor switching_ of quantum commitments (i.e. swap which security property holds statistically and which computationally) [10, 11, 19, 17], both Theorems 1.9 and 1.10 can be extended to quantum commitments with computational hiding and statistical binding.
Assuming Conjecture 1.4 about strong amplification for the Uhlmann Transformation Problem, we also obtain a stronger statement, which is that if \(\textsc{Uhlmann}_{1-\epsilon}\not\in\mathsf{avgUnitaryBQP}\) for some negligibly small \(\epsilon\) and hard instances can be uniformly and efficiently generated, then quantum commitments with strong hiding and strong binding properties exist (whereas Theorem 1.9 only guarantees commitments with weak binding). These strong commitments are in turn equivalent to a number of quantum cryptographic primitives, such as EFI pairs [1], oblivious transfer [1], (secretly-verifiable and statistically-invertible) one-way state generators [13], and secure multi-party quantum computation schemes for any classical functionality [1, 1].
Breaking one-way state generators.We consider the cryptographic notion of _one-way state generators_ introduced by Morimae and Yamakawa [14], which can be seen as a quantum analogue of a one-way function: it efficiently maps a classical key \(k\) to a quantum state \(\ket{\phi_{k}}\), but is hard to invert. We show the following relation between a natural class of one-way state generators and Uhlmann (see Theorem 8.17 for the formal statement):
**Theorem 1.11** (Informal).: _A real-valued, clean-output one-way state generator is either information-theoretically secure, or the task of cloning its output can be efficiently reduced to \(\textsc{Uhlmann}_{\kappa}\) for \(\kappa=1/\mathrm{poly}(n)\)._
Here, _real-valued_ means that the output state of the one-way state generator is represented as a real vector. The _clean-output_ property means that the one-way state generator, on input key \(k\), only outputs \(\ket{\phi_{k}}\) and no other residual state depending on \(k\). We argue in Section 8.2 that most existing constructions of one-way state generators are real-valued and clean-output.
Note that this theorem uses a regime where \(\kappa\ll 1\). This marks our first application of \(\textsc{Uhlmann}_{\kappa}\) for small \(\kappa\); most of the applications in this paper are connected to \(\textsc{Uhlmann}_{1-\epsilon}\) for a negligible function \(\epsilon(n)\). The class \(\textsc{Uhlmann}_{\kappa}\) for small \(\kappa\) is at least as hard as \(\textsc{Uhlmann}_{1-\epsilon}\), and _a priori_ it could be harder.
Breaking falsifiable quantum cryptographic assumptions.Finally, we consider the general notion of a _falsifiable quantum cryptographic assumption_, which can be seen as a quantum analogue of the notion of a falsifiable assumption considered by Naor [15] as well as Gentry and Wichs [11]. Our notion of a falsifiable quantum cryptographic assumption captures most cryptographic assumptions in both classical and quantum cryptography. The definition resembles a regular QIP protocol, albeit in a cryptographic setting by means of a security experiment between an adversary and a challenger. Our main result of this section is the following statement, which provides a generic upper bound on the complexity of breaking falsifiable quantum cryptographic assumptions:
**Theorem 1.12** (Informal).: _A quantum cryptographic assumption is either information-theoretically secure, or the task of breaking security reduces to \(\textsc{SuccinctUhlmann}\)._
This theorem is formally stated and proved as Theorem 8.20. It shows that any reasonable definition of security in quantum cryptography (which can be phrased in terms of an interactive _security game_ between an adversary and a challenger) either amounts to _information-theoretic_ security (and thus requires no computational assumptions) or (by Theorem 1.5) can be broken in avgUnitaryPSPACE.
#### 1.3.2 Quantum Shannon theory applications
Quantum Shannon theory studies the achievability and limits of quantum communication tasks. It has become a mature subject with many foundational results that characterize the optimal ways to compress quantum information, transmit quantum information over noisy channels, and transform entangled states in a distributed fashion with limited communication. For comprehensive references on quantum Shannon theory we refer the reader to [11, 12, 13].
We study the computational complexity of some fundamental tasks in quantum Shannon theory, namely noisy channel decoding and compression of quantum states.
Decodable channel problem.Consider a quantum channel \(\mathcal{N}\) that maps a register \(\mathsf{A}\) to a register \(\mathsf{B}\). Suppose that the channel \(\mathcal{N}\) is _decodable_, meaning that it is possible to information-theoretically (approximately) recover the information sent through the channel; i.e., there exists a decoding channel \(\mathcal{D}\) mapping register \(\mathsf{B}\) back to register \(\mathsf{A}\) such that \(\mathcal{D}_{\mathsf{B}\to\mathsf{A}^{\prime}}\Big{(}\mathcal{N}_{\mathsf{A} \to\mathsf{B}}(\Phi_{\mathsf{AR}})\Big{)}\approx\Phi_{\mathsf{A}^{\prime} \mathsf{R}}\), where \(\ket{\Phi}_{\mathsf{AR}}\) is the maximally entangled state. Note that the register \(\mathsf{R}\) is not touched.
Important examples of decodable channels come from coding schemes for noisy quantum channels: suppose \(\mathcal{K}\) is a noisy quantum channel that has capacity \(C\) (meaning it is possible to (asymptotically) transmit \(C\) qubits through \(\mathcal{K}\)). Let \(\mathcal{E}\) denote a channel that takes \(C\) qubits and maps it to an input to \(\mathcal{K}\). For example, we can think of \(\mathcal{E}\) as an encoder for a quantum error-correcting code. If \(\mathcal{E}\) is a good encoding map, the composite channel \(\mathcal{N}:\rho\mapsto\mathcal{K}(\mathcal{E}(\rho))\) is decodable.
We define the _Decodable Channel Problem_: given as input a circuit description of a channel \(\mathcal{N}\) that maps register \(\mathsf{A}\) to register \(\mathsf{B}\) and furthermore is promised to be decodable, and given the register \(\mathsf{B}\) of the state \((\mathcal{N}\otimes\mathrm{id})(\Phi_{\mathsf{AR}})\), decode and output a register \(\mathsf{A}^{\prime}\equiv\mathsf{A}\) such that the final joint state of \(\mathsf{A}^{\prime}\mathsf{R}\) is close to \(\ket{\Phi}\). Although it is information-theoretically possible to decode the output of \(\mathcal{N}\), it may be computationally intractable to do so. The following theorem shows that the Decodable Channel Problem is equivalent to the Uhlmann Transformation Problem in terms of computational hardness.
**Theorem 1.13** (Informal).: _The Decodable Channel Problem can be solved in polynomial-time if and only if \(\textsc{Uhlmann}\in\mathsf{avgUnitaryBQP}\)._
This theorem is formally stated and proved as Theorem 9.6; since we do not expect that \(\textsc{Uhlmann}\in\mathsf{avgUnitaryBQP}\), this suggests that the Decodable Channel Problem is hard to solve in general. The main idea behind the upper bound (Decodable Channel Problem is easy if \(\textsc{Uhlmann}\) is easy) is that a channel \(\mathcal{N}\) is decodable if and only if the output of the _complementary channel_7\(\mathcal{N}^{c}\), when given register \(\mathsf{A}\) of the maximally entangled state \(\ket{\Phi}_{\mathsf{AR}}\), is approximately unentangled with register \(\mathsf{R}\). Thus by Uhlmann's theorem there exists an Uhlmann transformation acting on the output of the channel \(\mathcal{N}\) that recovers the maximally entangled state. If \(\textsc{Uhlmann}\in\mathsf{avgUnitaryBQP}\), then this transformation can be performed efficiently.
Footnote 7: The output of the complementary channel can be thought of as the qubits that a purification (formally, a Stinespring dilation) of the channel \(\mathcal{N}\) discards to the environment.
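The decoupling criterion behind the upper bound can be checked numerically for small channels. The sketch below (a toy example, not from the source) builds a single-qubit dephasing channel from its Stinespring isometry \(V:\mathsf{A}\to\mathsf{B}\otimes\mathsf{E}\) and measures how correlated the complementary output on \(\mathsf{E}\) is with the reference \(\mathsf{R}\); at \(p=0\) the channel is decodable and the \(\mathsf{ER}\) marginal is exactly product, while at \(p=1/2\) it is not.

```python
import numpy as np

def dephasing_isometry(p):
    # Stinespring isometry V : A -> B ⊗ E of a dephasing channel (toy example):
    # V|ψ> = sqrt(1-p)|ψ>|0>_E + sqrt(p) Z|ψ>|1>_E
    Z = np.diag([1.0, -1.0])
    e0 = np.array([[1.0], [0.0]])
    e1 = np.array([[0.0], [1.0]])
    return np.sqrt(1 - p) * np.kron(np.eye(2), e0) + np.sqrt(p) * np.kron(Z, e1)

def decoupling_defect(p):
    # Send register A of the maximally entangled state |Φ>_AR through V, then
    # measure how far the (E, R) marginal is from a product state.
    V = dephasing_isometry(p).reshape(2, 2, 2)           # indices (b, e, a)
    phi = np.eye(2) / np.sqrt(2)                         # |Φ>_AR as matrix [a, r]
    psi = np.einsum('bea,ar->ber', V, phi)               # pure state on (B, E, R)
    rho_er = np.einsum('ber,bfs->erfs', psi, psi.conj()).reshape(4, 4)
    rho_e = np.einsum('ber,bfr->ef', psi, psi.conj())
    rho_r = np.einsum('ber,bes->rs', psi, psi.conj())
    diff = rho_er - np.kron(rho_e, rho_r)
    return 0.5 * np.abs(np.linalg.eigvalsh(diff)).sum()  # trace distance

print(decoupling_defect(0.0))  # 0.0 -> identity channel, decodable
print(decoupling_defect(0.5))  # 0.5 -> fully dephasing, not decodable
```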
The proof of the lower bound (Decodable Channel Problem is hard if \(\textsc{Uhlmann}\) is hard) draws inspiration from quantum commitments. As discussed earlier, the hardness of \(\textsc{Uhlmann}\) essentially implies the existence of strong statistical hiding and weak computational binding quantum commitments. From this, we can construct a hard instance of the Decodable Channel Problem: consider a channel \(\mathcal{N}\) that takes as input a single bit \(\ket{b}\), and then outputs the commitment register of the commitment to bit \(b\) (and discards the reveal register). The ability to decode this "commitment channel" implies the ability to break the hiding property of the underlying commitment scheme, and therefore decoding must be computationally hard.
Compression of quantum information.Another fundamental task in information theory - both classical and quantum - is compression of data. Shannon's source coding theorem shows that the Shannon entropy of a random variable \(X\) characterizes the rate at which many independent copies of \(X\) can be compressed [11]. Similarly, Schumacher proved that the von Neumann entropy of a density matrix \(\rho\) characterizes the rate at which many independent copies of \(\rho\) can be (coherently) compressed [10].
We consider the _one-shot_ version of the information compression task, where one is given just one copy of a density matrix \(\rho\) (rather than many copies) and the goal is to compress it to as few qubits as possible while being able to recover the original state within some error. In the one-shot setting the von Neumann entropy no longer characterizes the optimal compression of \(\rho\); instead this is given by a one-shot entropic quantity known as the _smoothed max-entropy_[10]. What is the computational effort required to perform near-optimal one-shot compression of quantum states? Our next result gives upper and lower bounds for the computational complexity of this task.
**Theorem 1.14** (Informal).: _Quantum states can be optimally compressed to their smoothed max-entropy in polynomial-time if \(\textsc{Uhlmann}_{1-\epsilon}\in\mathsf{avgUnitaryBQP}\) for some negligible \(\epsilon\). Furthermore, if stretch pseudorandom state generators exist, then optimal compression of quantum states cannot be done in polynomial time._
This theorem is formally stated and proved as Theorems 9.15 and 9.17. The upper bound (i.e., compression is easy if \(\textsc{Uhlmann}\) is easy) is proved using a powerful technique in quantum information theory known as _decoupling_[11]. The hardness result for compression is proved using a variant of _pseudorandom states_, a cryptographic primitive that is a quantum analogue of pseudorandom generators [10].
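For reference, the smoothed max-entropy mentioned above can be defined as follows (this is standard background rather than a definition restated from this section, and conventions for the smoothing ball vary in the literature):

\[H_{\max}^{\epsilon}(\rho)\;=\;\min_{\tilde{\rho}\,:\,\mathrm{F}(\tilde{\rho},\rho)\geq 1-\epsilon}\,2\log\operatorname{Tr}\sqrt{\tilde{\rho}}\,,\]

i.e. the Rényi-\(1/2\) entropy minimized over states close to \(\rho\); the optimal one-shot compression length is characterized by this quantity [10].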
#### 1.3.3 Theoretical physics applications
In recent years, quantum information has provided a new lens on long-standing questions in theoretical physics. Perhaps the most important question in this area is whether quantum mechanics and general relativity can be reconciled. Attempts to find a unified theory have led to apparent paradoxes, particularly in the context of finding a consistent quantum mechanical description of black holes [14, 15]. Recently, computational complexity has emerged as a way to explore - and possibly resolve - these paradoxes [13, 1, 15, 16]. We consider applications of the Uhlmann Transformation Problem to two computational tasks arising from this research.
Black hole radiation decoding.First, we consider the so-called Harlow-Hayden _black hole radiation decoding task_[13], which is defined as follows. We are given as input a circuit description of a tripartite state \(\ket{\psi}_{\mathsf{BHR}}\) that represents the global pure state of a single qubit (register \(\mathsf{B}\)), the interior of a black hole (register \(\mathsf{H}\)), and the Hawking radiation that has been emitted by the black hole (register \(\mathsf{R}\)). Moreover, we are promised that it is possible to _decode_ from the emitted radiation \(\mathsf{R}\) a single qubit \(\mathsf{A}\) that forms a maximally entangled state \(\ket{\mathrm{EPR}}=\frac{1}{\sqrt{2}}(\ket{00}+\ket{11})\) with register \(\mathsf{B}\). The task is to perform this decoding when given register \(\mathsf{R}\) of a system in the state \(\ket{\psi}\).
This task is motivated by a thought experiment proposed by Almheiri, Marolf, Polchinski, and Sully [1], where Alice drops half of an EPR pair into a black hole and waits for it to be radiated back out. If she can perform the radiation decoding task, then Alice could jump into the black hole with the recovered EPR pair and find the original qubit that was dumped in. However, that qubit should _also_ be maximally entangled with the EPR pair, violating the monogamy of entanglement principle. Harlow and Hayden [13] proposed a way to sidestep the paradox by arguing that the decoding task is computationally intractable assuming that \(\mathsf{SZK}\not\subseteq\mathsf{BQP}\).
While the hardness of the black hole radiation decoding task can be based on plausible hardness assumptions in traditional complexity theory, precisely characterizing the task's complexity appears to require the notions of a fully quantum complexity theory. Brakerski recently showed that this
task is equivalent to breaking the security of a quantum cryptographic primitive known as EFI pairs [1]. We reformulate this equivalence in our unitary complexity framework:
**Theorem 1.15** (Informal).: _Black-hole radiation decoding can be solved in polynomial-time if and only if \(\textsc{Uhlmann}\in\mathsf{avgUnitaryBQP}\)._
This theorem is formally stated and proved as Theorem 10.3. We prove this by showing that the black hole radiation decoding task is equivalent to the Decodable Channel Problem, which, as we explained above, is equivalent to the Uhlmann Transformation Problem.
Interference detection.Finally, we consider the complexity of detecting interference between orthogonal states. Here, we revisit recent work by Aaronson, Atia, and Susskind [1], who proved the following (sometimes called the _swapping-distinguishing equivalence_): for two orthogonal states \(|\psi\rangle\) and \(|\varphi\rangle\), if one can distinguish \((|\psi\rangle+|\varphi\rangle)/\sqrt{2}\) from \((|\psi\rangle-|\varphi\rangle)/\sqrt{2}\), then one can also _swap_ between \(|\psi\rangle\) and \(|\varphi\rangle\), and vice versa. One of the motivations for considering this problem comes from the field of quantum gravity, where it is related to the task of physically detecting interference between superpositions of spacetime geometries in the AdS/CFT correspondence [1]. The equivalence between swapping and distinguishing has also recently been used for flavor conversion of quantum bit commitment schemes, as well as to construct public-key encryption from cryptographic non-abelian group actions [14]. We give the following upper bound on the complexity of InterferenceDetection between orthogonal statePSPACE states (formally defined in Definition 10.6):
**Theorem 1.16** (Informal).: _InterferenceDetection between orthogonal statePSPACE states polynomial-time reduces to SuccinctUhlmann._
This theorem is formally stated and proved as Theorem 10.8. The proof of the theorem uses a simple circuit transformation which allows one to perform a _controlled_ Uhlmann unitary with a single call to a SuccinctUhlmann oracle--a trick that may be of independent interest.
### 1.4 Summary and future directions
Computational tasks with quantum inputs and/or outputs are ubiquitous throughout quantum computing and quantum information theory. The traditional framework of complexity theory, which is focused on computational tasks with classical inputs and outputs, cannot naturally capture the complexity of these "fully quantum" tasks.
In this paper we introduce a framework to reason about the computational complexity of unitary synthesis problems. We then use this framework to study Uhlmann's theorem through an algorithmic lens, i.e. to study the complexity of Uhlmann transformations. We prove that variants of the Uhlmann Transformation Problem are complete for some unitary complexity classes, and then explore relationships between the Uhlmann Transformation Problem and a myriad of computational tasks in quantum cryptography, quantum Shannon theory, and high energy physics.
The study of the complexity of state transformation tasks is a very new field and we hope that our formal framework of unitary complexity theory and our findings about the Uhlmann Transformation Problem lay the foundations for a rich theory of the complexity of "fully quantum" problems. A lot of questions in this direction are completely unexplored. Throughout this paper, we have included many concrete open problems, which we hope will spark future research in this new direction in complexity theory. Additionally, our work suggests some high-level, open-ended future directions to explore:
Populating the zoo.An important source of the richness of computational complexity theory is the variety of computational problems that are studied. For example, the class NP is so interesting because it contains many complete problems that are naturally studied across the sciences [12], and the theory of NP-completeness gives a unified way to relate them to each other.
Similarly, a fully quantum complexity theory should have its own zoo of problems drawn from a diverse range of areas. We have already shown that core computational problems in quantum cryptography, quantum Shannon theory, and high energy physics can be related to each other through the language of unitary complexity theory. What are other natural problems in, say, quantum error-correction, quantum metrology, quantum chemistry, or condensed matter physics, and what can we say about their computational complexity?
The crypto angle.Complexity and cryptography are intimately intertwined. Operational tasks in cryptography have motivated models and concepts that have proved indispensible in complexity theory (such as pseudorandomness and zero-knowledge proofs), and conversely complexity theory has provided a rigorous theoretical foundation to study cryptographic hardness assumptions.
We believe that there can be a similarly symbiotic relationship between quantum cryptography and a fully quantum complexity theory. Recent quantum cryptographic primitives (such as quantum pseudorandom states [10] and one-way state generators [14]) and new protocols (such as quantum copy-protection [1, 2] and certified deletion [1]) are unique to the quantum setting, and the relationships between them are barely understood. For example, an outstanding question is whether there is a meaningful _minimal hardness assumption_ in quantum cryptography, just like one-way functions are in classical cryptography.
Can a fully quantum complexity theory help answer this question about minimal quantum cryptographic assumptions, or at least provide some guidance? For example, there are many beautiful connections between one-way functions, average-case complexity, and Kolmogorov complexity [13, 14, 15]. Do analogous results hold in the fully quantum setting?
The learning theory angle.Quantum learning theory has also seen rapid development, particularly on the topic of quantum state learning [1, 2, 1, 2]. Learning quantum states or quantum processes can most naturally be formulated as tasks with quantum inputs. Traditionally these tasks have been studied in the information-theoretic setting, where sample complexity is usually the main measure of interest. However we can also study the computational difficulty of learning quantum objects. What does a complexity theory of quantum learning look like?
Traditional versus fully quantum complexity theory.While traditional complexity theory appears to have difficulty reasoning about fully quantum tasks, can we obtain _formal_ evidence that the two theories are, in a sense, independent of each other? For example, can we show that \(\mathsf{P}=\mathsf{PSPACE}\) does not imply \(\mathsf{unitaryBQP}=\mathsf{unitaryPSPACE}\)? One would likely have to show this in a _relativized_ setting, i.e., exhibit an oracle \(O\) relative to which \(\mathsf{P}^{O}=\mathsf{PSPACE}^{O}\) but \(\mathsf{unitaryBQP}^{O}\neq\mathsf{unitaryPSPACE}^{O}\). Such a result would give compelling evidence that the reasons for the hardness of unitary transformations are intrinsically different than the reasons for the hardness of a Boolean function. There are a few works that have made some initial steps towards this direction [17, 18, 19]. More generally, what are other ways of separating traditional from fully quantum complexity theory?
### 1.5 Guide for readers
Although the paper is rather long, the material is organized in a way that supports random-access reading - depending on your interests, it is not necessary to read Section \(X\) before reading Section \(X+1\). All sections depend on the basic definitions of unitary complexity theory (Section 3) and the basic definitions of the Uhlmann Transformation Problem (Section 5). From then on, it's choose-your-own-adventure. If you are interested in:
* **Structural results about the complexity of Uhlmann**. Read Sections 4, 6 and 7.
* **Quantum cryptography**. Read Section 8. It may be helpful to review the definitions of quantum interactive protocols (Section 4) and the hardness amplification result (Section 6.2).
* **Quantum Shannon theory**. Read Section 9. It may be helpful to read the section on quantum commitments (Section 8.1).
* **Quantum gravity**. Read Section 10. It may be helpful to read the section on the Decodable Channel Problem (Section 9.1).
### Acknowledgments
We thank Anurag Anshu, Lijie Chen, Andrea Coladangelo, Sam Gunn, Yunchao Liu, Joe Renes, and Renato Renner for helpful discussions. We thank Fred Dupuis for his help with understanding the decoupling results in his thesis. JB and HY are supported by AFOSR award FA9550-21-1-0040, NSF CAREER award CCF-2144219, and the Sloan Foundation. TM acknowledges support from SNSF Project Grant No. 200021_188541 and AFOSR-Grant No. FA9550-19-1-0202. AP is partially supported by AFOSR YIP (award number FA9550-16-1-0495), the Institute for Quantum Information and Matter (an NSF Physics Frontiers Center; NSF Grant PHY-1733907) and by a grant from the Simons Foundation (828076, TV). LQ is supported by DARPA under Agreement No. HR00112020023. We thank the Simons Institute for the Theory of Computing, where some of this work was conducted.
## 2 Preliminaries
### 2.1 Notation
For a bit string \(x\in\{0,1\}^{*}\), we denote by \(|x|\) its length (not its Hamming weight). When \(x\) describes an instance of a computational problem, we will often use \(n=|x|\) to denote its size.
A function \(\delta:\mathbb{N}\to[0,1]\) is an _inverse polynomial_ if \(\delta(n)\leq 1/p(n)\) for some polynomial \(p(n)\) and all sufficiently large \(n\). A function \(\epsilon:\mathbb{N}\to[0,1]\) is _negligible_ if for every polynomial \(p(n)\), for all sufficiently large \(n\) we have \(\epsilon(n)\leq 1/p(n)\).
A _register_\(\mathsf{R}\) is a named finite-dimensional complex Hilbert space. If \(\mathsf{A},\mathsf{B},\mathsf{C}\) are registers, for example, then the concatenation \(\mathsf{ABC}\) denotes the tensor product of the associated Hilbert spaces. We abbreviate the tensor product state \(|0\rangle^{\otimes n}\) as \(|0^{n}\rangle\). For a linear transformation \(L\) and register \(\mathsf{R}\), we write \(L_{\mathsf{R}}\) to indicate that \(L\) acts on \(\mathsf{R}\), and similarly we write \(\rho_{\mathsf{R}}\) to indicate that a state \(\rho\) is in the register \(\mathsf{R}\). We write \(\operatorname{Tr}(\cdot)\) to denote trace, and \(\operatorname{Tr}_{\mathsf{R}}(\cdot)\) to denote the partial trace over a register \(\mathsf{R}\).
We denote the set of linear transformations on \(\mathsf{R}\) by \(\mathrm{L}(\mathsf{R})\), and linear transformations from \(\mathsf{R}\) to another register \(\mathsf{S}\) by \(\mathrm{L}(\mathsf{R},\mathsf{S})\). We denote the set of positive semidefinite operators on a register \(\mathsf{R}\) by \(\mathrm{Pos}(\mathsf{R})\). The set of density matrices on \(\mathsf{R}\) is denoted \(\mathrm{S}(\mathsf{R})\). For a pure state \(|\varphi\rangle\), we write \(\varphi\) to denote the density matrix \(|\varphi\rangle\!\langle\varphi|\). We denote the identity transformation by \(\mathrm{id}\). For an operator \(X\in\mathrm{L}(\mathsf{R})\), we define \(\|X\|_{\infty}\) to be its operator norm, and \(\|X\|_{1}=\mathrm{Tr}(|X|)\) to denote its trace norm, where \(|X|=\sqrt{X^{\dagger}X}\). We write \(\mathrm{td}(\rho,\sigma)=\frac{1}{2}\|\rho-\sigma\|_{1}\) to denote the trace distance between two density matrices \(\rho,\sigma\), and \(\mathrm{F}(\rho,\sigma)=\|\sqrt{\rho}\sqrt{\sigma}\|_{1}^{2}\) for the fidelity between \(\rho\) and \(\sigma\).8 Throughout the paper we frequently invoke the following relationship between fidelity and trace distance:
Footnote 8: We note that in the literature there are two versions of fidelity that are commonly used; here we use the _squared_ version of it.
**Proposition 2.1** (Fuchs-van de Graaf inequalities).: _For all density matrices \(\rho,\sigma\) acting on the same space, we have that_
\[1-\sqrt{\mathrm{F}(\rho,\sigma)}\leq\mathrm{td}(\rho,\sigma)\leq\sqrt{1- \mathrm{F}(\rho,\sigma)}\,.\]
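As a quick numerical sanity check (a toy sketch, not from the source), one can verify these inequalities for random density matrices, using the squared-fidelity convention fixed above:

```python
import numpy as np
from scipy.linalg import sqrtm

def rand_density(d, rng):
    # random density matrix from a complex Ginibre matrix
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def trace_distance(rho, sigma):
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def fidelity(rho, sigma):
    # squared convention: F(ρ,σ) = ||√ρ √σ||_1²
    sv = np.linalg.svd(sqrtm(rho) @ sqrtm(sigma), compute_uv=False)
    return sv.sum() ** 2

rng = np.random.default_rng(0)
rho, sigma = rand_density(4, rng), rand_density(4, rng)
td, F = trace_distance(rho, sigma), fidelity(rho, sigma)
assert 1 - np.sqrt(F) <= td <= np.sqrt(1 - F) + 1e-9
print(f"1-√F = {1 - np.sqrt(F):.4f} ≤ td = {td:.4f} ≤ √(1-F) = {np.sqrt(1 - F):.4f}")
```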
A _quantum channel_ from register \(\mathsf{A}\) to \(\mathsf{B}\) is a completely positive trace-preserving (CPTP) map from \(\mathrm{L}(\mathsf{A})\) to \(\mathrm{L}(\mathsf{B})\). For simplicity, we often write \(\mathcal{N}:\mathsf{A}\to\mathsf{B}\) instead of \(\mathcal{N}:\mathrm{L}(\mathsf{A})\to\mathrm{L}(\mathsf{B})\) when it is clear that \(\mathcal{N}\) is a channel. We denote the set of quantum channels as \(\mathrm{CPTP}(\mathsf{A},\mathsf{B})\). We also call a channel a _superoperator_. For a channel \(\Phi\), we write \(\mathrm{supp}(\Phi)\) to denote the number of qubits it takes as input. We call a channel unitary (isometric) if it conjugates its input state with a unitary (isometry). The diamond norm of a channel \(\Phi\in\mathrm{CPTP}(\mathsf{A},\mathsf{B})\) is defined as \(\|\Phi\|_{\diamond}=\max_{\rho}\|(\Phi\otimes\mathrm{id}_{\mathsf{C}})(\rho) \|_{1}\) where the maximization is over all density matrices \(\rho\in\mathrm{S}(\mathsf{A}\otimes\mathsf{C})\) where \(\mathsf{C}\) is an arbitrary register.
Another important type of quantum operation that can be performed on a quantum state is a _measurement_. In general a quantum measurement is described by a finite set of positive semidefinite matrices \(\mathcal{M}=\{M_{i}\}_{i}\) satisfying \(\sum_{i}M_{i}=\mathrm{id}\). Performing a measurement on a state \(\rho\) results in an _output_\(i\), where each \(i\) occurs with probability \(\mathrm{Tr}[M_{i}\rho]\), and conditioned on the outcome being \(i\), the resulting state is
\[\rho|_{M_{i}}=\frac{\sqrt{M_{i}}\rho\sqrt{M_{i}}}{\mathrm{Tr}(M_{i}\rho)}\,. \tag{2.1}\]
The gentle measurement lemma is an important property about quantum measurements that connects the trace distance between a state and its post-measurement state to the probability that the measurement accepts.
**Proposition 2.2** (Gentle Measurement lemma).: _Let \(\rho\) be a density matrix and \(\Lambda\) be a positive semidefinite matrix satisfying \(\Lambda\preceq\mathrm{id}\). If \(\mathrm{Tr}[\Lambda\rho]\geq 1-\epsilon\), then \(\mathrm{F}(\rho,\rho|_{\Lambda})\geq 1-\epsilon\) and \(\|\rho-\rho|_{\Lambda}\|_{1}\leq 2\sqrt{\epsilon}\)._
A proof of this can be found in, e.g., [20, Lemma 9.4.1].
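A toy numerical check of Eq. (2.1) and Proposition 2.2 (the state \(\rho\) and the measurement operator \(\Lambda\) below are arbitrary illustrative choices):

```python
import numpy as np
from scipy.linalg import sqrtm

rho = np.array([[0.95, 0.10], [0.10, 0.05]])   # a qubit density matrix
Lam = np.diag([1.0, 0.1])                      # measurement operator, 0 ≤ Λ ≤ id

p = np.trace(Lam @ rho).real                   # acceptance probability Tr[Λρ]
post = sqrtm(Lam) @ rho @ sqrtm(Lam) / p       # post-measurement state ρ|_Λ, Eq. (2.1)

# fidelity in the squared convention F(ρ,σ) = ||√ρ√σ||_1²
F = np.linalg.svd(sqrtm(rho) @ sqrtm(post), compute_uv=False).sum() ** 2
eps = 1 - p
print(f"Tr[Λρ] = {p:.3f}, F(ρ, ρ|_Λ) = {F:.3f} ≥ 1-ε = {1 - eps:.3f}: {F >= 1 - eps}")
```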
### 2.2 Partial isometries and channel completions
Usually, operations on a quantum state can be described by a unitary matrix, an isometry (if new qubits are introduced), or more generally a quantum channel (if one allows incoherent operations such as measuring or discarding qubits). However, we will find it useful to consider operations whose action is only defined on a certain subspace; outside of this "allowed subspace" of input states, we do not want to make a statement about how the operation changes a quantum state. Such operations can be described by partial isometries.
**Definition 2.3** (Partial isometry).: _A linear map \(U\in\mathrm{L}(\mathsf{A},\mathsf{B})\) is called a partial isometry if there exists a projector \(\Pi\in\mathrm{L}(\mathsf{A})\) and an isometry \(\tilde{U}\in\mathrm{L}(\mathsf{A},\mathsf{B})\) such that \(U=\tilde{U}\Pi\). We call the image of the projector \(\Pi\) the support of the partial isometry \(U\)._
Of course in practice we cannot implement a partial isometry because it is not a trace-preserving operation: states in the orthogonal complement of the support are mapped to the \(0\)-vector. We therefore define a _channel completion_ of a partial isometry as any quantum channel that behaves like the partial isometry on its support, and can behave arbitrarily on the orthogonal complement of the support.
**Definition 2.4** (Channel completion).: _Let \(U\in\mathrm{L}(\mathsf{A},\mathsf{B})\) be a partial isometry. A channel completion of \(U\) is a quantum channel \(\Phi\in\mathrm{CPTP}(\mathsf{A},\mathsf{B})\) such that for any input state \(\rho\in\mathrm{S}(\mathsf{A})\),_
\[\Phi(\Pi\rho\Pi)=U\Pi\rho\Pi U^{\dagger}\,,\]
_where \(\Pi\in\mathrm{L}(\mathsf{A})\) is the projector onto the support of \(U\). If \(\Phi\) is a unitary or isometric channel, we also call this a unitary or isometric completion of the partial isometry._
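As a small concrete example (a sketch with hypothetical maps), take the single-qubit partial isometry supported on \(\mathrm{span}\{|0\rangle\}\) that sends \(|0\rangle\) to \(|+\rangle\); the Hadamard unitary is then a unitary completion:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Pi = np.array([[1.0, 0.0], [0.0, 0.0]])    # projector onto the support span{|0>}
U = H @ Pi                                 # partial isometry: |0> -> |+>, |1> -> 0

print(np.linalg.norm(U @ np.array([0.0, 1.0])))  # 0.0: not trace-preserving off support

# The unitary channel ρ -> HρH† is a unitary completion of U (Definition 2.4):
rho = np.array([[0.6, 0.2], [0.2, 0.4]])         # an arbitrary density matrix
lhs = H @ (Pi @ rho @ Pi) @ H.conj().T           # Φ(ΠρΠ)
rhs = U @ (Pi @ rho @ Pi) @ U.conj().T           # U ΠρΠ U†
assert np.allclose(lhs, rhs)
```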
### 2.3 Quantum circuits
For convenience we assume that all quantum circuits use gates from the universal gate set \(\{H,\mathit{CNOT},T\}\)[10, Chapter 4] (although our results hold for any universal gate set consisting of gates with algebraic entries). A _unitary quantum circuit_ is one that consists only of gates from this gate set. A _general quantum circuit_ is a quantum circuit that can additionally have non-unitary gates that (a) introduce new qubits initialized in the zero state, (b) trace them out, or (c) measure them in the standard basis. We say that a general quantum circuit uses space \(s\) if the total number of qubits involved at any time step of the computation is at most \(s\). The description of a general quantum circuit is a sequence of gates (unitary or non-unitary) along with a specification of which qubits they act on. A general quantum circuit \(C\) implements a quantum channel; we will abuse notation slightly and also use \(C\) to denote the channel. For a unitary quantum circuit \(C\) we will write \(\left|C\right\rangle\) to denote the state \(C\left|0\ldots 0\right\rangle\).
**Definition 2.5** (Polynomial size and space circuit families).: _We say that \((C_{x})_{x\in\{0,1\}^{*}}\) is a family of polynomial-size general quantum circuits if there exists a polynomial \(p\) such that \(C_{x}\) has size (i.e. number of gates) at most \(p(\left|x\right|)\). We say that \((C_{x})_{x\in\{0,1\}^{*}}\) is a family of polynomial-space general quantum circuits if there exists a polynomial \(p\) such that \(C_{x}\) uses at most \(p(\left|x\right|)\) space._
**Definition 2.6** (Uniform circuit families).: _A family of general quantum circuits \((C_{x})_{x\in\{0,1\}^{*}}\) is called time-uniform (or simply uniform) if \((C_{x})_{x\in\{0,1\}^{*}}\) is polynomial-size and there exists a classical polynomial-time Turing machine that on input \(x\) outputs the description of \(C_{x}\). Similarly, a family of general quantum circuits \((C_{x})_{x\in\{0,1\}^{*}}\) is called space-uniform if \((C_{x})_{x\in\{0,1\}^{*}}\) is polynomial-space and there exists a classical polynomial-space Turing machine that on input \((x,i)\) outputs the \(i\)'th gate of \(C_{x}\). For brevity, we also call a time-uniform (resp. space-uniform) family of quantum circuits a polynomial time (resp. polynomial space) quantum algorithm._
**Definition 2.7** (Unitary purification of a general quantum circuit).: _A unitary purification (or dilation) of a general quantum circuit \(C\) is a unitary circuit \(\tilde{C}\) formed by performing all measurements in \(C\) coherently (with the help of additional ancillas) and not tracing out any qubits._
The following proposition relates a general quantum circuit to its unitary purification; it follows directly from the definition of the unitary purification. This proposition also demonstrates that the unitary purification \(\tilde{C}\) of a general quantum circuit \(C\) is a specific _Stinespring dilation_ of the quantum channel corresponding to \(C\).
**Proposition 2.8**.: _Let \(C\) be a size-\(m\) general quantum circuit acting on \(n\) qubits, and let \(\tilde{C}\) be its unitary purification where register \(\mathsf{R}\) denote all the qubits that are traced out in the original circuit \(C\) as well as the ancilla qubits introduced for the purification. Then for all states \(\rho\),_
\[C(\rho)=\operatorname{Tr}_{\mathsf{R}}(\tilde{C}\,\rho\tilde{C}^{\dagger})\,.\]
_Furthermore, \(\tilde{C}\) acts on at most \(n+m\) qubits and has size at most \(m\)._
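Proposition 2.8 can be verified directly on a toy example (a minimal sketch, not from the source): let \(C\) measure a single qubit in the standard basis, i.e. fully dephase it; its unitary purification performs the measurement coherently via a CNOT onto a fresh ancilla, which plays the role of \(\mathsf{R}\):

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

rho = np.array([[0.7, 0.4], [0.4, 0.3]])            # input qubit state
state = np.kron(rho, np.outer([1, 0], [1, 0]))      # append ancilla R in |0><0|
out = CNOT @ state @ CNOT.T                         # apply the purification C~

out4 = out.reshape(2, 2, 2, 2)                      # indices (a, r, a', r')
reduced = np.einsum('arbr->ab', out4)               # partial trace over R
assert np.allclose(reduced, np.diag(np.diag(rho)))  # C(ρ): the dephased state
```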
### 2.4 Quantum state complexity classes
Here we present the definitions of some state complexity classes that were introduced in [10]. Intuitively, they are classes of sequences of quantum states that require certain resources to be synthesized (e.g., polynomial time or space).
**Definition 2.9** (stateBQP, statePSPACE).: _Let \(\delta:\mathbb{N}\to[0,1]\) be a function. Then \(\mathsf{stateBQP}_{\delta}\) (resp. \(\mathsf{statePSPACE}_{\delta}\)) is the class of all sequences of density matrices \((\rho_{x})_{x\in\{0,1\}^{*}}\) such that each \(\rho_{x}\) is a state on \(\operatorname{poly}(|x|)\) qubits, and there exists a time-uniform (resp. space-uniform) family of general quantum circuits \((C_{x})_{x\in\{0,1\}^{*}}\) such that for all \(x\in\{0,1\}^{*}\) with sufficiently large length \(|x|\), the circuit \(C_{x}\) takes no inputs and \(C_{x}\) outputs a density matrix \(\sigma_{x}\) such that_
\[\operatorname{td}(\sigma_{x},\rho_{x})\leq\delta(|x|)\,.\]
_We define_
\[\mathsf{stateBQP}=\bigcap_{q}\mathsf{stateBQP}_{1/q(n)}\quad\quad\text{and} \quad\quad\mathsf{statePSPACE}=\bigcap_{q}\mathsf{statePSPACE}_{1/q(n)}\]
_where the intersection is over all polynomials \(q:\mathbb{N}\to\mathbb{R}\)._
## Part I Unitary Complexity Theory
### 3 Unitary Synthesis Problems and Unitary Complexity Classes
To be able to make formal statements about the complexity of quantum tasks, we present a framework for unitary complexity theory: we define unitary synthesis problems, algorithms for implementing them, unitary complexity classes, and reductions between unitary synthesis problems.
#### 3.1 Unitary synthesis problems
In traditional complexity theory, decision problems are formalized as _languages_, which are sets of binary strings. The analogue in our framework is the following formalization of unitary synthesis problems.
**Definition 3.1** (Unitary synthesis problem).: _A unitary synthesis problem is a sequence of partial isometries \(\mathscr{U}=(U_{x})_{x\in\{0,1\}^{*}}\).9_
Footnote 9: We note that while unitary synthesis problems are not necessarily sequences of unitaries, we believe that it is a better name than “partial isometry synthesis problem”.
One should think of \(x\in\{0,1\}^{*}\) as an encoding of the particular partial isometry in the sequence \(\mathscr{U}\). The precise form of this encoding can differ between unitary synthesis problems. Some examples of encodings include: \(x\) describes a polynomial-length sequence of quantum gates; or \(x\) describes a classical circuit that, on input \(i\), outputs the \(i\)-th gate of a (potentially exponentially long) quantum circuit implementing a unitary. We call the latter a _succinct description_ of the unitary.
We note that Definition 3.1 considers partial isometries, not only unitaries (which are of course the special case of partial isometries for which the projector in Definition 2.3 is \(\Pi_{x}=\mathrm{id}\)). A partial isometry is only required to be unitary on some subspace, and does not specify any action on the orthogonal complement of the subspace. This is analogous to the idea of a "promise" on the inputs in standard complexity theory: the unitary synthesis problem includes a "promised subspace" on which all input states to that unitary are supposed to lie; if an input state has support on the orthogonal complement to this subspace, the behaviour is not specified by the unitary synthesis problem.
Examples.We present some examples of unitary synthesis problems.
1. (_Hamiltonian time evolution_) Consider some natural string encoding of pairs \((H,t)\) where \(H\) is a local Hamiltonian and \(t\) is a real number: the encoding will specify the number of qubits that \(H\) acts on as well as each local term of \(H\). If \(x\) is a valid encoding of such a pair \((H,t)\), then define \(U_{x}=e^{-iHt}\). Otherwise, define \(U_{x}=0\). Then we define \(\textsc{TimeEvolution}=(U_{x})_{x\in\{0,1\}^{*}}\). (A toy instance is sketched in code after this list.)
2. (_Decision languages_) Let \(L\subseteq\{0,1\}^{*}\) be a decision language. Define \(\textsc{UnitaryDecider}_{L}=(U_{x})_{x\in\{0,1\}^{*}}\) as follows: interpreting \(x\) as the binary representation of an integer \(n\in\mathbb{N}\), the unitary \(U_{n}\) acts on \(n+1\) qubits and for all \(y\in\{0,1\}^{n},b\in\{0,1\}\), we define \(U_{n}\left|y\right\rangle\left|b\right\rangle=\left|y\right\rangle\left|b \oplus L(y)\right\rangle\) where \(L(y)=1\) iff \(y\in L\). In other words, the unitary \(U_{n}\) coherently decides whether \(y\in L\) or not.
3. (_State preparation_) Let \((|\psi_{x}\rangle)_{x\in\{0,1\}^{*}}\) be a family of states where \(|\psi_{x}\rangle\) is on \(n_{x}\) qubits. Then the partial isometries \(U_{x}=|\psi_{x}\rangle\!\langle 0^{n_{x}}|\) form a unitary synthesis problem. In other words, these partial isometries map the zero state to \(|\psi_{x}\rangle\).
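The following is a toy concrete instance of the \(\textsc{TimeEvolution}\) problem from example 1; the Hamiltonian and time are arbitrary illustrative choices, and scipy's `expm` plays the role of an exact synthesizer at this tiny size:

```python
import numpy as np
from scipy.linalg import expm

# Toy TimeEvolution instance: a 2-qubit Hamiltonian H = X⊗X + Z⊗Z and time t;
# the target partial isometry (here a full unitary) is U = e^{-iHt}.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.kron(X, X) + np.kron(Z, Z)
t = 0.7
U = expm(-1j * H * t)
assert np.allclose(U @ U.conj().T, np.eye(4))   # U is unitary
```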
We now define what it means to _implement_ a unitary synthesis problem. Intuitively, an implementation of a unitary synthesis problem is just a sequence of (not necessarily unitary) quantum circuits that implement the corresponding partial isometries. The only subtlety is that a quantum circuit is trace-preserving on all inputs, so it cannot map states in the orthogonal complement of the support of the partial isometry to \(0\). Therefore, we require that the quantum circuit implements any channel completion of the partial isometry (Definition 2.4). This is analogous to classical promise problems, where a Turing machine deciding the promise problem is allowed to behave arbitrarily on inputs violating the promise, instead of being e.g. required to abort.
**Definition 3.2** (Worst-case implementation of unitary synthesis problems).: _Let \(\mathscr{U}=(U_{x})_{x\in\{0,1\}^{*}}\) denote a unitary synthesis problem and \(\delta:\mathbb{N}\to\mathbb{R}\) a function. Let \(C=(C_{x})_{x\in\{0,1\}^{*}}\) denote a (not necessarily uniform) family of quantum circuits, where \(C_{x}\) implements a channel whose input and output registers are the same as those of \(U_{x}\). We say that \(C\) implements \(\mathscr{U}\) with_ **worst-case error**__\(\delta\) if for all \(x\in\{0,1\}^{*}\), there exists a channel completion \(\Phi_{x}\) of \(U_{x}\) such that_
\[\Big{\|}C_{x}-\Phi_{x}\Big{\|}_{\diamond}\leq\delta(|x|)\,,\]
_where \(\|\cdot\|_{\diamond}\) denotes the diamond norm._
Recall that a small diamond distance between two channels means that the channels are difficult to distinguish even if the channels are applied to an entangled state.
_Remark 3.3_.: For a unitary synthesis problem \(\mathscr{U}=(U_{x})_{x\in\{0,1\}^{*}}\) we call \(x\) an _instance_, and \(U_{x}\) the transformation of \(\mathscr{U}\) corresponding to instance \(x\). We call the register that \(U_{x}\) or its implementation \(C_{x}\) acts on the _quantum input_ to the unitary synthesis problem.
We also define a notion of _distributional (or average-case) unitary synthesis problems._ Here, in addition to a partial isometry, we also specify a state and a register of this state on which the partial isometry is going to act; note, however, that this is very different from a state synthesis problem, as we discuss in Remark 3.12. We first give the formal definition and then explain why this is a reasonable notion of a distributional unitary synthesis problem.
**Definition 3.4** (Distributional unitary synthesis problem).: _We say that a pair \((\mathscr{U},\Psi)\) is a distributional unitary synthesis problem if \(\mathscr{U}=(U_{x})_{x}\) is a unitary synthesis problem with \(U_{x}\in\mathrm{L}(\mathsf{A}_{x},\mathsf{B}_{x})\) for some registers \(\mathsf{A}_{x},\mathsf{B}_{x}\), and \(\Psi=(|\psi_{x}\rangle)_{x}\) is a family of bipartite pure states on registers \(\mathsf{A}_{x}\mathsf{R}_{x}\). We call \(|\psi_{x}\rangle\) the distribution state with target register \(\mathsf{A}_{x}\) and ancilla register \(\mathsf{R}_{x}\)._
**Definition 3.5** (Average-case implementation of distributional unitary synthesis problems).: _Let \((\mathscr{U},\Psi)\) denote a distributional unitary synthesis problem, where \(\mathscr{U}=(U_{x})_{x}\) and \(\Psi=(|\psi_{x}\rangle)_{x}\), and let \(\delta:\mathbb{N}\to\mathbb{R}\) be a function. Let \(C=(C_{x})_{x}\) denote a family of quantum circuits, where \(C_{x}\) implements a channel whose input and output registers are the same as those of \(U_{x}\). We say that \(C\) implements \((\mathscr{U},\Psi)\) with_ **average-case error**__\(\delta\) if for all \(x\in\{0,1\}^{*}\), there exists a channel completion \(\Phi_{x}\) of \(U_{x}\) such that_
\[\mathrm{td}\Big{(}(C_{x}\otimes\mathrm{id})(\psi_{x}),\,(\Phi_{x}\otimes \mathrm{id})(\psi_{x})\Big{)}\leq\delta(|x|)\,,\]
_where the identity channel acts on the ancilla register of \(|\psi_{x}\rangle\)._
The term "distributional" may seem a bit odd at first; for example, where is the distribution in Definition 3.4? In classical average-case complexity theory, a distributional problem is one where the inputs are sampled from some probability distribution \(\mathcal{D}\). The state family \(\Psi=(|\psi_{x}\rangle)_{x}\) in a distributional unitary synthesis problem \((\mathscr{U},\Psi)\) can be viewed as a _purification_ of a distribution over pure states: by the Schmidt decomposition, we can always write
\[|\psi_{x}\rangle=\sum_{j}\sqrt{p_{x,j}}\,|\phi_{x,j}\rangle\otimes|j\rangle \tag{3.1}\]
for orthonormal states \(\{|\phi_{x,j}\rangle\}_{j}\) on \(\mathsf{A}_{x}\) and \(\{|j\rangle\}_{j}\) on \(\mathsf{R}_{x}\). The Schmidt coefficients \(\{p_{x,j}\}_{j}\) form a probability distribution \(\mathcal{D}_{x}\), so \(|\psi_{x}\rangle\) can be viewed as the purification of the distribution \(\mathcal{D}_{x}\) over pure states \(\{|\phi_{x,j}\rangle\}_{j}\). The condition of \(C\) implementing \((\mathscr{U},\Psi)\) with average-case error \(\delta\) is equivalent to the following: for all \(x\in\{0,1\}^{*}\) there exists a channel completion \(\Phi_{x}\) of \(U_{x}\) such that
\[\operatorname*{\mathbb{E}}_{j\sim\mathcal{D}_{x}}\operatorname{td}(C_{x}( \phi_{x,j}),\,\Phi_{x}(\phi_{x,j}))\leq\delta(|x|)\,. \tag{3.2}\]
Conversely, any distribution over pure states can be purified into a state of the form Equation (3.1), so the condition in Equation (3.2) is equivalent to Definition 3.5. We will find it more convenient to simply specify (for each \(x\)) one pure state \(|\psi_{x}\rangle_{\mathsf{A}_{x}\mathsf{R}_{x}}\) instead of a set of pure states on \(\mathsf{A}_{x}\) and a distribution over them.
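This equivalence is easy to compute with in small dimensions: an SVD of the amplitude matrix of \(|\psi_{x}\rangle\) yields exactly the Schmidt decomposition of Equation (3.1) (a toy sketch with a random state):

```python
import numpy as np

rng = np.random.default_rng(1)
psi = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))  # amplitudes [a, r]
psi /= np.linalg.norm(psi)                                    # normalize |ψ> on A⊗R

phi, s, _ = np.linalg.svd(psi, full_matrices=False)
probs = s**2                           # Schmidt coefficients: the distribution D
assert np.isclose(probs.sum(), 1.0)
print("distribution over pure states:", np.round(probs, 4))
# The columns of `phi` are the orthonormal states |φ_j> on the target register.
```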
One might also wonder about specifying a distribution over the strings \(x\) that label the partial isometries \(U_{x}\). However, this can be "folded" into the state distribution by considering a larger unitary \(U_{n}\) that takes as input \(|x\rangle\otimes|\psi\rangle\).
_Remark 3.6_.: Comparing Definition 3.2 and Definition 3.5, we see that we can also define the worst-case error in terms of the average-case error: a circuit family \(C=(C_{x})_{x\in\{0,1\}^{*}}\) implements a unitary synthesis problem \(\mathscr{U}=(U_{x})_{x\in\{0,1\}^{*}}\) with worst case error \(\delta\) if and only if it implements the distributional unitary synthesis problem \((\mathscr{U},\Psi)\) with average-case error \(\delta\) for all state sequences \(\Psi=(|\psi_{x}\rangle)_{x}\).
#### 3.2 Unitary complexity classes
A _unitary complexity class_ is a collection of unitary synthesis problems. We introduce some natural unitary complexity classes. First we define the unitary synthesis analogues of \(\mathsf{BQP}\) and \(\mathsf{PSPACE}\), respectively.
**Definition 3.7** (unitary\(\mathsf{BQP}\), unitary\(\mathsf{PSPACE}\)).: _Let \(\delta:\mathbb{N}\to\mathbb{R}\) be a function. Define the unitary complexity class \(\mathsf{unitaryBQP}_{\delta}\) (resp. \(\mathsf{unitaryPSPACE}_{\delta}\)) to be the set of unitary synthesis problems \(\mathscr{U}=(U_{x})_{x}\) for which there exists a uniform polynomial-time (resp. polynomial-space) quantum algorithm \(C\) that implements \(\mathscr{U}\) with worst-case error \(\delta\). We define \(\mathsf{unitaryBQP}\) (resp. \(\mathsf{unitaryPSPACE}\)) to be the intersection of \(\mathsf{unitaryBQP}_{1/q(n)}\) (resp. \(\mathsf{unitaryPSPACE}_{1/q(n)}\)) over all polynomials \(q(n)\)._
A natural question about unitary complexity classes such as \(\mathsf{unitaryBQP}_{\delta}\) and \(\mathsf{unitaryPSPACE}_{\delta}\) is whether the error \(\delta\) can be generically reduced, in analogy to how the completeness/soundness errors can be generically reduced in randomized complexity classes like \(\mathsf{BPP}\) or \(\mathsf{BQP}\). In particular, is it true that \(\mathsf{unitaryBQP}_{1/3}\) is the same as \(\mathsf{unitaryBQP}_{n^{-1}}\) or even \(\mathsf{unitaryBQP}_{\exp(-n)}\)? We first present a simple argument for why error reduction for unitary synthesis classes is not possible in general.
**Proposition 3.8** (Impossibility of error reduction for unitary synthesis problems).: _Let \(\alpha,\beta\) be such that \(0<\alpha<\beta<1\) and \(\beta>2\sqrt{3\alpha}\). Then \(\mathsf{unitaryBQP}_{\alpha}\neq\mathsf{unitaryBQP}_{\beta}\)._
Proof.: Define \(\mathscr{U}=(U_{x})_{x\in\{0,1\}^{*}}\) as follows. If \(x\) is the description of a Turing machine that halts on the empty input, then \(U_{x}\) is the single-qubit unitary \(\begin{pmatrix}\sqrt{1-3\alpha}&-\sqrt{3\alpha}\\ \sqrt{3\alpha}&\sqrt{1-3\alpha}\end{pmatrix}\). Otherwise, \(U_{x}\) is the identity matrix on a single qubit. It is clear that \(\mathscr{U}\in\mathsf{unitaryBQP}_{\beta}\): this is because in the case that \(x\) represents a halting Turing machine, the identity matrix approximates \(U_{x}\) in diamond norm with error \(2\sqrt{3\alpha}<\beta\).
On the other hand, \(\mathscr{U}\notin\mathsf{unitaryBQP}_{\alpha}\). Suppose for contradiction there was a uniform quantum algorithm \(C=(C_{x})_{x}\) that implements \(\mathscr{U}\) with worst-case error \(\alpha\). Then we can use \(C\) to decide the Halting Problem as follows. Given an input \(x\), repeatedly run the circuit \(C_{x}\) on \(|0\rangle\), and then measure in the standard basis. Since \(C_{x}\) implements \(U_{x}\) with worst-case error \(\alpha\), this means that if \(x\) represents a halting Turing machine, then each trial results in \(|1\rangle\) with probability at least \(3\alpha-\alpha=2\alpha\), and if \(x\) represents a non-halting Turing machine, then each trial results in \(|1\rangle\) with probability at most \(\alpha\). Since \(\alpha\) is constant, after a constant number of trials one can distinguish with high confidence whether \(x\) represents a halting Turing machine or not. This implies that the Halting Problem can be decided by a quantum algorithm in polynomial time, which is a contradiction.
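As a quick numerical sanity check of the probability gap used in this argument, the following snippet (an illustrative sketch we add here, not part of the construction; the choice \(\alpha=0.05\) is arbitrary) verifies the ideal acceptance statistics.

```python
import numpy as np

alpha = 0.05  # any constant 0 < alpha < 1/3, so that 3*alpha < 1
c, s = np.sqrt(1 - 3 * alpha), np.sqrt(3 * alpha)
U = np.array([[c, -s],
              [s,  c]])  # U_x when x encodes a halting Turing machine

# Ideal probability of measuring |1> after applying U_x to |0>:
p1_ideal = abs((U @ np.array([1.0, 0.0]))[1]) ** 2
assert np.isclose(p1_ideal, 3 * alpha)

# A worst-case-error-alpha implementation shifts any outcome probability
# by at most alpha, so the two cases are separated by a constant gap:
p1_halting_min = p1_ideal - alpha  # >= 2*alpha if x halts
p1_nonhalting_max = 0.0 + alpha    # <= alpha  if x does not halt
print(p1_halting_min, p1_nonhalting_max)
```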
_Remark 3.9_.: It is interesting that a simple argument can prove separations between unitary complexity classes, whereas in contrast it is much harder to prove analogous separations between traditional complexity classes. For example, it remains unknown whether \(\mathsf{BPP}\neq\mathsf{BQP}\). However we also point out that this has nothing to do with the fact that we're dealing with quantum complexity classes; one could also prove similar separations between _classical sampling complexity classes_ (see, e.g., [1]).
Next we define classes of distributional unitary synthesis problems, the unitary complexity analogues of classical average case complexity classes.
**Definition 3.10** (\(\mathsf{avgUnitaryBQP}\), \(\mathsf{avgUnitaryPSPACE}\)).: _Let \(\delta:\mathbb{N}\to\mathbb{R}\) be a function. Define the unitary complexity class \(\mathsf{avgUnitaryBQP}_{\delta}\) (resp. \(\mathsf{avgUnitaryPSPACE}_{\delta}\)) to be the set of distributional unitary synthesis problems \(\Big{(}\mathscr{U}=(U_{x})_{x},\Psi=(|\psi_{x}\rangle)_{x}\Big{)}\) where \(\Psi\in\mathsf{stateBQP}\) (resp. \(\Psi\in\mathsf{statePSPACE}\)) and there exists a uniform polynomial-time (resp. polynomial-space) quantum algorithm \(C\) that implements \((\mathscr{U},\Psi)\) with average-case error \(\delta\). We define \(\mathsf{avgUnitaryBQP}\) (resp. \(\mathsf{avgUnitaryPSPACE}\)) to be the intersection of \(\mathsf{avgUnitaryBQP}_{1/q(n)}\) (resp. \(\mathsf{avgUnitaryPSPACE}_{1/q(n)}\)) over all polynomials \(q(n)\)._
_Remark 3.11_.: In our definition of \(\mathsf{avgUnitaryBQP}\) and \(\mathsf{avgUnitaryPSPACE}\), we require that the state sequence with respect to which the average case unitary synthesis problem is defined be in the corresponding state complexity class (i.e. \(\mathsf{stateBQP}\) and \(\mathsf{statePSPACE}\), respectively). We will follow this general pattern throughout the paper: whenever we define an average case unitary complexity class, we will require that the state sequence is in the corresponding state class (see e.g. Definition 4.2). This is in analogy to classical average case complexity classes, where it is common to require that the distribution over which the problem is defined can be sampled from with reasonable complexity. As we will see e.g. in Theorem 7.12, this assumption will be necessary to prove several natural results about average unitary complexity.
_Remark 3.12_.: Since an average-case unitary synthesis problem specifies both an input state \(\ket{\psi_{x}}\) and a unitary \(U_{x}\) to be applied on that state, it may seem like this is just a complicated way of stating the state synthesis problem for the state \(U_{x}\ket{\psi_{x}}\). This, however, is not the case: the state \(\ket{\psi_{x}}\) is defined on register \(\mathsf{A}_{x}\mathsf{R}_{x}\), but the unitary \(U_{x}\) is only allowed to act on \(\mathsf{A}_{x}\). Therefore, if we imagine that we hand a unitary synthesis problem to some black box to implement, we should imagine that we provide as input the string \(x\) as well as register \(\mathsf{A}_{x}\) of \(\ket{\psi_{x}}\). With sufficient computational resources, the black box can of course synthesise many more copies of \(\ket{\psi_{x}}\) because the state is in the corresponding state complexity class. However, to solve the unitary synthesis problem, it has to apply the unitary on register \(\mathsf{A}_{x}\) of the state that we provided as input, not any other copy of \(\ket{\psi_{x}}\) it created itself. This is because otherwise the output state would not be entangled with our register \(\mathsf{R}_{x}\) (which the black box does not have access to) in the correct way. This has the important consequence that in contrast to state synthesis problems, in average unitary synthesis problems the black box cannot re-run the synthesis algorithm many times and post-select on success, as it is only provided with a single copy of a register of the input state \(\ket{\psi_{x}}\). We therefore see that solving an average-case unitary synthesis problem (\(\mathscr{U}=(U_{x})_{x},\Psi=(\ket{\psi_{x}})_{x}\)) is potentially much harder than the state synthesis problem for the sequence \(\Psi^{\prime}\coloneqq(U_{x}\ket{\psi_{x}})_{x}\). Conversely, if we can show that \((\mathscr{U},\Psi)\) is in some average unitary complexity class, it immediately follows that the state synthesis problem \(\Psi^{\prime}\) is in the corresponding state complexity class.
Non-uniform unitary synthesis classes. The classes unitaryBQP and unitaryPSPACE are _uniform_ complexity classes in the sense that a unitary synthesis problem \(\mathscr{U}=(U_{x})_{x}\) in either class must be implemented by a _uniform_ quantum algorithm, i.e. a collection of circuits \(C=(C_{x})_{x}\) that are uniformly generated by a single (classical) Turing machine.
However one can also consider _nonuniform_ variants of these classes, where the circuits \(C_{x}\) are not required to be uniformly generated by a Turing machine. These are analogous to nonuniform complexity classes like \(\mathsf{P/poly}\) in classical complexity theory, but there is one key difference: the implementation algorithm can have a different circuit for each instance \(x\), whereas the definition of \(\mathsf{P/poly}\) only allows the circuit to depend on the _input length_. If the circuits in the definition of \(\mathsf{P/poly}\) could depend on the instance, then all languages would trivially be in \(\mathsf{P/poly}\): the circuit could just output \(1\) or \(0\) depending on whether the instance were in the language.
As we will see in Section 8, this notion of nonuniformity allows us to establish a tight connection between the (non-uniform) complexity of unitary synthesis problems and the hardness of breaking various quantum cryptographic primitives.
**Definition 3.13** (unitaryBQP/poly).: _Let \(\delta:\mathbb{N}\to\mathbb{R}\) be a function. Define the unitary complexity class \(\mathsf{unitaryBQP/poly}_{\delta}\) to be the set of unitary synthesis problems \(\mathscr{U}=(U_{x})_{x}\) for which there exists a non-uniform polynomial-size family of quantum algorithms \(C_{x}\) that implements \(\mathscr{U}\) with worst-case error \(\delta\). We define \(\mathsf{unitaryBQP/poly}\) to be the intersection of \(\mathsf{unitaryBQP/poly}_{1/q(n)}\) for all polynomials \(q(n)\)._
We also define a nonuniform variant of avgUnitaryBQP.
**Definition 3.14** (avgUnitaryBQP/poly).: _Let \(\delta:\mathbb{N}\to\mathbb{R}\) be a function. Define the unitary complexity class \(\mathsf{avgUnitaryBQP/poly}_{\delta}\) to be the set of distributional unitary synthesis problems \(\left(\mathscr{U}=(U_{x})_{x},\Psi=(\ket{\psi_{x}})_{x}\right)\) where \(\Psi\in\mathsf{stateBQP}\) and there exists a non-uniform polynomial-size family of quantum circuits \((C_{x})_{x}\) that implements \((\mathscr{U},\Psi)\) with average-case error \(\delta\). We define \(\mathsf{avgUnitaryBQP/poly}\) to be the intersection of \(\mathsf{avgUnitaryBQP/poly}_{1/q(n)}\) for all polynomials \(q(n)\)._
One can also define non-uniform versions of these classes with _quantum_ advice, e.g., unitaryBQP/qpoly, but we leave that for future work.
### Reductions
Notions of reductions are crucial in complexity theory and theoretical computer science. We introduce a basic notion of reduction that allows one to relate one unitary synthesis problem to another. First, we formalize the notion of circuits that can make queries to a unitary synthesis oracle. Intuitively, a quantum circuit with access to a unitary synthesis oracle is just like a normal quantum circuit, except that it can apply some set of partial isometries (or more precisely arbitrary channel completions of partial isometries) in a single computational step by using the unitary synthesis oracle.
**Definition 3.15** (Quantum query circuits).: _A quantum query circuit \(C^{*}\) specifies a sequence of gates like those in a general quantum circuit (defined in Section 2.3), except it may also include special "oracle gates". An oracle gate is specified by a label \(y\in\{0,1\}^{*}\); its action on its input qubits will be specified separately, i.e. a quantum query circuit is not actually a quantum circuit, but rather a template for a quantum circuit._
Figure 1 depicts an example of a quantum query circuit.
**Definition 3.16** (Instantiations of quantum query circuits).: _An instantiation of a quantum query circuit \(C^{*}\) with a unitary synthesis problem \(\mathscr{U}=(U_{x})_{x}\), denoted \(C^{\mathscr{U}}\), is a quantum channel obtained from \(C^{*}\) by replacing all the oracle gates with label \(y\) by some channel completion of \(U_{y}\) (which can be different each time \(U_{y}\) is called). Whenever we write \(C^{\mathscr{U}}\), we implicitly require that \(\mathscr{U}\) is such that the input and output registers of \(U_{y}\) match the input and output registers of any oracle gate with label \(y\) in \(C^{*}\)._
**Definition 3.17** (Uniformity of quantum query circuits).: _We say that a family \((C^{*}_{x})_{x}\) of quantum query circuits is time-uniform (resp. space-uniform) if there exists a classical polynomial time (resp. polynomial space) Turing machine that on input \(x\) outputs a description of \(C^{*}_{x}\) and furthermore all labels \(y\) in an oracle gate in \(C^{*}_{x}\) satisfy \(|y|=\mathrm{poly}(|x|)\). For brevity, we also call a time-uniform (resp. space-uniform) family of quantum query circuits a polynomial time (resp. polynomial space) quantum query algorithm. If \(C^{*}=(C^{*}_{x})_{x}\) is a quantum query algorithm, then we write \(C^{\mathscr{V}}\) to denote a family of instantiations \((C^{\mathscr{V}}_{x})_{x}\). Just like for individual query circuits, for families of query circuits we call \(C^{\mathscr{V}}\) an instantiation of \(C^{*}\)._
Figure 1: An example of a quantum query circuit that calls members of a unitary synthesis problem \(\mathscr{U}\); the subscripts \(x_{1},x_{2}\) denote instances that are hardcoded in the query circuit.
We note that our definition of quantum query circuit has the classical instances \(y\) "hardcoded" into the description of the circuit. In particular, the query circuit cannot choose which oracles it queries depending on its quantum input.10 To accommodate situations when the oracle circuit may want to query different oracles \(\mathscr{U}=(U_{x})_{x}\) (perhaps even in superposition), one can define a "controlled oracle" \(\tilde{U}_{n}=\sum_{x:|x|=n}|x\rangle\!\langle x|\otimes U_{x}\). In other words, \(\tilde{U}_{n}\) applies the oracle \(U_{x}\) conditioned on some \(n\)-qubit register being in the state \(|x\rangle\). A quantum query circuit with access to this controlled oracle can then apply different \(U_{x}\) coherently depending on its quantum input, i.e. the controlled oracle gives a query circuit more power than the uncontrolled one.
Footnote 10: Of course, for a family of query circuits \((C_{x}^{*})\), the labels \(y\) used by \(C_{x}^{*}\) can depend on the index \(x\); the point here is that a given \(C_{x}^{*}\) cannot compute the labels \(y\) as a function of the quantum input it is given.
We also note that the instantiation \(C^{\mathscr{V}}\) is not unique because the oracle gates can implement any channel completion of the partial isometries \(V_{x}\in\mathscr{V}\). Whenever we say that a statement holds for \(C^{\mathscr{V}}\), we mean that it holds for all possible instantiations, i.e. for all possible choices of channel completions.
Using quantum query circuits, we can define reductions between unitary synthesis problems.
**Definition 3.18** (Reductions between unitary synthesis problems).: _Let \(\mathscr{U}=(U_{x})_{x}\) and \(\mathscr{V}=(V_{x})_{x}\) denote unitary synthesis problems. Then \(\mathscr{U}\) (polynomial-time) reduces to \(\mathscr{V}\) if for all polynomials \(q(n)\) there exists a polynomial-time quantum query algorithm \(C^{*}\) such that all instantiations \(C^{\mathscr{V}}\) of \(C^{*}\) implement \(\mathscr{U}\) with worst-case error \(1/q(|x|)\)._
Just like one can define oracle complexity classes like \(\mathsf{P}^{3\mathrm{SAT}}\) (i.e., polynomial-time computation with oracle access to a 3SAT oracle), we can now also define oracle complexity classes for unitary synthesis problems:
**Definition 3.19** (Oracle unitary complexity classes).: _We define the oracle class \(\mathsf{unitaryBQP}^{\mathscr{V}}\) to be the set of all unitary synthesis problems that are polynomial-time reducible to a unitary synthesis problem \(\mathscr{V}\)._
We can also define reductions between distributional unitary synthesis problems, analogously to how reductions between distributional problems are defined in classical average case complexity.
First, we need to define what it means for a query circuit to be instantiated with an average-case implementation of an oracle.
**Definition 3.20** (Average-case instantiation of a query circuit).: _Let \((\mathscr{U}=(U_{x})_{x},\Psi=(|\psi_{x}\rangle)_{x})\) denote a distributional unitary synthesis problem. Let \(\epsilon(n)\) be a function and let \(C^{*}\) denote a quantum query circuit that queries \(U_{x_{1}},U_{x_{2}},\ldots,U_{x_{m}}\). An \(\epsilon\)-error average-case instantiation of \(C^{*}\) with \((\mathscr{U},\Psi)\), denoted by \(C^{(\mathscr{U},\Psi)}\), is a quantum channel obtained from \(C^{*}\) by replacing all the oracle gates with label \(x\) by some quantum algorithm (which can be different each time \(U_{x}\) is called) that implements \(U_{x}\) on the distribution \(\psi_{x}\) with average-case error \(\epsilon(|x|)\)._
_Furthermore, whenever we write \(C^{(\mathscr{U},\Psi)}\), we implicitly require that \(\mathscr{U}\) is such that the input and output registers of \(U_{x}\) match the input and output registers of any oracle gate with label \(x\) in \(C^{*}\)._
We note that the error \(\epsilon\) in an "\(\epsilon\)-error average-case instantiation" only refers to the error with which the oracle gates are implemented, not the error of the output of the overall quantum query circuit. The latter will of course depend on \(\epsilon\), but also on other factors, e.g. how many oracle queries are made and how sensitive the overall output is to errors in the oracle implementation.
We now define reductions between distributional problems.
**Definition 3.21** (Reductions between distributional problems).: _Let \(\left(\mathscr{U}=(U_{x})_{x},\Psi=(|\psi_{x}\rangle)_{x}\right)\) and \(\left(\mathscr{V}=(V_{x})_{x},\Omega=(|\omega_{x}\rangle)_{x}\right)\) denote distributional unitary synthesis problems. Then \((\mathscr{U},\Psi)\) (polynomial-time) reduces to \((\mathscr{V},\Omega)\) if for all polynomials \(q(n)\) there exists a polynomial-time quantum query algorithm \(C^{*}\) and a polynomial \(r(n)\) such that all \(1/r(n)\)-error average-case instantiations \(C^{(\mathscr{V},\Omega)}\) implement \((\mathscr{U},\Psi)\) with average-case error \(1/q(n)\)._
Next we aim to define the oracle class \(\mathsf{avgUnitaryBQP}^{(\mathscr{V},\Omega)}\). For this, we will have to specify a state complexity class which the distributional states are required to be from. For \(\mathsf{avgUnitaryBQP}\), we required that the distributional states be from \(\mathsf{stateBQP}\). However, if we give the \(\mathsf{avgUnitaryBQP}\) oracle access to \((\mathscr{V},\Omega)\), it is natural to allow the same oracle access for the preparation of the distributional states, too. Therefore, we have to specify a notion of an "oracle state complexity class", which we will naturally denote by \(\mathsf{stateBQP}^{(\mathscr{V},\Omega)}\). Similar definitions can be made for other state classes in addition to \(\mathsf{stateBQP}\).
**Definition 3.22** (Oracle state complexity classes).: _Let \(\left(\mathscr{V}=(V_{x})_{x},\Omega=(|\omega_{x}\rangle)_{x}\right)\) be a distributional unitary synthesis problem. We define the oracle state complexity class \(\mathsf{stateBQP}^{(\mathscr{V},\Omega)}\) to be the set of state families \(\Psi=(|\psi_{x}\rangle)_{x}\) where for all polynomials \(q(n)\) there exists a polynomial-time quantum query algorithm \(C^{*}=(C^{*}_{x})_{x}\) and a polynomial \(r(n)\) such that for all \(x\), all \(1/r(n)\)-error average-case instantiations \(C^{(\mathscr{V},\Omega)}_{x}\) on the all zeroes input output a state that is \(1/q(n)\)-close to \(|\psi_{x}\rangle\)._
In other words, a state family \(\Psi=(|\psi_{x}\rangle)_{x}\) is in \(\mathsf{stateBQP}^{(\mathscr{V},\Omega)}\) if it can be synthesized by polynomial-sized circuits that also have the ability to query algorithms that solve \(\mathscr{V}\) in the average case. We now define the oracle class \(\mathsf{avgUnitaryBQP}^{(\mathscr{V},\Omega)}\):
**Definition 3.23** (Average-case oracle unitary complexity classes).: _We define the oracle class \(\mathsf{avgUnitaryBQP}^{(\mathscr{V},\Omega)}\) to be the set of all distributional problems \((\mathscr{U},\Psi)\) that are polynomial-time reducible to the distributional unitary synthesis problem \((\mathscr{V},\Omega)\) and for which \(\Psi\in\mathsf{stateBQP}^{(\mathscr{V},\Omega)}\)._
Just like for classical complexity classes, we can use this notion of reduction to define hard and complete problems for (average-case) unitary complexity classes.
**Definition 3.24** (Hard and complete problems).: _We call a unitary synthesis problem \(\mathscr{U}\) hard (under polynomial-time reductions) for a unitary complexity class \(\mathsf{unitaryC}\) if \(\mathsf{unitaryC}\subseteq\mathsf{unitaryBQP}^{\mathscr{U}}\). If additionally \(\mathscr{U}\in\mathsf{unitaryC}\), we call \(\mathscr{U}\) complete for the class \(\mathsf{unitaryC}\)._
_Analogously, we call a distributional unitary synthesis problem \((\mathscr{U},\Psi)\) hard (under polynomial-time reductions) for an average-case unitary complexity class \(\mathsf{avgUnitaryC}\) if \(\mathsf{avgUnitaryC}\subseteq\mathsf{avgUnitaryBQP}^{(\mathscr{U},\Psi)}\). If additionally \((\mathscr{U},\Psi)\in\mathsf{avgUnitaryC}\), we call \((\mathscr{U},\Psi)\) complete for the class \(\mathsf{avgUnitaryC}\)._
As would be expected, \(\mathsf{unitaryBQP}\) and \(\mathsf{avgUnitaryBQP}\) are closed under polynomial-time reductions.
**Lemma 3.25**.: \(\mathsf{unitaryBQP}\) _is closed under polynomial-time reductions, i.e. for all \(\mathscr{V}\in\mathsf{unitaryBQP}\), we have that \(\mathsf{unitaryBQP}^{\mathscr{V}}\subseteq\mathsf{unitaryBQP}\)._
_Likewise, \(\mathsf{avgUnitaryBQP}\) is closed under polynomial-time reductions, i.e. for all \((\mathscr{V},\Omega)\in\mathsf{avgUnitaryBQP}\), we have that \(\mathsf{avgUnitaryBQP}^{(\mathscr{V},\Omega)}\subseteq\mathsf{avgUnitaryBQP}\)._
Proof.: Consider a unitary synthesis problem \(\mathscr{U}=(U_{x})_{x}\in\mathsf{unitaryBQP}^{\mathscr{V}}\). By definition, for all polynomials \(q\) there exists a polynomial-time quantum query algorithm \(C^{*}=(C^{*}_{x})_{x}\) such that all instantiations \(C^{\mathscr{V}}_{x}\) implement \(U_{x}\) with worst-case error \(1/q(|x|)\). Since \(\mathscr{V}\in\mathsf{unitaryBQP}\), for all polynomials \(p\) there exists a polynomial-size circuit family \(\tilde{C}_{y}\) such that \(\tilde{C}_{y}\) is \(1/p(|y|)\)-close to a channel completion of \(V_{y}\). Since \(C^{*}_{x}\) can include \(r(|x|)=\mathrm{poly}(|x|)\) many oracle gates with labels \(y\) such that \(|y|=\mathrm{poly}(|x|)\), we can simply replace each oracle gate by the polynomial-size circuit \(\tilde{C}_{y}\); this will yield another polynomial-size circuit, and this circuit will be \((r(|x|)/p(\mathrm{poly}(|x|))+1/q(|x|))\)-close to a channel completion of \(U_{x}\) by the triangle inequality and monotonicity property of the diamond norm. Since \(p,q\) can be chosen arbitrarily large, we get that \(\mathscr{U}\in\mathsf{unitaryBQP}\). The statement for \(\mathsf{avgUnitaryBQP}\) follows analogously after noting that for \((\mathscr{V},\Omega)\in\mathsf{avgUnitaryBQP}\), \(\mathsf{stateBQP}^{(\mathscr{V},\Omega)}\subseteq\mathsf{stateBQP}\).
The same statement holds for \(\mathsf{unitaryPSPACE}\) and \(\mathsf{avgUnitaryPSPACE}\), too.
**Lemma 3.26**.: \(\mathsf{unitaryPSPACE}\) _is closed under polynomial-time reductions, i.e. for all \(\mathscr{V}\in\mathsf{unitaryPSPACE}\), we have that \(\mathsf{unitaryBQP}^{\mathscr{V}}\subseteq\mathsf{unitaryPSPACE}\)._
_Similarly, \(\mathsf{avgUnitaryPSPACE}\) is closed under polynomial-time reductions, i.e. for all \((\mathscr{V},\Omega)\in\mathsf{avgUnitaryPSPACE}\), we have that \(\mathsf{avgUnitaryBQP}^{(\mathscr{V},\Omega)}\subseteq\mathsf{avgUnitaryPSPACE}\)._
Proof.: The proof for the worst-case class \(\mathsf{unitaryPSPACE}\) is identical to that of Lemma 3.25. The proof for the average-case setting is analogous, too, except that we now need to ensure that the distributional states \(\Psi\in\mathsf{stateBQP}^{(\mathscr{V},\Omega)}\) allowed by the oracle class \(\mathsf{avgUnitaryBQP}^{(\mathscr{V},\Omega)}\) are also valid input states for a problem in \(\mathsf{avgUnitaryPSPACE}\); that is, we need to show that \(\mathsf{stateBQP}^{(\mathscr{V},\Omega)}\subseteq\mathsf{statePSPACE}\) for a distributional problem \((\mathscr{V},\Omega)\in\mathsf{avgUnitaryPSPACE}\). This is easily seen to hold by the same argument we used for Lemma 3.25: we can simply replace all oracle calls in the state preparation procedure for \(\Psi\in\mathsf{stateBQP}^{(\mathscr{V},\Omega)}\) by the corresponding space-uniform circuit that implements \((\mathscr{V},\Omega)\); since there are at most polynomially many oracle calls, the result is a space-uniform circuit, so \(\Psi\in\mathsf{statePSPACE}\).
### Discussion and open problems
In this section, we have introduced a formal framework for studying the complexity of unitary synthesis problems. We have already seen the unitary complexity classes \(\mathsf{unitaryBQP}\) and \(\mathsf{unitaryPSPACE}\), as well as their average-case versions. In the next section, we will consider interactive proofs for unitary synthesis problems, which will naturally lead us to define the classes \(\mathsf{unitaryQIP}\) and \(\mathsf{unitarySZK}\). This, however, is by no means a full list of all unitary complexity classes that might be of interest -- our aim here is to introduce the classes relevant to the Uhlmann transformation problem, not to provide a complete account. As such, it is natural to consider the following question.
**Open Problem 1**.: What are other unitary complexity classes that naturally relate to physically interesting problems? For example, is there a useful notion of \(\mathsf{unitaryQMA}\)?
Later in this paper, we will prove some results relating unitary complexity classes to one another. However, one would naturally conjecture that certain unitary complexity classes are in fact different, e.g. one would expect \(\mathsf{unitaryBQP}\neq\mathsf{unitaryPSPACE}\). For decision languages, proving such separations unconditionally is significantly out of reach of current techniques. However, it is not clear whether this necessarily constitutes a barrier for proving similar results in the unitary setting, as it might for example be possible that \(\mathsf{unitaryBQP}\neq\mathsf{unitaryPSPACE}\), but \(\mathsf{BQP}=\mathsf{PSPACE}\). Therefore, another interesting question is the following:
**Open Problem 2**.: Are there barriers from traditional complexity theory to proving unitary complexity class separations?
Another important direction is to find complete problems for unitary complexity classes. We make progress on this by showing that (certain variants of) the Uhlmann transformation problem are complete for certain unitary classes, but there might be other interesting complete problems for other unitary classes. One natural option is the following:
**Open Problem 3**.: Is Hamiltonian Fast-Forwarding [1] complete for unitaryPSPACE?
One can also consider variations of the model for unitary synthesis. In this paper, we always assume that the unitary needs to be applied on a single copy of an unknown state. However, it might also make sense to consider a model where an implementation of a unitary is allowed to "consume" multiple copies of an input state, but only has to produce a single output state.
**Open Problem 4**.: How can a multi-input version of unitary synthesis problems be formalised, including cases where the unitary is supposed to act on a part of a larger pure state? Are there meaningful notions of reductions and complexity classes of unitary synthesis problems in this multi-input model?
## 4 Interactive Proofs for Unitary Synthesis
In this section we introduce the model of interactive proofs for unitary synthesis problems, as well as the corresponding unitary complexity classes. In particular we introduce the unitary synthesis classes unitaryQIP and avgUnitarySZK, which are analogues of QIP and (average-case) SZK, respectively. As we will see in Sections 6 and 7, the complexity of such interactive proof classes is captured by the Uhlmann Transformation Problem.
### Quantum interactive protocols
First we formally describe the model of quantum interactive protocols. (For a more in-depth account we refer the reader to the survey of Vidick and Watrous [15].) Since in quantum computing the standard model of computation is the quantum circuit model (rather than quantum Turing machines), we model the verifier in a quantum interactive protocol as a sequence of _verifier circuits_, one for each input length. A verifier circuit is itself a tuple of quantum circuits that correspond to the operations performed by the verifier in each round of the protocol.
More formally, a _\(k\)-round quantum verifier circuit_\(C=(C_{j})_{j\in[k]}\) is a tuple of general quantum circuits that each act on a pair of registers \((\mathsf{V},\mathsf{M})\). The register \(\mathsf{V}\) is further divided into disjoint sub-registers \((\mathsf{V}_{\mathsf{work}},\mathsf{V}_{\mathsf{flag}},\mathsf{V}_{\mathsf{ out}})\). The register \(\mathsf{V}_{\mathsf{work}}\) is the verifier circuit's "workspace", the register \(\mathsf{V}_{\mathsf{flag}}\) is a single qubit indicating whether the verifier accepts or rejects, and the register \(\mathsf{V}_{\mathsf{out}}\) holds the verifier's output (if applicable). The register \(\mathsf{M}\) is the message register. The size of a verifier circuit \(C\) is the sum of the circuit sizes of the \(C_{j}\)'s.
A _quantum prover_ \(P\) for a verifier circuit \(C\) is a unitary that acts on \(\mathsf{M}\) as well as a disjoint register \(\mathsf{P}\). Note that we could also define the prover to be a collection of unitaries, one for each round, in analogy to the verifier; the two definitions are equivalent since we can always combine the single-round unitaries into a larger unitary that keeps track of which round is being executed and applies the corresponding single-round unitary. Since we will rarely deal with prover unitaries
for individual rounds, we will find it more convenient to just treat the prover as one large unitary. Furthermore, since the prover register is of unbounded size, we can assume without loss of generality that the prover applies a unitary (rather than a quantum channel).
Let \(\ket{\psi}\) denote a quantum state whose size is at most the number of qubits in \(\mathsf{V_{work}}\). We write \(C(\ket{\psi}){\leftrightarrow}P\) to denote the interaction between the verifier circuit \(C\) and the prover \(P\) on input \(\ket{\psi}\), which is defined according to the following process. The initial state of the system is \(\ket{\phi_{0}}=\ket{\psi,0\cdots 0}_{\mathsf{V_{work}}}\ket{0\cdots 0}_{\mathsf{V_{flag}}\mathsf{V_{out}}\mathsf{MP}}\). Inductively define \(\ket{\phi_{i}}=P\ket{\phi_{i-1}}\) for odd \(i\leq 2k\), and \(\ket{\phi_{i}}=C_{i/2}\ket{\phi_{i-1}}\) for even \(i\leq 2k\). We say that \(C(\ket{\psi}){\leftrightarrow}P\) accepts (resp. rejects) if measuring the register \(\mathsf{V_{flag}}\) in the standard basis yields the outcome \(1\) (resp. \(0\)). We say that the _output of \(C(\ket{\psi}){\leftrightarrow}P\) conditioned on accepting_ is the density matrix
\[\frac{\operatorname{Tr}_{\mathsf{VMP}\backslash\mathsf{V_{out}}}\left(|1\rangle\!\langle 1|_{\mathsf{V_{flag}}}\cdot\phi_{2k}\right)}{\operatorname{Tr}\left(|1\rangle\!\langle 1|_{\mathsf{V_{flag}}}\cdot\phi_{2k}\right)}\,;\]
in other words, it is the reduced density matrix of \(\ket{\phi_{2k}}\) on register \(\mathsf{V_{out}}\), conditioned on \(C(\ket{\psi}){\leftrightarrow}P\) accepting. (If the probability of accepting is \(0\), then we leave the output undefined.)
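For concreteness, the following toy sketch (our own illustration; all unitaries are assumed to be given as dense matrices on the full register \(\mathsf{VMP}\), already padded with identities on the registers they do not touch) encodes this inductive definition directly.

```python
import numpy as np

def interact(verifier_rounds, prover, phi0):
    """Simulate C(|psi>) <-> P for a k-round verifier circuit.

    verifier_rounds: list [C_1, ..., C_k] of unitaries on VMP
    prover:          the prover unitary P on VMP
    phi0:            the initial state |phi_0> as a vector
    Returns |phi_{2k}>: odd steps are prover moves, even steps
    are verifier moves, exactly as in the inductive definition.
    """
    phi = phi0
    for C_j in verifier_rounds:
        phi = prover @ phi  # step i = 2j - 1 (odd)
        phi = C_j @ phi     # step i = 2j     (even)
    return phi
```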
A _quantum verifier_ \(V=(V_{x})_{x\in\{0,1\}^{*}}\) is a uniform sequence of polynomial-size and polynomial-round quantum verifier circuits.
### Interactive proofs for unitary synthesis
We now present our notion of interactive protocols for unitary synthesis.
**Definition 4.1** (unitaryQIP).: _Let \(c,s,\delta:\mathbb{N}\to[0,1]\) be functions. The class \(\mathsf{unitaryQIP}_{c,s,\delta}\) is the set of unitary synthesis problems \(\mathscr{U}=(U_{x})_{x}\) where there exists a polynomial-time quantum verifier \(V=(V_{x})_{x\in\{0,1\}^{*}}\) satisfying, for all \(x\in\{0,1\}^{*}\) of sufficiently large length,_
* Completeness: _There exists a quantum prover_ \(P\) _(called an_ honest prover_) such that for all input states_ \(\ket{\psi}\) _in the support of_ \(U_{x}\)_,_ \[\Pr[V_{x}(\ket{\psi}){\leftrightarrow}P\text{ accepts}]\geq c(|x|)\]
* Soundness: _For all input states_ \(\ket{\psi}\) _and for all quantum provers_ \(P\)_, there exists a channel completion_ \(\Phi_{x}\) _of_ \(U_{x}\) _such that_ \[\text{if }\quad\Pr[V_{x}(\ket{\psi}){\leftrightarrow}P\text{ accepts}]\geq s(|x|)\qquad\text{then}\qquad\operatorname{td}(\sigma,\Phi_{x}(\psi))\leq\delta(|x|)\,,\] _where_ \(\sigma\) _denotes the output of_ \(V_{x}(\ket{\psi}){\leftrightarrow}P\) _conditioned on accepting._
_Here the probabilities are over the randomness of the interaction._
_Finally, define_
\[\mathsf{unitaryQIP}_{\delta}=\bigcup_{\epsilon(n)\text{ negl}}\mathsf{unitaryQIP}_{1- \epsilon,\frac{1}{2},\delta}\]
_where the union is over all negligible functions \(\epsilon(n)\), and define_
\[\mathsf{unitaryQIP}=\bigcap_{q(n)\text{ poly}}\mathsf{unitaryQIP}_{1/q(n)}\]
_where the intersection ranges over all polynomials \(q(n)\)._
Intuitively, a unitary synthesis problem \(\mathscr{U}=(U_{x})_{x}\) has an interactive proof if a polynomial-time verifier who receives a pair \((x,\ket{\psi})\) can interact with an all-powerful prover, and conditioned on accepting, output a state close to \(U_{x}\ket{\psi}\).
The class unitaryQIP is analogous to the state synthesis class stateQIP introduced by [13]; the only difference is that a stateQIP verifier for the state family \(\Psi=(\ket{\psi_{x}})_{x}\) has its input registers fixed to the all zeroes state, and in the soundness condition, if a prover makes \(V_{x}\) accept with probability at least \(s(n)\), then its output conditioned on accepting is close to the target state \(\ket{\psi_{x}}\).
We make a few remarks regarding the definition. First, one may notice a peculiar asymmetry between the definitions of the classes unitaryQIP\({}_{\delta}\) and unitaryQIP. The class unitaryQIP\({}_{\delta}\) is defined as a _union_ over completeness parameters \(c(n)=1-\epsilon(n)\) for some negligible function \(\epsilon(n)\). This is because we want to consider unitary synthesis protocols as long as there is an honest prover that can be accepted with probability \(1-\epsilon(n)\) for _some_ negligible function \(\epsilon(n)\); we do not want to fix a particular negligible function. On the other hand, the class unitaryQIP is defined as the _intersection_ of unitaryQIP\({}_{1/q(n)}\) over all choices of polynomials \(q(n)\). Here the quantity \(1/q(n)\) denotes how well the output state (conditioned on the verifier accepting) approximates the target state, and we want to consider state sequences where for all polynomials \(q(n)\) there is a protocol that can synthesize the state with error smaller than \(1/q(n)\) (for sufficiently large \(n\)).
A second remark concerns the default choice of soundness \(s(n)=\frac{1}{2}\) for the definition of unitaryQIP\({}_{\delta}\) and unitaryQIP. In the state synthesis setting, the soundness parameter can be generically amplified via sequential repetition (see [13] for a proof). Thus the class stateQIP is the same for any soundness and completeness parameters that are separated by at least an inverse polynomial. It is not clear whether soundness amplification is possible in the unitary synthesis setting, however. This is because the verifier only gets one copy of the input state, and if a verifier does not accept the interaction it is unclear how to recover the input state for another repetition of the protocol. This motivates the following open question.
**Open Problem 5**.: Can completeness/soundness amplification be performed for unitaryQIP, or is there evidence that it's not possible?
In analogy to Definition 4.1, we also define an average-case complexity version of unitaryQIP, where the verifier only has to synthesize the desired unitary well on a given distribution state.
**Definition 4.2** (avgUnitaryQIP).: _Let \(c,s,\delta:\mathbb{N}\to[0,1]\) be functions. The class avgUnitaryQIP\({}_{c,s,\delta}\) is the set of distributional unitary synthesis problems \((\mathscr{U}=(U_{x})_{x},\Psi=(\ket{\psi_{x}})_{x})\) such that \(\Psi\in\mathsf{stateQIP}\) and there exists a polynomial-time quantum verifier \(V=(V_{x})_{x\in\{0,1\}^{*}}\) satisfying, for all \(x\in\{0,1\}^{*}\) of sufficiently large length,_
* Completeness: _There exists a quantum prover_ \(P\) _(called an_ honest prover_) such that_ \[\Pr[V_{x}(\ket{\psi_{x}}){\leftrightarrow}P\text{ accepts}]\geq c(|x|)\]
* Soundness: _For all quantum provers_ \(P\)_, there exists a channel completion_ \(\Phi_{x}\) _of_ \(U_{x}\) _such that_ \[\text{if }\quad\Pr[V_{x}(\ket{\psi_{x}}){\leftrightarrow}P\text{ accepts}]\geq s(|x|)\qquad\text{then}\qquad\operatorname{td}(\sigma,(\Phi_{x}\otimes\operatorname{id})(\psi_{x}))\leq\delta(|x|)\,,\] _where_ \(\sigma\) _denotes the output of_ \(V_{x}(\ket{\psi_{x}}){\leftrightarrow}P\) _conditioned on accepting and_ \(V_{x}\) _acts as the identity on the ancilla register of_ \(\ket{\psi_{x}}\)_._
_Here the probabilities are over the randomness of the interaction. Finally, define_
\[\mathsf{avgUnitaryQIP}_{\delta}=\bigcup_{\epsilon(n)\ \mathrm{negl}}\mathsf{ avgUnitaryQIP}_{1-\epsilon,\frac{1}{2},\delta}\]
_where the union is over all negligible functions \(\epsilon(n)\), and define_
\[\mathsf{avgUnitaryQIP}=\bigcap_{q(n)\ \mathrm{poly}}\mathsf{avgUnitaryQIP}_{1/q(n)}\]
_where the intersection ranges over all polynomials \(q(n)\)._
For this section, we only consider single-prover interactive protocols. However, in traditional (classical and quantum) complexity theory, multi-prover protocols have been shown to be surprisingly powerful [11, 12]. It is natural to ask whether multi-prover models might also provide additional power (and insights) in the unitary synthesis setting:
**Open Problem 6**.: Is there a meaningful notion of multi-prover unitary synthesis protocols, and what is their power?
A related question concerns distributed protocols for unitary synthesis, where multiple provers have to apply a unitary collectively under certain resource constraints. Such a scenario was recently studied for state synthesis problems [14], and it is natural to ask what can be said in the unitary synthesis setting.
**Open Problem 7**.: How are unitary synthesis problems related to distributed quantum computation?
### Zero-knowledge protocols for state and unitary synthesis
In this section we present a notion of _zero knowledge_ for unitary synthesis problems. _A priori_, it is unclear how to reasonably define zero knowledge in the unitary synthesis setting. First, defining zero-knowledge quantum protocols for decision languages is already challenging, as the notion of "view" in the quantum setting is less straightforward than with classical protocols [13, 14]. Second, in the unitary synthesis setting the verifier additionally gets one copy of an unknown state \(\ket{\psi}\) for the quantum part of its input; this poses an additional conceptual difficulty in trying to come up with a reasonable notion of zero knowledge simulation.
We first explore several attempts to define zero knowledge for unitary synthesis, and highlight their shortcomings. A first attempt is to require that the view of the verifier, when it is given instance \(x\) and a quantum input \(\ket{\psi}\) and interacts with the honest prover, can be efficiently output by a simulator \(\mathrm{Sim}\) that only receives instance \(x\) and state \(\ket{\psi}\) as input and does not interact with the prover. However, since the verifier is supposed to end up with \(U_{x}\ket{\psi}\) at the end of the protocol, this means that the simulator can output \(U_{x}\ket{\psi}\) from \(x\) and \(\ket{\psi}\) in polynomial time, meaning that \(\mathscr{U}\in\mathsf{unitaryBQP}\). This would lead to an uninteresting definition of zero knowledge.
A second attempt to define zero knowledge is inspired by simulation-based security, where we allow the simulator to query the ideal Uhlmann transformation \(U_{x}\) once. In particular, the simulator gets as input the honest verifier's input \(\ket{\psi}\), and gets a single query to \(U_{x}\), before being asked to output the verifier's view. This still seems problematic in the honest verifier setting, since the
simulator might decide to query \(U_{x}\) on a state other than \(|\psi\rangle\). If it does that, it seems tricky to argue that the verifier does not learn anything from the interaction since it could potentially learn the target unitary transformation applied to a state that is completely unrelated to the input.
These difficulties point to the core issue with devising a notion of zero knowledge in the unitary synthesis setting. With the standard definition of zero knowledge for decision problems, the input and outputs of the verifier are fully specified for the simulator: in particular, the simulator only has to reproduce the interaction in the accepting case. In the unitary synthesis setting, the verifier does not have a full classical description of what state it is supposed to output: the classical string \(x\) provides the simulator with a complete classical description of the partial isometry \(U_{x}\), but it only gets the input state \(|\psi\rangle\) in quantum form.
This motivates us to define a notion of _honest-verifier, average-case_ zero knowledge for unitary synthesis, where we consider verifiers that get a classical input \(x\) and an input state that comes from half of a distribution state \(|\psi_{x}\rangle\). We assume the distribution state \(|\psi_{x}\rangle\) has an efficient classical description (i.e. it comes from a stateBQP state family). Thus, the input/output behavior of the unitary synthesis protocol when both the verifier and prover are honest is completely specified, which then allows for the possibility of a simulator. Although this is seemingly a weak notion of zero knowledge, as we will see in Section 6 it captures the complexity of the Uhlmann Transformation Problem.
**Definition 4.3** (avgUnitarySZK\({}_{\mathrm{HV}}\)).: _Let \(c,s,\delta:\mathbb{N}\to[0,1]\) be functions. The class avgUnitarySZK\({}_{\mathrm{HV},c,s,\delta}\) is the set of distributional unitary synthesis problems \((\mathscr{U},\Psi)\) with \(\mathscr{U}=(U_{x})_{x}\) and \(\Psi=(|\psi_{x}\rangle)_{x}\in\mathsf{stateBQP}\) for which there exists a polynomial-time quantum verifier \(V^{*}=(V^{*}_{x})_{x\in\{0,1\}^{*}}\) (called the honest verifier), an unbounded prover \(P^{*}\) (called the honest prover), and a polynomial-time quantum algorithm \(\mathrm{Sim}\) (called the simulator) such that for sufficiently long \(x\in\{0,1\}^{*}\),_
1. _The prover_ \(P^{*}\) _on input_ \(x\) _is accepted with probability at least_ \(c(|x|)\)_._
2. _The verifier_ \(V^{*}\) _satisfies the soundness condition (in Definition_ 4.2_) of an_ avgUnitaryQIP__\({}_{c,s,\delta}\) _verifier for_ \((\mathscr{U},\Psi)\)_._
3. _There exists a negligible function_ \(\epsilon:\mathbb{N}\to\mathbb{R}\) _such that the simulator_ \(\mathrm{Sim}\)_, on input_ \((x,r)\) _(for_ \(r\in\mathbb{N}\)_), outputs a state_ \(\rho\) _satisfying_ \[\mathrm{td}(\rho,\sigma_{x,r})\leq\epsilon(|x|)\] _where_ \(\sigma_{x,r}\) _is the reduced density matrix of the verifier_ \(V^{*}_{x}\) _(which was given the target register of the distribution state_ \(|\psi_{x}\rangle\)_) immediately after the_ \(r\)_'th round of interaction with the honest prover_ \(P^{*}\)_._
_Finally, define_
\[\mathsf{avgUnitarySZK}_{\mathrm{HV},\delta}=\bigcup_{\epsilon(n)\ \mathrm{negl}} \mathsf{avgUnitarySZK}_{\mathrm{HV},1-\epsilon,\frac{1}{2},\delta}\,,\]
_where the union ranges over all negligible functions \(\epsilon(n)\), and_
\[\mathsf{avgUnitarySZK}_{\mathrm{HV}}=\bigcap_{q(n)\ \mathrm{poly}}\mathsf{ avgUnitarySZK}_{\mathrm{HV},1/q(n)}\]
_where the intersection ranges over all polynomials \(q(n)\)._
Note that the definition of \(\mathsf{avgUnitaryQIP}\) (Definition 4.2) already includes a completeness condition. However, we need to list the completeness condition for \(\mathsf{avgUnitarySZK}\) explicitly because we need to ensure that the prover \(P^{*}\) for whom the completeness condition holds is the same as the prover \(P^{*}\) in the zero-knowledge condition.
We now make several additional remarks regarding the zero knowledge definition.
Simulation of the average case. If we think of running a unitary synthesis protocol on the distribution state \(\ket{\psi_{x}}\), then from the point of view of the verifier, it is given a pure state input \(\ket{\phi}\) sampled from a distribution corresponding to the reduced density matrix of \(\ket{\psi_{x}}\). Let \(\mathcal{D}_{x}\) denote this distribution of pure states. (This distribution may not be unique because the spectral decomposition is not unique, but the end result is the same.) Then in this definition the simulator's job is to produce the view of the verifier _averaged over inputs sampled from \(\mathcal{D}_{x}\)_. In other words, the simulator does not have to reproduce the view of the verifier on any specific input state \(\ket{\phi}\), just on average.
Complexity of the distribution state. The distribution state sequence \(\Psi\) associated with a distributional unitary synthesis problem in \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\) is required to be in \(\mathsf{stateBQP}\), instead of some notion of \(\mathsf{stateSZK}_{\mathrm{HV}}\). In the definition of \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\) we require that the simulator can output the state between an honest verifier and honest prover after each round. It is easy to see that any reasonable definition of \(\mathsf{stateSZK}_{\mathrm{HV}}\) results in the same class as \(\mathsf{stateBQP}\).
Honest-verifier versus general zero knowledge. A natural question is whether this definition of zero knowledge can be meaningfully generalized to the _malicious verifier_ setting, where the interaction between the honest prover and verifier can be efficiently simulated even if the verifier deviates from the protocol. This is typically the notion of zero knowledge that is useful in the cryptographic setting. It is known that in both the classical and quantum settings, the malicious verifier and honest verifier definitions of statistical zero knowledge proofs yield the same complexity classes (i.e., \(\mathsf{SZK}=\mathsf{SZK}_{\mathrm{HV}}\) and \(\mathsf{QSZK}=\mathsf{QSZK}_{\mathrm{HV}}\)) [11, 10, 12]. We leave studying stronger notions of zero knowledge protocols for unitary synthesis to future work:
**Open Problem 8**.: Is there a meaningful notion of malicious verifier zero knowledge for unitary synthesis problems, and how is that related to the honest verifier setting that we considered here?
## Part II Uhlmann Transformation Problem: Definitions and Structural Results
## 5 Definition of the Uhlmann Transformation Problem
In this section we formally define the Uhlmann Transformation Problem as a unitary synthesis problem. We also define a "succinct" version of it, in which the two input states \(\ket{C},\ket{D}\) that specify an instance of the Uhlmann Transformation Problem, while exponentially complex, nonetheless have a polynomial-size description.
### Uhlmann's theorem and canonical isometries
We begin by recalling Uhlmann's theorem.
**Theorem 5.1** (Uhlmann's theorem).: _Let \(\ket{\psi}_{\mathsf{AB}}\) and \(\ket{\varphi}_{\mathsf{AB}}\) be pure states on registers \(\mathsf{AB}\) and denote their reduced states on register \(\mathsf{A}\) by \(\rho_{\mathsf{A}}\) and \(\sigma_{\mathsf{A}}\), respectively. Then, there exists a unitary \(U_{\mathsf{B}}\) acting only on register \(\mathsf{B}\) such that_
\[\mathrm{F}(\rho_{\mathsf{A}},\sigma_{\mathsf{A}})=\left|\,\bra{\varphi}_{ \mathsf{AB}}\left(\mathrm{id}_{\mathsf{A}}\otimes U_{\mathsf{B}}\right)\ket{ \psi}_{\mathsf{AB}}\right|^{2}.\]
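As a simple illustration, take \(|\psi\rangle_{\mathsf{AB}}=|0\rangle_{\mathsf{A}}|0\rangle_{\mathsf{B}}\) and \(|\varphi\rangle_{\mathsf{AB}}=|0\rangle_{\mathsf{A}}|+\rangle_{\mathsf{B}}\): both reduced states on \(\mathsf{A}\) equal \(|0\rangle\!\langle 0|\), so \(\mathrm{F}(\rho_{\mathsf{A}},\sigma_{\mathsf{A}})=1\), and the Hadamard gate \(U_{\mathsf{B}}=H\) attains \(\left|\bra{\varphi}(\mathrm{id}_{\mathsf{A}}\otimes H)\ket{\psi}\right|^{2}=1\).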
We now would like to define a unitary synthesis problem \((U_{x})_{x}\) corresponding to Uhlmann's theorem. Intuitively, whenever the string \(x\) represents a pair of bipartite states \(\ket{\psi},\ket{\varphi}\) (by specifying circuits for them, for example), the unitary \(U_{x}\) should satisfy the conclusion of Uhlmann's theorem. However, several subtleties arise. First, the unitary \(U_{\mathsf{B}}\) in Theorem 5.1 is not unique; outside of the support of \(\rho_{\mathsf{B}}=\mathrm{Tr}_{\mathsf{A}}(\ket{\psi}\!\bra{\psi}_{\mathsf{AB}})\), \(U_{\mathsf{B}}\) can act arbitrarily. This motivates defining a _canonical_ Uhlmann transformation \(W\) corresponding to a pair of bipartite states \(\ket{\psi}_{\mathsf{AB}},\ket{\varphi}_{\mathsf{AB}}\). A natural candidate is \(W=\mathrm{sgn}(\mathrm{Tr}_{\mathsf{A}}(\ket{\varphi}\!\bra{\psi}))\) where for any linear operator \(K\) with singular value decomposition \(U\Sigma V^{\dagger}\), we define \(\mathrm{sgn}(K)=U\,\mathrm{sgn}(\Sigma)V^{\dagger}\) with \(\mathrm{sgn}(\Sigma)\) obtained by replacing all the nonzero entries of \(\Sigma\) with \(1\) (which is the same as the usual sign function since all singular values are non-negative). A proof that \(W\) is a partial isometry satisfying \(\mathrm{F}(\rho,\sigma)=\left|\,\bra{\varphi}\mathrm{id}\otimes W\ket{\psi}\right|^{2}\) can be found in [13, Lemma 7.6]. This Uhlmann transformation is also _minimal_ in the sense that any other partial isometry \(\tilde{W}\) that achieves the same guarantee satisfies \(W^{\dagger}W\leq\tilde{W}^{\dagger}\tilde{W}\).
However, this definition of canonical Uhlmann transformation is not robust in the sense that arbitrarily small changes to the states \(\ket{\psi},\ket{\varphi}\) could result in arbitrarily large changes in \(W\) as measured by, say, the operator norm. Consider the following two-qutrit example:
\[\ket{\psi} =\sqrt{1-\epsilon}\ket{00}+\sqrt{\epsilon/2}\ket{11}+\sqrt{ \epsilon/2}\ket{22}\,,\] \[\ket{\tilde{\psi}} =\sqrt{1-\epsilon}\ket{00}+\sqrt{\epsilon/2}\ket{12}+\sqrt{ \epsilon/2}\ket{21}\,,\] \[\ket{\varphi} =\ket{\psi}\,.\]
The Uhlmann isometry \(W\) corresponding to \(\left(\ket{\psi},\ket{\varphi}\right)\) is simply the identity operator on \(\mathbb{C}^{3}\). On the other hand, the Uhlmann isometry \(\tilde{W}\) corresponding to \(\left(\ket{\tilde{\psi}},\ket{\varphi}\right)\) can be computed as
\[\tilde{W}=\ket{0}\!\bra{0}+\ket{1}\!\bra{2}+\ket{2}\!\bra{1}\,.\]
In other words, it swaps \(\left|1\right\rangle\) with \(\left|2\right\rangle\) and keeps \(\left|0\right\rangle\) unchanged. The difference \(W-\tilde{W}\) has operator norm at least \(2\), but the difference \(\left|\psi\right\rangle-\left|\tilde{\psi}\right\rangle\) has norm \(\sqrt{2\epsilon}\), which can be arbitrarily small.
Finally, for convenience, we only focus on bipartite states that have the same number of qubits on each side. This is not a severe assumption as we can always pad the smaller register with ancilla zero qubits, which does not affect the existence of an Uhlmann transformation.
These points motivate the following definition of canonical Uhlmann isometry. First, some notation: for \(\eta\in\mathbb{R}\) and an operator \(K\) with singular value decomposition \(U\Sigma V^{\dagger}\), we define \(\operatorname{sgn}_{\eta}(K)\) to be the operator
\[\operatorname{sgn}_{\eta}(K)=U\operatorname{sgn}_{\eta}(\Sigma)V^{\dagger}\]
where \(\operatorname{sgn}_{\eta}(\Sigma)\) denotes the projection onto the eigenvectors of \(\Sigma\) with eigenvalue greater than \(\eta\). In other words, \(\operatorname{sgn}_{\eta}\) is the scalar function that behaves like the usual sgn function on inputs \(\left|x\right|>\eta\), and maps inputs \(\left|x\right|\leq\eta\) to \(0\); this scalar function is applied to the diagonal matrix \(\Sigma\) in the usual way. We also write \(\operatorname{sgn}(K)\) to denote \(\operatorname{sgn}_{0}(K)\). Using \(\operatorname{sgn}_{\eta}\) instead of sgn in the definition of the Uhlmann partial isometry removes the sensitivity to arbitrarily small changes to the input states discussed above. The parameter \(\eta\) can be thought of as a cutoff below which changes in the input states are ignored.
**Definition 5.2** (Canonical Uhlmann partial isometry).: _The canonical Uhlmann partial isometry with cutoff \(\eta\) corresponding to a pair of pure states \(\left(\left|\psi\right\rangle_{\mathsf{AB}},\left|\varphi\right\rangle_{ \mathsf{AB}}\right)\) is defined as_
\[W=\operatorname{sgn}_{\eta}(\operatorname{Tr}_{\mathsf{A}}(\left|\varphi \middle\rangle\!\!\left\langle\psi\right|))\,. \tag{5.1}\]
_For brevity we call \(W\) the canonical \(\eta\)-Uhlmann isometry._
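To make the definition concrete, the following numpy sketch (an illustration with hypothetical helper names `sgn_eta` and `canonical_uhlmann`) computes \(W\) directly from Equation (5.1) and replays the two-qutrit example above: with cutoff \(\eta=0\) the discontinuous swap \(\tilde{W}\) reappears, while any cutoff with \(\epsilon/2\leq\eta<1-\epsilon\) yields the same isometry \(|0\rangle\!\langle 0|\) for both pairs of states.

```python
import numpy as np

def sgn_eta(K, eta):
    """sgn_eta(K) = U sgn_eta(Sigma) V^dag for the SVD K = U Sigma V^dag:
    singular values > eta become 1, the rest become 0."""
    U, S, Vh = np.linalg.svd(K)
    return U @ np.diag((S > eta).astype(float)) @ Vh

def canonical_uhlmann(phi, psi, eta, dA, dB):
    """W = sgn_eta(Tr_A(|phi><psi|)) for vectors phi, psi on C^dA (x) C^dB."""
    M = np.outer(phi, psi.conj()).reshape(dA, dB, dA, dB)
    K = np.trace(M, axis1=0, axis2=2)  # partial trace over register A
    return sgn_eta(K, eta)

eps = 1e-4
psi = np.zeros(9); psi[0] = np.sqrt(1 - eps)            # |00>
psi[1 * 3 + 1] = psi[2 * 3 + 2] = np.sqrt(eps / 2)      # |11>, |22>
psi_t = np.zeros(9); psi_t[0] = np.sqrt(1 - eps)        # |00>
psi_t[1 * 3 + 2] = psi_t[2 * 3 + 1] = np.sqrt(eps / 2)  # |12>, |21>
phi = psi.copy()                                        # |varphi> = |psi>

W0 = canonical_uhlmann(phi, psi_t, eta=0.0, dA=3, dB=3)  # swaps |1>, |2>
W1 = canonical_uhlmann(phi, psi_t, eta=eps, dA=3, dB=3)  # only |0><0| survives

# Overlap bound (cf. Proposition 5.3 below): overlap >= F - 2*eta*dim(B);
# here F = 1 and dim(B) = 3, and the overlap equals (1 - eps)^2.
ov = abs(phi.conj() @ np.kron(np.eye(3), W1) @ psi_t) ** 2
assert ov >= 1 - 2 * eps * 3
```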
We verify several basic properties of the canonical \(\eta\)-Uhlmann isometry.
**Proposition 5.3**.: _The map \(W\) defined in Equation (5.1) is a partial isometry, and satisfies the following. Let \(\rho,\sigma\) denote the reduced density matrices of \(\left|\psi\right\rangle,\left|\varphi\right\rangle\), respectively, on register \(\mathsf{A}\)._
1. _(Approximate Uhlmann transformation) The isometry_ \(W\) _approximately maps_ \(\left|\psi\right\rangle\) _to_ \(\left|\varphi\right\rangle\)_, i.e.,_ \[\left|\left\langle\varphi\right|_{\mathsf{AB}}(\operatorname{id}_{\mathsf{A}} \otimes W_{\mathsf{B}})\left|\psi\right\rangle_{\mathsf{AB}}\right|^{2}\geq \operatorname{F}(\rho_{\mathsf{A}},\sigma_{\mathsf{A}})-2\eta\dim(\mathsf{B} )\,,\]
2. _(Minimality) For all partial isometries_ \(R_{\mathsf{B}}\) _satisfying_ \[\operatorname{F}(\rho_{\mathsf{A}},\sigma_{\mathsf{A}})=\left|\left\langle \varphi\right|_{\mathsf{AB}}(\operatorname{id}_{\mathsf{A}}\otimes R_{ \mathsf{B}})\left|\psi\right\rangle_{\mathsf{AB}}\right|^{2},\] _we have_ \(W^{\dagger}W\leq R^{\dagger}R\)_._
Proof.: Let \(X,Y\) be unitary operators acting on register \(\mathsf{B}\) such that
\[\left|\psi\right\rangle =\sqrt{\rho}\otimes X\left|\Omega\right\rangle\] \[\left|\varphi\right\rangle =\sqrt{\sigma}\otimes Y\left|\Omega\right\rangle\]
where \(\left|\Omega\right\rangle=\sum_{i}\left|i\right\rangle_{\mathsf{A}}\left|i \right\rangle_{\mathsf{B}}\) is the unnormalized maximally entangled state in the standard basis. Let \(U\Sigma V^{\dagger}\) denote the singular value decomposition of \((\sqrt{\rho}\sqrt{\sigma})^{\top}\), the transpose of \(\sqrt{\rho}\sqrt{\sigma}\) with respect to the standard basis. Then the proof of [13, Lemma 7.6] shows that
\[W=YU\operatorname{sgn}_{\eta}(\Sigma)V^{\dagger}X^{\dagger}\,. \tag{5.2}\]
The fact that \(W\) is a partial isometry is clear: since the matrices \(X,U,V,Y\) are unitary and \(\operatorname{sgn}_{\eta}(\Sigma)\) is a projection, it can be written in the form \(W=\Pi F\) where \(\Pi=YU\operatorname{sgn}_{\eta}(\Sigma)U^{\dagger}Y^{\dagger}\) is a projection and \(F=YUV^{\dagger}X^{\dagger}\) is a unitary. To show the approximate transformation statement, we note that the proof of [13, Lemma 7.6] shows that
\[\left\langle\varphi\right|_{\mathsf{AB}}(\operatorname{id}_{\mathsf{A}} \otimes W_{\mathsf{B}})\left|\psi\right\rangle_{\mathsf{AB}}=\operatorname{ Tr}(\sqrt{\sigma}\sqrt{\rho}\operatorname{sgn}_{\eta}(\sqrt{\rho}\sqrt{ \sigma}))\]
where \(\operatorname{sgn}_{\eta}(K)\) for an arbitrary operator \(K\) with singular value decomposition \(R\Sigma Q^{\dagger}\) is defined to be \(R\operatorname{sgn}_{\eta}(\Sigma)Q^{\dagger}\). The preceding centered equation is equal to
\[\operatorname{Tr}(\sqrt{\sigma}\sqrt{\rho}\operatorname{sgn}( \sqrt{\rho}\sqrt{\sigma}))-\operatorname{Tr}\Bigl{(}\sqrt{\sigma}\sqrt{\rho} \,\bigl{(}\operatorname{sgn}(\sqrt{\rho}\sqrt{\sigma})-\operatorname{sgn}_{ \eta}(\sqrt{\rho}\sqrt{\sigma})\,\bigr{)}\Bigr{)}\] \[\qquad\geq\sqrt{\operatorname{F}(\sigma,\rho)}-\operatorname{Tr }\Bigl{(}\sqrt{\sigma}\sqrt{\rho}\,\bigl{(}\operatorname{sgn}(\sqrt{\rho} \sqrt{\sigma})-\operatorname{sgn}_{\eta}(\sqrt{\rho}\sqrt{\sigma})\,\bigr{)} \Bigr{)} \tag{5.3}\]
where in the last line we used that \(\operatorname{F}(\sigma,\rho)=\operatorname{Tr}(|\sqrt{\sigma}\sqrt{\rho}|)^{2}\) and that \(\operatorname{Tr}(K\operatorname{sgn}(K^{\dagger}))=\operatorname{Tr}(|K|)\) for all operators \(K\). Letting \(U\Sigma V^{\dagger}\) denote the singular value decomposition of \((\sqrt{\rho}\sqrt{\sigma})^{\top}\), we have that \(\overline{U}\Sigma V^{\top}\) is the singular value decomposition of \(\sqrt{\sigma}\sqrt{\rho}\). Thus Equation (5.3) is equal to
\[\sqrt{\operatorname{F}(\sigma,\rho)}-\operatorname{Tr}(\Sigma(\operatorname{ sgn}(\Sigma)-\operatorname{sgn}_{\eta}(\Sigma)))\geq\sqrt{\operatorname{F}( \sigma,\rho)}-\eta\dim(\mathsf{B})\]
where we used that \(\operatorname{sgn}(\Sigma)-\operatorname{sgn}_{\eta}(\Sigma)\) is the projector onto the eigenvectors of \(\Sigma\) with nonzero eigenvalue at most \(\eta\). Squaring both sides, we get:
\[\Bigl{(}\sqrt{\operatorname{F}(\sigma,\rho)}-\eta\dim(\mathsf{B})\Bigr{)}^{2 }=\operatorname{F}(\sigma,\rho)-2\sqrt{\operatorname{F}(\sigma,\rho)}\eta\dim (\mathsf{B})+\eta^{2}\dim(\mathsf{B})^{2}\geq\operatorname{F}(\sigma,\rho)-2 \eta\dim(\mathsf{B})\]
where we used that \(0\leq\operatorname{F}(\sigma,\rho)\leq 1\). This shows the approximation statement.
For the minimality statement, we note that the proof of Uhlmann's theorem [14, Theorem 9.2.1] shows that
\[|\left\langle\varphi\right|_{\mathsf{AB}}(\operatorname{id}_{\mathsf{A}} \otimes R_{\mathsf{B}})\left|\psi\right\rangle_{\mathsf{AB}}|^{2}=| \operatorname{Tr}(\sqrt{\sigma}\sqrt{\rho}\,(Y^{\dagger}RX)^{\top})|^{2}=| \operatorname{Tr}((Y^{\dagger}RX)(\sqrt{\sigma}\sqrt{\rho})^{\top})|^{2}\]
where in the last step we used \(|\operatorname{Tr}(K)|=|\operatorname{Tr}(K^{\top})|\) for all operators \(K\). Let \(Q=Y^{\dagger}RX\), and note that the singular value decomposition of \((\sqrt{\sigma}\sqrt{\rho})^{\top}\) is \(V\Sigma U^{\dagger}\). By the Cauchy-Schwarz inequality for matrices, we have
\[|\operatorname{Tr}((Y^{\dagger}RX)(\sqrt{\sigma}\sqrt{\rho})^{ \top})|^{2} =|\operatorname{Tr}(QV\Sigma^{1/2}\Sigma^{1/2}U^{\dagger})|^{2} \leq\operatorname{Tr}(QV\Sigma V^{\dagger}Q^{\dagger})\operatorname{Tr}(U \Sigma U^{\dagger})\] \[=\operatorname{Tr}(\Sigma V^{\dagger}Q^{\dagger}QV)\operatorname{ Tr}(\Sigma)\leq\operatorname{Tr}(\Sigma)^{2}\]
where in the last line we used that the operator norm of \(V^{\dagger}Q^{\dagger}QV\) is at most \(1\). If \(|\left\langle\varphi\right|_{\mathsf{AB}}(\operatorname{id}_{\mathsf{A}} \otimes R_{\mathsf{B}})\left|\psi\right\rangle_{\mathsf{AB}}|^{2}= \operatorname{F}(\rho,\sigma)=\operatorname{Tr}(|\sqrt{\sigma}\sqrt{ \rho}|)^{2}=\operatorname{Tr}(\Sigma)^{2}\), then this implies that
\[\operatorname{Tr}(\Sigma V^{\dagger}Q^{\dagger}QV)=\operatorname{Tr}(\Sigma)\,.\]
Since \(\Sigma\) and \(V^{\dagger}Q^{\dagger}QV\) are positive semidefinite, and \(V^{\dagger}Q^{\dagger}QV\) has operator norm at most \(1\), this implies that \(V^{\dagger}Q^{\dagger}QV\) acts as the identity on the support of \(\Sigma\); in particular, \(V^{\dagger}Q^{\dagger}QV\geq\operatorname{sgn}(\Sigma)\) in the positive semidefinite ordering. This is equivalent to
\[R^{\dagger}R\geq XV\operatorname{sgn}(\Sigma)V^{\dagger}X^{\dagger}\geq XV \operatorname{sgn}_{\eta}(\Sigma)V^{\dagger}X^{\dagger}=W^{\dagger}W\]
as desired.
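The construction in the proof is effectively an SVD computation, so it is easy to realize numerically. The following NumPy sketch (our addition, not part of the original argument) builds the canonical \(\eta\)-Uhlmann partial isometry for a pair of bipartite pure states and checks that, for \(\eta=0\), the achieved overlap equals the fidelity of the reduced states. It assumes row-major reshaping, so a state vector of length \(d_{A}d_{B}\) has coefficient matrix \(M\) with \(|\psi\rangle=\sum_{ij}M_{ij}|i\rangle_{\mathsf{A}}|j\rangle_{\mathsf{B}}\).

```python
import numpy as np

def uhlmann_isometry(psi, phi, dA, dB, eta=0.0):
    """Canonical eta-Uhlmann (partial) isometry W acting on register B.

    With row-major reshaping, (id (x) W)|psi> has coefficient matrix M @ W.T,
    so <phi|(id (x) W)|psi> = Tr(N^dag @ M @ W.T).  Maximising this over W via
    the SVD  N^dag M = U S V^dag  gives  W.T = V sgn_eta(S) U^dag, which
    coincides with the closed form W = Y U sgn_eta(Sigma) V^dag X^dag of (5.2).
    """
    M = psi.reshape(dA, dB)
    N = phi.reshape(dA, dB)
    U, S, Vh = np.linalg.svd(N.conj().T @ M)
    keep = np.diag((S > eta).astype(float))       # sgn_eta on singular values
    return (Vh.conj().T @ keep @ U.conj().T).T    # W acting on register B

# Sanity check on random states (eta = 0): the achieved overlap equals
# F(rho, sigma) = Tr|sqrt(rho) sqrt(sigma)|^2, which in turn equals the
# squared sum of the singular values of N^dag M.
rng = np.random.default_rng(0)
dA = dB = 4
psi = rng.normal(size=dA*dB) + 1j*rng.normal(size=dA*dB); psi /= np.linalg.norm(psi)
phi = rng.normal(size=dA*dB) + 1j*rng.normal(size=dA*dB); phi /= np.linalg.norm(phi)
W = uhlmann_isometry(psi, phi, dA, dB)
overlap = abs(np.vdot(phi, np.kron(np.eye(dA), W) @ psi))**2
M, N = psi.reshape(dA, dB), phi.reshape(dA, dB)
F = np.sum(np.linalg.svd(N.conj().T @ M, compute_uv=False))**2
assert np.isclose(overlap, F)
```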
### Worst-case Uhlmann transformation problem
We now define explicit and succinct descriptions of quantum circuits.
**Definition 5.4** (Explicit and succinct descriptions of quantum circuits).: _An explicit description of a unitary quantum circuit \(C\) is a sequence \((1^{n},g_{1},g_{2},\ldots)\) where \(1^{n}\) represents in unary the number of qubits that \(C\) acts on, and \(g_{1},g_{2},g_{3},\ldots\) is a sequence of unitary gates._
_A succinct description of a quantum circuit \(C\) is a pair \((1^{n},\hat{C})\) where \(\hat{C}\) is a description of a classical circuit11 that takes as input an integer \(t\) in binary and outputs the description of a unitary gate \(g_{t}\) coming from some universal gate set, as well as the (constant-sized) set of qubits that \(g_{t}\) acts on. Together, the gates \(g_{1},\ldots,g_{T}\) describe a circuit \(C\) acting on \(n\) qubits; we will always denote the classical circuit with a hat (e.g. \(\hat{C}\)) and use the same letter without a hat (e.g. \(C\)) for the associated quantum circuit._
Footnote 11: Here, we think of \(\hat{C}\) as being a list of AND, OR, and NOT gates.
We make a few remarks about the definitions of explicit and succinct descriptions of quantum circuits:
1. The length of an explicit description of a quantum circuit is polynomial in the number of qubits it acts on as well as the number of gates in the circuit.
2. In a succinct description of a quantum circuit \(C\), the size of the circuit may be exponentially larger than the length of the description \((1^{n},\hat{C})\). However, the number of qubits that \(C\) acts on is polynomial (in fact, at most linear) in the description length.
3. For a succinct description, we provide the number of qubits \(n\) in the quantum circuit explicitly in unary because, given only the classical circuit \(\hat{C}\), it may be difficult to compute the number of qubits that the quantum circuit \(C\) acts on. (A toy sketch of the two description styles follows below.)
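To make Definition 5.4 concrete, here is a toy Python sketch (our illustration; the gate encoding and the specific rule for \(g_{t}\) are hypothetical) contrasting the two description styles: an explicit description stores every gate, while a succinct description stores only a small program that computes the \(t\)-th gate on demand.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# A gate is a name plus the (constant-sized) tuple of qubits it acts on.
Gate = Tuple[str, Tuple[int, ...]]

@dataclass
class ExplicitCircuit:
    n: int             # number of qubits, given in unary as 1^n in the text
    gates: List[Gate]  # description length grows with the gate count

@dataclass
class SuccinctCircuit:
    n: int                          # still given explicitly (in unary)
    T: int                          # gate count, possibly exponential in |description|
    gate_at: Callable[[int], Gate]  # stands in for the classical circuit C-hat: t -> g_t

# Toy succinct description of a 2^20-gate circuit in O(1) space:
# gate t applies H to qubit (t mod n).  (Hypothetical rule, for illustration.)
C_hat = SuccinctCircuit(n=8, T=2**20, gate_at=lambda t: ("H", (t % 8,)))
print(C_hat.gate_at(123_456))       # any single gate can be expanded on demand
```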
We now define two variants of the Uhlmann Transformation Problem. In the first, the two bipartite states are described by explicit circuit descriptions, and in the second they are described by succinct circuit descriptions.
**Definition 5.5** (Valid Uhlmann instances).: _We say that a string \(x\in\{0,1\}^{*}\) is a valid Uhlmann instance if it encodes a tuple \((1^{n},C,D)\) where \(C,D\) are explicit descriptions of unitary circuits that each act on \(2n\) qubits. We say that \(x\) is a valid succinct Uhlmann instance if \(x=(1^{n},\hat{C},\hat{D})\) is a succinct description of a pair \((C,D)\) of unitary circuits that each act on \(2n\) qubits for some \(n\)._
_We further say that a valid (possibly succinct) Uhlmann instance \(x\) is a fidelity-\(\kappa\) instance if the reduced states \(\rho,\sigma\) of the states \(\ket{C}=C\ket{0^{2n}}\), \(\ket{D}=D\ket{0^{2n}}\) on the first \(n\) qubits satisfy \(\mathrm{F}(\rho,\sigma)\geq\kappa\)._
**Definition 5.6** (Uhlmann Transformation Problem).: _Let \(\kappa,\eta:\mathbb{N}\to[0,1]\) be functions. The \((\kappa,\eta)\)-Uhlmann Transformation Problem is the unitary synthesis problem \(\textsc{Uhlmann}_{\kappa,\eta}=(U_{x})_{x\in\{0,1\}^{*}}\) where whenever \(x\) is a fidelity-\(\kappa(n)\) Uhlmann instance specifying a pair \((C,D)\) of unitary circuits that each act on \(2n\) qubits for some \(n\), then \(U_{x}\) is the canonical \(\eta\)-Uhlmann isometry for the states \(\ket{C}=C\ket{0^{2n}}\) and \(\ket{D}=D\ket{0^{2n}}\), with \(U_{x}\) acting on the last \(n\) qubits. Otherwise if \(x\) is not a valid Uhlmann instance, then we define \(U_{x}=0\) (i.e., a partial isometry with zero-dimensional support)._
_The \((\kappa,\eta)\)-Succinct Uhlmann Transformation Problem, denoted by \(\textsc{SuccinctUhlmann}_{\kappa,\eta}\), is the sequence \((U_{x})_{x}\) where whenever \(x\) is a valid fidelity-\(\kappa(n)\) succinct Uhlmann instance specifying
a pair \((C,D)\) of unitary circuits that each act on \(2n\) qubits for some \(n\), then \(U_{x}\) is the canonical \(\eta\)-Uhlmann isometry for the states \(\ket{C}=C\ket{0^{2n}}\) and \(\ket{D}=D\ket{0^{2n}}\), with \(U_{x}\) acting on the last \(n\) qubits; if \(x\) is not a valid succinct Uhlmann instance, then we define \(U_{x}=0\)._
Although we defined the Uhlmann and SuccinctUhlmann problems as parameterized by the cutoff parameter \(\eta\) for the sake of robustness of the definitions, we will see next that when we focus on _distributional_ versions of these problems the \(\eta\) parameter can be without loss of generality set to \(0\). The cutoff parameter \(\eta\) only really matters for complexity results about solving Uhlmann or SuccinctUhlmann in the _worst-case_.
### Distributional Uhlmann transformation problem
To define average case versions of the Uhlmann Transformation Problems we specify a distribution state \(\ket{\psi_{x}}\) for every valid (succinct or non-succinct) Uhlmann instance \(x\). If \(x\) specifies a pair of circuits \((C,D)\) on \(2n\) qubits each, the distribution state \(\ket{\psi_{x}}\) is also on \(2n\) qubits. As we argue below, a natural choice of distribution state is \(\ket{\psi_{x}}=C\ket{0^{2n}}\). When \(x\) represents a fidelity-\(1\) Uhlmann instance the Uhlmann transformation \(U_{x}\) by definition maps \(\ket{\psi_{x}}\) to \(D\ket{0^{2n}}\).
**Definition 5.7** (Distributional Uhlmann Transformation Problems).: _We define a state sequence \(\Psi_{\textsc{Uhlmann}}=(\ket{\psi_{x}})_{x\in\{0,1\}^{*}}\) as follows: for all \(x\in\{0,1\}^{*},\)_
\[\ket{\psi_{x}}=\begin{cases}\ket{C}&\text{if $x=(1^{n},C,D)$ is a valid Uhlmann instance,}\\ 0&\text{otherwise.}\end{cases}\]
_Then, the distributional \((\kappa,\eta)\)-Uhlmann Transformation Problem is the distributional unitary synthesis problem \(\textsc{Dist}\textsc{Uhlmann}_{\kappa,\eta}=(\textsc{Uhlmann}_{\kappa,\eta}, \Psi_{\textsc{Uhlmann}})\)._
_Analogously, we define the state family \(\Psi_{\textsc{SuccinctUhlmann}}=(\ket{\psi_{x}})_{x}\) as follows: for all \(x\in\{0,1\}^{*},\)_
\[\ket{\psi_{x}}=\begin{cases}\ket{C}&\text{if $x=(1^{n},\hat{C},\hat{D})$ is a valid succinct Uhlmann instance,}\\ 0&\text{otherwise.}\end{cases}\]
_The distributional \((\kappa,\eta)\)-Succinct Uhlmann Transformation Problem is the distributional unitary synthesis problem \(\textsc{Dist}\textsc{SuccinctUhlmann}_{\kappa,\eta}=(\textsc{SuccinctUhlmann }_{\kappa,\eta},\Psi_{\textsc{SuccinctUhlmann}})\)._
We now argue that this choice of distribution state is natural for the Uhlmann Transformation Problems: being able to solve the distributional Uhlmann Transformation Problems in the average-case essentially coincides with being able to perform the Uhlmann transformation corresponding to a pair of (succinctly or non-succinctly described) states. The next proposition captures this equivalence in the _high \(\kappa\) regime_, where \(\kappa\) is close to \(1\).
**Proposition 5.8**.: _Let \(M=(M_{x})_{x}\) be a quantum algorithm where for each valid fidelity-\(\kappa(n)\) Uhlmann (resp. Succinct Uhlmann) instance \(x=(1^{n},C,D)\) (resp. \(x=(1^{n},\hat{C},\hat{D})\)),_
\[\mathrm{F}\Big{(}(\mathrm{id}\otimes M_{x})(\ket{C}\!\!\bra{C}),\ket{D}\!\! \bra{D}\Big{)}\geq\kappa(n)-\delta(n) \tag{5.4}\]
_for some error function \(\delta(n)\), where \(M_{x}\) acts on the second \(n\) qubits of \(\ket{C}\). Then \(M\) implements \(\textsc{Dist}\textsc{Uhlmann}_{\kappa,0}\) (resp. \(\textsc{Dist}\textsc{SuccinctUhlmann}_{\kappa,0}\)) with average-case error \(6\sqrt{1-\kappa(n)}+\sqrt{\delta(n)}\)._
_Conversely, suppose that a (uniform or nonuniform) quantum algorithm \(M=(M_{x})_{x}\) implements \(\textsc{DistUhlmann}_{\kappa,0}\) (resp. \(\textsc{DistSUCCinctUhlmann}_{\kappa,0}\)) with average-case error \(\delta\). Then for all valid fidelity-\(\kappa(n)\) Uhlmann (resp. Succinct Uhlmann) instances \(x=(1^{n},C,D)\) (resp. \(x=(1^{n},\hat{C},\hat{D})\)), the following holds:_
\[\mathrm{F}\Big{(}(\mathrm{id}\otimes M_{x})(|C\rangle\!\langle C|),|D \rangle\!\langle D|\,\Big{)}\geq\Big{(}1-\delta(n)-5\sqrt{1-\kappa(n)}\Big{)}^ {2}\,.\]
Proof.: We will prove this proposition for the case of Uhlmann instances; the case of succinct Uhlmann instances is entirely analogous. Throughout the proof we abuse notation slightly and write \(\delta=\delta(n)\) and \(\kappa=\kappa(n)\).
We begin with the first part of the proposition. Fix a valid fidelity-\(\kappa(n)\) Uhlmann instance \(x=(1^{n},C,D)\). Let \(\eta=0\) and let \(W\) denote the canonical \(\eta\)-Uhlmann partial isometry corresponding to \((|C\rangle\,,|D\rangle)\). Let \(U=W+(\mathrm{id}-W^{\dagger}W)\), which is a (non-partial) isometry. Let \(\Phi(K)=UKU^{\dagger}\) and note that it is a channel completion of the partial isometry \(W\). (This is in fact the most straightforward channel completion: it simply applies \(W\) on the support of \(W\), and the identity on the orthogonal complement of the support of \(W\).) We will show that
\[\mathrm{td}((\mathrm{id}\otimes\Phi)\,|C\rangle\!\langle C|\,,|D \rangle\!\langle D|)\leq 5\sqrt{1-\kappa}\,. \tag{5.5}\]
Before proving this, let us see how this implies the first part of the proposition. By the triangle inequality, we have
\[\mathrm{td}\Big{(}( \mathrm{id}\otimes M_{x})(|C\rangle\!\langle C|),(\mathrm{id} \otimes\Phi)(|C\rangle\!\langle C|)\Big{)}\] \[\leq\mathrm{td}\Big{(}(\mathrm{id}\otimes M_{x})(|C\rangle\! \langle C|),|D\rangle\!\langle D|\,\Big{)}+\mathrm{td}\Big{(}\,|D\rangle\! \langle D|\,,(\mathrm{id}\otimes\Phi)(|C\rangle\!\langle C|)\Big{)}\] \[\leq\sqrt{1-\kappa+\delta}\,+5\sqrt{1-\kappa}\] \[\leq 6\sqrt{1-\kappa}+\sqrt{\delta}\]
where in the third line we applied the Fuchs-van de Graaf inequality to Equation (5.4) and also used Equation (5.5). This shows that on the state \(|C\rangle\), \(M_{x}\) behaves (approximately) like a channel completion of the Uhlmann partial isometry. By Definition 3.5, this means that \(M_{x}\) (approximately) implements the DistUhlmann problem as claimed in the first part of the proposition.
We now prove Equation (5.5). The main issue we have to deal with is that for \(\kappa<1\), the support of the reduced state of \(|C\rangle\) on the second half of the qubits may not be contained in the support of the Uhlmann partial isometry. As a result, taking \(\Phi\) to be a channel completion of the Uhlmann partial isometry as above, it is _not_ the case that \(\Phi(|C\rangle\!\langle C|)=(\mathrm{id}\otimes W)\,|C\rangle\!\langle C|\,( \mathrm{id}\otimes W^{\dagger})\). (This equation does of course hold for \(\kappa=1\).)
To deal with this issue, we need to consider the state \(|C\rangle\) projected onto the support of the Uhlmann partial isometry. To this end, let \(\Pi=W^{\dagger}W\) denote the projector onto the support of \(W\). Let \(|C^{\prime}\rangle\) denote the (re-normalized) projection of \(|C\rangle\) onto \(\mathrm{id}\otimes\Pi\):
\[|C^{\prime}\rangle\!\langle C^{\prime}|=\frac{(\mathrm{id}\otimes\Pi)\,|C\rangle\!\langle C|\,(\mathrm{id}\otimes\Pi)}{\mathrm{Tr}((\mathrm{id}\otimes\Pi)\,|C\rangle\!\langle C|)}\,.\]
By the Gentle Measurement Lemma [20, Section 9.4], we have
\[\mathrm{td}(|C\rangle\!\langle C|\,,|C^{\prime}\rangle\!\langle C^{\prime}|) \leq 2\sqrt{1-\mathrm{Tr}((\mathrm{id}\otimes\Pi)\,|C\rangle\!\langle C|)}\,. \tag{5.6}\]
Note that since the projection \(\mathrm{id}\otimes\Pi\) is at least \((\mathrm{id}\otimes W^{\dagger})\,|D\rangle\!\langle D|\,(\mathrm{id}\otimes W)\) in the positive semidefinite ordering, we have
\[\mathrm{Tr}((\mathrm{id}\otimes\Pi)\,|C\rangle\!\langle C|)\geq \mathrm{Tr}\Big{(}(\mathrm{id}\otimes W^{\dagger})\,|D\rangle\!\langle D|\,( \mathrm{id}\otimes W)\,|C\rangle\!\langle C|\,\Big{)}=\mathrm{F}(\rho,\sigma)\geq\kappa \tag{5.7}\]
where \(\rho,\sigma\) denote the reduced density matrices of \(|C\rangle\,,|D\rangle\) respectively. Applying the triangle inequality, we have
\[\mathrm{td}((\mathrm{id}\otimes\Phi)\,|C\rangle\!\langle C|\,,|D \rangle\!\langle D|) \leq\mathrm{td}((\mathrm{id}\otimes\Phi)\,|C\rangle\!\langle C|\,, (\mathrm{id}\otimes\Phi)\,|C^{\prime}\rangle\!\langle C^{\prime}|)+\mathrm{td} ((\mathrm{id}\otimes\Phi)\,|C^{\prime}\rangle\!\langle C^{\prime}|\,,|D \rangle\!\langle D|)\] \[\leq\mathrm{td}(|C\rangle\!\langle C|\,,|C^{\prime}\rangle\! \langle C^{\prime}|)+\mathrm{td}((\mathrm{id}\otimes W)\,|C^{\prime}\rangle \!\langle C^{\prime}|\,(\mathrm{id}\otimes W^{\dagger}),|D\rangle\!\langle D|)\] \[\leq 2\sqrt{1-\kappa}+\mathrm{td}((\mathrm{id}\otimes W)\,|C^{ \prime}\rangle\!\langle C^{\prime}|\,(\mathrm{id}\otimes W^{\dagger}),|D \rangle\!\langle D|)\]
where in the second line we used the monotonicity of the trace distance under quantum channels and the fact that \(|C^{\prime}\rangle\) is supported on \(\Pi\), and in the last line we used Equation (5.6) and Equation (5.7). To bound the last term we use the triangle inequality again:
\[\mathrm{td}((\mathrm{id}\otimes W)\,|C^{\prime}\rangle\!\langle C ^{\prime}|\,(\mathrm{id}\otimes W^{\dagger}),|D\rangle\!\langle D|)\] \[\qquad\leq\mathrm{td}(|C\rangle\!\langle C|\,,|C^{\prime}\rangle \!\langle C^{\prime}|)+\mathrm{td}((\mathrm{id}\otimes W)\,|C\rangle\!\langle C |\,(\mathrm{id}\otimes W^{\dagger}),|D\rangle\!\langle D|)\] \[\qquad\leq 3\sqrt{1-\kappa}\]
where in the last line we applied the Fuchs-van de Graaf inequality to \(\mathrm{F}((\mathrm{id}\otimes W)\,|C\rangle\!\langle C|\,(\mathrm{id}\otimes W ^{\dagger}),|D\rangle\!\langle D|)\geq\kappa\). This concludes the proof of Equation (5.5).
We now prove the "Conversely" part of the proposition. Again fix a valid fidelity-\(\kappa(n)\) Uhlmann instance \(x=(1^{n},C,D)\). By Definition 3.5, there exists a channel completion \(\Phi\) of the Uhlmann transformation \(W\) corresponding to the states \(|C\rangle\,,|D\rangle\) such that
\[\mathrm{td}\Big{(}(\mathrm{id}\otimes M_{x})\ |C\rangle\!\langle C|\,,( \mathrm{id}\otimes\Phi)\ |C\rangle\!\langle C|\,\Big{)}\leq\delta\,. \tag{5.8}\]
By the triangle inequality
\[\mathrm{td}\Big{(}(\mathrm{id}\otimes M_{x})(|C\rangle\!\langle C |),|D\rangle\!\langle D|\,\Big{)}\] \[\qquad\leq\mathrm{td}\Big{(}(\mathrm{id}\otimes M_{x})(|C \rangle\!\langle C|),(\mathrm{id}\otimes\Phi)\ |C\rangle\!\langle C|\,\Big{)}+\mathrm{td}\Big{(}(\mathrm{id}\otimes\Phi)\,|C \rangle\!\langle C|\,,(\mathrm{id}\otimes\Phi)\,|C^{\prime}\rangle\!\langle C ^{\prime}|\,\Big{)}\] \[\qquad\qquad\qquad+\mathrm{td}\Big{(}(\mathrm{id}\otimes\Phi)\,|C ^{\prime}\rangle\!\langle C^{\prime}|\,,|D\rangle\!\langle D|\,\Big{)}\,. \tag{5.9}\]
By Equation (5.8), the first term is at most \(\delta\). Using the same argument as above, by the monotonicity of trace distance under quantum channels and the Gentle Measurement Lemma, the second term of Equation (5.9) is at most \(2\sqrt{1-\kappa}\). Similarly, the third term of Equation (5.9) is bounded by \(3\sqrt{1-\kappa}\) as shown above.
Putting everything together, we can upper bound Equation (5.9) by
\[\mathrm{td}\Big{(}(\mathrm{id}\otimes M_{x})(|C\rangle\!\langle C|),|D\rangle\! \langle D|\,\Big{)}\leq\delta+5\sqrt{1-\kappa}\,.\]
Applying the Fuchs-van de Graaf inequality yields the conclusion of the proposition.
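Both directions of the proof lean on the Gentle Measurement Lemma through Equation (5.6). As a quick numerical spot-check (our illustration, with arbitrary small dimensions and a random projector), for a pure state the renormalized projection moves the state by exactly \(\sqrt{1-p}\) in trace distance, comfortably within the \(2\sqrt{1-p}\) bound:

```python
import numpy as np

rng = np.random.default_rng(3)
dA = dB = 4
C = rng.normal(size=dA*dB) + 1j*rng.normal(size=dA*dB)
C /= np.linalg.norm(C)
# a random rank-3 projector Pi on register B
Q, _ = np.linalg.qr(rng.normal(size=(dB, dB)) + 1j*rng.normal(size=(dB, dB)))
Pi = Q[:, :3] @ Q[:, :3].conj().T
P = np.kron(np.eye(dA), Pi)                 # id (x) Pi
p = np.real(np.vdot(C, P @ C))              # success probability of the projection
Cp = (P @ C) / np.sqrt(p)                   # the renormalised state |C'>
td = np.sqrt(1 - abs(np.vdot(C, Cp))**2)    # trace distance between pure states
assert td <= 2*np.sqrt(1 - p)               # the Gentle Measurement bound (5.6)
print(td, 2*np.sqrt(1 - p))                 # here td = sqrt(1-p) exactly
```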
Proposition 5.8 indicates that in the average-case setting and the setting of \(\kappa\) close to \(1\), the \(\eta\) cutoff parameter can be set to \(0\) without loss of generality, as alluded to earlier. This is because Proposition 5.8 shows that solving the distributional versions of Uhlmann or SuccinctUhlmann is equivalent to approximately mapping \(|C\rangle\) to \(|D\rangle\) while acting only on the second half of the qubits. This second statement, however, is clearly robust to small perturbations: if a quantum algorithm \(M\) can approximately map \(|C\rangle\) to \(|D\rangle\), then it can also approximately map \(|C^{\prime}\rangle\) to \(|D^{\prime}\rangle\) where \(|C^{\prime}\rangle\approx|C\rangle\) and \(|D^{\prime}\rangle\approx|D\rangle\). Thus the subtlety discussed at the beginning of the section about the need for a cutoff parameter \(\eta\) does not arise.
Since we mainly deal with solving Uhlmann or SuccinctUhlmann in the average case and in the high \(\kappa\) regime, we will from now omit mention of the \(\eta\) parameter and implicitly assume it is set to \(0\). The only place where we explicitly need the \(\eta\) parameter is in Section 7.3, where we sketch how SuccinctUhlmann\({}_{1,\eta}\) for exponentially small cutoff \(\eta\) is a complete problem for (worst-case) unitaryPSPACE.
Finally, we pose a question about the tightness of Proposition 5.8.
**Open Problem 9**.: Can Proposition 5.8 be improved to give meaningful guarantees when the fidelity parameter \(\kappa\) is bounded away from \(1\)?
Improving it would be helpful when reasoning about Uhlmann transformations for Uhlmann\({}_{\kappa}\) instances with small \(\kappa\) (for an example of this, see Section 8.2).
## 6 Structural Results about the Uhlmann Transformation Problem
Having defined the Uhlmann Transformation Problem and its succinct version as unitary synthesis problems, we now prove some structural results about their complexity. Specifically we show that the distributional Uhlmann Transformation Problem is complete for the zero knowledge unitary complexity class avgUnitarySZK\({}_{\text{HV}}\) defined in Section 4. We also prove a hardness amplification result for the Uhlmann Transformation Problem, which has cryptographic applications as we discuss in Section 8. We then introduce a simple "padding trick" that shows that the complexity of the distributional Uhlmann Transformation Problem is the same for all \(\kappa\) that is polynomially-bounded away from \(0\) or \(1\). As discussed in the previous section, since we are only dealing with the distributional Uhlmann Transformation Problem, we set the cutoff parameter \(\eta\) to \(0\) and omit reference to it.
### Completeness for unitary zero knowledge
In this section we show that DistUhlmann\({}_{1-\text{negl}}\) is complete for the unitary complexity class avgUnitarySZK\({}_{\text{HV}}\) (see Section 4.3 for the definition of this class). What we mean by this is that for every negligible function \(\text{negl}(n)\), the distributional unitary synthesis problem DistUhlmann\({}_{1-\text{negl}}\) is contained in avgUnitarySZK\({}_{\text{HV}}\), and every problem in avgUnitarySZK\({}_{\text{HV}}\) is polynomial-time reducible to DistUhlmann\({}_{1-\text{negl}}\) for _some_ negligible function \(\text{negl}(n)\) (related to the simulation error of the problem).
We first introduce the notation employed throughout this section. A register block \(\mathsf{R}_{[i:j]}\) is an ordered collection of registers, denoted as \(\mathsf{R}_{[i:j]}\coloneqq\mathsf{R}_{i}\mathsf{R}_{i+1}\ldots\mathsf{R}_{j}\), with the size of the collection defined as \(|\mathsf{R}_{[i:j]}|\coloneqq j-i+1\). When the first index is omitted, the collection is taken to start at \(i=1\), so \(\mathsf{R}_{[m]}=\mathsf{R}_{1}\ldots\mathsf{R}_{m}\). For a permutation \(\pi\) on \(|\mathsf{R}_{[0:m]}|\) elements, we use \(P_{\pi}\) to denote the
unitary for a "block permutation" that permutes the registers inside a block in the obvious way, and \(\mathcal{P}_{\pi}(\cdot)=P_{\pi}(\cdot)P_{\pi}^{\dagger}\) the associated channel.
We first show that for all negligible functions \(\mu(n)\), \(\textsc{DistUhlmann}_{1-\mu}\) is contained in \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\) in Proposition 6.1. Then, in Proposition 6.5 we show that \(\textsc{DistUhlmann}_{1-\mathrm{negl}}\) is \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\)-hard, i.e. that any problem in \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\) polynomial-time reduces to \(\textsc{DistUhlmann}_{1-\epsilon}\) for some negligible function \(\epsilon(n)\). In Theorem 6.7, we combine these two statements to conclude that \(\textsc{DistUhlmann}_{1-\mathrm{negl}}\) is complete for \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\).
#### 6.1.1 \(\textsc{DistUhlmann}_{1-\mathrm{negl}}\in\mathsf{avgUnitarySZK}_{\mathrm{HV}}\)
**Proposition 6.1**.: \(\textsc{DistUhlmann}_{1-\mu}\in\mathsf{avgUnitarySZK}_{\mathrm{HV}}\) _for all negligible functions \(\mu(n)\)._
Proof.: Let \(\mu(n)\) be a negligible function. We show that for all polynomials \(q\), \(\textsc{DistUhlmann}_{1-\mu}\in\mathsf{avgUnitarySZK}_{\mathrm{HV},1-\nu,1/2,1/q}\) for \(\nu(n)=32q(n)^{2}\mu(n)\); since \(\nu(n)\) is still negligible, this suffices to show the proposition. For this, we need to design a protocol that satisfies the conditions from Definition 4.3. Consider the following protocol (Protocol 1).
**Protocol 1.**\(\mathsf{avgUnitarySZK}_{\mathrm{HV},1-\nu,\frac{1}{2},1/q}\) **verifier for \(\textsc{DistUhlmann}_{1-\mu}\)**
**Input:** A valid \(\textsc{Uhlmann}_{1-\mu}\) instance \(x=(1^{n},C,D)\), and an \(n\) qubit quantum register \(\mathsf{B}_{0}\).
1. Let \(m=32q(n)^{2}\). Prepare the state \(\bigotimes_{i=1}^{m}|C\rangle_{\mathsf{A}_{i}\mathsf{B}_{i}}\). Select a permutation \(\pi\in S_{m+1}\) uniformly at random, and apply \(\mathcal{P}_{\pi}\) to the register block \(\mathsf{B}_{[0:m]}=\mathsf{B}_{0}\mathsf{B}_{1}\ldots\mathsf{B}_{\mathsf{m}}\). Send the block \(\mathsf{B}_{[0:m]}\) to the prover.
2. The verifier receives register block \(\mathsf{B}_{[0:m]}\) back from the prover. Then:
   1. Apply \(\mathcal{P}_{\pi^{-1}}\) to \(\mathsf{B}_{[0:m]}\).
   2. Apply \((D^{\dagger})^{\otimes m}\) to registers \(\mathsf{AB}_{[m]}\), and measure in the computational basis. If the outcome is the all-0 string for every copy \(i\in[m]\), accept and output the \(\mathsf{B}_{0}\) register. Otherwise, reject.
Note that all registers, \(\mathsf{B}_{i}\) and \(\mathsf{A}_{i}\), used in the protocol have a dependence on the instance \(x\), but as the instance \(x\) is fixed at the beginning of the protocol, we omit explicitly writing this dependence. Protocol 1 describes the actions of the verifier. To satisfy Definition 4.3, we also need to define an honest prover \(P^{*}\), who behaves as follows: let \(\Phi(\cdot)\) be an arbitrary channel completion of the canonical Uhlmann partial isometry for \((C,D)\). Upon receipt of the registers \(\mathsf{B}_{[0:m]}=\mathsf{B}_{0}\ldots\mathsf{B}_{\mathsf{m}}\), the honest prover \(P^{*}\) applies \(\Phi(\cdot)\) to each register \(\mathsf{B}_{i}\) individually and sends back the resulting state.
We will show that Protocol 1 with the honest prover \(P^{*}\) satisfies the three properties from Definition 4.3. Since the proofs are slightly involved, we separate them out into individual lemmas, which we prove below using the same notation and parameter settings introduced here:
1. The honest prover \(P^{*}\) needs to succeed with probability at least \(1-\nu(n)\) (Lemma 6.2).
2. The verifier needs to satisfy the soundness condition of an \(\mathsf{avgUnitaryQIP}_{1-\nu,1/2,1/q}\) protocol (Lemma 6.3).
3. The protocol needs to satisfy the zero-knowledge condition (Lemma 6.4).
Combined, Lemmas 6.2 to 6.4 imply Proposition 6.1.
We now prove the individual lemmas referenced in the proof of Proposition 6.1.
**Lemma 6.2** (avgUnitaryQIP completeness).: _For all valid \(\textsc{Uhlmann}_{1-\mu}\) instances \(x=(1^{n},C,D)\), for sufficiently large \(n\) the honest prover \(P^{\star}\) satisfies_
\[\Pr[V_{x}(|C\rangle_{\mathsf{A_{0}B_{0}}}){\leftrightarrow}P^{\star}\text{ accepts}]\geq 1-\nu(n)\,.\]
Proof.: We want to show that the honest prover, who applies the optimal Uhlmann isometry, is accepted by the verifier with probability at least \(1-\nu\). Let \(W_{\mathsf{B}}\) be the optimal Uhlmann partial isometry for the circuit pair \((C,D)\). Because we are considering an \(\textsc{Uhlmann}_{1-\mu}\) instance we have that
\[1-\mu(n)\leq|\langle D|\left(\operatorname{id}_{\mathsf{A}}\otimes W_{ \mathsf{B}}\right)|C\rangle|^{2}\.\]
The honest prover's action on the product state \(|C\rangle^{\otimes m+1}\) is exactly given by \(W_{\mathsf{B}}^{\otimes m+1}\). Because the fidelity is multiplicative under tensor products, the probability of the verifier accepting is given by
\[\Big{(}|\langle D|\left(\operatorname{id}_{\mathsf{A}}\otimes W_{\mathsf{B}}\right)|C\rangle|^{2}\Big{)}^{m}\geq(1-\mu(n))^{m}\geq 1-m\cdot\mu(n)=1-\nu(n)\,.\]
Additionally, after interacting with the honest prover and conditioned on accepting, the verifier has successfully applied \(W\) to the input state.
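The acceptance bound is easy to check numerically. The sketch below (our illustration, with made-up dimensions and a made-up instance) constructs a high-fidelity instance, applies the optimal Uhlmann unitary on \(\mathsf{B}\), and compares the acceptance probability \(f^{m}\) against \(1-m\mu\):

```python
# Numerical illustration of Lemma 6.2: for an instance of fidelity 1 - mu,
# the honest prover passes all m decoy checks with probability
# f**m >= (1 - mu)**m >= 1 - m*mu.
import numpy as np
rng = np.random.default_rng(1)
dA = dB = 4
C = rng.normal(size=dA*dB) + 1j*rng.normal(size=dA*dB); C /= np.linalg.norm(C)
D = C + 1e-3*(rng.normal(size=dA*dB) + 1j*rng.normal(size=dA*dB))
D /= np.linalg.norm(D)                      # |D> close to |C>, so mu is tiny
M, N = C.reshape(dA, dB), D.reshape(dA, dB)
U, S, Vh = np.linalg.svd(N.conj().T @ M)
W = (Vh.conj().T @ U.conj().T).T            # optimal (eta = 0) Uhlmann unitary on B
f = abs(np.vdot(D, np.kron(np.eye(dA), W) @ C))**2   # per-copy success prob.
m = 32 * 5**2                               # m = 32 q^2 with q = 5
print(f**m, 1 - m*(1 - f))                  # acceptance prob. vs. 1 - m*mu
```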
**Lemma 6.3** (avgUnitaryQIP soundness).: _For all valid \(\textsc{Uhlmann}_{1-\mu}\) instances \(x=(1^{n},C,D)\), for sufficiently large \(n\), for all quantum provers \(P\), there exists a channel completion \(\Phi_{x}\) of \(U_{x}\) such that_
\[\text{if }\quad\Pr[V_{x}(|C\rangle){\leftrightarrow}P\text{ accepts}]\geq\frac{1}{2}\qquad\text{then}\qquad\operatorname{td}(\sigma,( \Phi_{x}\otimes\operatorname{id})\,|C\rangle\!\langle C|)\leq 1/q(n)\,\]
_where \(\sigma\) denotes the output of \(V_{x}(|C\rangle){\leftrightarrow}P\), conditioned on \(V_{x}\) accepting._
Proof.: We argue that soundness holds in three steps. We first show that by applying the block permutation \(P_{\pi}\) and inverting it after the interaction with the prover, the verifier has forced the state after interacting with the prover to be a symmetric state across the registers \(\mathsf{AB}_{[0:m]}\). Second, we show that measuring \(|D\rangle\!\langle D|\) on the \(m\) decoy registers and accepting yields a state close to measuring \(|D\rangle\!\langle D|\) on all \(m+1\) registers. Finally we apply the Gentle Measurement Lemma to show that, conditioned on accepting, the verifier has a state close to the optimal Uhlmann unitary applied to the input state.
We begin by expressing the state of the verifier's registers after interacting with the prover and undoing the permutation in step \(2(a)\). Assume that the verifier's quantum input is the \(\mathsf{B}_{0}\) register of \(|C\rangle_{\mathsf{A_{0}B_{0}}}\) (the distributional input). In the protocol, the verifier will first apply a random permutation on \(\mathsf{B}_{[0:m]}\); then the prover will perform some arbitrary action on \(\mathsf{B}_{[0:m]}\), represented by a quantum channel \(\Lambda_{\mathsf{B}_{[0:m]}}\); and finally the verifier will undo the random permutation from the first step. Treating \(\mathsf{A_{0}}\) as the purification register of the verifier's quantum input, the state of the registers \(\mathsf{B}_{[0:m]}A_{[0:m]}\) after these three steps is given by
\[\rho^{\star}\coloneqq\operatorname*{\mathbb{E}}_{\pi\in S_{m+1}}\Big{(}( \mathcal{P}_{\pi^{-1}})_{\mathsf{B}_{[0:m]}}\circ\Lambda_{\mathsf{B}_{[0:m]}} \circ(\mathcal{P}_{\pi})_{\mathsf{B}_{[0:m]}}\otimes\operatorname{id}_{ \mathsf{A}_{[0:m]}}\Big{)}(|C\rangle\!\langle C|^{\otimes m+1})\,.\]
Note that in addition to permuting the \(\mathsf{B}_{[0:m]}\) registers and then permuting them back, we can extend the permutation to include the \(\mathsf{A}_{[0:m]}\) registers, too. This is because \((\mathcal{P}_{\pi})_{\mathsf{AB}_{[0:m]}}=(\mathcal{P}_{\pi})_{\mathsf{A}_{[0:m] }}\otimes(\mathcal{P}_{\pi})_{\mathsf{B}_{[0:m]}}\), and since \(\Lambda\) does not act on \(\mathsf{A}_{[0:m]}\), the permutations on \(\mathsf{A}_{[0:m]}\) simply cancel. Therefore,
\[\rho^{*}=\mathop{\mathbb{E}}_{\pi\in S_{m+1}}\Big{(}(\mathcal{P}_{\pi^{-1}})_{\mathsf{AB}_{[0:m]}}\circ(\Lambda_{\mathsf{B}_{[0:m]}}\otimes\mathrm{id}_{\mathsf{A}_{[0:m]}})\circ(\mathcal{P}_{\pi})_{\mathsf{AB}_{[0:m]}}\Big{)}(|C\rangle\!\langle C|^{\otimes m+1})\,.\]
This state is clearly permutation-invariant, i.e. \((\mathcal{P}_{\sigma})_{\mathsf{AB}_{[0:m]}}(\rho^{*})=\rho^{*}\) for any permutation \(\sigma\in S_{m+1}\).
In the last step of the protocol, the verifier performs the projective measurement \(\{\Pi^{(0)}=|D\rangle\!\langle D|\,,\Pi^{(1)}=\mathrm{id}-\Pi^{(0)}\}\) on each of the systems in \(\mathsf{AB}_{[m]}\) and accepts if all of them yield outcome \(0\). We define random variables \(X_{0},\ldots,X_{m}\) with the joint distribution
\[\Pr[X_{0}=a_{0},\ldots,X_{m}=a_{m}]=\operatorname{Tr}\Big{(}\big{(}\Pi^{(a_{0})}\otimes\cdots\otimes\Pi^{(a_{m})}\big{)}\rho^{*}\Big{)}\,,\]
i.e. \(X_{i}\) corresponds to the verifier's measurement on the \(i\)-th system. Since we are assuming \(\Pr[V_{x}(|C\rangle){\leftrightarrow}P\text{ accepts}]\geq\frac{1}{2}\), we have that \(\Pr[(X_{1},\ldots,X_{m})=(0,\ldots,0)]\geq 1/2\). Intuitively, the "bad outcome" is that the verifier receives outcome \(0\) for \(X_{1},\ldots,X_{m}\), but if the verifier had measured the \(0\)-th system, he would have received outcome \(1\). We can bound the probability of this happening as
\[\Pr[X_{0}=1|(X_{1},\ldots,X_{m})=(0,\ldots,0)] \leq 2\Pr[X_{0}=1\wedge(X_{1},\ldots,X_{m})=(0,\ldots,0)]\] \[\leq\frac{2}{m+1}\sum_{i=0}^{m}\Pr[X_{i}=1\wedge(X_{j})_{j\neq i }=(0,\ldots,0)]\] \[\leq\frac{2}{m+1}\,. \tag{6.1}\]
For the first inequality, we used the definition of conditional probability and \(\Pr[(X_{1},\ldots,X_{m})=(0,\ldots,0)]\geq 1/2\). For the second inequality, we used the fact that due to the permutation-invariance of \(\rho^{*}\), the random variables \((X_{0},\ldots,X_{m})\) are exchangeable (i.e. their joint distribution is invariant under permutations), and for the last inequality we used that \((X_{i}=1\wedge(X_{j})_{j\neq i}=(0,\ldots,0))\) are disjoint events, so the sum of their probabilities is at most \(1\).
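The counting step behind Equation (6.1) can also be checked empirically. The Monte-Carlo sketch below (our illustration, not part of the proof) simulates a near-worst-case exchangeable strategy: with probability \(1/2\) a single, uniformly random slot is corrupted. The estimated conditional error probability comes out near \(1/(m+2)\), within the \(2/(m+1)\) bound.

```python
# Monte-Carlo sanity check of Equation (6.1):
# Pr[X_0 = 1 | X_1..X_m = 0] <= 2/(m+1) whenever Pr[X_1..X_m = 0] >= 1/2.
import numpy as np
rng = np.random.default_rng(2)
m, N = 8, 200_000
samples = np.zeros((N, m + 1), dtype=int)
cheat = rng.random(N) < 0.5            # corrupt one slot half the time
pos = rng.integers(0, m + 1, size=N)   # uniformly random slot (exchangeable)
samples[np.arange(N)[cheat], pos[cheat]] = 1
decoys_ok = (samples[:, 1:] == 0).all(axis=1)   # Pr = 1/2 + 1/(2(m+1)) >= 1/2
bad = samples[decoys_ok, 0].mean()              # estimates Pr[X_0=1 | accept]
print(bad, 2/(m + 1))                           # bad ~ 1/(m+2) <= 2/(m+1)
```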
Denoting the verifier's output state conditioned on acceptance by \(\sigma\), Equation (6.1) tells us that
\[\mathrm{F}(\sigma,|D\rangle\!\langle D|)=\mathrm{Tr}(|D\rangle\!\langle D|\, \sigma)^{2}\geq\Big{(}1-\frac{2}{m+1}\Big{)}^{2}\geq 1-\frac{4}{m+1}\,.\]
By Fuchs-van de Graaf we have
\[\mathrm{td}(\sigma,|D\rangle\!\langle D|)\leq\sqrt{\frac{4}{m+1}}\.\]
Then Equation (5.5) in the proof of Proposition 5.8 shows that for all channel completions \(\Phi_{x}\) of \(U_{x}\), we have that
\[\mathrm{td}((\Phi_{x}\otimes\mathrm{id})\,|C\rangle\!\langle C|\,,\,|D\rangle \!\langle D|)\leq 5\sqrt{\mu(n)}\]
so therefore
\[\mathrm{td}\Big{(}\sigma,(\Phi_{x}\otimes\mathrm{id})\,|C\rangle\!\langle C| \,\Big{)}\leq\sqrt{\frac{4}{m+1}}+5\sqrt{\mu(n)}\.\]
By the choice of \(m=32q(n)^{2}\), since \(\mu(n)\) is negligible, for sufficiently large \(n\) this is at most \(1/q(n)\) as desired. This completes the proof of Lemma 6.3.
**Lemma 6.4** (\(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\) zero-knowledge).: _There exists a negligible function \(\mathrm{negl}\) and a polynomial-time simulator that, on input \((x,1)\),12 outputs a state \(\rho\) satisfying_
Footnote 12: Note that there is only \(1\) interaction with the prover in the protocol.
\[\mathrm{td}(\rho,\sigma_{x,1})\leq\mathrm{negl}(n)\,,\]
_where \(\sigma_{x,1}\) is the reduced density matrix of \(V_{x}^{*}\) immediately after interacting with the honest prover \(P^{*}\)._
Proof.: The simulator simply outputs the state \(\ket{D}^{\otimes m+1}\). Because the fidelity is multiplicative across product states, the fidelity between the simulator's output and the state of the verifier after interacting with the honest prover is
\[\Big{(}\big{|}\bra{D}\left(\mathrm{id}_{\mathsf{A}}\otimes W_{\mathsf{B}}\right)\ket{C}\big{|}^{2}\Big{)}^{m+1}\geq 1-(m+1)\mu(n)\,.\]
By the standard relationship between trace distance and fidelity, the trace distance between the simulator's output and the state of the verifier after interacting with the prover is at most \(\sqrt{(m+1)\mu(n)}\). Since \(m\) is a polynomial in \(n\), \(\sqrt{(m+1)\mu(n)}\) is also a negligible function of \(n\), so the simulator satisfies the definition of \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\).
#### 6.1.2 \(\mathsf{DistUhlmann}_{1-\mathrm{negl}}\) is \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\)-hard
Now we show that all problems in \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\) reduce to \(\mathsf{DistUhlmann}_{1-\nu}\) for _some_ negligible \(\nu(n)\) (which depends on the \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\)-problem). We highlight the annoying fact that it is not known if there is a single negligible function \(\nu^{*}(n)\) such that \(\mathsf{DistUhlmann}_{1-\nu^{*}}\) is hard for \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\). In particular, the value of \(\nu\) will depend on the error to which the simulator can prepare the verifier's state.
**Open Problem 10**.: Does there exist a negligible function \(\mu\) such that every distributional unitary synthesis problem in \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\) has a protocol, (\(V\), \(P^{*}\), \(\mathrm{Sim}\)), such that \(\mathrm{Sim}\) prepares the verifier's state to within trace distance error \(\mu\)?
If the above problem were answered in the affirmative, it would directly imply that there is a single distributional unitary synthesis problem that is complete for \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\). Note that the number of rounds in the algorithm does not matter because the Padding Trick (Section 6.3) can be used to transform a protocol with simulator error \(\delta\) into one with simulator error \(\delta/p\) for any polynomial \(p\). This question is tightly related to another, broader difference between \(\mathsf{SZK}_{\mathrm{HV}}\) and \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\), which will be discussed in more detail in Section 6.4.
**Proposition 6.5**.: _Let \((\mathscr{U}=(U_{x})_{x},\Psi=(\ket{\psi_{x}})_{x})\) be a distributional unitary synthesis problem in \(\mathsf{avgUnitarySZK}_{\mathrm{HV}}\). Then there exists a negligible function \(\nu\) such that \((\mathscr{U},\Psi)\) polynomial-time reduces to \(\mathsf{DistUhlmann}_{1-\nu}\)._
Proof.: By Definition 4.3, for all polynomials \(q\) there exists a negligible function \(\mu\) such that \((\mathscr{U},\Psi)\in\mathsf{avgUnitarySZK}_{\mathrm{HV},1-\mu,1/2,1/q}\). Let \(V^{*}=(V_{x}^{*})_{x}\) be the honest, \(r\)-round \(\mathsf{avgUnitarySZK}_{\mathrm{HV},1-\mu,1/2,1/q}\) verifier for \((\mathscr{U},\Psi)\), and let \((V_{x,i})_{i=1}^{r+1}\) be the unitaries that this verifier applies throughout the protocol, where \(V_{x,i}\) is the unitary applied in the \(i\)-th round. \(V_{x,i}\) acts on a workspace register \(\mathsf{F}_{i-1}\) and a prover message \(\mathsf{Q}_{i-1}\), and outputs a pair of registers \(\mathsf{F}_{i}\mathsf{Q}_{i}\). Additionally, \(V_{x,1}\) takes in a quantum input in register \(\mathsf{A}\), and an ancilla register \(\mathsf{R}\). See Figure 2 for an image describing how the verifier and prover interact.
Let \(\mathrm{Sim}\) be the zero-knowledge simulator for \(V^{*}\), and let \(\epsilon\) be the negligible function such that \(\mathrm{Sim}\), when run on input \((x,i)\), outputs a state within \(\epsilon(|x|)\) of the reduced density matrix of \(V^{*}\) immediately after the \(i\)-th round of interaction with the honest prover. Since \(\mathrm{Sim}\) is a polynomial time quantum algorithm, for all \(x,i\) there exists a polynomial time unitary circuit \(\mathrm{Sim}_{x,i}\) that implements \(\mathrm{Sim}(x,i)\) (i.e. in \(\mathrm{Sim}_{x,i}\), the input \(x,i\) is hard-coded). Since the circuit \(\mathrm{Sim}_{x,i}\) implements a unitary while \(\mathrm{Sim}(x,i)\) might perform measurements and trace out registers, we need to assume that \(\mathrm{Sim}_{x,i}\) might require an additional (private) register \(\mathsf{P}\) that is traced out by \(\mathrm{Sim}(x,i)\). In this section, we abuse notation and interchange the unitary implemented by \(\mathrm{Sim}_{x,i}\) with the explicit circuit description of \(\mathrm{Sim}_{x,i}\) wherever it is clear from context which one is intended. We emphasize that \(\mathrm{Sim}_{x,i}\) is a fixed quantum circuit that acts only on \(|0\rangle\), and produces a purification of the state that \(\mathrm{Sim}\) would produce when run on input \((x,i)\). Every \(\mathrm{Sim}_{x,i}\) acts on registers \(\mathsf{F}_{i}\mathsf{Q}_{i}\mathsf{P}\mathsf{B}\), where \(\mathsf{F}_{i}\) and \(\mathsf{Q}_{i}\) are the verifiers registers, \(\mathsf{B}\) is the purification register for the initial input (initially in \(\mathsf{AB}\)), and \(\mathsf{P}\) is a purification register for the simulator, as explained before. Since \(\mathrm{Sim}\) produces the state _after_ the interaction, we need to define an additional circuit that prepares the initial state of the system, which we will call \(\mathrm{Sim}_{x,0}\). Since the initial state is in \(\mathsf{stateBQP}\), there is a polynomial-time circuit such that \((\mathrm{Sim}_{x,0})_{\mathsf{AB}}\otimes\mathrm{id}_{\mathsf{R}}\,|0\rangle _{\mathsf{ABR}}=|\psi_{x}\rangle\otimes|0\rangle_{\mathsf{R}}\), which prepares the initial state of the verifier when run on \(|0\rangle_{\mathsf{ABR}}\) (recall that the initial state of the system includes an ancilla register \(\mathsf{R}\) for the verifier). We relabel the registers \(\mathsf{AR}\) to be \(\mathsf{F}_{\mathsf{0}}\mathsf{Q}_{\mathsf{0}}\) so that \(\mathrm{Sim}_{x,0}\) follows the pattern of the other \(\mathrm{Sim}_{x,i}\). Assume that every \(\mathrm{Sim}_{x,i}\) uses a private register of the same size, and that the private register has polynomial in \(|x|\) many qubits, which we can achieve by padding every \(\mathrm{Sim}_{x,i}\) with extra ancilla qubits.
We now define a quantum query circuit making exactly \(r\) calls to a \(\textsc{DistUhlmann}_{1-2\epsilon^{2}}\) oracle. The classical label for the \(i\)-th Uhlmann oracle will be (an explicit classical description of) the following pair \((A_{i},B_{i})\) of polynomial-time quantum circuits.
\[A_{i} =(V_{x,i})_{\mathsf{F}_{i-1}\mathsf{Q}_{i-1}}\circ(\mathrm{Sim}_ {x,i-1})_{\mathsf{F}_{i-1}\mathsf{Q}_{i-1}\mathsf{P}\mathsf{B}}\] \[B_{i} =(\mathrm{Sim}_{x,i})_{\mathsf{F}_{i}\mathsf{Q}_{i}\mathsf{P} \mathsf{B}}\]
Both of these are polynomial time quantum circuits. \(A_{i}\) is a unitary that, when applied to \(|0\rangle_{\mathsf{F}_{i-1}\mathsf{Q}_{i-1}\mathsf{P}\mathsf{B}}\), prepares a purification of the verifier's state immediately before the \(i\)-th round of interaction. \(B_{i}\) is a unitary that, when applied to \(|0\rangle_{\mathsf{F}_{i}\mathsf{Q}_{i}\mathsf{P}\mathsf{B}}\), prepares the verifier's state after the \(i\)-th round of interaction with the prover. We first show that \((A_{i},B_{i})\) is a valid \(\textsc{Uhlmann}_{1-2\epsilon^{2}}\) instance. Let \(\Phi_{i}\) be the channel representing the honest prover in the \(i\)-th interaction acting on \(\mathsf{Q}_{i}\). Let \(\sigma_{x,i}\) be the reduced state of the verifier registers (and hidden input register \(\mathsf{B}\)) immediately after the \(i\)-th interaction with the honest prover. By the definition of \(\mathrm{Sim}\), we have that
\[\mathrm{td}(\operatorname{Tr}_{\mathsf{P}}(|A_{i}\rangle\!\langle A_{i}|),V_{x,i} \sigma_{x,i-1}V_{x,i}^{\dagger}) \leq\epsilon(|x|)\text{ and} \tag{6.2}\] \[\mathrm{td}(\operatorname{Tr}_{\mathsf{P}}(|B_{i}\rangle\!\langle B_{i}|),\sigma_{x, i}) \leq\epsilon(|x|). \tag{6.3}\]
We also have that the state of the verifier after the \(i\)-th round of interaction can be obtained by applying the verifier's unitary \(V_{x,i}\) and the prover's channel \(\Phi_{i}\) to the state after the \((i-1)\)-th round; formally,
\[((\Phi_{i})_{\mathsf{Q}_{i}}\otimes\mathrm{id})(V_{x,i}\sigma_{x,i-1}V_{x,i}^ {\dagger})=\sigma_{x,i}.\]
Fix an \(i\), and let \(\rho_{A}\) and \(\rho_{B}\) be the reduced states of \(|A_{i}\rangle\!\langle A_{i}|\) and \(|B_{i}\rangle\!\langle B_{i}|\) on \(\mathsf{F}_{i}\mathsf{B}\). We have that
\[F(\rho_{A},\rho_{B}) \geq F(((\Phi_{i})_{\mathsf{Q}_{i}}\otimes I_{\mathsf{F}_{i} \mathsf{B}})\left(\operatorname{Tr}_{\mathsf{P}}(|A_{i}\rangle\!\langle A_{i}| )\right),\operatorname{Tr}_{\mathsf{P}}(|B_{i}\rangle\!\langle B_{i}|))\] \[\geq 1-\operatorname{td}(((\Phi_{i})_{\mathsf{Q}_{i}}\otimes I_{ \mathsf{F}_{i}\mathsf{B}})\left(\operatorname{Tr}_{\mathsf{P}}(|A_{i}\rangle \!\langle A_{i}|)\right)),\operatorname{Tr}_{\mathsf{P}}(|B_{i}\rangle\! \langle B_{i}|))^{2}\] \[\geq 1-\operatorname{td}(((\Phi_{i})_{\mathsf{Q}_{i}}\otimes I_{ \mathsf{F}_{i}\mathsf{B}})\left(V_{x,i}\sigma_{x,i-1}V_{x,i}^{\dagger}\right),\operatorname{Tr}_{\mathsf{P}}(|B_{i}\rangle\!\langle B_{i}|))^{2}-\epsilon ^{2}(|x|)\] \[\geq 1-\operatorname{td}(((\Phi_{i})_{\mathsf{Q}_{i}}\otimes I_{ \mathsf{F}_{i}\mathsf{B}})\left(V_{x,i}\sigma_{x,i-1}V_{x,i}^{\dagger}\right),\sigma_{x,i})^{2}-2\epsilon(|x|)^{2}\] \[=1-2\epsilon^{2}(|x|).\]
Here the first line holds because the states on the right are extensions of the states \(\rho_{A}\) and \(\rho_{B}\). Because \(\Phi_{i}\) acts only on \(\mathsf{Q}_{i}\), the reduced state of the left-hand state on \(\mathsf{F}_{i}\mathsf{B}\) is the same as that of \(|A_{i}\rangle\!\langle A_{i}|\). The subsequent lines follow because the trace distance obeys the triangle inequality and contracts under trace-preserving channels, using the inequalities from Equations (6.2) and (6.3). Note that this means the Uhlmann unitary acts on \(\mathsf{Q}_{i}\) _and_ \(\mathsf{P}\), since we only showed that the reduced states on \(\mathsf{F}_{i}\mathsf{B}\) have high fidelity with each other. Now consider the following \(\mathsf{avgUnitaryBQP}^{\textsc{DistUhlmann}_{1-2\epsilon^{2}}}\) query algorithm for \((\mathscr{U},\Psi)\).
**Algorithm 1**.: \(\mathsf{avgUnitaryBQP}^{\textsc{DistUhlmann}_{1-2\epsilon^{2}}}_{3/q(n)}\) **query algorithm for \((\mathscr{U},\Psi)\)**
**Input:** Classical string \(x\) specifying \(U_{x}\) and quantum register \(\mathsf{A}\).
1. Initialize \(i\gets 1\), register \(\mathsf{R}\leftarrow|0\rangle\!\langle 0|\), and relabel \(\mathsf{F}_{0}\mathsf{Q}_{0}\leftarrow\mathsf{AR}\). While \(i\leq r\):
   1. Run \(V_{x,i}\) on \(\mathsf{F}_{i-1}\mathsf{Q}_{i-1}\) to get a state on \(\mathsf{F}_{i}\mathsf{Q}_{i}\); if \(V_{x,i}\) rejects, abort and output \(|0\rangle_{\mathsf{A}}\).
   2. Call the \(\textsc{DistUhlmann}_{1-2\epsilon^{2}}\) oracle on the instance corresponding to an explicit circuit representation of \((A_{i},B_{i})\), and quantum register \(\mathsf{Q}_{i}\mathsf{P}\).
   3. \(i\gets i+1\).
2. Run \(V_{x,r+1}\) on \(\mathsf{F}_{r}\mathsf{Q}_{r}\) to get a state on \(\mathsf{AR}\).
3. Output register \(\mathsf{A}\).
In order to show that \((\mathscr{U},\Psi)\) polynomial-time reduces to \(\textsc{DistUhlmann}_{1-2\epsilon^{2}}\), we need to show that for all polynomials \(q\), there exists another polynomial \(p\) such that all \(1/p\)-error average case instantiations of Algorithm 1 with \(\textsc{DistUhlmann}_{1-2\epsilon^{2}}\) implement \((\mathscr{U},\Psi)\).
**Claim 6.6**.: _Fix a polynomial \(q\), and let \(p(n)=rq(n)\) (where \(r\) is the number of rounds as before). Then all \(1/p\)-error average case instantiations of Algorithm 1 with \(\textsc{DistUhlmann}_{1-2\epsilon^{2}}\) implement \((\mathscr{U},\Psi)\) to average case error \(3/q(n)\)._
Proof.: We first show by induction that for every \(i\leq r\), the input to the \(i\)-th \(\textsc{DistUhlmann}_{1-2\epsilon^{2}}\) oracle call is at most \((i-1)\cdot(1/p(n)+\epsilon\sqrt{2})\) in trace distance from the "correct" distributional state \(|A_{i}\rangle\coloneqq A_{i}\,|0\rangle\) for which the guarantee of the \(\textsc{DistUhlmann}\) holds.
The input to the first call to \(\textsc{DistUhlmann}_{1-2\epsilon^{2}}\) is exactly \(|A_{1}\rangle=(V_{x,1})_{\mathsf{A}}\,|\psi_{x}\rangle_{\mathsf{AB}}\), so the trace distance error before the first call is 0.
Now assume that the claim is true up to the \(i\)-th call to \(\textsc{DistUhlmann}_{1-2\epsilon^{2}}\). Let \(\rho_{i}\) be the input to the \(i\)-th call to \(\textsc{DistUhlmann}_{1-2\epsilon^{2}}\). Let \(\Phi_{(A_{i},B_{i})}\) be the channel that the \(i\)-th call to the \(\textsc{DistUhlmann}_{1-2\epsilon^{2}}\) oracle implements, and let \(W_{i}\) be the optimal Uhlmann unitary for instance \((A_{i},B_{i})\). By assumption we have that
\[\mathrm{td}(\Phi_{(A_{i},B_{i})}(\rho_{i}),|B_{i}\rangle\!\langle B _{i}|) \leq\mathrm{td}(\Phi_{(A_{i},B_{i})}(\rho_{i}),\Phi_{(A_{i},B_{i} )}(|A_{i}\rangle\!\langle A_{i}|)+\mathrm{td}(\Phi_{(A_{i},B_{i})}(|A_{i} \rangle\!\langle A_{i}|),|B_{i}\rangle\!\langle B_{i}|)\] \[\leq(i-1)(1/p(n)+\epsilon\sqrt{2})+\mathrm{td}(\Phi_{(A_{i},B_{i })}(A_{i}\,|0\rangle\!\langle 0|\,A_{i}^{\dagger}),B_{i}\,|0\rangle\!\langle 0|\,B_{i}^{ \dagger})\] \[\leq(i-1)(1/p(n)+\epsilon\sqrt{2})+1/p(n)+\mathrm{td}(W_{i}\,|A_ {i}\rangle\!\langle A_{i}|\,W_{i}^{\dagger},|B_{i}\rangle\!\langle B_{i}|)\] \[\leq i(1/p(n)+\epsilon\sqrt{2})\]
Here we first apply the induction hypothesis and the fact that quantum channels decrease trace distance. Then we use the fact that \(\Phi_{(A_{i},B_{i})}\) is a \(1/p(n)\)-error average case solver. Finally we use the fact that \((A_{i},B_{i})\) is a valid \(\textsc{DistUhlmann}_{1-2\epsilon^{2}}\) instance, so the states \(W_{i}\,|A_{i}\rangle\) and \(|B_{i}\rangle\) are within \(\epsilon\sqrt{2}\) in trace distance. The state that the query algorithm gives as input to the oracle is
\[V_{x,i+1}(\Phi_{(A_{i},B_{i})}(\rho_{i}))\,,\]
which is within \(i(1/p(n)+\epsilon\sqrt{2})\) trace distance of \(|A_{i+1}\rangle=V_{x,i+1}\,|B_{i}\rangle\) because unitaries preserve trace distance. By induction, for all \(i\), the input to the \(i\)-th oracle call in the protocol is within \((i-1)\cdot(1/p(n)+\epsilon\sqrt{2})\) of \(|A_{i}\rangle\) in trace distance. Following the same inequalities, the _output_ of the final oracle call satisfies
\[\mathrm{td}(\Phi_{(A_{r},B_{r})}(\rho_{r}),|B_{r}\rangle\!\langle B_{r}|)\leq r (1/p(n)+\epsilon\sqrt{2})\,.\]
Let \(\sigma_{x,r}\) be the state of the verifier after the final interaction with the honest prover. Then by the definition of the simulator, we have that
\[\mathrm{td}(\Phi_{(A_{r},B_{r})}(\rho_{r}),\sigma_{x,r})\leq r/p(n)+(r+1) \epsilon\sqrt{2}.\]
Figure 2: An avgUnitarySZK protocol with \(r\) rounds. The prover receives the \(\mathsf{A}\) register of \(|\psi\rangle_{\mathsf{AB}}\). Every round of interaction consists of the verifier applying \(V_{x,i}\) to \(\mathsf{F}_{i-1}\mathsf{Q}_{i-1}\) to get \(\mathsf{F}_{i}\mathsf{Q}_{i}\) and then exchanging \(\mathsf{Q}_{i}\) with the prover. The first and final rounds are special. In the first round, the verifier takes in \(\mathsf{A}\) and a workspace \(\mathsf{R}\), and in the final round the verifier either accepts or rejects, and outputs a register. Sim can be used to generate the state after every interaction with the prover.
By the definition of the honest prover, there exists a negligible function \(\mu\) such that the honest prover is accepted with probability \(1-\mu\), and conditioned on accepting the verifier outputs a state within \(1/q(n)\) of \(U_{x}\ket{\psi_{x}}\!\bra{\psi_{x}}U_{x}^{\dagger}\) in trace distance. Thus we have that
\[\operatorname{td}(V_{x,r+1}\sigma_{x,r}V_{x,r+1}^{\dagger},U_{x}\ket{\psi_{x}} \!\bra{\psi_{x}}U_{x}^{\dagger})\leq\mu+1/q.\]
Combining everything we have that
\[\operatorname{td}(V_{x,r+1}\Phi_{(A_{r},B_{r})}(\rho_{r})V_{x,r+1} ^{\dagger},U_{x}\ket{\psi_{x}}\!\bra{\psi_{x}}U_{x}^{\dagger})\] \[\qquad\leq\operatorname{td}(V_{x,r+1}\sigma_{x,r}V_{x,r+1}^{ \dagger},U_{x}\ket{\psi_{x}}\!\bra{\psi_{x}}U_{x}^{\dagger})+r(1/p(n)+\epsilon \sqrt{2})\] \[\qquad\leq 1/q+\mu+r(1/p+\epsilon\sqrt{2})\] \[\qquad\leq 2/q+r\epsilon\sqrt{2}+\mu\] \[\qquad\leq 3/q.\]
Here we use the fact that \(p=rq\), and since \(\epsilon\) and \(\mu\) are negligible, for sufficiently large \(n\), \(r\epsilon\sqrt{2}+\mu\leq 1/q\).
Because \((\mathscr{U},\Psi)\in\mathsf{avgUnitarySZK}_{\mathrm{HV}}\), there exists a negligible function \(\epsilon\) such that for all polynomials \(q^{\prime}\), there exists a verifier that implements \((\mathscr{U},\Psi)\) with average case error \(1/(3q^{\prime})\), and a simulator that makes simulation error \(\epsilon\). Thus for all polynomials \(q^{\prime}\), there exists a polynomial time quantum query algorithm, specified by Algorithm 1 when \(V\) is taken to have average case error \(1/(3q^{\prime})\), and another polynomial \(p=rq^{\prime}\), that achieves average case error \(1/q^{\prime}\) when instantiated with \(1/p\)-error average case instantiations of \(\textsc{DistUhlmann}_{1-2\epsilon^{2}}\). In other words, \((\mathscr{U},\Psi)\) polynomial-time reduces to \(\textsc{DistUhlmann}_{1-2\epsilon^{2}}\).
We summarise the results of Proposition 6.1 and Proposition 6.5 in the following theorem, which shows that \(\textsc{DistUhlmann}_{1-\mathrm{negl}}\) is "almost complete" for \(\mathsf{avgUnitarySZK}\) up to the aforementioned issue that we cannot find a single negligible function \(\nu\) such that \(\mathsf{avgUnitarySZK}\) reduces to \(\textsc{DistUhlmann}_{1-\nu}\).
**Theorem 6.7**.: _For all negligible functions \(\mu\), \(\textsc{DistUhlmann}_{1-\mu}\in\mathsf{avgUnitarySZK}_{\mathrm{HV}}\), and for every distributional unitary synthesis problem \((\mathscr{U},\Psi)\in\mathsf{avgUnitarySZK}_{\mathrm{HV}}\), there exists a negligible function \(\nu\) such that \((\mathscr{U},\Psi)\) reduces to \(\textsc{DistUhlmann}_{1-\nu}\)._
### Hardness amplification
In this section, we prove a hardness amplification result for the Uhlmann Transformation Problem, which roughly states that if it is hard to implement DistUhlmann in polynomial time (i.e., \(\textsc{DistUhlmann}\notin\mathsf{avgUnitaryBQP}\)), then in fact it is hard to implement DistUhlmann even with large average case error approaching \(1\). This hardness amplification statement has applications to amplifying the security of quantum commitment schemes as we show in Section 8.
**Theorem 6.8**.: _The following two statements are equivalent:_
1. _For all negligible functions_ \(\epsilon(n)\)_,_ \(\textsc{DistUhlmann}_{1-\epsilon}\qquad\in\qquad\mathsf{avgUnitaryBQP}\) _(resp._ \(\mathsf{avgUnitaryBQP}/\mathsf{poly}\)_)._
2. _For all negligible functions_ \(\epsilon(n)\)_,_ \(\textsc{DistUhlmann}_{1-\epsilon}\in\mathsf{avgUnitaryBQP}_{1-\xi}\) _(resp._ \(\mathsf{avgUnitaryBQP}/\mathsf{poly}_{1-\xi}\)_), where_ \(\xi(n)=n^{-1/16}\)_._
While Theorem 6.8 will be useful, it is not the strongest possible amplification statement one would hope to prove: one could hope to show that instead of just suppressing the error to an inverse polynomial, it can actually be suppressed to an inverse exponential. We call this "strong amplification" and leave it as an open problem:
**Open Problem 11**.: Can strong amplification be proved? In other words, does solving the Uhlmann transformation problem with inverse polynomial error imply being able to solve it with inverse exponential error?
Strong amplification for Uhlmann would also have ramifications for the question of whether quantum commitments with weak security can be boosted to commitments with strong security (see Conjecture 8.9). This would be of independent interest for quantum cryptography.
We first give an overview of the proof of Theorem 6.8. Recall from Definition 3.10 that \(\mathsf{avgUnitaryBQP}_{\delta}\) denotes the class of distributional unitary synthesis problems that can be implemented with average case error \(\delta\), and that \(\textsc{DistUhlmann}_{1-\epsilon}\in\mathsf{avgUnitaryBQP}\) means that for all inverse polynomials \(\delta\), \(\textsc{DistUhlmann}_{1-\epsilon}\in\mathsf{avgUnitaryBQP}_{\delta}\).
The "only if" direction of the theorem is immediate: by definition, for \(0\leq\delta\leq\delta^{\prime}\leq 1\) we have \(\mathsf{avgUnitaryBQP}_{\delta}\subseteq\mathsf{avgUnitaryBQP}_{\delta^{ \prime}}\).
For the "if" direction, the idea is to reduce implementing the Uhlmann transformation \(U_{x}\) corresponding to a valid Uhlmann instance \(x=(1^{n},C,D)\) to the task of implementing the Uhlmann transformation for the _parallel repetition_ of the instance, which we denote by \(x^{\otimes k}=(1^{nk},C^{\otimes k},D^{\otimes k})\) for some integer \(k\). If the circuits \(C,D\) acted on registers \(\mathsf{AB}\), then the circuits \(C^{\otimes k},D^{\otimes k}\) act on \(k\) copies denoted by \(\mathsf{A}_{1}\mathsf{B}_{1},\ldots,\mathsf{A}_{k},\mathsf{B}_{k}\), and output the states \(\left|C\right\rangle^{\otimes k},\left|D\right\rangle^{\otimes k}\). What we show is that being able to implement the repeated Uhlmann transformation with error very close to \(1\) can be turned into a way of implementing the original Uhlmann transformation \(U_{x}\) with very small error. Put another way, if it was hard to implement the original Uhlmann transformation \(U_{x}\) almost exactly, then it is still hard to implement the repeated Uhlmann transformation even approximately. We abstract this reduction out in the following Lemma:
**Lemma 6.9**.: _Let \(C,D\) be unitary circuits such that the states \(\left|C\right\rangle\coloneqq C\left|0\ldots 0\right\rangle,\left|D\right\rangle \coloneqq D\left|0\ldots 0\right\rangle\) are bipartite states on registers \(\mathsf{AB}\). Let \(k\in\mathbb{N}\) and let \(\left|C\right\rangle^{\otimes k},\left|D\right\rangle^{\otimes k}\) be states on registers \(\mathsf{A}_{[k]}\) and \(\mathsf{B}_{[k]}\) respectively. Suppose there is a quantum circuit \(R\) acting on register \(\mathsf{B}_{[k]}\) such that_
\[\mathrm{F}\Big{(}(\mathrm{id}\otimes R)(\left|C\right\rangle\!\!\left\langle C \right|^{\otimes k}),\,\left|D\right\rangle\!\!\left\langle D\right|^{\otimes k }\Big{)}\geq\nu\.\]
_Then for all \(T\in\mathbb{N}\) there exists a quantum circuit \(M\) which acts on register \(\mathsf{B}\) such that_
\[\mathrm{F}\Big{(}(\mathrm{id}\otimes M)(\left|C\right\rangle\!\!\left\langle C \right|),\,\left|D\right\rangle\!\!\left\langle D\right|\Big{)}\geq 1-\Big{(}2(1- \nu)^{T}+\frac{32T}{\sqrt{k}}\Big{)}\,.\]
_and the size of \(M\) is at most \(\mathrm{poly}(T,\left|R\right|,\left|C\right|,\left|D\right|)\) where \(\left|R\right|,\left|C\right|,\left|D\right|\) denote the sizes of circuits \(R,C,D\). Furthermore, if \(R\) is an instance of a uniformly generated quantum algorithm, so is \(M\)._
Before proving Lemma 6.9, we first show how it implies Theorem 6.8.
Proof of Theorem 6.8.: We present the proof for the uniform class \(\mathsf{avgUnitaryBQP}\); the proof for the non-uniform class \(\mathsf{avgUnitaryBQP}/\mathsf{poly}\) is entirely analogous.
As mentioned the "only if" direction is trivial, and we focus on the "if" direction. That is, we assume that \(\textsc{DistUhlmann}_{1-\mu}\in\mathsf{avgUnitaryBQP}_{1-\xi}\) for all negligible functions \(\mu(n)\), where \(\xi(n)=n^{-1/16}\), and we aim to show that \(\textsc{DistUhlmann}_{1-\mu}\in\mathsf{avgUnitaryBQP}\) for all negligible functions \(\mu(n)\).
Fix a negligible function \(\epsilon(n)\) and a polynomial \(q(n)\). Define the functions
\[k(n)\coloneqq(n\,q(n))^{8}\qquad\delta(n)\coloneqq k(n)\epsilon(n)\qquad T(n )=2q(n)/\xi(nk(n))^{2}\.\]
Note that \(\delta(n)\) is also a negligible function, and since \((1-\epsilon(n))^{k(n)}\geq 1-k(n)\epsilon(n)=1-\delta(n)\), every valid \(\textsc{Uhlmann}_{(1-\epsilon)^{k}}\) instance is also a valid \(\textsc{Uhlmann}_{1-\delta}\) instance. Therefore, by the assumption that \(\textsc{DistUhlmann}_{1-\delta}\in\mathsf{avgUnitaryBQP}_{1-\xi}\), there exists a uniform polynomial-time algorithm \(R=(R_{x})_{x}\) that implements \(\textsc{DistUhlmann}_{1-\delta}\) with average-case error \(1-\xi\).
Fix a \(\textsc{Uhlmann}_{1-\epsilon}\) instance \(x=(1^{n},C,D)\), and let \(k=k(n),\delta=\delta(n),T=T(n)\). We write \(x^{k}\) to denote the parallel repeated instance \((1^{nk},C^{\otimes k},D^{\otimes k})\). Note \(x^{k}\) is a valid \(\textsc{Uhlmann}_{(1-\epsilon)^{k}}\) instance. By the second part of Proposition 5.8, it holds that
\[\mathrm{F}\Big{(}(\mathrm{id}\otimes R_{x^{k}})(|C\rangle\!\langle C|^{ \otimes k}),|D\rangle\!\langle D|^{\otimes k}\,\Big{)}\geq\Big{(}\xi(nk)-5 \sqrt{\delta(nk)}\Big{)}^{2}\geq\xi(nk)^{2}-10\sqrt{\delta(nk)}. \tag{6.4}\]
Define \(\nu=\xi(nk)^{2}-10\sqrt{\delta(nk)}\). Since \(\delta\) is a negligible function, for sufficiently large \(n\) the quantity \(\nu\) is lower bounded by \(\xi(nk)^{2}/2\). We now invoke Lemma 6.9: there exists a polynomial-time quantum algorithm \(M=(M_{x})_{x}\) such that for all valid \(\textsc{Uhlmann}_{1-\epsilon}\) instances \(x=(1^{n},C,D)\),
\[\mathrm{F}\Big{(}(\mathrm{id}\otimes M_{x})(|C\rangle\!\langle C|),\,|D \rangle\!\langle D|\,\Big{)}\geq 1-\Big{(}2(1-\nu(n))^{T(n)}+\frac{32T(n)}{ \sqrt{k(n)}}\Big{)}\.\]
By our choice of \(k,\nu,T\), we get
\[2(1-\nu)^{T}+\frac{32T}{\sqrt{k}} \leq 2(1-\xi(nk)^{2}/2)^{2q(n)/\xi(nk)^{2}}+\frac{64q(n)}{n^{4}\, \xi(nk)^{2}\,q(n)^{4}}\] \[\leq 2e^{-q(n)}+\frac{64\,(nk)^{1/8}}{n^{4}\,q(n)^{3}}\leq O \Big{(}\frac{1}{q(n)^{2}}\Big{)}\]
where in the second line we used the assumption that \(1/\xi(nk)\leq(nk)^{1/16}\). Thus we have argued that
\[\mathrm{F}((\mathrm{id}\otimes M_{x})\,|C\rangle\!\langle C|\,,|D\rangle\! \langle D|)\geq 1-O(1/q(n)^{2})\geq 1-\epsilon(n)-O(1/q(n)^{2})\.\]
Using the first part of Proposition 5.8, we get that the algorithm \(M_{x}\) implements \(\textsc{DistUhlmann}_{1-\epsilon}\) with average-case error \(O(\sqrt{\epsilon(n)})+O(1/q(n))\leq O(1/q(n))\) for sufficiently large \(n\). Since this is true for all polynomials \(q(n)\) and all \(\textsc{Uhlmann}_{1-\epsilon}\) instances \(x=(1^{n},C,D)\), this establishes that \(\textsc{DistUhlmann}_{1-\epsilon}\in\mathsf{avgUnitaryBQP}\), as desired.
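To get a feel for these parameter choices, the bound can be evaluated numerically. The following Python sketch (ours, purely illustrative) plugs \(k=(nq)^{8}\), \(\xi(m)=m^{-1/16}\), \(T=2q/\xi(nk)^{2}\), and the lower bound \(\nu\geq\xi(nk)^{2}/2\) into \(2(1-\nu)^{T}+32T/\sqrt{k}\); the first term is essentially \(2e^{-q}\) and the second vanishes polynomially in \(n\), so the total indeed falls below \(1/q^{2}\) already for moderate \(n\).

```python
import math

def error_bound(n: int, q: int) -> float:
    """Evaluate 2(1-nu)^T + 32T/sqrt(k) under the parameter choices in the
    proof above, with xi(m) = m**(-1/16) and nu lower bounded by xi(nk)^2/2."""
    k = (n * q) ** 8                  # k(n) = (n q(n))^8
    xi_nk_sq = (n * k) ** (-1 / 8)    # xi(nk)^2 for xi(m) = m^(-1/16)
    T = 2 * q / xi_nk_sq              # T(n) = 2 q(n) / xi(nk)^2
    nu = xi_nk_sq / 2                 # nu >= xi(nk)^2 / 2 for large n
    return 2 * (1 - nu) ** T + 32 * T / math.sqrt(k)

for n in (10, 20, 50):
    q = 5
    print(f"n={n:3d}: bound={error_bound(n, q):.3e}  vs  1/q^2={1/q**2:.3e}")
```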
Proof of Lemma 6.9.: First, some notation: we write \(\mathsf{A}_{-i},\mathsf{B}_{-i}\) to denote \(\mathsf{A}_{[k]}\) without \(\mathsf{A}_{i}\) and \(\mathsf{B}_{[k]}\) without \(\mathsf{B}_{i}\), respectively.
The circuit \(R\) is not necessarily unitary as it may trace out or measure qubits, so let \(\tilde{R}\) denote the unitary extension of \(R\), i.e., \(\tilde{R}\) is the circuit given by \(R\) except all the measurements are performed coherently using ancilla qubits. Note that the size of \(\tilde{R}\) is at most polynomial in the size of \(R\). The
unitary \(\tilde{R}\) acts on registers \(\mathsf{B}_{[k]}\mathsf{G}\) where \(\mathsf{B}_{[k]}\) is the second register of the state \(\left|C\right\rangle^{\otimes k}\) and \(\mathsf{G}\) is an ancilla register.
The algorithm \(M\) is presented in Algorithm 2. The algorithm depends on the parameters \(k,T\), and makes calls to the circuits \(C,D,R\). Thus the claim about the circuit size and uniformity of \(M\) follow from inspection.
**Algorithm 2**.: _Algorithm \(M\) that maps \(\left|C\right\rangle\) to \(\left|D\right\rangle\) with small error, given that \(R\) maps \(\left|C\right\rangle^{\otimes k}\) to \(\left|D\right\rangle^{\otimes k}\) with large error._
**Input:** Quantum register \(\mathsf{B}\).
1. Sample \(i\) uniformly from \([k]\).
2. Initialize registers \(\mathsf{A}_{-i}\mathsf{B}_{-i}\) in the state \(\left|C\right\rangle^{\otimes k-1}\) and register \(\mathsf{G}\) in the all-zero state.
3. Relabel the \(\mathsf{B}\) register as \(\mathsf{B}_{i}\).
4. For \(t\in[T]\):
    (a) Perform the following measurement, which we will call \(P_{-i}\):
        (i) Apply \((C^{\otimes k-1})^{\dagger}\) to registers \(\mathsf{A}_{-i}\mathsf{B}_{-i}\).
        (ii) Measure whether the \(\mathsf{A}_{-i}\mathsf{B}_{-i}\) registers are in the all-zeroes state.
        (iii) Apply \(C^{\otimes k-1}\).
    (b) Perform the following measurement, which we will call \(Q_{-i}\):
        (i) Apply \(\tilde{R}\) to registers \(\mathsf{B}_{[k]}\mathsf{G}\).
        (ii) Apply \((D^{\otimes k-1})^{\dagger}\) to registers \(\mathsf{A}_{-i}\mathsf{B}_{-i}\).
        (iii) Measure whether the \(\mathsf{A}_{-i}\mathsf{B}_{-i}\) registers are in the all-zeroes state.
        (iv) Apply \(D^{\otimes k-1}\).
        (v) Apply \(\tilde{R}^{\dagger}\) to registers \(\mathsf{B}_{[k]}\mathsf{G}\).
    (c) If the \(Q_{-i}\) outcome succeeds (i.e., all-zeroes are measured in step (iii)), then exit the loop.
5. Apply \(\tilde{R}\) to registers \(\mathsf{B}_{[k]}\mathsf{G}\), and output register \(\mathsf{B}_{i}\).
Define the projectors
\[P\coloneqq\left|C\right\rangle\!\!\left\langle C\right|_{\mathsf{A}_{[k]} \mathsf{B}_{[k]}}^{\otimes k}\otimes\left|0\right\rangle\!\!\left\langle 0 \right|_{\mathsf{G}}\qquad\text{and}\qquad Q\coloneqq\tilde{R}^{\dagger} \left(\left|D\right\rangle\!\!\left\langle D\right|_{\mathsf{A}_{[k]}\mathsf{ B}_{[k]}}^{\otimes k}\otimes\mathrm{id}_{\mathsf{G}}\right)\tilde{R}\.\]
The assumption that \(R\) maps \(\left|C\right\rangle^{\otimes k}\) to a state that has fidelity at least \(\nu\) with \(\left|D\right\rangle^{\otimes k}\) can be rewritten as \(\mathrm{Tr}(PQ)\geq\nu\). For simplicity assume that \(\nu=\mathrm{Tr}(PQ)\) exactly. Let
\[\left|v\right\rangle\coloneqq\left|C\right\rangle^{\otimes k}\otimes\left|0 \right\rangle,\qquad\left|w\right\rangle\coloneqq Q\left|v\right\rangle/ \sqrt{\nu}\.\]
The two-dimensional subspace spanned by \(\left|v\right\rangle\) and \(\left|w\right\rangle\) defines two additional vectors \(\left|v^{\perp}\right\rangle,\left|w^{\perp}\right\rangle\) where

\[\left|v\right\rangle=\sqrt{\nu}\left|w\right\rangle+\sqrt{1-\nu}\left|w^{\perp}\right\rangle\,,\qquad\left|v^{\perp}\right\rangle=\sqrt{1-\nu}\left|w\right\rangle-\sqrt{\nu}\left|w^{\perp}\right\rangle\,,\]
\[\left|w\right\rangle=\sqrt{\nu}\left|v\right\rangle+\sqrt{1-\nu}\left|v^{\perp}\right\rangle\,,\qquad\left|w^{\perp}\right\rangle=\sqrt{1-\nu}\left|v\right\rangle-\sqrt{\nu}\left|v^{\perp}\right\rangle\,.\]
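As a quick consistency check, these basis-change relations can be instantiated numerically. The numpy snippet below (ours; not part of the proof) verifies orthonormality, the inverse relations, and the overlap \(\langle v|w\rangle=\sqrt{\nu}\).

```python
import numpy as np

nu = 0.3                                   # any value in (0, 1)
# take |w>, |w_perp> as the computational basis of the 2D subspace
w, w_perp = np.array([1.0, 0.0]), np.array([0.0, 1.0])
v      = np.sqrt(nu) * w + np.sqrt(1 - nu) * w_perp
v_perp = np.sqrt(1 - nu) * w - np.sqrt(nu) * w_perp

# orthonormality of {|v>, |v_perp>}
assert np.isclose(v @ v, 1) and np.isclose(v @ v_perp, 0)
# inverse relations
assert np.allclose(w,      np.sqrt(nu) * v + np.sqrt(1 - nu) * v_perp)
assert np.allclose(w_perp, np.sqrt(1 - nu) * v - np.sqrt(nu) * v_perp)
# overlap <v|w> = sqrt(nu), consistent with Tr(PQ) = nu
assert np.isclose(v @ w, np.sqrt(nu))
```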
We now analyze the performance of Algorithm 2. Let \(V_{i}\) denote the unitary corresponding to the "for loop" in Algorithm 2, i.e., step 4, conditioned on sampling \(i\) in step 1. The algorithm is described in terms of measurements, but we can imagine coherently performing the measurements and storing the outcome in an ancilla qubit. In particular, we describe \(V_{i}\) as a sequence of alternating unitary operations. We introduce the following labels for registers.
1. Let \(\mathsf{S}\) denote \(\mathsf{A}_{[k]}\mathsf{B}_{[k]}\mathsf{G}\).
2. Let \(\mathsf{H}_{[T]}\) denote ancilla qubits that store the outcomes of the \(P\) measurements.
3. Let \(\mathsf{F}\) denote an ancilla qubit that indicates whether a \(Q\) measurement succeeded.
The ancilla qubits all start in the zero state. Define the following unitary transformations:
1. For all \(j\in[T]\), define \[A_{ij}=\left|0\right\rangle\!\!\left\langle 0\right|_{\mathsf{F}}\otimes \left[P_{-i}\otimes X_{\mathsf{H}_{j}}+(I-P_{-i})\otimes\mathrm{id}_{\mathsf{ H}_{j}}\right]+\left|1\right\rangle\!\!\left\langle 1\right|_{\mathsf{F}}\otimes \mathrm{id}\] In other words, \(A_{ij}\) performs the \(j\)'th \(P_{-i}\) measurement: it checks if the \(\mathsf{F}\) qubit is set to \(\left|1\right\rangle\). If so, it does nothing. Otherwise, it coherently performs the \(P_{-i}\) measurement, flipping the qubit in register \(\mathsf{H}_{j}\) when the \(P_{-i}\) outcome occurs.
2. We define \[B_{i}=\left|0\right\rangle\!\!\left\langle 0\right|_{\mathsf{F}}\otimes \left(\mathrm{id}-Q_{-i}\right)+\left|1\right\rangle\!\!\left\langle 0\right|_{ \mathsf{F}}\otimes Q_{-i}+\left|1\right\rangle\!\!\left\langle 1\right|_{ \mathsf{F}}\otimes\mathrm{id}\.\] In other words, \(B_{i}\) checks if the \(\mathsf{F}\) qubit is set to \(\left|1\right\rangle\). If so, it does nothing. Otherwise, it performs the \(Q_{-i}\) measurement and, if the \(Q_{-i}\) outcome occurs, flips the \(\mathsf{F}\) qubit.
Thus the state of the algorithm \(V_{i}\) after the \(j\)'th step is
\[\left|\varphi_{ij}\right\rangle\coloneqq B_{i}A_{ij}B_{i}A_{i,j-1}\cdots B_{i }A_{i1}\left|v\right\rangle\left|0\cdots 0\right\rangle_{\mathsf{SFH}_{1}\cdots \mathsf{H}_{T}}\.\]
Clearly \(V_{i}\) is computable by a polynomial-size circuit.
We now consider another algorithm \(\hat{V}\) which is the same as \(V_{i}\) except that it performs the \(P,Q\) measurements instead of \(P_{-i},Q_{-i}\). Define the unitary matrices \(\hat{A}_{1},\ldots,\hat{A}_{T}\) and \(\hat{B}\) in the same way except that they perform the \(P\) and \(Q\) measurements instead of \(P_{-i}\) and \(Q_{-i}\). Thus the state of the algorithm \(\hat{V}\) after the \(j\)'th step is
\[\left|\hat{\varphi}_{j}\right\rangle\coloneqq\hat{B}\hat{A}_{j}\hat{B}\hat{A} _{j-1}\cdots\hat{B}\hat{A}_{1}\left|v\right\rangle\left|0\cdots 0\right\rangle_{ \mathsf{SFH}_{1}\cdots\mathsf{H}_{T}}\.\]
Define \(\left|\hat{\varphi}_{0}\right\rangle\coloneqq\left|v\right\rangle\left|0 \cdots 0\right\rangle\). We will argue that \(\left|\varphi_{ij}\right\rangle\) is not far from \(\left|\hat{\varphi}_{j}\right\rangle\) on average over a randomly chosen index \(i\).
**Claim 6.12**.: _For all \(j=0,...,T\), the \(\mathsf{S}\) register of \(\left|\hat{\varphi}_{j}\right\rangle\) is supported on the subspace \(\mathrm{span}\{\left|v\right\rangle,\left|w\right\rangle\}\)._
Proof.: We prove this by induction. This holds for \(j=0\) because the initial state is \(\left|v\right\rangle\left|0\cdots 0\right\rangle\). Assume for induction that the statement is true up to \(j-1\). Note that
\[\left|\hat{\varphi}_{j}\right\rangle=\hat{B}\hat{A}_{j}\left|\hat{\varphi}_{j -1}\right\rangle\.\]
The operator \(\hat{A}_{j}\) either performs the \(P\) measurement on register \(\mathsf{S}\) or does nothing; the post-measurement states remain inside the two-dimensional subspace \(\mathrm{span}\{\left|v\right\rangle,\left|w\right\rangle\}\) because this subspace is invariant under the action of \(P\). Same with the \(\hat{B}\) operator, which either performs the \(Q\) measurement or does nothing.
**Claim 6.13**.: _For all \(j\in[T]\) we have_

\[\operatorname*{\mathbb{E}}_{i}\left\|\left(B_{i}A_{ij}-\hat{B}\hat{A}_{j}\right)\left|\hat{\varphi}_{j-1}\right\rangle\right\|\leq\frac{8}{\sqrt{k}}\,.\]
Proof.: By triangle inequality,
\[\left\|\left(B_{i}A_{ij}-\hat{B}\hat{A}_{j}\right)\left|\hat{\varphi}_{j-1}\right\rangle\right\|\leq\left\|B_{i}(A_{ij}-\hat{A}_{j})\left|\hat{\varphi}_{j-1}\right\rangle\right\|+\left\|\left(B_{i}-\hat{B}\right)\hat{A}_{j}\left|\hat{\varphi}_{j-1}\right\rangle\right\|\] \[=\left\|\left(A_{ij}-\hat{A}_{j}\right)\left|\hat{\varphi}_{j-1}\right\rangle\right\|+\left\|\left(B_{i}-\hat{B}\right)\hat{A}_{j}\left|\hat{\varphi}_{j-1}\right\rangle\right\|\]
where in the second line we used that \(B_{i}\) is unitary. We analyze each term separately. By triangle inequality again,
\[\left\|\left(A_{ij}-\hat{A}_{j}\right)\left|\hat{\varphi}_{j-1}\right\rangle\right\|\leq\left\|\,\left|0\right\rangle\!\!\left\langle 0\right|_{\mathsf{F}}\otimes\left(P_{-i}-P\right)\otimes X_{\mathsf{H}_{j}}\left|\hat{\varphi}_{j-1}\right\rangle\right\|+\left\|\,\left|0\right\rangle\!\!\left\langle 0\right|_{\mathsf{F}}\otimes\left(P_{-i}-P\right)\otimes\operatorname{id}_{\mathsf{H}_{j}}\left|\hat{\varphi}_{j-1}\right\rangle\right\|\] \[\leq 2\left\|\left(P_{-i}-P\right)\left|\hat{\varphi}_{j-1}\right\rangle\right\|\,.\]
Similarly we have
\[\left\|\left(B_{i}-\hat{B}\right)\hat{A}_{j}\left|\hat{\varphi}_{j-1}\right\rangle\right\|\leq 2\left\|\left(Q_{-i}-Q\right)\hat{A}_{j}\left|\hat{\varphi}_{j-1}\right\rangle\right\|\,.\]
Averaging over \(i\in[k]\) we get
\[\operatorname*{\mathbb{E}}_{i}\left\|\left(B_{i}A_{ij}-\hat{B}\hat{A}_{j}\right)\left|\hat{\varphi}_{j-1}\right\rangle\right\|\leq 2\left(\sqrt{\operatorname*{\mathbb{E}}_{i}\left\|\left(P_{-i}-P\right)\left|\hat{\varphi}_{j-1}\right\rangle\right\|^{2}}+\sqrt{\operatorname*{\mathbb{E}}_{i}\left\|\left(Q_{-i}-Q\right)\hat{A}_{j}\left|\hat{\varphi}_{j-1}\right\rangle\right\|^{2}}\right)\leq 4\sqrt{\frac{4}{k}}=\frac{8}{\sqrt{k}}\]
where in the second line we used Jensen's inequality, and in the third line we used Claim 6.11 with the fact that the \(\mathsf{S}\) registers of \(\left|\hat{\varphi}_{j-1}\right\rangle\) and \(\hat{A}_{j}\left|\hat{\varphi}_{j-1}\right\rangle\) are supported on the subspace \(\operatorname{span}\left\{\left|v\right\rangle,\left|w\right\rangle\right\}\).
Putting everything together, we have by the triangle inequality
\[\operatorname*{\mathbb{E}}_{i}\left\|\,\left|\varphi_{iT}\right\rangle-\left|\hat{\varphi}_{T}\right\rangle\,\right\|\leq\operatorname*{\mathbb{E}}_{i}\sum_{j=1}^{T}\left\|\left(B_{i}A_{ij}-\hat{B}\hat{A}_{j}\right)\left|\hat{\varphi}_{j-1}\right\rangle\right\|\leq 8T/\sqrt{k}. \tag{6.5}\]
Now we analyze the behavior of the \(\hat{V}\) algorithm; by what we just argued, the behavior of the \(V_{i}\) algorithm is similar on average over \(i\) (assuming that \(T\) is sufficiently smaller than \(\sqrt{k}\)).
**Claim 6.14**.: \(\left\|\left(\operatorname{id}-Q\right)\left|\hat{\varphi}_{T}\right\rangle \right.\right\|^{2}\leq(1-\nu)^{T}\)_._
Proof.: Since the projector \(\operatorname{id}-Q\) does not act on the ancilla \(\mathsf{H}_{[T]}\) registers, the quantity \(\left\|\left(\operatorname{id}-Q\right)\left|\hat{\varphi}_{T}\right\rangle \right.\right\|^{2}\) is the probability that running the algorithm \(\hat{V}\) with _incoherent_\(P,Q\) measurements never yields a \(Q\) outcome at any of the \(T\) iterations.
Since the initial state of the algorithm is \(\left|v\right\rangle\left|0\ldots 0\right\rangle\), and the algorithm simply alternates between performing the \(\{P,\mathrm{id}-P\}\) and \(\{Q,\mathrm{id}-Q\}\) projective measurements, no matter what the measurement outcomes are, the register \(\mathsf{S}\) of the post-measurement state is always one of \(\left|v\right\rangle,\left|w\right\rangle,\left|v^{\perp}\right\rangle,\left|w^{\perp}\right\rangle\). If register \(\mathsf{S}\) is ever in the state \(\left|w\right\rangle\), then the \(Q\) outcome must have occurred, and the algorithm stops. Thus the quantity \(\left\|(\mathrm{id}-Q)\left|\hat{\varphi}_{T}\right\rangle\right\|^{2}\) is the probability that the first iteration resulted in register \(\mathsf{S}\) being in the state \(\left|w^{\perp}\right\rangle\), and that every iteration thereafter started in \(\left|w^{\perp}\right\rangle\) and ended in \(\left|w^{\perp}\right\rangle\).
The probability that the first iteration ends in the state \(\left|w^{\perp}\right\rangle\) is exactly \(1-\nu\). In all the iterations thereafter, conditioned on the starting state being \(\left|w^{\perp}\right\rangle\), performing the \(\{P,\mathrm{id}-P\}\) measurement followed by the \(\{Q,\mathrm{id}-Q\}\) measurement yields the outcome \(\left|w^{\perp}\right\rangle\) state again with probability \(\nu^{2}+(1-\nu)^{2}\). Thus we get
\[\left\|(\mathrm{id}-Q)\left|\hat{\varphi}_{T}\right\rangle\right\|^{2}=(1-\nu )(\nu^{2}+(1-\nu)^{2})^{T-1}\leq(1-\nu)^{T}\]
as desired.
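The probability computed in this proof is easy to confirm by simulating the alternating measurements as a random walk on the four states \(\left|v\right\rangle,\left|w\right\rangle,\left|v^{\perp}\right\rangle,\left|w^{\perp}\right\rangle\). The Python sketch below (our illustration; the Born-rule transition probabilities follow from the basis-change relations above) compares the empirical frequency of never observing the \(Q\) outcome within \(T\) iterations against the closed form \((1-\nu)(\nu^{2}+(1-\nu)^{2})^{T-1}\).

```python
import numpy as np

rng = np.random.default_rng(0)
nu, T, trials = 0.2, 5, 200_000

w, w_perp = np.array([1.0, 0.0]), np.array([0.0, 1.0])
v      = np.sqrt(nu) * w + np.sqrt(1 - nu) * w_perp
v_perp = np.sqrt(1 - nu) * w - np.sqrt(nu) * w_perp

def measure(state, b0, b1):
    """Projective measurement in the orthonormal basis {b0, b1}:
    returns (outcome, post-measurement state)."""
    if rng.random() < (b0 @ state) ** 2:
        return 0, b0
    return 1, b1

never_q = 0
for _ in range(trials):
    state, got_q = v, False            # the algorithm starts in |v>
    for _ in range(T):
        _, state = measure(state, v, v_perp)        # {P, id-P}
        outcome, state = measure(state, w, w_perp)  # {Q, id-Q}
        if outcome == 0:                            # Q outcome: |w> observed
            got_q = True
            break
    if not got_q:
        never_q += 1

closed_form = (1 - nu) * (nu**2 + (1 - nu)**2) ** (T - 1)
print(never_q / trials, "vs", closed_form)
```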
We now bound the performance of \(M\). We have that
\[\mathrm{F}((\mathrm{id}_{\mathsf{A}}\otimes M)\left|C\right\rangle\!\! \left\langle C\right|,\left|D\right\rangle\!\!\left\langle D\right|)=\langle D |\left(\mathrm{id}_{\mathsf{A}}\otimes M\right)(\left|C\right\rangle\!\! \left\langle C\right|)\left|D\right\rangle=1-\mathrm{Tr}\!\left((\mathrm{id} -\left|D\right\rangle\!\!\left\langle D\right|)(\mathrm{id}_{\mathsf{A}} \otimes M)(\left|C\right\rangle\!\!\left\langle C\right|)\right)\,.\]
Note that we can write the output of \(M\) when given as input register \(\mathsf{B}\) of \(\left|C\right\rangle\) as
\[(\mathrm{id}_{\mathsf{A}}\otimes M)(\left|C\right\rangle\!\!\left\langle C \right|)=\mathop{\mathbb{E}}_{\bar{i}}\mathrm{Tr}_{\overline{\mathsf{B}_{i}}} \!\left(\tilde{R}V_{i}(\left|C\right\rangle\!\!\left\langle C\right|^{\otimes k }\otimes\left|0\ldots 0\right\rangle\!\!\left\langle 0\ldots 0\right|)V_{i}^{\dagger} \tilde{R}^{\dagger}\right)\]
where \(\mathrm{Tr}_{\overline{\mathsf{B}_{i}}}(\cdot)\) denotes the partial trace of all registers except for \(\mathsf{B}_{i}\), and \(\left|0\ldots 0\right\rangle\) denotes the ancilla qubits in registers \(\mathsf{G},\mathsf{F},\mathsf{H}_{\left[T\right]}\). Now observe that \(V_{i}\left|C\right\rangle^{\otimes k}\left|0\ldots 0\right\rangle\) is nothing but \(\left|\varphi_{iT}\right\rangle\). Therefore
\[\mathrm{Tr}\!\left((\mathrm{id}-\left|D\right\rangle\!\!\left\langle D\right|)(\mathrm{id}_{\mathsf{A}}\otimes M)(\left|C\right\rangle\!\!\left\langle C\right|)\right)=\mathop{\mathbb{E}}_{i}\left\|(\mathrm{id}-\left|D\right\rangle\!\!\left\langle D\right|)_{\mathsf{A}_{i}\mathsf{B}_{i}}\tilde{R}\left|\varphi_{iT}\right\rangle\right\|^{2}\] \[\leq\mathop{\mathbb{E}}_{i}\left\|(\mathrm{id}-Q)\left|\varphi_{iT}\right\rangle\right\|^{2}\] \[\leq 2(1-\nu)^{T}+4\mathop{\mathbb{E}}_{i}\left\|\,\left|\varphi_{iT}\right\rangle-\left|\hat{\varphi}_{T}\right\rangle\,\right\|\] \[\leq 2(1-\nu)^{T}+\frac{32T}{\sqrt{k}}. \tag{6.6}\]
In the second line we used the fact that

\[\left\|(\mathrm{id}-\left|D\right\rangle\!\!\left\langle D\right|)_{\mathsf{A}_{i}\mathsf{B}_{i}}\tilde{R}\left|\varphi_{iT}\right\rangle\right\|^{2}=\left\|\tilde{R}^{\dagger}(\mathrm{id}-\left|D\right\rangle\!\!\left\langle D\right|)_{\mathsf{A}_{i}\mathsf{B}_{i}}\tilde{R}\left|\varphi_{iT}\right\rangle\right\|^{2}\]

because \(\tilde{R}\) is unitary, and that \(Q\leq\tilde{R}^{\dagger}\left|D\right\rangle\!\!\left\langle D\right|_{\mathsf{A}_{i}\mathsf{B}_{i}}\tilde{R}\) in the positive semidefinite ordering. The third line uses the triangle inequality, Claim 6.14, and the fact that \(\left\|\,\left|\varphi_{iT}\right\rangle-\left|\hat{\varphi}_{T}\right\rangle\,\right\|^{2}\leq 2\left\|\,\left|\varphi_{iT}\right\rangle-\left|\hat{\varphi}_{T}\right\rangle\,\right\|\). The fourth line uses Equation (6.5).
This concludes the proof of Lemma 6.9.
### The padding trick
We now turn to the complexity of \(\textsc{DistUhlmann}_{\kappa}\) when \(\kappa\) is bounded away from \(0\) and \(1\) by some inverse-polynomial quantity. Note that for all \(0\leq\kappa_{1}\leq\kappa_{2}\leq 1\), we have that all valid instances of \(\textsc{Uhlmann}_{\kappa_{2}}\) are valid instances of \(\textsc{Uhlmann}_{\kappa_{1}}\) but not vice versa (a similar statement holds for \(\textsc{SuccinctUhlmann}\)). Thus, implementing general \(\textsc{Uhlmann}_{\kappa_{1}}\) transformations may potentially be more difficult than implementing \(\textsc{Uhlmann}_{\kappa_{2}}\) transformations. Furthermore, it is no longer apparent that there is a zero-knowledge protocol for, say, \(\textsc{DistUhlmann}_{1/2}\). Thus it is not clear how the complexities of \(\textsc{Uhlmann}_{\kappa_{1}}\) and \(\textsc{Uhlmann}_{\kappa_{2}}\) relate to each other for different \(\kappa_{1},\kappa_{2}\).
We present a simple padding trick which shows that as long as \(\kappa_{1},\kappa_{2}\) are bounded by at least some inverse polynomial from either \(0\) or \(1\), the complexities of \(\textsc{DistUhlmann}_{\kappa_{1}}\) and \(\textsc{DistUhlmann}_{\kappa_{2}}\) are equivalent under polynomial-time reductions.
**Lemma 6.15** (Padding trick).: _Let \(0\leq\kappa_{1}\leq\kappa_{2}\leq 1\) and let \(C,D\) be circuits on \(2n\) qubits such that \(\mathrm{F}(\rho,\sigma)\geq\kappa_{1}\) where \(\rho,\sigma\) are the reduced density matrices of \(\left|C\right\rangle=C\left|0^{2n}\right\rangle,\left|D\right\rangle=D\left|0^ {2n}\right\rangle\), respectively, on the first \(n\) qubits. Let \(0<\alpha\leq(1-\kappa_{2})/(1-\kappa_{1})\). Define the following states \(\left|E\right\rangle,\left|F\right\rangle\) on \(2(n+1)\) qubits where_
\[\left|E\right\rangle =\sqrt{\alpha}\left|0\right\rangle\left|C\right\rangle\left|0 \right\rangle+\sqrt{1-\alpha}\left|1^{2(n+1)}\right\rangle\] \[\left|F\right\rangle =\sqrt{\alpha}\left|0\right\rangle\left|D\right\rangle\left|0 \right\rangle+\sqrt{1-\alpha}\left|1^{2(n+1)}\right\rangle\.\]
_Suppose that the state \(\sqrt{\alpha}\left|0\right\rangle+\sqrt{1-\alpha}\left|1\right\rangle\) can be prepared using a circuit of size \(s\). Then the following hold:_
1. \(\left|E\right\rangle\)_,_ \(\left|F\right\rangle\) _can be computed by circuits_ \(E,F\) _of size_ \(O(\left|C\right|+\left|D\right|+s)\)_;_
2. \(\mathrm{F}(\tau,\mu)\geq\kappa_{2}\) _where_ \(\tau,\mu\) _are the reduced density matrices of_ \(\left|E\right\rangle,\left|F\right\rangle\) _on the first_ \(n+1\) _qubits;_
3. _The canonical_ \((n+1)\)_-qubit Uhlmann isometry_ \(V\) _for_ \((\left|E\right\rangle,\left|F\right\rangle)\) _can be written as_ \[V=U\otimes\left|0\right\rangle\!\!\left\langle 0\right|+\mathrm{i}\mathrm{d} \otimes\left|1\right\rangle\!\!\left\langle 1\right|\] _where_ \(U\) _is the_ \(n\)_-qubit canonical Uhlmann isometry for_ \((\left|C\right\rangle,\left|D\right\rangle)\)_._
Proof.: We prove the first item. To compute the state \(\left|E\right\rangle\), consider the circuit \(E\) on \(2(n+1)\) qubits that does the following:
1. Initialize the first qubit in the state \(\sqrt{\alpha}\left|0\right\rangle+\sqrt{1-\alpha}\left|1\right\rangle\).
2. Apply a CNOT from the first qubit to the last qubit.
3. Controlled on the first qubit being \(\left|0\right\rangle\), run the \(2n\)-qubit circuit \(C\) on qubits \(2\) through \(2n+1\).

4. Controlled on the first qubit being \(\left|1\right\rangle\), apply a bitflip operator to qubits \(2\) through \(2n+1\).
Clearly the size of \(E\) is \(O(\left|C\right|+s)\) where \(\left|C\right|\) denotes the size of circuit \(C\) where by assumption there is a circuit of size \(s\) to initialize the first qubit. An analogous construction holds for \(\left|F\right\rangle\).
For the second item, we have
\[\tau=\alpha\left|0\right\rangle\!\!\left\langle 0\right|\otimes \rho+(1-\alpha)\left|1\right\rangle\!\!\left\langle 1\right|\otimes\left|1^{n} \right\rangle\!\!\left\langle 1^{n}\right|\] \[\mu=\alpha\left|0\right\rangle\!\!\left\langle 0\right|\otimes \sigma+(1-\alpha)\left|1\right\rangle\!\!\left\langle 1\right|\otimes\left|1^{n} \right\rangle\!\!\left\langle 1^{n}\right|\.\]
The fidelity between \(\tau\) and \(\mu\) can be bounded as \(\mathrm{F}(\tau,\mu)=\alpha\mathrm{F}(\rho,\sigma)+1-\alpha\geq\alpha\kappa_{1}+1- \alpha\geq\kappa_{2}\).
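This identity can be verified numerically. The numpy sketch below (ours, illustrative) builds the block-diagonal reduced states \(\tau,\mu\) from random \(\rho,\sigma\) and checks \(\mathrm{F}(\tau,\mu)=\alpha\mathrm{F}(\rho,\sigma)+1-\alpha\), with fidelity computed in the root convention \(\mathrm{F}(\rho,\sigma)=\|\sqrt{\rho}\sqrt{\sigma}\|_{1}\), under which the identity holds exactly.

```python
import numpy as np

rng = np.random.default_rng(1)

def psd_sqrt(a):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, u = np.linalg.eigh(a)
    return (u * np.sqrt(np.clip(w, 0, None))) @ u.conj().T

def fid(rho, sigma):
    """Root fidelity F(rho, sigma) = ||sqrt(rho) sqrt(sigma)||_1."""
    s = psd_sqrt(rho)
    return float(np.trace(psd_sqrt(s @ sigma @ s)).real)

def rand_density(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = m @ m.conj().T
    return rho / np.trace(rho).real

d, alpha = 4, 0.25
rho, sigma = rand_density(d), rand_density(d)

# block-diagonal reduced states of |E>, |F> as computed in the proof:
#   tau = alpha |0><0| (x) rho + (1-alpha) |1><1| (x) |1^n><1^n|
ones = np.zeros((d, d)); ones[-1, -1] = 1.0    # stand-in for |1^n><1^n|
Z = np.zeros((d, d))
tau = np.block([[alpha * rho,   Z], [Z, (1 - alpha) * ones]])
mu  = np.block([[alpha * sigma, Z], [Z, (1 - alpha) * ones]])

print(fid(tau, mu), "=", alpha * fid(rho, sigma) + 1 - alpha)
```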
For the third item, recall that the canonical Uhlmann isometry (where we have set the cutoff \(\eta\) to \(0\)) for \((\ket{E},\ket{F})\) is defined as
\[V=\mathrm{sgn}(\mathrm{Tr}_{\mathsf{A}^{\prime}}(\ket{E}\!\bra{F}))\]
where \(\mathsf{A}^{\prime}\) denotes the first \(n+1\) qubits of \(\ket{E},\ket{F}\). This is equal to
\[\mathrm{sgn}\left(\alpha\mathrm{Tr}_{\mathsf{A}}(\ket{C}\!\bra{D})\otimes \ket{0}\!\bra{0}+(1-\alpha)\ket{1^{n}}\!\bra{1^{n}}\otimes\ket{1}\!\bra{1} \,\right)=\mathrm{sgn}(\mathrm{Tr}_{\mathsf{A}}(\ket{C}\!\bra{D}))\otimes\ket{0 }\!\bra{0}+\ket{1^{n}}\!\bra{1^{n}}\otimes\ket{1}\!\bra{1}\]
where \(\mathsf{A}\) denotes the first \(n\) qubits of \(\ket{C},\ket{D}\). To conclude, note that \(\mathrm{sgn}(\mathrm{Tr}_{\mathsf{A}}(\ket{C}\!\bra{D}))\) is the canonical Uhlmann isometry for \((\ket{C},\ket{D})\).
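To make the canonical isometry concrete, the following numpy sketch (ours, illustrative; dimensions and names are arbitrary) computes \(X=\mathrm{Tr}_{\mathsf{A}}(\ket{C}\!\bra{D})\) for small random pure states, extracts the unitary part of its polar decomposition from an SVD (up to taking the adjoint, depending on the exact \(\mathrm{sgn}\) convention), and checks that the resulting unitary on \(\mathsf{B}\) achieves overlap \(\|X\|_{1}\) with \(\ket{D}\), which by Uhlmann's theorem equals the fidelity of the reduced states on \(\mathsf{A}\).

```python
import numpy as np

rng = np.random.default_rng(2)
dA, dB = 3, 3

def rand_pure(dA, dB):
    """Coefficient matrix C[a, b] of a random bipartite pure state."""
    m = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
    return m / np.linalg.norm(m)

C, D = rand_pure(dA, dB), rand_pure(dA, dB)

# X = Tr_A(|C><D|) as an operator on B: X[b, b'] = sum_a C[a, b] conj(D[a, b'])
X = C.T @ D.conj()

# unitary maximizing |Tr(V X)|, from the SVD X = W diag(s) Uh
W, s, Uh = np.linalg.svd(X)
V = Uh.conj().T @ W.conj().T

overlap = np.vdot(D, C @ V.T)          # <D| (id_A (x) V) |C>  =  Tr(V X)
print(abs(overlap), "=", s.sum())      # achieves the nuclear norm ||X||_1

# Uhlmann: this optimum equals the root fidelity of the reduced states on A
def psd_sqrt(a):
    w, u = np.linalg.eigh(a)
    return (u * np.sqrt(np.clip(w, 0, None))) @ u.conj().T

rho, sigma = C @ C.conj().T, D @ D.conj().T
srho = psd_sqrt(rho)
print("F(rho, sigma) =", np.trace(psd_sqrt(srho @ sigma @ srho)).real)
```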
**Lemma 6.16** (Average-case reductions for DistUhlmann\({}_{\kappa}\) for different fidelities \(\kappa\)).: _Let \(\kappa:\mathbb{N}\to[0,1]\) be such that \(1/p(n)\leq\kappa(n)\leq 1-1/p(n)\) for all \(n\) for some polynomial \(p(n)\). Then DistUhlmann\({}_{\kappa}\) polynomial-time reduces to DistUhlmann\({}_{1-1/p}\)._
Proof.: For every valid Uhlmann\({}_{\kappa}\) instance \(x=(1^{n},C,D)\), let \(y=(1^{2(n+1)},E,F)\) denote the valid Uhlmann\({}_{1-1/p}\) instance given by the padding trick (Lemma 6.15), where \(\alpha(n)=1/p(n)\). The state \(\sqrt{\alpha(n)}\ket{0}+\sqrt{1-\alpha(n)}\ket{1}\) can be prepared with circuits of size \(O(\log n)\) by the Solovay-Kitaev theorem, so by Lemma 6.15\(E\) and \(F\) are also polynomial-sized (in \(n\)) circuits. Furthermore, given explicit descriptions of \(C,D\) one can efficiently compute explicit descriptions of \(E,F\).
To prove the lemma, let \(q(n)\) be an arbitrary polynomial. By Definition 3.21 we need to find another polynomial \(r(n)\) (which can depend on \(q(n)\)) and a polynomial-time quantum query algorithm \(A^{*}\) such that any \(1/r(n)\)-error average case instantiation (see Definition 3.20) of \(A^{\textsc{DistUhlmann}_{1-1/p}}\) implements \(\textsc{DistUhlmann}_{1/p}\) (and hence \(\textsc{DistUhlmann}_{\kappa}\), since \(\kappa\geq 1/p\) implies that every valid \(\textsc{Uhlmann}_{\kappa}\) instance is a valid \(\textsc{Uhlmann}_{1/p}\) instance) with average-case error \(1/q(n)\).
We define \(A^{*}=(A^{*}_{x})_{x}\) as follows. The circuit \(A^{*}_{x}\) takes as input an \(n\)-qubit register \(\mathsf{B}\) and initializes a single-qubit register \(\mathsf{F}\) in the state \(\ket{0}\). It then applies the DistUhlmann\({}_{1-1/p}\) oracle for instance \(y\) (whose description can be efficiently computed from \(x\)) on registers \(\mathsf{FB}\) and outputs the result.
To show that this implements \(\textsc{DistUhlmann}_{1/p}\), let \(r(n)=p(n)q(n)\), and let \(A^{\textsc{DistUhlmann}_{1-1/p}}\) denote a \(1/r(n)\)-error average-case instantiation. Concretely, let \(V_{y}\) denote the (exact) Uhlmann partial isometry for instance \(y\) and let \(H=(H_{y})_{y}\) denote a quantum algorithm that implements \(\textsc{DistUhlmann}_{1-1/p}\) with average-case error \(1/r(n)\) and is used to instantiate the \(\textsc{DistUhlmann}_{1-1/p}\)-oracle. This means there is a channel completion \(\Phi_{y}\) of \(V_{y}\) such that
\[\mathrm{td}\Big{(}(\mathrm{id}\otimes H_{y})(\ket{E}\!\bra{E}),(\mathrm{id} \otimes\Phi_{y})(\ket{E}\!\bra{E})\Big{)}\leq\frac{1}{r(\ket{y})}\.\]
By the third item of Lemma 6.15, any channel completion \(\Phi_{y}\) of \(V_{y}\) can be turned into a channel completion \(\Xi_{x}\) of \(U_{x}\), the \(\textsc{Uhlmann}_{\kappa}\) transformation corresponding to \((\ket{C},\ket{D})\). Define \(\Xi_{x}(\rho)\coloneqq\mathrm{Tr}_{\mathsf{G}}(\Phi_{y}(\rho\otimes\ket{0}\!\bra{0}_{\mathsf{G}}))\) where \(\mathsf{G}\) denotes the last qubit. Let \(\Pi\) denote the projector onto the support of \(U_{x}\). Then \(\Xi_{x}(\Pi\rho\Pi)=\mathrm{Tr}_{\mathsf{G}}(\Phi_{y}(\Pi\rho\Pi\otimes\ket{0}\!\bra{0}_{\mathsf{G}}))\). But notice that the state \(\Pi\rho\Pi\otimes\ket{0}\!\bra{0}\) is contained in the support of \(V_{y}\); therefore
\[\mathrm{Tr}_{\mathsf{G}}(\Phi_{y}(\Pi\rho\Pi\otimes\ket{0}\!\bra{0}))=\mathrm{Tr}_{\mathsf{G}}\Big{(}V_{y}(\Pi\rho\Pi\otimes\ket{0}\!\bra{0})V_{y}^{\dagger}\Big{)}=U_{x}\Pi\rho\Pi U_{x}^{\dagger}\]
where we used the expression for \(V_{y}\) given by Lemma 6.15. Thus we can evaluate the performance of the instantiation \(A^{\textsc{DistUhlmann}_{1-1/p}}\) on the input \(\left|C\right\rangle\):
\[\operatorname{td}\Bigl{(}(\operatorname{id}\otimes A_{x}^{\textsc{DistUhlmann}_{1-1/p}})(\left|C\right\rangle\!\!\left\langle C\right|),\,(\operatorname{id}\otimes\Xi_{x})(\left|C\right\rangle\!\!\left\langle C\right|)\Bigr{)}\] \[=\operatorname{td}\Bigl{(}(\operatorname{id}\otimes H_{y})(\left|0\right\rangle\!\!\left\langle 0\right|\otimes\left|C\right\rangle\!\!\left\langle C\right|\otimes\left|0\right\rangle\!\!\left\langle 0\right|),\,(\operatorname{id}\otimes\Phi_{y})(\left|0\right\rangle\!\!\left\langle 0\right|\otimes\left|C\right\rangle\!\!\left\langle C\right|\otimes\left|0\right\rangle\!\!\left\langle 0\right|)\Bigr{)}\] \[=\frac{1}{\alpha(n)}\operatorname{td}\Bigl{(}(\operatorname{id}\otimes H_{y})(P\left|E\right\rangle\!\!\left\langle E\right|P^{\dagger}),\,(\operatorname{id}\otimes\Phi_{y})(P\left|E\right\rangle\!\!\left\langle E\right|P^{\dagger})\Bigr{)}\] \[\leq\frac{1}{\alpha(n)}\operatorname{td}\Bigl{(}(\operatorname{id}\otimes H_{y})(\left|E\right\rangle\!\!\left\langle E\right|),\,(\operatorname{id}\otimes\Phi_{y})(\left|E\right\rangle\!\!\left\langle E\right|)\Bigr{)}\] \[\leq\frac{1}{\alpha(n)r(n)}=\frac{1}{q(n)}\.\]
In the second line, we expanded the definitions of the query circuit \(A_{x}\) and the channel completion \(\Xi_{x}\). In the third line, we define the projector \(P=\left|0\right\rangle\!\!\left\langle 0\right|\) which acts on the first qubit so that \(\left|0\right\rangle\left|C\right\rangle\left|0\right\rangle=\frac{1}{\sqrt{ \alpha(n)}}P\left|E\right\rangle\). In the fifth line we used the guarantees about the algorithm \(H_{y}\) and our definitions of \(\alpha(n),r(n)\).
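The structural fact about \(V_{y}\) used in this step is easy to check mechanically: conjugating \(\rho\otimes\ket{0}\!\bra{0}\) by \(V=U\otimes\ket{0}\!\bra{0}+\mathrm{id}\otimes\ket{1}\!\bra{1}\) and tracing out the last qubit returns \(U\rho U^{\dagger}\). A minimal numpy check (ours; \(\rho\) stands in for \(\Pi\rho\Pi\)):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3

def rand_unitary(d):
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q

U = rand_unitary(d)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
V = np.kron(U, P0) + np.kron(np.eye(d), P1)   # V = U (x) |0><0| + id (x) |1><1|

m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = m @ m.conj().T
rho /= np.trace(rho).real                      # stand-in for Pi rho Pi

out = V @ np.kron(rho, P0) @ V.conj().T        # V (rho (x) |0><0|) V^dag

# partial trace over the last qubit (register G)
out4 = out.reshape(d, 2, d, 2)
traced = out4[:, 0, :, 0] + out4[:, 1, :, 1]
print(np.allclose(traced, U @ rho @ U.conj().T))   # True
```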
The padding trick allows us to make statements about \(\textsc{Uhlmann}_{\kappa}\) and \(\textsc{Distuhlmann}_{\kappa}\) for the case where \(\kappa\) is at least an inverse polynomial. However, it may be that \(\textsc{Uhlmann}\) with negligible \(\kappa\) is more powerful than this. We leave this as an open question.
**Open Problem 12**.: What is the power of \(\textsc{Uhlmann}_{\kappa}\) or \(\textsc{Distuhlmann}_{\kappa}\) for negligible \(\kappa\)?
### A polarization lemma for unitary zero knowledge?
Sahai and Vadhan [15] introduced the StatisticalDistance problem and showed that it is complete for SZK. Here, an instance \((1^{n},C_{0},C_{1})\) of StatisticalDistance consists of a pair of probability distributions (specified by circuits \(C_{0},C_{1}\) which produce samples from them) and the problem is to decide whether the distributions are _close_ (below the threshold \(1/3\)) or _far apart_ (above the threshold \(2/3\)) in terms of statistical distance. A key technical ingredient in their proof system is the so-called "polarization lemma". This is an efficient transformation that takes as input a pair of probability distributions (specified by circuits) and produces a new pair of distributions (in the form of new pair of circuits) with the following two guarantees:
* if the initial pair of distributions is statistically _close_ (below the threshold \(1/3\)), then the new pair of distributions is statistically _much closer_ (below the threshold \(2^{-n}\)), whereas
* if the initial pair of distributions is statistically _far apart_ (above the threshold \(2/3\)), then the new pair is statistically _much further apart_ (above the threshold \(1-2^{-n}\)).
This raises the following natural question: is it possible to obtain a "polarization lemma" in the context of \(\mathsf{avgUnitarySZK}\) - the unitary analogue of (average-case) SZK? Specifically, we ask:
**Open Problem 13**.: Is it possible to prove a "polarization lemma" which transforms an instance of \(\textsc{Uhlmann}_{\kappa}\) for a small \(\kappa\), say \(\kappa=1/2\), into an instance of \(\textsc{Uhlmann}_{1-\operatorname{negl}(n)}\) for some negligible function, say \(2^{-n}\)?
Note that the latter problem is complete for avgUnitarySZK, as we established in Theorem 6.7. Watrous [26] previously extended the polarization technique to _density operators_ (specified by quantum circuits which prepare them), and showed that QuantumStateDistinguishability is complete for QSZK. This suggests that one could potentially apply a similar transformation as in [26, Theorem 1] in order to map an Uhlmann\({}_{1/2}\) instance \((1^{n},C,D)\) with \(\mathrm{F}(\rho,\sigma)\geq 1/2\) (where \(\rho\) and \(\sigma\) represent the mixed states induced by \(C,D\)) into an Uhlmann\({}_{1-2^{-n}}\) instance \((1^{n},\tilde{C},\tilde{D})\) with \(\mathrm{F}(\tilde{\rho},\tilde{\sigma})\geq 1-2^{-n}\). While such a circuit transformation is indeed possible via auxiliary qubits (which encode random coins required for polarization), any auxiliary qubits must necessarily be part of the _purifying register_ on which the Uhlmann unitary is allowed to act upon. This significantly complicates the matter when quantum input states are taken into account; for example, it is unclear how to relate instances of DistUhlmann\({}_{1/2}\) to valid instances of DistUhlmann\({}_{1-2^{-n}}\). We leave the task of finding a polarization lemma for avgUnitarySZK - or to find evidence against one - as an interesting open problem.
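For intuition, the two classical ingredients behind the Sahai-Vadhan polarization lemma can be demonstrated numerically on one-bit distributions: the XOR lemma drives a small statistical distance \(\epsilon\) down to \(\epsilon^{k}\), while the direct product drives a large statistical distance toward \(1\). The brute-force Python sketch below is our toy illustration of these two classical facts only; it is not the polarization lemma itself, and it says nothing about the quantum obstacles discussed above.

```python
import itertools
from math import comb

def xor_pair_sd(p0, p1, k):
    """SD of the k-wise XOR construction applied to the pair of one-bit
    distributions Bern(p0), Bern(p1), computed by brute-force enumeration."""
    P = (lambda x: p0 if x else 1 - p0, lambda x: p1 if x else 1 - p1)
    def Z(b, x):  # probability of output x when the hidden XOR equals b
        total = 0.0
        for bs in itertools.product((0, 1), repeat=k):
            if sum(bs) % 2 == b:
                prob = 1.0
                for bi, xi in zip(bs, x):
                    prob *= P[bi](xi)
                total += prob
        return total / 2 ** (k - 1)
    return 0.5 * sum(abs(Z(0, x) - Z(1, x))
                     for x in itertools.product((0, 1), repeat=k))

def product_sd(p, q, k):
    """SD(Bern(p)^k, Bern(q)^k), grouping outcomes by Hamming weight."""
    return 0.5 * sum(comb(k, t) * abs(p**t * (1-p)**(k-t) - q**t * (1-q)**(k-t))
                     for t in range(k + 1))

eps = abs(0.6 - 0.5)                           # SD of Bern(0.6) vs Bern(0.5)
print(xor_pair_sd(0.6, 0.5, 4), "=", eps**4)   # XOR: SD shrinks to SD^k
print(product_sd(0.9, 0.1, 20))                # direct product: SD -> 1
```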
## 7 Structural Results about the Succinct Uhlmann Transformation Problem
In this section, we show that the DistSuccinctUhlmann\({}_{1}\) problem captures the complexity of both avgUnitaryPSPACE and avgUnitaryQIP, which allows us to show that the two unitary complexity classes are equal. Concretely, we show that DistSuccinctUhlmann\({}_{1}\) is a complete problem both for avgUnitaryQIP (Section 7.1) and for avgUnitaryPSPACE (Section 7.2), which implies equality of the classes (Corollary 7.13). We then show additional structural results about the succinct Uhlmann transformation problem, namely that SuccinctUhlmann is complete for worst-case unitaryPSPACE for a suitable choice of cutoff parameter (Section 7.3), and how DistSuccinctUhlmann relates to classical (worst-case) PSPACE (Section 7.4).
### Completeness for avgUnitaryQIP
#### 7.1.1 DistSuccinctUhlmann\({}_{1}\in\) avgUnitaryQIP
We begin with an avgUnitaryQIP protocol for DistSuccinctUhlmann\({}_{1}\), which we will use to show that DistSuccinctUhlmann\({}_{1}\in\) avgUnitaryQIP. The protocol closely mirrors that of Protocol 1 (the avgUnitarySZK protocol for DistUhlmann), except that the circuits \(C\) and \(D\) are no longer polynomial size since now they are specified succinctly. As a result, the polynomial-time verifier can no longer easily get copies of the statePSPACE-state \(|C\rangle\) and can no longer directly implement the unitary \(D^{\dagger}\) to check that the Uhlmann transformation was applied correctly, which were important steps in Protocol 1. For the first problem, we recall that statePSPACE = stateQIP [13], so by interacting with the prover, the verifier can generate additional copies of the input state \(|C\rangle\) (up to arbitrary inverse polynomial error). To solve the second problem, we show that the verifier can perform the measurement \(\{|D\rangle\!\langle D|\,,\mathrm{id}-|D\rangle\!\langle D|\}\) on an arbitrary state with help from the prover. We describe these in more detail next.
Interactive state synthesis.First we recall the main result of [10], which shows that there is an efficient interactive protocol to synthesize any state sequence \((|\psi_{x}\rangle)_{x}\in\) statePSPACE. We describe this result at a high level (for formal details see [10]): for every statePSPACE state sequence \((|\psi_{x}\rangle)_{x}\) there exists a polynomial-time quantum verifier \(V=(V_{x})_{x}\) such that (a) there exists
an honest prover \(P^{*}\) that is accepted by the verifier with probability \(1\) (_completeness_), and after interacting with the honest prover the output register of the verifier is close to \(|\psi_{x}\rangle\) to within \(2^{-n}\) in trace distance (_honest closeness_), and (b) for all provers \(P\) that are accepted with probability at least \(\frac{1}{2}\) (_soundness_), the output register of the verifier is close to \(|\psi_{x}\rangle\) to within some inverse polynomial \(1/p(|x|)\) in trace distance (_closeness_).
In what follows we will utilize as a subroutine the interactive state synthesis protocol for the sequence \(\Gamma=(|C\rangle)_{\hat{C}}\) which is indexed by all succinct descriptions \(\hat{C}\) of a unitary circuit \(C\) and \(|C\rangle\) is the corresponding output state of the circuit (given all zeroes). It is straightforward to see that \(\Gamma\in\mathsf{statePSPACE}\), and therefore there is a \(\mathsf{stateQIP}\) protocol to synthesize \(\Gamma\).
Interactive measurement synthesis.Next we describe another primitive which is a protocol for _interactive measurement synthesis_. At a high level, this is a protocol where a verifier gets a description of a measurement \(M\) and an input register \(\mathsf{A}\) in an unknown state \(\tau\). The verifier interacts with a prover and at the end outputs a measurement outcome bit \(b\) as well as register \(\mathsf{A}\). If the prover is accepted with sufficiently high probability, then (a) the measurement outcome bit \(b\) is \(1\) with probability close to \(\mathrm{Tr}(M\tau)\), and (b) conditioned on acceptance and \(b=1\), the output register \(\mathsf{A}\) is close to being in the state \(\tau|_{M}\), the post-measurement state13 of \(\tau\) conditioned on measuring \(M\).
Footnote 13: Recall the notation used for post-measurement states defined in Section 2.
We show there is an efficient interactive measurement synthesis protocol for the case when the measurement \(M\) is a rank-one projector \(|\psi\rangle\!\langle\psi|\) for some succinctly described state \(|\psi\rangle\).
**Lemma 7.1** (Approximate measurement protocol).: _Let \(\Psi=(|\psi_{x}\rangle)_{x}\) be a \(\mathsf{stateQIP}\) family of states. Then for all polynomials \(p(n)\), there exists a polynomial-time quantum verifier \(V\) that takes as input register \(\mathsf{A}\), and outputs an accept/reject flag, a measurement outcome bit \(b\), and a register \(\mathsf{A}\) such that the following properties hold:_
1. _(Completeness) There exists an honest prover_ \(P^{*}\) _such that for all input states_ \(\tau_{\mathsf{A}}\)__ \[\mathrm{Pr}[V(\tau_{\mathsf{A}}){\leftrightarrows}P^{*}\text{ accepts}]=1\,.\] _Furthermore, given input state_ \(|\psi_{x}\rangle\!\langle\psi_{x}|\) _in register_ \(\mathsf{A}\)_, the verifier outputs_ \(b=1\) _with overwhelming probability:_ \[\mathrm{Pr}[V(|\psi_{x}\rangle\!\langle\psi_{x}|_{\mathsf{A}}){ \leftrightarrows}P^{*}\text{ outputs }1]\geq 1-2^{-|x|}\,.\]
2. _(Soundness) For all input states_ \(\tau_{\mathsf{AR}}\) _(where_ \(\mathsf{R}\) _is an arbitrary external register not touched by the verifier or prover) and for all provers_ \(P\) _such that_ \(V(\tau_{\mathsf{AR}}){\leftrightarrows}P\) _accepts with probability at least_ \(1/2\)_, the measurement outcome bit is approximately correctly distributed:_ \[\Big{|}\Pr[\text{outputs }b=1]-\mathrm{Tr}\Big{(}\,|\psi_{x}\rangle\!\langle\psi_{x}|_{\mathsf{A}}\;\tau_{\mathsf{A}}\Big{)}\Big{|}\leq\frac{1}{p(|x|)}\,,\] _where the events "outputs_ \(b=1\)_" and "accepts" are with respect to the interaction_ \(V(\tau_{\mathsf{AR}}){\leftrightarrows}P\)_. If additionally_ \(\mathrm{Tr}\Big{(}\,|\psi_{x}\rangle\!\langle\psi_{x}|_{\mathsf{A}}\;\tau_{ \mathsf{A}}\Big{)}\geq\frac{1}{2}\)_, then the final state_ \((\tau_{acc})_{\mathsf{AR}}\) _at the end of the protocol conditioned on acceptance and conditioned on measurement outcome bit_ \(b=1\) _satisfies_ \[\mathrm{td}(\tau_{acc},\tau|_{\psi_{x}\otimes\mathrm{id}})\leq\frac{1}{p(|x|)}\,,\] (7.1)
_where_ \(\tau|_{\psi_{x}\otimes\mathrm{id}}\) _denotes the post-measurement state of_ \(\tau_{\mathsf{AR}}\) _conditioned on projecting the_ \(\mathsf{A}\) _register onto the state_ \(|\psi_{x}\rangle\)_. Let_ \(1/p(n)\) _be the_ closeness _of the verifier._
In the second part of the soundness condition, we only require that the verifier's quantum output is close in trace distance to \(\tau|_{\psi_{x}\otimes\mathrm{id}}\) if the verifier accepts with probability at least \(\frac{1}{2}\) _and_ the probability of obtaining measurement outcome \(1\) (with the ideal measurement) is at least \(1/2\). Intuitively, the reason for this is that Equation (7.1) makes a statement involving the conditioned state \(\tau|_{\psi_{x}\otimes\mathrm{id}}\), which can become very sensitive to errors if the measurement probability \(\mathrm{Tr}\Big{(}\,|\psi_{x}\rangle\!\langle\psi_{x}|_{\mathsf{A}}\ \tau_{\mathsf{A}}\Big{)}\) is very small. The \(1/2\) threshold can be relaxed to being any inverse polynomial if the trace distance error is suitably adjusted.
We defer the proof of Lemma 7.1 to Section 7.1.3.
We now use these two primitives (interactive state and measurement synthesis) to prove the following.
**Lemma 7.2**.: \(\textsc{DistSuccinctUhlmann}_{1}\in\mathsf{avgUnitaryQIP}\)_._
Proof.: Fix a polynomial \(q(n)\). We present in Protocol 2 an \(\mathsf{avgUnitaryQIP}\) protocol for \(\textsc{DistSuccinctUhlmann}_{1}\) with completeness \(1-2^{-\Omega(n)}\), soundness \(\frac{1}{2}\), and closeness \(1/q(n)\). We use as subroutines
1. The \(\mathsf{stateQIP}\) protocol for synthesizing the state sequence \(\Gamma\) (which is all succinctly described states) with completeness \(1\), soundness \(1/2\), and closeness \(1/32q(n)^{2}\).
2. The approximate measurement protocol from Lemma 7.1 for the state sequence \(\Gamma\) with closeness \(1/32q(n)^{2}\).
For a circuit \(C\) we write \(C^{\otimes m}\) to denote \(m\) parallel copies of the circuit which generates the product state \(|C\rangle^{\otimes m}\).
**Protocol 2**.: \(\mathsf{avgUnitaryQIP}_{1-2^{-n+1},\frac{1}{2},\frac{1}{q}}\) **verifier for \(\textsc{DistSuccinctUhlmann}_{1}\)**
**Input:** Classical string \(x=(1^{n},\hat{C},\hat{D})\) specifying a succinct description of a pair of circuits \((C,D)\), and quantum register \(\mathsf{B}_{0}\).
1. Let \(m=16q(n)^{2}\), and perform the \(\mathsf{stateQIP}\) protocol to synthesize \(|C\rangle^{\otimes m}\) in registers \(\mathsf{A}_{1}\mathsf{B}_{1}\cdots\mathsf{A}_{m}\mathsf{B}_{m}\). If the \(\mathsf{stateQIP}\) protocol rejects, reject.
2. Select a permutation \(\pi\in S_{m+1}\) uniformly at random and apply \(\mathcal{P}_{\pi}\) to \(\mathsf{B}_{[0:m]}\). Send \(\mathsf{B}_{[0:m]}\) to the prover.
3. Verifier receives registers \(\mathsf{B}_{[0:m]}\) from the prover. Then:
    (a) Apply \(\mathcal{P}_{\pi^{-1}}\) to \(\mathsf{B}_{[0:m]}\).
    (b) Perform the approximate measurement protocol with measurement \(|D\rangle\!\langle D|^{\otimes m}\) on quantum register \(\mathsf{AB}_{[m]}\). If the protocol rejects or outputs \(b=0\), reject.
    (c) Accept and output the register \(\mathsf{B}_{0}\).
We show that the verifier described in Protocol 2 satisfies the required properties of \(\mathsf{avgUnitaryQIP}\) protocols. First, it is clear that the verifier runs in polynomial time. This uses the fact that the \(\mathsf{stateQIP}\) and approximate measurement verifiers run in polynomial time, and the succinct descriptions of \(\left|C\right\rangle^{\otimes m}\) and \(\left|D\right\rangle^{\otimes m}\) are polynomial-sized in the lengths of the succinct descriptions \(\hat{C}\) and \(\hat{D}\). We prove the completeness and soundness conditions in separate lemmas:
1. There is an honest quantum prover that succeeds with probability at least \(1-2^{-n+1}\) (Lemma 7.3).
2. The verifier satisfies the soundness condition of an \(\mathsf{avgUnitaryQIP}_{1-2^{-n+1},1/2,1/q}\) protocol (Lemma 7.4).
Combined, Lemmas 7.3 and 7.4 imply Lemma 7.2.
**Lemma 7.3** (Completeness).: _For all valid \(\textsc{SuccinctUhlmann}_{1}\) instances \(x=(1^{n},\hat{C},\hat{D})\), for sufficiently large \(n\), there exists an honest prover for Protocol 2 satisfying_
\[\Pr[V_{x}(\left|C\right\rangle)\leftrightarrows P\text{ accepts}]\geq 1-2^{-n+1}\,.\]
Proof.: We define an honest prover that acts as follows: it first runs the honest prover strategy for the \(\mathsf{stateQIP}\) protocol, which has honest closeness \(2^{-n}\). Then the prover implements the Uhlmann unitary between \(C\) and \(D\). Finally the prover runs the honest prover strategy for the approximate measurement protocol. After the first step of the protocol, the verifier holds a state within trace distance \(2^{-n}\) of \(\left|C\right\rangle^{\otimes m+1}\) on registers \(\mathsf{AB}_{[0:m]}\), and after the second step the optimal Uhlmann unitary has been performed. Therefore after the second step the verifier holds a state within trace distance \(2^{-n}\) of \(\left|D\right\rangle^{\otimes m+1}\). By the completeness property of the approximate measurement protocol, the verifier accepts with probability \(1\), and when run on \(\left|D\right\rangle^{\otimes m}\), the protocol outputs the bit \(b=1\) with probability at least \(1-2^{-n}\). Since the input state is within \(2^{-n}\) of \(\left|D\right\rangle^{\otimes m}\) in trace distance, the approximate measurement protocol on the verifier's real state outputs \(1\) with probability at least \(1-2^{-n+1}\). When interacting with this honest prover, this is the only step where the verifier has a non-zero chance of rejecting, so the verifier accepts with probability at least \(1-2^{-n+1}\).
**Lemma 7.4** (Soundness).: _For all valid \(\textsc{SuccinctUhlmann}_{1}\) instances \(x=(1^{n},\hat{C},\hat{D})\), for sufficiently large \(n\), for all quantum provers \(P\), there exists a channel completion \(\Phi_{x}\) of \(U_{x}\) such that_
\[\text{if}\quad\Pr[V_{x}(\left|C\right\rangle)\leftrightarrows P\text{ accepts}]\geq\frac{1}{2}\qquad\text{then}\qquad\operatorname{td}(\sigma,(\Phi_{x}\otimes\operatorname{id})\left|C\right\rangle\!\!\left\langle C\right|)\leq 1/q(n)\,\]
_where \(\sigma\) denotes the output of \(V_{x}(\left|C\right\rangle)\leftrightarrows P\) conditioned on the verifier \(V_{x}\) accepting, and \(q(n)\) is the polynomial used to define Protocol 2._
Proof.: By the definition of \(\textsc{SuccinctUhlmann}_{1}\), for all channel completions \(\Phi_{x}\) of \(U_{x}\) we have that \((\Phi_{x}\otimes\operatorname{id})(\left|C\right\rangle\!\!\left\langle C \right|)=\left|D\right\rangle\!\!\left\langle D\right|\), so it suffices to show that conditioned on accepting, the verifier outputs a state within \(1/q(n)\) of \(\left|D\right\rangle\!\!\left\langle D\right|\) in trace distance. The proof follows the template set by the proof that \(\textsc{DistUhlmann}_{1-\operatorname{negl}}\in\mathsf{avgUnitarySZK}_{ \operatorname{HV}}\), with some subtle differences. We first appeal to Lemma 6.3 to claim that if the verifier could prepare exactly \(\left|C\right\rangle\!\!\left\langle C\right|^{\otimes m}\) in registers \(\mathsf{AB}_{[m]}\) after the \(\mathsf{stateQIP}\) protocol, measuring \(\left|D\right\rangle\!\!\left\langle D\right|\) on \(\mathsf{A_{0}B_{0}}\) accepts with high probability. We then show that the verifier's true state, with errors coming from both state preparation and approximate measurement, is close to this ideal post-measurement state. Finally we apply the Gentle Measurement Lemma.
We begin, as before, by expressing the state of the verifier's registers. By the soundness property of \(\mathsf{stateQIP}\), if the verifier accepts with probability at least \(1/2\), after step 1 the verifier has a state in registers \(\mathsf{AB}_{[m]}\) that is within \(1/32q(n)^{2}\) of \(\left|C\right\rangle^{\otimes m}\) in trace distance. Let \(\rho_{0}\) be the state of registers \(\mathsf{AB}_{[m]}\) after accepting in step 1 of the protocol. After performing the \(\mathsf{stateQIP}\) protocol, the verifier applies a random permutation on \(\mathsf{B}_{[0:m]}\); then the prover will perform some arbitrary action on \(\mathsf{B}_{[0:m]}\) represented by a quantum channel \(\Lambda\); and finally the verifier will undo the permutation from the first step. Treating \(\mathsf{A_{0}}\) as a purification of the verifier's quantum input, the state of \(\mathsf{AB}_{[0:m]}\) is given by
\[\rho^{*}\coloneqq\mathbb{E}_{\pi\in S_{m+1}}\left((\mathcal{P}_{\pi^{-1}})_{\mathsf{B}_{[0:m]}}\circ\Lambda_{\mathsf{B}_{[0:m]}}\circ(\mathcal{P}_{\pi})_{\mathsf{B}_{[0:m]}}\right)(\left|C\right\rangle\!\!\left\langle C\right|_{\mathsf{A_{0}}\mathsf{B_{0}}}\otimes(\rho_{0})_{\mathsf{AB}_{[m]}}) \tag{7.2}\]
Let the state \(\sigma^{*}\) be defined as follows
\[\sigma^{*}=\mathbb{E}_{\pi\in S_{m+1}}\left((\mathcal{P}_{\pi^{-1}})_{ \mathsf{B}_{[0:m]}}\circ\Lambda_{\mathsf{B}_{[0:m]}}\circ(\mathcal{P}_{\pi})_ {\mathsf{B}_{[0:m]}}\right)(\left|C\right\rangle\!\!\left\langle C\right|^{ \otimes m+1}).\]
One can think of \(\sigma^{*}\) as the state the verifier hopes to have in their registers after step \(3(a)\). Because trace distance can only decrease when applying a channel, we have that
\[\mathrm{td}(\rho^{*},\sigma^{*})\leq\mathrm{td}((\rho_{0})_{\mathsf{AB}_{[m]} },\left|C\right\rangle\!\!\left\langle C\right|^{\otimes m})\leq\frac{1}{32q^ {2}}\,,\]
where the final inequality comes from the \(\mathsf{stateQIP}\) soundness promise, as explained above. Let \(\rho_{acc}\) be the state of the verifier after step \(3(b)\) conditioned on the verifier accepting. In step \(3(b)\), the verifier hopes to measure some ideal measurement and see outcome \(\mathcal{M}\), described below as
\[\mathcal{M}=\mathrm{id}_{\mathsf{A_{0}}\mathsf{B_{0}}}\otimes\left|D\right\rangle \!\!\left\langle D\right|^{\otimes m}\,.\]
Then let \(\rho_{acc}^{*}=\rho^{*}|_{\mathcal{M}}\) and \(\sigma_{acc}^{*}=\sigma^{*}|_{\mathcal{M}}\). Our goal is to get a lower bound for the quantity
\[\mathrm{Tr}\left(\left|D\right\rangle\!\!\left\langle D\right|_{\mathsf{A_{0} }\mathsf{B_{0}}}\rho_{acc}\right)\,, \tag{7.3}\]
because applying the Gentle Measurement Lemma will then give us a bound on the trace distance between \(\rho_{acc}\) and \(\left|D\right\rangle\!\!\left\langle D\right|_{\mathsf{A_{0}}\mathsf{B_{0}}}\). Following the calculations in Lemma 6.3, we have that
\[\mathrm{Tr}(\left|D\right\rangle\!\!\left\langle D\right|_{\mathsf{A_{0}} \mathsf{B_{0}}}\sigma_{acc}^{*})\geq 1-\frac{2}{m+1}\,.\]
From the approximate measurement soundness (Lemma 7.1), together with the assumption that the verifier accepts with probability at least \(1/2\) (and thus the outcome bit \(b\) of the approximate measurement protocol is \(1\) with at least the same probability), we have that
\[\mathrm{td}(\rho_{acc},\rho_{acc}^{*})\leq\frac{1}{32q^{2}}\,.\]
We can compute the trace distance between \(\rho^{*}_{acc}\) and \(\sigma^{*}_{acc}\) directly as follows:
\[2\mathrm{td}(\sigma^{*}_{acc},\rho^{*}_{acc}) =\left\|\frac{\mathcal{M}\sigma^{*}\mathcal{M}}{\mathrm{Tr}(\mathcal{M}\sigma^{*})}-\frac{\mathcal{M}\rho^{*}\mathcal{M}}{\mathrm{Tr}(\mathcal{M}\rho^{*})}\right\|_{1}\] \[\leq\left\|\frac{\mathcal{M}\sigma^{*}\mathcal{M}}{\mathrm{Tr}(\mathcal{M}\sigma^{*})}-\frac{\mathcal{M}\sigma^{*}\mathcal{M}}{\mathrm{Tr}(\mathcal{M}\rho^{*})}\right\|_{1}+\left\|\frac{\mathcal{M}(\sigma^{*}-\rho^{*})\mathcal{M}}{\mathrm{Tr}(\mathcal{M}\rho^{*})}\right\|_{1}\] \[\leq\left\|\mathcal{M}\sigma^{*}\mathcal{M}\right\|_{1}\left|\frac{1}{\mathrm{Tr}(\mathcal{M}\sigma^{*})}-\frac{1}{\mathrm{Tr}(\mathcal{M}\rho^{*})}\right|+\frac{\left\|\mathcal{M}(\sigma^{*}-\rho^{*})\mathcal{M}\right\|_{1}}{\mathrm{Tr}(\mathcal{M}\rho^{*})}\] \[\leq\mathrm{Tr}(\mathcal{M}\sigma^{*})\left|\frac{\mathrm{Tr}(\mathcal{M}\rho^{*})-\mathrm{Tr}(\mathcal{M}\sigma^{*})}{\mathrm{Tr}(\mathcal{M}\rho^{*})\mathrm{Tr}(\mathcal{M}\sigma^{*})}\right|+\frac{1}{8q^{2}}\] \[\leq\frac{3}{16q^{2}}\,.\]
For both terms, we get bounds from the fact that \(\mathrm{td}(\rho^{*},\sigma^{*})\leq 1/32q^{2}\) (and trace distance is contractive under channels, including measurements) and \(\mathrm{Tr}(\mathcal{M}\rho^{*})\geq 1/2\). For the second term, we multiply by \(2\) because the trace distance is half the \(1\)-norm. Thus we have that
\[\mathrm{td}(\sigma^{*}_{acc},\rho^{*}_{acc})\leq\frac{3}{32q^{2}}\,.\]
Applying the triangle inequality we have that
\[\mathrm{td}(\rho_{acc},\sigma^{*}_{acc})\leq\frac{1}{8q^{2}}\,.\]
From this trace distance bound we can bound Equation (7.3) by
\[\mathrm{Tr}\left(|D\rangle\!\langle D|_{\mathsf{A}_{\mathsf{Q}}\mathsf{B}_{ \mathsf{0}}}\,\rho_{acc}\right)\geq 1-\frac{2}{m+1}-\frac{1}{8q^{2}}\,.\]
Applying the gentle measurement lemma (Proposition 2.2), we see that the trace distance error from the state \(|D\rangle\!\langle D|\) is at most
\[2\sqrt{\frac{2}{m+1}+\frac{1}{8q^{2}}}\leq 2\sqrt{\frac{1}{4q^{2}}}\leq 1/q\,,\]
where we use the fact that \(m=16q^{2}\), so \(2/(m+1)\leq 1/(8q^{2})\). This completes the proof of Lemma 7.4.
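The chain of inequalities in this proof instantiates a generic robustness fact: if two states are close in trace distance and a measurement outcome is not too unlikely, the corresponding post-measurement states are also close. The numpy check below (ours, illustrative) verifies the bound \(\mathrm{td}(\sigma|_{\mathcal{M}},\rho|_{\mathcal{M}})\leq\tfrac{3}{2}\,\mathrm{td}(\rho,\sigma)/\mathrm{Tr}(\mathcal{M}\rho)\), extracted from the same derivation, on random instances.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 8

def rand_density(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = m @ m.conj().T
    return rho / np.trace(rho).real

def td(a, b):
    """Trace distance: half the sum of absolute eigenvalues of a - b."""
    return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()

sigma = rand_density(d)
rho = 0.98 * sigma + 0.02 * rand_density(d)     # a nearby state

# a random rank-d/2 projector M
q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
M = q[:, : d // 2] @ q[:, : d // 2].conj().T

def condition(state):                            # post-measurement state
    return M @ state @ M / np.trace(M @ state).real

lhs = td(condition(sigma), condition(rho))
rhs = 1.5 * td(rho, sigma) / np.trace(M @ rho).real
print(lhs <= rhs, lhs, rhs)
```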
#### 7.1.2 DistSuccinctUhlmann\({}_{1}\) is avgUnitaryQIP-hard
Having shown that \(\textsc{DistSuccinctUhlmann}_{1}\in\mathsf{avgUnitaryQIP}\), we now need to show that it is in fact a complete problem, i.e. any unitary synthesis problem in avgUnitaryQIP can be reduced to \(\textsc{DistSuccinctUhlmann}_{1}\). This is the statement of Lemma 7.5. Before giving the full proof of Lemma 7.5, we provide some intuition. We need to show that any distributional unitary synthesis problem \((\mathscr{U},\Psi)\in\mathsf{avgUnitaryQIP}\) can be solved by a polynomial-sized circuit with access to a \(\textsc{DistSuccinctUhlmann}_{1}\)-oracle.
As a first step, let us consider a stateQIP-protocol (i.e. an interactive protocol where the verifier receives no input state and is asked to prepare a certain quantum state from scratch) and implement
this in \(\mathsf{stateBQP}^{\textsc{DistSuccinctUhlmann}1}\). For any \(\mathsf{stateQIP}\)-protocol, by [13, Lemma 7.5] there exist purifications of the intermediate states of the protocol (on the verifier and message registers) that are in \(\mathsf{statePSPACE}\). Furthermore, from the proof of [13, Lemma 7.5] it is easy to see that the circuits preparing these purifications have succinct descriptions that are efficiently computable from the descriptions of the verifier actions. Then, a possible successful prover strategy in the \(\mathsf{stateQIP}\)-protocol is simply to implement the Uhlmann transformation between these purifications; see [13, Proof of Thm. 7.1] for a more detailed explanation of this idea. These Uhlmann transformations can be accomplished by a \(\textsc{DistSuccinctUhlmann}_{1}\)-oracle: we can efficiently compute the succinct descriptions of the circuits between which we need to apply the Uhlmann transformation and feed these descriptions to the Uhlmann oracle in order to perform the required transformation, effectively simulating the prover with the Uhlmann oracle.
Now we consider the more difficult case of an \(\mathsf{avgUnitaryQIP}\)-protocol. The key difficulty compared to the \(\mathsf{stateQIP}\)-setting is that we are only given one copy of one register to which we want to apply our desired unitary. However, we can observe that the above argument for the \(\mathsf{stateQIP}\)-protocol only relied on being able to compute the succinct classical descriptions of the circuits preparing purifications of the intermediate states of the protocol. Once we have these classical descriptions, we can implement the required Uhlmann transformation on any given state, i.e. the step of applying the Uhlmann oracle does not require having access to arbitrarily many copies of the input state.
Therefore, to apply the \(\mathsf{avgUnitaryQIP}\)-protocol on a given input register, we proceed in two steps. The first step is purely classical: since in a distributional unitary synthesis problem \((\mathscr{U},\Psi)\in\mathsf{avgUnitaryQIP}\) the state family \(\Psi\) is in \(\mathsf{stateQIP}\), we can construct a \(\mathsf{stateQIP}\)-protocol for the state \(U_{x}\left|\psi_{x}\right\rangle\) with \(U_{x}\in\mathscr{U}\) and \(\left|\psi_{x}\right\rangle\in\Psi\).14 As described above, this \(\mathsf{stateQIP}\)-protocol allows us to efficiently (classically) compute succinct descriptions of the circuits preparing purifications of the intermediate states of the protocol. In the second (quantum) step, we can now use these precomputed succinct classical descriptions to efficiently simulate the \(\mathsf{avgUnitaryQIP}\)-protocol with the Uhlmann oracle. For this, when it is the verifier's turn, we simply apply the (efficient) verifier actions, and when it is the prover's turn we use our pre-computed succinct classical descriptions and the Uhlmann oracle to apply the prover actions. This way, we can simulate the actions of the \(\mathsf{avgUnitaryQIP}\)-protocol given only a single copy of the input register.
Footnote 14: Note that of course this is not the same as solving the average unitary synthesis problem: here, we simply prepare the desired state from scratch, whereas in the unitary synthesis setting we are given a single register of an entangled quantum state and have to apply the desired unitary to that register while keeping the entanglement with the remaining (inaccessible) register intact.
We formalise this idea in the following lemma.
**Lemma 7.5**.: \(\mathsf{avgUnitaryQIP}\) _polynomial-time reduces to \(\textsc{DistSuccinctUhlmann}_{1}\)._
Proof.: Let \((\mathscr{U},\Psi)\in\mathsf{avgUnitaryQIP}\). This means that there exists some polynomial-time quantum verifier \(V=(V_{x})\) who receives as input the \(\mathsf{A}\)-register of the state \(\left|\psi_{x}\right\rangle_{\mathsf{AR}}\) and satisfies the completeness and soundness condition in Definition 4.2. Throughout the proof, whenever we say "successful prover", we mean a prover that is accepted in the protocol with probability at least the soundness threshold \(s(n)=1/2\). Since \(\Psi\in\mathsf{stateQIP}\), there exists another polynomial-time verifier \(V^{\prime}=(V^{\prime}_{x})\) for synthesising the states \(\Psi=(\left|\psi_{x}\right\rangle)\); note that \(V^{\prime}\) receives no quantum input. We can combine these two verifiers into one verifier \(\tilde{V}=(\tilde{V}_{x})\) who receives no input and first executes the actions of \(V^{\prime}\); at the end of this, \(\tilde{V}\) will be in possession of a state on registers \(\mathsf{A}\) and \(\mathsf{R}\). \(\tilde{V}\) then runs \(V\) with \(\mathsf{A}\) as the input register and outputs the resulting state. If either \(V\) or \(V^{\prime}\) rejects, so does \(\tilde{V}\).
Applying [13, Lemma 7.5] to the verifier \(\tilde{V}\) shows that the intermediate states on the message and verifier register in the interaction of \(\tilde{V}\) with any prover with sufficiently high success probability have purifications in \(\mathsf{statePSPACE}\). Furthermore, from the proof of [13, Lemma 7.5] it is easy to see that there are polynomial-time Turing machines that, given as input a description of the verifier's actions in the protocol, output succinct classical descriptions of the quantum polynomial-space circuits for preparing \(\ket{\psi_{n,j}}\) and \(\ket{\varphi_{n,j}}\). This holds because [13, Lemma 7.5] only relies on the block-encoding transformations implemented in [13, Theorems 5.5 and 6.1], which have efficient (and explicit) descriptions.
This means that for each round \(i\) of the protocol, there exist polynomial-space quantum circuits \(C^{i}_{x}\) and \(D^{i}_{x}\) with efficiently computable succinct classical descriptions \(\hat{C}^{i}_{x}\) and \(\hat{D}^{i}_{x}\) such that \(\ket{\alpha^{i}_{x}}_{\mathsf{V}\mathsf{M}\mathsf{P}^{i}}=C^{i}_{x}\ket{0 \ldots 0}\) and \(\ket{\beta^{i}_{x}}_{\mathsf{V}\mathsf{M}\mathsf{P}^{i}}=D^{i}_{x}\ket{0 \ldots 0}\) are purifications of the reduced state on the message register \(\mathsf{M}^{i}\) and verifier register \(\mathsf{V}^{i}\) of the interactive protocol right before and after the prover's action in round \(i\). Observe that because the verifier register in the interactive protocol is not acted upon by the prover, the reduced states on the verifier register are unchanged, i.e.
\[\mathrm{Tr}_{\mathsf{M}\mathsf{P}^{i}}\big{(}\ket{\alpha^{i}_{x}}\!\!\bra{ \alpha^{i}_{x}}_{\mathsf{V}\mathsf{M}\mathsf{P}^{i}}\big{)}=\mathrm{Tr}_{ \mathsf{M}\mathsf{P}^{i}}\big{(}\ket{\beta^{i}_{x}}\!\!\bra{\beta^{i}_{x}}_{ \mathsf{V}\mathsf{M}\mathsf{P}^{i}}\big{)}\,.\]
We can therefore interpret the circuit pair \((C^{i}_{x},D^{i}_{x})\) as an instance of the SuccinctUhlmann problem, with \(\mathsf{V}^{i}\) taking the role of the register that cannot be acted upon by the Uhlmann unitary.15 With access to a DistSuccinctUhlmann-oracle, we can therefore apply an Uhlmann transformation mapping \(\ket{\alpha^{i}_{x}}_{\mathsf{V}\mathsf{M}\mathsf{P}^{i}}=C^{i}_{x}\ket{0 \ldots 0}\) to \(\ket{\beta^{i}_{x}}_{\mathsf{V}\mathsf{M}\mathsf{P}^{i}}=D^{i}_{x}\ket{0 \ldots 0}\) by acting only on registers \(\mathsf{M}^{i}\mathsf{P}^{i}\). This means that with the DistSuccinctUhlmann-oracle, we can efficiently implement the actions of a successful prover in the interactive protocol.16
Footnote 15: Technically we also need to include the space requirement of \(C^{i}_{x}\) and \(D^{i}_{x}\), which can be explicitly computed from the proof of [13, Lemma 7.5], as part of the Uhlmann instance, and pad the verifier register \(\mathsf{V}^{i}\) with additional qubits so that \(\mathsf{V}^{i}\) and \(\mathsf{M}\mathsf{P}^{i}\) have the same number \(n\) of qubits. To help with readability, we do not do this explicitly in the proof.
Footnote 16: Note that of course not every successful prover has to implement the Uhlmann transformation. The important point is that we can implement _some_ successful prover in this way, and the guarantee of the interactive protocol applies to any successful prover.
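To make the role of this Uhlmann transformation concrete, here is a small numpy sketch (our illustration; the dimensions and states are arbitrary stand-ins rather than anything from the protocol): given two purifications with identical reduced states on \(\mathsf{V}\), the canonical Uhlmann unitary on \(\mathsf{MP}^{i}\) can be read off from an SVD of \(\mathrm{Tr}_{\mathsf{V}}(\ket{\beta}\!\bra{\alpha})\), and it reproduces the prover's action exactly.

```python
import numpy as np

rng = np.random.default_rng(6)

def rand_pure(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

dV, dM = 4, 4                                  # verifier register V, prover-side register MP
alpha = rand_pure(dV * dM).reshape(dV, dM)     # |alpha>: state before the prover acts
Z = rng.normal(size=(dM, dM)) + 1j * rng.normal(size=(dM, dM))
U_true = np.linalg.qr(Z)[0]                    # the (hidden) prover unitary, acting on MP only
beta = alpha @ U_true.T                        # |beta> = (id_V (x) U_true)|alpha>

# Canonical Uhlmann unitary: the polar part of X = Tr_V(|beta><alpha|), an operator on MP.
X = beta.T @ alpha.conj()                      # X[m,m'] = sum_v beta[v,m] * conj(alpha[v,m'])
U, _, Vh = np.linalg.svd(X)
W = U @ Vh

print(np.allclose(alpha @ W.T, beta))          # True: W maps |alpha> to |beta>, acting on MP only
```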
We now use this observation to construct a polynomial-size quantum query circuit that, when instantiated with DistSuccinctUhlmann\({}_{1}\) and run on register \(\mathsf{A}\) of \(\ket{\psi_{x}}\), produces the same output state as the quantum interactive protocol with verifier \(V=(V_{x})\) for this problem. The query circuit is constructed as follows: the circuit receives as input register \(\mathsf{A}\) of \(\ket{\psi_{x}}\). The circuit applies the first action of the verifier \(V\), which we can assume to be unitary by purifying the actions of \(V\) and which can be done in polynomial-time since \(V\) is efficient. To the resulting state, the query circuit then applies an oracle gate with the succinct Uhlmann instance \((1^{n},\hat{C}^{i^{*}}_{x},\hat{D}^{i^{*}}_{x})\), where \(i^{*}\) is the round of the verifier \(\tilde{V}\) that corresponds to the first round of the verifier \(V\) (i.e. the first round that is part of the \(\mathsf{avgUnitaryQIP}\), not the \(\mathsf{stateQIP}\), protocol). As we showed above, \(\hat{C}^{i^{*}}_{x}\) and \(\hat{D}^{i^{*}}_{x}\) as well as the number of qubits \(n\) are efficiently computable given a description of the verifier \(\tilde{V}\). This step will correctly implement the actions of a successful prover on this state. The query circuit then proceeds in this manner, applying the next action of the verifier \(V\), simulating the next action of the prover using the oracle gates, etc.
Since \(V\) is polynomial-time, it is clear that the query circuit we constructed above is polynomial-time, too. Finally, to show that it outputs the same state as the interactive protocol, we simply notice that the quantum query circuit simulates a run of the protocol with an honest prover, and that we are applying it to the state \(\ket{\psi_{x}}\) for which the guarantee of the \(\mathsf{avgUnitaryQIP}\)-problem \((\mathscr{U},\Psi)\) holds; it follows that the query circuit produces the correct state.
Combining Lemma 7.2 and Lemma 7.5, we immediately obtain the following theorem.
**Theorem 7.6**.: _The distributional unitary synthesis problem \(\textsc{DistSuccinctUhlmann}_{1}\) is complete for \(\mathsf{avgUnitaryQIP}\)._
#### 7.1.3 Proof of approximate measurement protocol (Lemma 7.1)
To conclude this subsection, we need to prove Lemma 7.1, which we used in Lemma 7.2. The key insight, first observed in [10], will be that given copies of a pure state, the verifier can approximately perform the projection onto the pure state via density matrix exponentiation, described below.
**Lemma 7.7** (Density Matrix Exponentiation [12, 13]).: _Let \(t\in\mathbb{R}\). There exists a quantum polynomial-time algorithm \(\mathrm{DME}\) that takes as input registers \(\mathsf{AC}_{[m]}\) and outputs register \(\mathsf{A}\) with the following guarantee: if the input registers are in state \(\tau_{\mathsf{AB}}\otimes\rho_{\mathsf{C}_{[m]}}^{\otimes k}\), where \(\mathsf{B}\) is an arbitrary purifying register on which the algorithm does not act and \(\rho\) is an \(n\)-qubit mixed state, then the output state \(\sigma_{\mathsf{AB}}\) of the algorithm satisfies_
\[\mathrm{td}(\sigma_{\mathsf{AB}},(W_{\mathsf{A}}\otimes\mathrm{id}_{\mathsf{ B}})\tau_{\mathsf{AB}}(W_{\mathsf{A}}^{\dagger}\otimes\mathrm{id}_{\mathsf{ B}}))\leq O(t^{2}/k)\,,\text{ where }W=e^{2\pi i\cdot t\cdot\rho}\,.\]
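As a sanity check on this guarantee, the following numpy sketch (our illustration, not the construction of [12, 13]; the purifying register \(\mathsf{B}\) is omitted) simulates DME by consuming one program copy per partial SWAP step and shows the trace distance to the exact conjugation by \(W\) shrinking roughly like \(1/k\).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def rand_pure_dm(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

def trace_distance(a, b):
    return 0.5 * np.sum(np.linalg.svd(a - b, compute_uv=False))

d = 4                                          # two-qubit registers
SWAP = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        SWAP[i * d + j, j * d + i] = 1

def dme(tau, rho, t, k):
    # k partial SWAPs of angle 2*pi*t/k, each consuming one fresh copy of rho
    V = expm(1j * (2 * np.pi * t / k) * SWAP)
    for _ in range(k):
        joint = V @ np.kron(tau, rho) @ V.conj().T
        tau = joint.reshape(d, d, d, d).trace(axis1=1, axis2=3)  # trace out the copy
    return tau

t = 0.5
rho, tau = rand_pure_dm(d), rand_pure_dm(d)
W = expm(2j * np.pi * t * rho)                 # target W = e^{2 pi i t rho}
exact = W @ tau @ W.conj().T
for k in (8, 32, 128):                         # error decays roughly like t^2/k
    print(k, trace_distance(dme(tau, rho, t, k), exact))
```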
Let \(U_{\mathrm{DME}}\) be a unitary dilation of \(\mathrm{DME}\), so that applying \(U_{\mathrm{DME}}\) and tracing out all registers except for \(\mathsf{A}\) yields \(\sigma_{\mathsf{AB}}\) in the lemma statement above. Although we do not prove it here, it follows from the implementation in [12] that DME does not act on an ancilla register. There is therefore a quantum polynomial-time algorithm that implements a controlled DME operation on a control register \(\mathsf{R}\), i.e. the following unitary
\[C_{\mathsf{R}}\,\mathrm{DME}=\ket{0}\!\!\bra{0}_{\mathsf{R}}\otimes\mathrm{ id}+\ket{1}\!\!\bra{1}_{\mathsf{R}}\otimes(U_{\mathrm{DME}})_{\mathsf{AC}_{[m]}}.\]
We now describe the _approximate measurement_ protocol, mentioned in Lemma 7.1, which uses the controlled \(C_{(\cdot)}\,\mathrm{DME}\) operation as a subroutine. Let \(k_{q}\) be the number of copies of the "program state" \(\rho\) needed to implement DME to trace distance error \(1/(10q(n))\).
**Protocol 3. Approximate measurement**
**Input:** A classical string \(x\) that is a succinct representation of a polynomial-space circuit \(C\), acting on \(n\) qubits, that prepares some state \(\ket{\psi_{x}}=\ket{C}\), and a quantum register \(\mathsf{A}\).
1. Perform the \(\mathsf{stateQIP}\) protocol for preparing \(\ket{\psi_{x}}^{\otimes k_{q}}\) in register \(\mathsf{C}_{[m]}\) with soundness error \(1/(10q(n))\). If the protocol rejects, reject.
2. If the \(\mathsf{stateQIP}\) protocol accepts:
    (a) Prepare ancilla qubit \(\ket{+}_{\mathsf{N}}\).
    (b) Perform \(C_{\mathsf{N}}\,\mathrm{DME}\) on registers \(\mathsf{AC}_{[m]}\) with \(t=\frac{1}{2}\).
    (c) Measure \(\mathsf{N}\) with the POVM \(\{\ket{+}\!\!\bra{+},\ket{-}\!\!\bra{-}\}\). Accept and output register \(\mathsf{A}\) and the result of the measurement (\(0\) for the first outcome, \(1\) for the second).
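For intuition, here is a numpy sketch of step 2 in the ideal case (our illustration; registers \(\mathsf{B}\) and \(\mathsf{C}_{[m]}\) are idealised away, and DME with \(t=\frac{1}{2}\) is taken to realise the reflection \(W=\mathrm{id}-2\ket{\psi_{x}}\!\bra{\psi_{x}}\) exactly, as computed in the proofs below). The \(\{\ket{+},\ket{-}\}\) measurement on \(\mathsf{N}\) then outputs \(1\) with probability \(|\langle\psi_{x}|\phi\rangle|^{2}\) and collapses \(\mathsf{A}\) onto \(\ket{\psi_{x}}\).

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_pure(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

d = 8
psi = rand_pure(d)                     # |psi_x>, the state defining the measurement
phi = rand_pure(d)                     # input state on register A (taken pure for simplicity)

# Controlled reflection W = id - 2|psi><psi| on |+>_N (x) |phi>_A: in the
# {|+>,|->} basis of N the branches are (id +- W)|phi>/2, and the |-> branch
# is exactly the projection |psi><psi|phi>.
W = np.eye(d) - 2 * np.outer(psi, psi.conj())
minus_branch = (phi - W @ phi) / 2

p_one = np.linalg.norm(minus_branch) ** 2
print(np.isclose(p_one, abs(np.vdot(psi, phi)) ** 2))   # Pr[output 1] = |<psi|phi>|^2

post = minus_branch / np.linalg.norm(minus_branch)
print(np.isclose(abs(np.vdot(psi, post)), 1.0))         # A collapses onto |psi_x>
```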
**Lemma 7.8** (Approximate measurement completeness).: _There exists an honest prover \(P^{*}\) such that when \(V\) implements Protocol 3,_
\[\Pr[V(\tau_{\mathsf{AB}}){\leftrightarrow}P^{*}\text{ accepts}]=1\,.\]
Proof.: The honest prover implements the honest stateQIP protocol for \(\ket{\psi_{x}}^{\otimes k_{q}}\) with completeness error \(2^{-n}\). By the completeness of stateQIP, the verifier accepts with probability 1 during the stateQIP protocol. Hence, the verifier accepts with probability 1.
**Corollary 7.9** (Approximate measurement honest prover output probability).: _For the honest prover \(P^{*}\), when the input state is \(\ket{\psi_{x}}\!\!\bra{\psi_{x}}_{\mathsf{A}}\otimes\rho_{\mathsf{B}}\) for some state \(\rho_{\mathsf{B}}\),_
\[\Pr[V(\ket{\psi_{x}}\!\bra{\psi_{x}}_{\mathsf{A}}\otimes\rho_{\mathsf{B}}){ \leftrightarrow}P^{*}\text{ outputs 1}]\geq 1-2^{-n}\,.\]
Proof.: We first describe the DME protocol in more detail (see [13] for a full description). The protocol involves performing partial SWAP gates (\(e^{-i\Delta t\mathsf{SWAP}}\)) with a well-chosen value of \(\Delta t\). From [13, Equation (1)], the action of the partial SWAP gate on one of the input registers is given by
\[\mathrm{Tr}_{\mathsf{P}}\big{(}e^{-i\Delta tS}\left((\rho)_{\mathsf{P}}\otimes(\sigma)_{\mathsf{Q}}\right)e^{i\Delta tS}\big{)}=\cos^{2}(\Delta t)\,\sigma+\sin^{2}(\Delta t)\,\rho-i\sin(\Delta t)\cos(\Delta t)\,[\rho,\sigma]\,, \tag{7.4}\]
whence it follows that when \(\rho=\sigma=\ket{\psi_{x}}\!\bra{\psi_{x}}\), each partial SWAP gate acts on \(\ket{\psi_{x}}\otimes\ket{\psi_{x}}\) only by a phase, so the DME algorithm implements the action of \(W=e^{i\pi\ket{\psi_{x}}\!\bra{\psi_{x}}}\) on this input exactly; in particular, since \(W\ket{\psi_{x}}=-\ket{\psi_{x}}\), the controlled DME maps the ancilla \(\ket{+}_{\mathsf{N}}\) to \(\ket{-}_{\mathsf{N}}\). Hence, if the verifier's state after step 1 were exactly \(\ket{\psi_{x}}^{\otimes(k_{q}+1)}\), it would output \(1\) with probability \(1\).
By the completeness property of the honest prover (for stateQIP), the state of the verifier's system after step 1 is within \(2^{-n}\) of \(\ket{\psi_{x}}^{\otimes(k_{q}+1)}\) in trace distance. Since step 2 of the protocol can be thought of as a single measurement, the probability that the verifier outputs \(1\) after interacting with the honest prover is at least \(1-2^{-n}\).
**Lemma 7.10** (Approximate measurement soundness).: _For all provers \(P\) that are accepted by the verifier with probability at least \(1/2\),_
\[|\Pr[V(\tau_{\mathsf{AB}}){\leftrightarrow}P\text{ outputs 1}]-\mathrm{Tr}((\ket{\psi_{x}}\!\bra{\psi_{x}}\otimes\mathrm{id})\tau)|\leq\delta(|x|)\,. \tag{7.5}\]
_Furthermore if the verifier outputs \(1\) with probability at least \(1/2\), conditioned on accepting and outputting \(1\), the verifier outputs a state \(\tau_{acc}\) satisfying_
\[\mathrm{td}(\tau_{acc},\tau|_{\psi_{x}\otimes\mathrm{id}})\leq 1/q(n)\,. \tag{7.6}\]
Proof.: By the stateQIP soundness property, conditioned on accepting with probability at least \(1/2\), the verifier holds a state within \(1/(10q(n))\) of \(\ket{\psi_{x}}\!\bra{\psi_{x}}^{\otimes k_{q}}\) in \(\mathsf{C}\). Let \(\rho_{1}\) be the state of \(\mathsf{C}\) after interacting with the prover on step 1 and accepting. Define the following unitary \(W\)
\[W=e^{i\pi\ket{\psi_{x}}\!\bra{\psi_{x}}}=\mathrm{id}-2\ket{\psi_{x}}\!\bra{\psi_{x}}\,.\]
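The second equality is the usual collapse of the exponential series for a projector: writing \(P=\ket{\psi_{x}}\!\bra{\psi_{x}}\), so that \(P^{j}=P\) for all \(j\geq 1\),

\[e^{i\pi P}=\mathrm{id}+\Big{(}\sum_{j\geq 1}\frac{(i\pi)^{j}}{j!}\Big{)}P=\mathrm{id}+\big{(}e^{i\pi}-1\big{)}P=\mathrm{id}-2P\,.\]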
\(W\) is the unitary that DME will approximate when \(\rho=\ket{\psi_{x}}\!\!\bra{\psi_{x}}\) and \(t=1/2\) in Lemma 7.7. Now define the following states.
\[\sigma =C_{\mathsf{N}}\,\mathrm{DME}(\ket{+}\!\bra{+}_{\mathsf{N}} \otimes\tau_{\mathsf{AB}}\otimes(\rho_{1})_{\mathsf{C}})\,,\] \[\sigma^{\prime} =C_{\mathsf{N}}\,\mathrm{DME}(\ket{+}\!\bra{+}_{\mathsf{N}} \otimes\tau_{\mathsf{AB}}\otimes(\ket{\psi_{x}}\!\bra{\psi_{x}})_{\mathsf{C}})\,,\] \[\sigma^{*} =\frac{1}{2}\left(\ket{0}\!\!\bra{0}_{\mathsf{N}}\otimes\tau_{ \mathsf{AB}}+\ket{1}\!\bra{1}_{\mathsf{N}}\otimes(W\tau W^{\dagger})_{\mathsf{ AB}}+\ket{0}\!\!\bra{1}_{\mathsf{N}}\otimes(\tau W^{\dagger})_{\mathsf{ AB}}+\ket{1}\!\bra{0}_{\mathsf{N}}\otimes(W\tau)_{\mathsf{AB}}\right)\,.\]
Note that \(\sigma\) is the state the algorithm will hold after step \(2(b)\), \(\sigma^{\prime}\) is the state the verifier would hold if it could do perfect state preparation of \(\ket{\psi_{x}}\!\bra{\psi_{x}}\) in step \(1\), and \(\sigma^{*}\) is the state the verifier would hold if it could perform \(W\) on \(\mathsf{A}\) instead of performing \(C_{\mathsf{N}}\,\mathrm{DME}\) with perfect program states. From the stateQIP soundness property,
\[\mathrm{td}(\sigma,\sigma^{\prime})\leq\frac{1}{10q(n)}\]
and from Lemma 7.7,
\[\mathrm{td}(\sigma^{\prime},\sigma^{*})\leq\frac{1}{10q(n)}\,.\]
Combining these with the triangle inequality yields
\[\mathrm{td}(\sigma,\sigma^{*})\leq\frac{1}{5q(n)}\,. \tag{7.7}\]
It suffices to show the following claim about the ideal state \(\sigma^{*}\).
**Claim 7.11**.: _Measuring the POVM \(\{\ket{+}\!\bra{+},\ket{-}\!\bra{-}\}\) on \(\sigma^{*}\) yields outcome \(1\) with probability_
\[\mathrm{Tr}((\ket{\psi_{x}}\!\bra{\psi_{x}}\otimes\mathrm{id})\tau)\,.\]
_and the state of register \(\mathsf{A}\) after measuring the POVM on \(\sigma^{*}\) and seeing outcome \(\ket{-}\!\bra{-}\) is \(\tau|_{\psi_{x}\otimes\mathrm{id}}\)._
We can assume that \(\tau\) is a pure state since we can take \(\mathsf{B}\) to be a purifying register, so let \(\tau=\ket{\phi}\!\bra{\phi}\!\ket{{\mathsf{A}}{\mathsf{B}}}\) be a pure state. Since \(W\) has \(2\) eigenvalues, with corresponding eigenspaces \(\ket{\psi_{x}}\!\bra{\psi_{x}}\otimes\mathrm{id}\) and \(\mathrm{id}-\ket{\psi_{x}}\!\bra{\psi_{x}}\otimes\mathrm{id}\), consider the following decomposition of \(\ket{\phi}\) in the \(\{\ket{\psi_{x}}\!\bra{\psi_{x}}_{\mathsf{A}}\otimes\mathrm{id}_{\mathsf{B}}, \mathrm{id}-\ket{\psi_{x}}\!\bra{\psi_{x}}_{\mathsf{A}}\otimes\mathrm{id}_{ \mathsf{B}}\}\) subspaces
\[\ket{\phi}=\alpha\ket{\phi_{\psi}}+\beta\ket{\phi_{\perp}}.\]
It is clear from the definition of \(\sigma^{*}\) that \(\sigma^{*}\) is a pure state when \(\tau\) is a pure state. Then we can express \(\sigma^{*}=\ket{\phi^{*}}\!\bra{\phi^{*}}\) as a pure state (in ket notation) as
\[\ket{\phi^{*}}=\frac{1}{\sqrt{2}}\ket{0}_{\mathsf{N}}\otimes(\alpha\ket{\phi_ {\psi}}+\beta\ket{\phi_{\perp}})_{\mathsf{AB}}+\frac{1}{\sqrt{2}}\ket{1}_{ \mathsf{N}}\otimes(-\alpha\ket{\phi_{\psi}}+\beta\ket{\phi_{\perp}})_{\mathsf{ AB}}\,.\]
Re-arranging terms, we get
\[\ket{\phi^{*}}=\beta\ket{+}_{\mathsf{N}}\otimes\ket{\phi_{\perp}}_{\mathsf{ AB}}+\alpha\ket{-}_{\mathsf{N}}\otimes\ket{\phi_{\psi}}_{\mathsf{AB}}\,.\]
Then it is clear that the probability of seeing outcome \(\ket{-}\!\bra{-}\) when measuring the POVM \(\{\ket{+}\!\bra{+}_{\mathsf{N}},\ket{-}\!\bra{-}_{\mathsf{N}}\}\) is
\[|\alpha|^{2}=\mathrm{Tr}(\ket{\psi_{x}}\!\bra{\psi_{x}}_{\mathsf{A}}\,\tau_{\mathsf{AB}})\,.\]
Thus, the measurement yields outcome \(1\) with the desired probability. With the trace distance bound from Equation (7.7), we have Equation (7.5). Furthermore the state of registers \(\mathsf{AB}\) after measuring the POVM on \(\sigma^{*}\) and seeing outcome \(\ket{-}\!\bra{-}\) is
\[\ket{\phi_{\psi}}\!\bra{\phi_{\psi}}=\tau|_{\psi_{x}\otimes\mathrm{id}}\,.\]
Our goal now is to bound the post-measurement state when measuring \(\ket{-}\!\bra{-}\) on \(\sigma\) instead of \(\sigma^{*}\). Before doing this, we prove a useful inequality relating the post-measurement state of states that
start out close. Let \(\rho\) and \(\rho^{\prime}\) be any two states satisfying \(\operatorname{td}(\rho,\rho^{\prime})\leq\delta\leq 1/4\), and \(\operatorname{Tr}(\Pi\rho)\geq 1/2\) for some projector \(\Pi\). Then we have that
\[\operatorname{td}\left(\rho|_{\Pi},\rho^{\prime}|_{\Pi}\right) \leq\frac{1}{\operatorname{Tr}(\Pi\rho)}\operatorname{td}\left(\Pi\rho\Pi,\,\Pi\rho^{\prime}\Pi\cdot\frac{\operatorname{Tr}(\Pi\rho)}{\operatorname{Tr}(\Pi\rho^{\prime})}\right)\] \[\leq 2\operatorname{td}(\rho,\rho^{\prime}(1+4\delta))\] \[\leq\left\|\rho-\rho^{\prime}(1+4\delta)\right\|_{1}\] \[\leq\left\|\rho-\rho^{\prime}\right\|_{1}+4\delta\left\|\rho^{\prime}\right\|_{1}\] \[\leq 5\delta\,.\]
Here the second line results from the fact that \(\operatorname{Tr}(\Pi\rho^{\prime})\geq\operatorname{Tr}(\Pi\rho)-\delta\), so
\[\frac{\operatorname{Tr}(\Pi\rho)}{\operatorname{Tr}(\Pi\rho^{ \prime})}\leq\frac{\operatorname{Tr}(\Pi\rho)}{\operatorname{Tr}(\Pi\rho)-\delta} =\frac{1}{1-\delta/\operatorname{Tr}(\Pi\rho)}\] \[\leq 1+2\delta/\operatorname{Tr}(\Pi\rho)\] \[\leq 1+4\delta\,.\]
Here the last line uses the fact that for \(x\leq 1/2\), we have \(\frac{1}{1-x}\leq 1+2x\), and the assumption that \(\operatorname{Tr}(\Pi\rho)\geq 1/2\). The rest of the lines apply the definition of trace distance and the triangle inequality for the \(1\)-norm. In the calculations above, let \(\rho=\sigma\), \(\rho^{\prime}=\sigma^{*}\), and \(\Pi=|-\rangle\!\langle-|\). We have shown that
\[\sigma^{*}|_{\ket{-}\!\bra{-}}=\tau|_{\psi_{x}\otimes\operatorname{id}}\quad\text{and}\quad\operatorname{td}(\sigma,\sigma^{*})\leq 1/(5q(n))\,,\]
and by the assumption that \(\operatorname{Tr}(|\psi_{x}\rangle\!\langle\psi_{x}|_{\mathsf{A}}\,\tau_{ \mathsf{AB}})\geq 1/2\), we have that
\[\operatorname{Tr}(|-\rangle\!\langle-|_{\mathsf{N}}\,\sigma^{*})\geq\frac{1}{ 2}\,.\]
Putting these together with the definition of \(\tau_{acc}=\sigma|_{\ket{-}\!\bra{-}}\), we have that
\[\operatorname{td}(\tau_{acc},\tau|_{\psi_{x}\otimes\operatorname{id}})\leq \frac{1}{q(n)}\,.\]
This completes the proof of the lemma.
Finally we combine the previous lemmas to prove Lemma 7.1.
Proof of Lemma 7.1.: Lemma 7.8 and Corollary 7.9 prove completeness. Lemma 7.10 proves soundness.
### Completeness for \(\mathsf{avgUnitaryPSPACE}\)
Having shown that \(\textsc{DistSuccinctUhlmann}_{1}\) is complete for \(\mathsf{avgUnitaryQIP}\), we will now show that it is also complete for \(\mathsf{avgUnitaryPSPACE}\). Together, this implies that \(\mathsf{avgUnitaryQIP}=\mathsf{avgUnitaryPSPACE}\) (Corollary 7.13).
**Theorem 7.12**.: \(\textsc{DistSuccinctUhlmann}_{1}\) _is complete for \(\mathsf{avgUnitaryPSPACE}\)._
Proof.: We first show that \(\mathsf{DistSuccinctUhlmann}_{1}\in\mathsf{avgUnitaryPSPACE}\). This is essentially a restatement of [13, Theorem 7.4], but re-written using notation defined in this paper. An instance \(U_{x}\) of \(\mathsf{SuccinctUhlmann}_{1}\) is specified by a succinct description of a pair of unitary circuits \((C_{x},D_{x})\) on \(n=\operatorname{poly}(|x|)\) qubits. This means that the space complexity of \(C_{x}\) and \(D_{x}\) is \(\operatorname{poly}(|x|)\), so the state families \(\ket{\psi_{x}}_{\mathsf{AB}}=C_{x}\ket{0^{2n}}_{\mathsf{AB}}\) and \(\ket{\phi_{x}}_{\mathsf{AB}}=D_{x}\ket{0^{2n}}_{\mathsf{AB}}\) are in \(\mathsf{statePSPACE}\) (where \(\mathsf{A},\mathsf{B}\) have \(n\) qubits each). Then, [13, Theorem 7.4] states that there exists a family of unitaries \((K_{x})_{x}\in\mathsf{unitaryPSPACE}\), acting only on register \(\mathsf{B}\), that performs the Uhlmann transformation between the state families \(\ket{\psi_{x}}\) and \(\ket{\phi_{x}}\). More formally, for any polynomial \(p\) we have \((\ket{\psi_{x}})_{x},(\ket{\phi_{x}})_{x}\in\mathsf{statePSPACE}_{1/p}\) and \(F(\rho_{x},\sigma_{x})=1\), where \(\rho_{x}\) and \(\sigma_{x}\) are the reduced density matrices of \(\ket{\psi_{x}}_{\mathsf{AB}}\) and \(\ket{\phi_{x}}_{\mathsf{AB}}\) on register \(\mathsf{A}\); hence it holds that \(\operatorname{td}\Bigl{(}(\operatorname{id}\otimes K_{x})\ket{\psi_{x}}\!\bra{\psi_{x}}(\operatorname{id}\otimes K_{x}^{\dagger})\,,\,\ket{\phi_{x}}\!\bra{\phi_{x}}\Bigr{)}\leq O(1/p(|x|))\). This implies that \(\mathsf{DistSuccinctUhlmann}_{1}\in\mathsf{avgUnitaryPSPACE}\).17
Footnote 17: Note that [13] does not employ the language of average case unitary complexity classes, but their phrasing of “there exists a unitary in \(\mathsf{unitaryPSPACE}\) that performs the Uhlmann transformation on these specific states” is equivalent to our definition of average case unitary classes.
We now need to show that any distributional unitary synthesis problem \((\mathscr{U}=(U_{x})_{x},\Psi=(\ket{\psi_{x}})_{x})\in\mathsf{avgUnitaryPSPACE}\) can be reduced to a succinct Uhlmann problem in the sense of Definition 3.21. The idea for this is simple, though the formalisation is slightly tedious: to implement \((\mathscr{U}=(U_{x})_{x},\Psi=(\ket{\psi_{x}})_{x})\), we can simply run the Uhlmann transformation between \(\ket{\psi_{x}}\) and \(U_{x}\ket{\psi_{x}}\). Since \(\Psi\in\mathsf{statePSPACE}\), \(U_{x}\ket{\psi_{x}}\) is too, so we can efficiently construct a string \(y\) that describes this Uhlmann instance. Further note that their reduced states on the first half of the qubits are identical, since \(U_{x}\) only acts on the second half of qubits. With access to a \(\mathsf{DistSuccinctUhlmann}_{1}\)-oracle, we can therefore implement this Uhlmann transformation, and as a result implement any unitary synthesis problem \((\mathscr{U},\Psi)\in\mathsf{avgUnitaryPSPACE}\).
To show this formally, we fix any polynomial \(q(n)\) and need to construct a quantum query algorithm \(C^{*}=(C^{*}_{x})_{x}\) and a polynomial \(r(n)\) such that all \(1/r(n)\)-error average case instantiations of \(C^{\mathsf{DistSuccinctUhlmann}_{1}}\) implement \((\mathscr{U},\Psi)\) with average-case error \(1/q(n)\).
Since \((\mathscr{U},\Psi)\in\mathsf{avgUnitaryPSPACE}\), by definition \(\Psi=(\ket{\psi_{x}})_{x}\in\mathsf{statePSPACE}\), i.e. there exists a family of space-uniform circuits \(S=(S_{x})\) on \(2n=\operatorname{poly}(|x|)\) qubits such that \(\ket{\psi_{x}^{\prime}}\coloneqq S_{x}\ket{0^{2n}}\) is \(1/q^{\prime}(|x|)\)-close in trace distance to \(\ket{\psi_{x}}\) for \(q^{\prime}(n)\) a polynomial (dependent on \(q\)) that we will choose later. Let \(A=(A_{x})_{x}\) denote the space-uniform quantum algorithm that implements \((\mathscr{U},\Psi)\) with average-case error \(1/q^{\prime}(n)\). Define the circuit \(T_{x}\) to be the concatenation of \(\operatorname{id}\otimes A_{x}\) and \(S_{x}\) (i.e. it implements \((\operatorname{id}\otimes A_{x})S_{x}\)), which is space-uniform since \(A\) and \(S\) are space-uniform. Since both \(S_{x}\) and \(T_{x}\) are space-uniform circuits on at most \(2n\) qubits, this means that, given \(x\), we can efficiently construct a string \(y\) such that \(y=(1^{n},\hat{S},\hat{T})\) is a valid succinct Uhlmann instance (Definition 5.5) for the family of circuit pairs \((S,T)\). We therefore define the following family of quantum query circuits: \(C^{*}_{x}\) contains a single oracle gate acting on \(n\) qubits with label \(y\), where \(y=(1^{n},\hat{S},\hat{T})\) is a valid succinct Uhlmann instance constructed from \(x\) as described before. Since \(y\) can be efficiently computed from \(x\), \((C^{*}_{x})_{x}\) is a time-uniform family of quantum query circuits.
By assumption, there exists a channel completion \(\Phi_{x}\) of \(U_{x}\) such that
\[\operatorname{td}\Bigl{(}(\operatorname{id}\otimes A_{x})(\psi_{x}),\,(\operatorname{id}\otimes\Phi_{x})(\psi_{x})\Bigr{)}\leq 1/q^{\prime}(n)\,.\]
Furthermore, by construction the Uhlmann unitary for circuits \(S_{x}\) and \(T_{x}\) applied to the state \(\ket{\psi_{x}^{\prime}}=S_{x}\ket{0^{2n}}\) produces the state \(T_{x}\ket{0^{2n}}=(\operatorname{id}\otimes A_{x})\ket{\psi_{x}^{\prime}}\). Therefore, considering a \(1/r(n)\)-error average case instantiation \(C^{\mathsf{DistSuccinctUhlmann}_{1}}\) and using \(\operatorname{td}(\psi_{x},\psi_{x}^{\prime})\leq 1/q^{\prime}(n)\), we can apply the
triangle inequality to get
\[\operatorname{td}\Big{(}(\mathrm{id}\otimes C_{x}^{\mathrm{DistSuccinctUhlmann}_{1}})(\psi_{x}),\,(\mathrm{id}\otimes A_{x})(\psi_{x})\Big{)}\leq 1/r(n)+2/q^{\prime}(n)\,.\]
Choosing \(r(n)=q^{\prime}(n)=4q(n)\) and combining these two statements with the triangle inequality, this means that \(C_{x}^{\mathrm{DistSuccinctUhlmann}_{1}}\) implements \(U_{x}\) (for channel completion \(\Phi_{x}\)) with average-case error \(1/q(n)\) as desired.
We are now in a position to prove that \(\mathsf{avgUnitaryQIP}\) and \(\mathsf{avgUnitaryPSPACE}\) are in fact equal. This answers an average-case version of an open problem raised in [13, 14], namely whether \(\mathsf{unitaryQIP}=\mathsf{unitaryPSPACE}\), and is one of the first non-trivial results on relations between unitary complexity classes. It also highlights the utility of having complete problems for unitary complexity classes, just like complete problems for traditional complexity classes are an invaluable tool for relating classes to one another.
**Corollary 7.13**.: \(\mathsf{avgUnitaryPSPACE}=\mathsf{avgUnitaryQIP}\)_._
Proof.: This follows immediately from the fact that \(\textsc{DistSuccinctUhlmann}_{1}\) is complete both for \(\mathsf{avgUnitaryQIP}\) (Theorem 7.6) and for \(\mathsf{avgUnitaryPSPACE}\) (Theorem 7.12).
While this resolves the question in the average case, the worst-case version of this question remains open:
**Open Problem 14**.: Does it hold that \(\mathsf{unitaryQIP}=\mathsf{unitaryPSPACE}\)?
Another interesting open question concerns the relationship between traditional complexity theory and unitary complexity theory, and in particular the Uhlmann Transformation Problem:
**Open Problem 15**.: Is \(\mathsf{SuccinctUhlmann}\in\mathsf{unitaryBQP}^{\mathsf{PSPACE}}\)? This would be a "dual" statement to Theorem 7.12. It is closely related to the Unitary Synthesis Problem of [1] (not to be confused with our notion of unitary synthesis problems), which asks whether there is a quantum algorithm \(A\) and, for every \(n\)-qubit unitary \(U\), a boolean function \(f_{U}:\{0,1\}^{\mathrm{poly}(n)}\to\{0,1\}\) such that the unitary \(U\) can be implemented by \(A^{f_{U}}\).
### Completeness for worst-case \(\mathsf{unitaryPSPACE}\)
Most of the results in this paper that we have seen so far - and the ones that follow - focus on the complexity of the distributional Uhlmann Transformation Problem and average-case unitary complexity classes such as \(\mathsf{avgUnitaryBQP}\) and \(\mathsf{avgUnitaryPSPACE}\). As mentioned previously, average-case unitary complexity classes are natural for studying problems where the goal is to locally transform one entangled state to another.
However the "worst-case" unitary synthesis problems like Uhlmann and "worst-case" unitary complexity classes such as \(\mathsf{unitaryBQP}\) and \(\mathsf{unitaryPSPACE}\) are natural in their own right, and there are many interesting questions about them. For example, is Uhlmann is complete for a natural worst-case unitary complexity classes? Is \(\mathsf{unitaryQIP}=\mathsf{unitaryPSPACE}\), just like \(\mathsf{avgUnitaryQIP}=\mathsf{avgUnitaryPSPACE}\)?
Here we describe a result about worst-case unitary complexity: \(\mathsf{SuccinctUhlmann}\) is complete for \(\mathsf{unitaryPSPACE}\), complementing the completeness of \(\mathsf{DistSuccinctUhlmann}\) for \(\mathsf{avgUnitaryPSPACE}\). We sketch the argument for this, and leave a deeper exploration of worst-case unitary complexity classes, and the questions mentioned above, to future work.
**Theorem 7.14**.: \(\textsc{SuccinctUhlmann}_{1,\eta}\) _is complete for unitaryPSPACE for cutoff parameter \(\eta=2^{-2n}\)._
Recall that the unitary synthesis problems Uhlmann and SuccinctUhlmann are parameterized by a cutoff parameter \(\eta\), which is used to make the definition of the canonical Uhlmann isometry (see Definition 5.2) more robust. As discussed at the end of Section 5, the cutoff parameter is set to \(0\) for the distributional problems DistUhlmann and DistSuccinctUhlmann. However the cutoff parameter is important for discussing the worst-case complexity of the Uhlmann Transformation Problem.
Proof sketch.: First we sketch the hardness result; i.e., that \(\textsc{SuccinctUhlmann}_{1,\eta}\) is hard for unitaryPSPACE. Let \(A\) denote a polynomial-space quantum algorithm that implements a unitary synthesis problem \(\mathscr{U}\in\mathsf{unitaryPSPACE}\). Suppose for simplicity that \(A\) is a unitary algorithm (i.e., it consists only of unitary gates). Fix an instance size \(n\), and consider the following two states: let \(\ket{C}\) denote the maximally entangled state on two \(n\)-qubit registers, and let \(\ket{D}\) denote the state obtained by applying \(A\) to half of the maximally entangled state. Clearly \(\ket{C}\) and \(\ket{D}\) are computable in polynomial space, and thus \((1^{n},\hat{C},\hat{D})\) is a valid \(\textsc{SuccinctUhlmann}_{1}\) instance. The canonical Uhlmann isometry \(W\) with cutoff \(\eta\) corresponding to \((\ket{C},\ket{D})\) is exactly the unitary \(A\), which can be seen as follows:
\[W=\mathrm{sgn}_{\eta}\big{(}\mathrm{Tr}_{\mathsf{A}}(\ket{D}\!\bra{C})\big{)}=\mathrm{sgn}_{\eta}\big{(}\mathrm{Tr}_{\mathsf{A}}((\mathrm{id}\otimes A)\ket{C}\!\bra{C})\big{)}=\mathrm{sgn}_{\eta}\left(2^{-n}A\right)=A\]
where we used that \(2^{-n}\geq\eta\). Thus implementing this Uhlmann transformation with inverse polynomial error can be used to implement \(\mathscr{U}\) to inverse polynomial error. In the case that \(A\) is not a unitary circuit, we can leverage the fact that a _purification_ of the mixed state \((\mathrm{id}\otimes A)(\ket{C}\!\bra{C})\) can be synthesized in polynomial space; this uses the fact that \(\mathsf{statePSPACE}\) is closed under purification [13, Theorem 6.1].
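The following numpy sketch (our illustration; here \(\mathrm{sgn}_{\eta}\) is realised by keeping the singular values of size at least \(\eta\) in an SVD and setting them to \(1\)) verifies this computation for a Haar-random \(A\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 3, 8

# Haar-ish random unitary A via QR of a complex Gaussian matrix
Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A, _ = np.linalg.qr(Z)

# |C> = maximally entangled state, |D> = (id (x) A)|C>, stored as d x d
# coefficient matrices M with |state> = sum_{a,b} M[a,b] |a>_A |b>_B.
C_mat = np.eye(d) / np.sqrt(d)
D_mat = C_mat @ A.T

# Tr_A(|D><C|)[b,b'] = sum_a D[a,b] * conj(C[a,b']) -- this equals 2^{-n} A.
M = D_mat.T @ C_mat.conj()

# sgn_eta: keep singular values >= eta and replace them by 1
eta = 2.0 ** (-2 * n)
U, s, Vh = np.linalg.svd(M)
W = U[:, s >= eta] @ Vh[s >= eta, :]

print(np.allclose(s, 2.0 ** (-n)))     # all singular values equal 2^{-n} >= eta
print(np.allclose(W, A))               # the canonical Uhlmann isometry is A itself
```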
Next we sketch the containment of \(\textsc{SuccinctUhlmann}_{1,\eta}\) in unitaryPSPACE. This follows from an average-case-to-worst-case reduction. Let \((1^{n},\hat{C},\hat{D})\) be a valid \(\textsc{SuccinctUhlmann}_{1}\) instance. Then [13, Theorem 7.4], which was also used to prove that \(\textsc{DistSuccinctUhlmann}_{1}\in\mathsf{avgUnitaryPSPACE}\), implies that there is a polynomial space algorithm \(A\) such that
\[\mathrm{td}\Big{(}(\mathrm{id}\otimes A)(\ket{C}\!\bra{C}),\ket{D}\!\bra{D} \Big{)}\leq 2^{-4n}\.\]
We claim that the algorithm \(A\) actually implements with small _worst-case error_ the canonical Uhlmann isometry with cutoff \(\eta\) corresponding to \((\ket{C},\ket{D})\). Since the reduced density matrices of \(\ket{C}\) and \(\ket{D}\) on the first \(n\) qubits are identical we can write the Schmidt decompositions of \(\ket{C},\ket{D}\) as
\[\ket{C}=\sum_{i}\sqrt{p_{i}}\ket{v_{i}}\otimes\ket{s_{i}},\qquad \ket{D}=\sum_{i}\sqrt{p_{i}}\ket{v_{i}}\otimes\ket{t_{i}}\]
for some orthonormal bases \(\{\ket{v_{i}}\},\{\ket{s_{i}}\},\{\ket{t_{i}}\}\). Imagine measuring the first \(n\) qubits of \((\mathrm{id}\otimes A)(\ket{C}\!\bra{C})\) and \(\ket{D}\!\bra{D}\) in the \(\{\ket{v_{i}}\}\) basis; then by the convexity of the trace distance we get
\[\sum_{i}p_{i}\ \mathrm{td}\Big{(}A(\ket{s_{i}}\!\bra{s_{i}}),\ket{t_{i}}\! \bra{t_{i}}\Big{)}\leq 2^{-4n}\.\]
It must be that for every \(i\) such that \(p_{i}\geq 2^{-2n}\) we have \(\operatorname{td}\!\left(A(|s_{i}\rangle\!\langle s_{i}|),|t_{i}\rangle\! \langle t_{i}|\;\right)\leq 2^{-2n}\); otherwise the total error would exceed \(2^{-4n}\). The canonical Uhlmann isometry with cutoff \(\eta\) corresponding to \((|C\rangle\,,|D\rangle)\) can be calculated to be
\[W=\sum_{i:p_{i}\geq\eta}|t_{i}\rangle\!\langle s_{i}|\enspace.\]
Since \(A\) maps \(|s_{i}\rangle\) to \(|t_{i}\rangle\) with error \(2^{-2n}\) for every \(i\) with \(p_{i}\geq\eta\), this implies that \(A\) approximates \(W\) with exponentially small error. (Additional care has to be taken to show that \(A\) coherently maps \(|s_{i}\rangle\) to \(|t_{i}\rangle\), but this follows from the fact that \(A\) maps \(|C\rangle\) to \(|D\rangle\).)
### Relationship between avgUnitaryPSPACE and PSPACE
We now turn our attention to the relationship between avgUnitaryPSPACE and "traditional" worst-case PSPACE, which will again involve the DistSuccinctUhlmann\({}_{1}\) problem. We will show that even though avgUnitaryPSPACE is a class of distributional (average-case) unitary synthesis problems, it is "harder" than PSPACE. At first blush, this seems like it should not be true because the average-case solver is allowed an inverse-polynomial error on the distributional input; that is, if the input is sampled randomly from instances of a PSPACE-complete problem, the average-case solver may be incorrect on a large fraction of them. However, we show that languages in PSPACE are computable in polynomial time given oracle access to DistSuccinctUhlmann\({}_{1}\). A general definition of a reduction between a decision problem and an average-case unitary synthesis problem can easily be extracted from the theorem below.
The key idea is to take advantage of the _nonadaptive random-self-reducibility_ of \(\mathsf{PSPACE}\). Informally, a language satisfies nonadaptive random-self-reducibility if there exists a series of _fixed_ distributions over inputs such that any algorithm that decides the language with high probability over those distributions can be used to decide the language on all instances. More formally, Feigenbaum and Fortnow showed [13, Corollary 4.4] that there exists a \(\mathsf{PSPACE}\)-complete language \(L\) satisfying the following: there exists a polynomial \(m(n)\) such that for all \(n\in\mathbb{N}\),
1. there exist \(m=m(n)\) polynomial-time computable functions \(\{\sigma_{i}\}\) that each take as input randomness \(r\in\{0,1\}^{m}\) and an instance \(x\in\{0,1\}^{n}\), and
2. there exists a polynomial-time computable function \(\phi\) that takes as input randomness \(r\in\{0,1\}^{m}\), an instance \(x\in\{0,1\}^{n}\), and answers \(y\in\{0,1\}^{m}\),
such that for all instances \(x\in\{0,1\}^{n}\)
\[\Pr_{r}[\phi(r,x,f_{L}(z_{1}),\ldots,f_{L}(z_{m}))=f_{L}(x)]\geq\frac{3}{4}\]
where \(f_{L}\) is the characteristic function of \(L\) and \(z_{i}=\sigma_{i}(r,x)\). Additionally, for all \(x_{1},x_{2}\in\{0,1\}^{n}\), when \(r\) is chosen uniformly at random, \(\sigma_{i}(r,x_{1})\) is identically distributed to \(\sigma_{i}(r,x_{2})\).
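As a toy illustration of this definition (our example; the actual \(\mathsf{PSPACE}\)-complete language \(L\) is far more involved), consider the parity function \(f(z)=a\cdot z\bmod 2\) for a hidden vector \(a\): taking \(m=2\), \(\sigma_{1}(r,x)=x\oplus r\), \(\sigma_{2}(r,x)=r\), and \(\phi(r,x,y_{1},y_{2})=y_{1}\oplus y_{2}\), each query is marginally uniform, so its distribution is independent of \(x\), and \(\phi\) recovers \(f(x)\) with probability \(1\).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16
a = rng.integers(0, 2, size=n)         # hidden coefficients defining f
f = lambda z: int(a @ z % 2)           # f(z) = a . z mod 2

x = rng.integers(0, 2, size=n)         # the instance we actually care about
r = rng.integers(0, 2, size=n)         # fresh uniform randomness

z1, z2 = x ^ r, r                      # sigma_1, sigma_2: each marginally uniform,
                                       # so the queries reveal nothing about x
assert f(z1) ^ f(z2) == f(x)           # phi: XOR the two answers to recover f(x)
```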
**Theorem 7.15**.: _Let \(L\in\textsc{PSPACE}\). There exists a polynomial-time query algorithm \(C^{*}=(C^{*}_{x})_{x}\) and a polynomial \(p\) such that for all \(x\in L\) with \(|x|=n\), all \(1/p(n)\)-error average-case instantiations \(C^{\textsc{DistSuccinctUhlmann}_{1}}_{x}\) accept with probability at least \(2/3\) (completeness), and for all \(x\not\in L\), all \(1/p(n)\)-error average-case instantiations accept with probability at most \(1/3\) (soundness)._
Proof.: At a high level, recall that a language satisfies random-self-reducibility if there exist efficiently sampleable distributions over instances such that, given the answers to \(f_{L}\) on instances drawn from them, the value \(f_{L}(x)\) can be determined efficiently. We transform this property into a \(\textsc{DistSuccinctUhlmann}\) instance by having the first circuit (denoted \(A\)) sample the fixed distributions (where register \(\mathsf{A}\) purifies the system by holding a uniform superposition over the randomness). The second circuit (denoted \(B\)) likewise samples the fixed distributions, and additionally appends the answer for each sampled instance in another register. Recall that in \(\textsc{DistSuccinctUhlmann}\), we give succinct representations of circuits, so generating the succinct representation of \(B\) can be done efficiently (even though implementing \(B\) would likely take exponential time). Solving \(\textsc{DistSuccinctUhlmann}\) on the distributional input with \(0\) error and measuring the system then yields an input sampled from the fixed distribution together with the corresponding values of \(f_{L}\), which can be used to recover the answer for the original instance with high probability. We now make this intuition formal.
Let \(L\) be a \(\mathsf{PSPACE}\)-complete language that is non-adaptively random-self reducible. Then there exist polynomial-time computable functions \(\phi_{L}\) and \(\{\sigma_{L,i}\}\) satisfying the conditions from the definition of nonadaptive random-self-reducibility above.
Because all \(\sigma_{L,i}\) run in polynomial time in the length of \(x\), there is a polynomial time quantum circuit \(A\) that prepares the following state
\[A\ket{0}=\frac{1}{\sqrt{2^{m}}}\sum_{r\in\{0,1\}^{m}}\ket{r}_{\mathsf{A}}\otimes\ket{\sigma_{L,1}(r,x),\sigma_{L,2}(r,x),\ldots,\sigma_{L,m}(r,x)}_{\mathsf{B}}\otimes\ket{0^{m}}_{\mathsf{C}}\,.\]
As described in the high-level overview, the \(\mathsf{B}\) register stores the samples from the fixed distributions \(\sigma_{L,i}\), and the \(\mathsf{A}\) register purifies the system by storing the randomness used to generate the samples. It is important to note that \(A\) is polynomial-sized, so a polynomial-time quantum algorithm can generate a copy of \(A\ket{0}\).
Since \(L\in\mathsf{PSPACE}\), there exists a polynomial space Turing machine \(M_{L}\) that decides the language. For every input of length \(n\), this Turing machine can be turned into a succinct representation of a circuit \(C_{L,n}\) that implements the characteristic function \(f_{L}\) for inputs of length \(n\) via standard Turing machine to circuit reductions. Therefore there exists an efficient algorithm preparing a succinct representation of a quantum circuit \(B\) that prepares the following state
\[B\ket{0}=\frac{1}{\sqrt{2^{m}}}\sum_{r\in\{0,1\}^{m}}\ket{r}_{\mathsf{A}}\otimes\ket{\sigma_{L,1}(r,x),\sigma_{L,2}(r,x),\ldots,\sigma_{L,m}(r,x)}_{\mathsf{B}}\otimes\ket{C_{L,n}(\sigma_{L,1}(r,x)),\ldots,C_{L,n}(\sigma_{L,m}(r,x))}_{\mathsf{C}}\,.\]
In order to fully align with the definitions, the circuits can be padded with an additional ancillary register initialized to \(\ket{0}\) so that the size of \(\mathsf{BC}\) is the same as that of \(\mathsf{A}\) (because \(\textsc{SuccinctUhlmann}\), as defined, implements an \(n\)-qubit channel, where the inputs specify \(2n\)-qubit states). Let \(\hat{x}\) be the classical string \((1^{n},\hat{A},\hat{B})\). Finally, let \(\phi_{L,x}\) be a polynomial-time quantum circuit that implements the function \(\phi_{L}\) with the input \(x\) hard-coded, i.e.
\[\phi_{L,x}\ket{r,b_{1},b_{2},\ldots,b_{m},l}=\ket{r,b_{1},b_{2},\ldots,b_{m},l \oplus\phi_{L}(x,r,b_{1},b_{2},\ldots,b_{m})}\]
Then, the following family of polynomial-time query circuits \((C_{x}^{*})_{x}\) decides \(L\).
**Protocol 4**.: \(\mathsf{BQP}^{\mathsf{DistSuccinctUhlmann}_{1}}\) **protocol for \(L\in\mathsf{PSPACE}\)**
**Input:** Classical string \(x\).
1. Prepare a copy of \(A\ket{0}\) on registers \(\mathsf{ABC}\).
2. Call the \(\textsc{DistSuccinctUhlmann}_{1}\) oracle on quantum registers \(\mathsf{BC}\) with classical input \(\hat{x}\).
3. Let \(\mathsf{O}\) be a single qubit register in the \(\ket{0}\) state. Run \(\phi_{L,x}\) on registers \(\mathsf{ACO}\).
4. Measure \(\mathsf{O}\) in the computational basis and accept if the measurement outcome is \(1\).
In order to prove the theorem, we will show that, when instantiated with a \(0\)-error average-case \(\textsc{DistSuccinctUhlmann}_{1}\) oracle, Protocol 4 decides \(L\) with completeness \(3/4\) and soundness \(1/4\). Then, by the operational definition of the trace distance, even when instantiated with a \(1/12\)-error average-case solver, the protocol still accepts with probability at least \(3/4-1/12=2/3\) if \(x\in L\) and with probability at most \(1/4+1/12=1/3\) if \(x\not\in L\).
Assume that the call to \(\textsc{DistSuccinctUhlmann}\) in Protocol 4 is instantiated with a \(0\)-error average case solver. Then we can write the state of registers \(\mathsf{ABCO}\) after step 3 in the protocol as
\[\frac{1}{\sqrt{2^{m}}}\sum_{r}\ket{r}_{\mathsf{A}}\otimes\ket{\sigma_{L,1}(r,x),\ldots,\sigma_{L,m}(r,x)}_{\mathsf{B}}\otimes\ket{C_{L,n}(\sigma_{L,1}(r,x)),\ldots,C_{L,n}(\sigma_{L,m}(r,x))}_{\mathsf{C}}\\ \otimes\ket{\phi_{L}(x,r,C_{L,n}(\sigma_{L,1}(r,x)),\ldots,C_{L,n}(\sigma_{L,m}(r,x)))}_{\mathsf{O}} \tag{7.8}\]
The probability that the protocol accepts in step 4 is exactly the probability that, when \(r\) is chosen uniformly at random, \(\phi_{L}\) outputs \(1\) when run on \(x\), \(r\), and \(f_{L}\) applied to each \(\sigma_{L,i}(r,x)\). By the definition of nonadaptive random-self-reducibility, \(\phi_{L}\) is correct with probability at least \(3/4\). Thus, when \(x\in L\), \(f_{L}(x)=1\), so the protocol accepts with probability at least \(3/4\). Similarly, if \(x\not\in L\), \(f_{L}(x)=0\), so the protocol accepts with probability at most \(1/4\).
Now, assume that Protocol 4 is instantiated with a \(1/12\)-error average case solver instead. By the definition of \(1/12\)-error solver and the fact that unitaries preserve trace distance, the state of the protocol after step 3 is within \(1/12\) of the state in Equation 7.8, in trace distance. Then, for any measurement \(M\), the probability that \(M\) accepts on the protocol state is within \(1/12\) of the probability that \(M\) accepts on the state in Equation 7.8. So if \(x\in L\), the protocol accepts with probability at least \(2/3\), and if \(x\not\in L\), the protocol accepts with probability at most \(1/3\).
In this section, we have used the random self-reducibility of \(\mathsf{PSPACE}\) to relate \(\mathsf{avgUnitaryPSPACE}\) and \(\mathsf{PSPACE}\). It is natural to wonder whether a similar self-reducibility property also holds for \(\mathsf{unitaryPSPACE}\) itself, and in particular whether \(\textsc{SuccinctUhlmann}\) might be randomly self reducible as a unitary synthesis problem:
**Open Problem 16**.: Is \(\textsc{SuccinctUhlmann}\) randomly self reducible (in some suitably defined sense), in analogy to randomly self-reducible \(\mathsf{PSPACE}\)-complete problems?
## Part III Uhlmann Transformation Problem: Applications
### 8 Applications to Quantum Cryptography
In this section, we connect the Uhlmann Transformation Problem to concepts in quantum cryptography. We first discuss an equivalence between the existence of quantum commitment schemes and the hardness of the Uhlmann Transformation Problem. Then, we relate the problem of breaking (a class of) one-way state generators, a primitive recently introduced by Morimae and Yamakawa [14], to solving \(\textsc{Uhlmann}_{\kappa}\) for _small_\(\kappa\ll 1\) (whereas most of the other results we discuss in this paper concern \(\textsc{Uhlmann}_{\kappa}\) for \(\kappa\) negligibly close to \(1\)). Finally, we show that any _falsifiable quantum cryptographic assumption_ must imply that DistSuccinctUhlmann cannot be solved in polynomial time. In other words, this essentially means that any security definition that can be phrased in terms of a security game is either information-theoretically realizable or can be broken in avgUnitaryPSPACE.
#### Quantum commitment schemes
We first review the notion of quantum commitment schemes, and in particular the notion of a _canonical quantum bit commitment scheme_ which is a non-interactive protocol for bit commitment involving quantum communication. Yan [13] showed that a general interactive quantum commitment scheme can always be compiled to a non-interactive commitment scheme with the same security properties. Thus without loss of generality we focus on such non-interactive schemes.
**Definition 8.1** (Canonical quantum bit commitment scheme [13]).: _A canonical non-interactive quantum bit commitment scheme is given by a uniform family of unitary quantum circuits \(\{C_{\lambda,b}\}_{\lambda\in\mathbb{N},b\in\{0,1\}}\) where for each \(\lambda\), the circuits \(C_{\lambda,0},C_{\lambda,1}\) act on \(\mathrm{poly}(\lambda)\) qubits and output two registers \(\mathsf{C},\mathsf{R}\). The scheme has two phases:_
1. _In the_ commit stage_, to commit to a bit_ \(b\in\{0,1\}\)_, the sender prepares the state_ \(\left|\psi_{\lambda,b}\right\rangle_{\mathsf{RC}}=C_{\lambda,b}\left|0\cdots 0\right\rangle\)_, and then sends the "commitment register"_ \(\mathsf{C}\) _to the receiver._
2. _In the_ reveal stage_, the sender announces the bit_ \(b\) _and sends the "reveal register"_ \(\mathsf{R}\) _to the receiver. The receiver then accepts if performing the inverse unitary_ \(C_{\lambda,b}^{\dagger}\) _on registers_ \(\mathsf{C},\mathsf{R}\) _and measuring in the computational basis yields the all zeroes state._
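As a minimal numpy sketch of the two phases (our illustration, with Haar-random unitaries standing in for the circuits \(C_{\lambda,b}\)): committing prepares \(C_{b}\ket{0\cdots 0}\), and the reveal-stage check passes for an honest opening because applying \(C_{b}^{\dagger}\) returns the all-zeroes state.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 16                                     # total dimension of registers R and C

def rand_unitary(d):
    Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return np.linalg.qr(Z)[0]

C0, C1 = rand_unitary(d), rand_unitary(d)  # stand-ins for C_{lambda,0}, C_{lambda,1}
zero = np.zeros(d); zero[0] = 1            # |0...0>

def commit(b):
    return (C1 if b else C0) @ zero        # sender prepares C_b|0...0>

def reveal_check(state, b):
    # receiver applies C_b^dagger and accepts iff measuring gives all zeroes
    back = (C1 if b else C0).conj().T @ state
    return bool(np.isclose(abs(back[0]) ** 2, 1.0))

assert reveal_check(commit(0), 0) and reveal_check(commit(1), 1)
print(reveal_check(commit(0), 1))          # almost surely False for random circuits
```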
The security of a canonical commitment scheme consists of two parts, hiding and binding, which we define next.
**Definition 8.2** (Hiding property of commitment scheme).: _Let \(\epsilon(\lambda)\) denote a function. We say that a commitment scheme \(\{C_{\lambda,b}\}_{\lambda,b}\) satisfies \(\epsilon\)-computational (resp. \(\epsilon\)-statistical) hiding if for all non-uniform polynomial-time algorithms (resp. for non-uniform algorithms) \(A=(A_{\lambda})_{\lambda}\) that take as input the commitment register \(\mathsf{C}\) of the scheme \(\{C_{\lambda,b}\}_{\lambda,b}\), the following holds for sufficiently large \(\lambda\):_
\[\Big{|}\Pr\Big{[}A_{\lambda}(\rho_{\lambda,0})=1\Big{]}-\Pr\Big{[}A_{\lambda} (\rho_{\lambda,1})=1\Big{]}\Big{|}\leq\epsilon(\lambda)\,. \tag{8.1}\]
_Here, \(\rho_{\lambda,b}\) denotes the reduced density matrix of \(\left|\psi_{\lambda,b}\right\rangle\) on register \(\mathsf{C}\). If \(\epsilon\) is a negligible function of \(\lambda\) then we simply say that the scheme satisfies strong computational (resp. statistical) hiding. If \(\epsilon(\lambda)\leq 1-\frac{1}{p(\lambda)}\) for some polynomial \(p(\lambda)\) we say it satisfies weak computational (resp. statistical) hiding._
_We call the left hand side of Equation (8.1) the advantage of the family of adversaries \(A=(A_{\lambda})_{\lambda}\)._
**Definition 8.3** (Honest binding property of commitment scheme).: _Let \(\epsilon(\lambda)\) denote a function. We say that a commitment scheme \(\{C_{\lambda,b}\}_{\lambda,b}\) satisfies \(\epsilon\)-computational (resp. \(\epsilon\)-statistical) honest binding if for all non-uniform polynomial-time algorithms (resp. for all non-uniform algorithms) \(A=(A_{\lambda})_{\lambda}\) that take as input the reveal register \(\mathsf{R}\) the following holds for sufficiently large \(\lambda\):_
\[\mathrm{F}\left(\Big{(}A_{\lambda}\otimes\mathrm{id}_{\mathsf{C}}\Big{)}( \psi_{\lambda,0}),\psi_{\lambda,1}\right)\leq\epsilon(\lambda)\,, \tag{8.2}\]
_where \(\psi_{\lambda,b}=\left|\psi_{\lambda,b}\right\rangle\!\!\left\langle\psi_{ \lambda,b}\right|_{\mathsf{RC}}\)._
_If \(\epsilon\) is a negligible function of \(\lambda\) then we simply say that the scheme satisfies strong computational (resp. statistical) honest binding. Otherwise if \(\epsilon(\lambda)\leq 1-\frac{1}{p(\lambda)}\) for some polynomial \(p(\cdot)\) we say that it satisfies weak computational (resp. statistical) honest binding._
_Remark 8.4_.: Definition 8.3 is called _honest_ binding because it requires the binding property only for the states \(\left|\psi_{\lambda,b}\right\rangle\) that are produced if the commit phase is executed honestly. We refer to [13] for a discussion of this definition and stronger versions thereof. Throughout this paper, we will only consider the honest binding property, so we will just drop the term "honest" for brevity.
_Remark 8.5_.: The definitions of hiding and binding can easily be revised to include adversaries that have quantum side information, but for simplicity we focus on adversaries that take classical side information (by way of the non-uniformity of the adversary's circuits). This would require us to consider unitary complexity classes with _quantum advice_, e.g., avgUnitaryBQP/qpoly. We leave this for future work.
Before discussing the connection between the Uhlmann Transformation Problem and commitment schemes, we review several basic facts about them. First, information-theoretically secure quantum commitments do not exist:
**Theorem 8.6** (Impossibility of unconditionally secure quantum commitments [16, 17]).: _There is no quantum commitment scheme that is both strong statistical hiding and strong statistical binding._
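The tension behind Theorem 8.6 is exactly an Uhlmann transformation: by Uhlmann's theorem, the best fidelity with which any map acting on \(\mathsf{R}\) alone can convert \(\psi_{\lambda,0}\) into \(\psi_{\lambda,1}\) equals \(\mathrm{F}(\rho_{\lambda,0},\rho_{\lambda,1})\), while Fuchs-van de Graaf, \(\mathrm{F}\geq(1-\operatorname{td})^{2}\), ties this fidelity to the hiding advantage. The numpy sketch below (our illustration on random pure commitment states) computes both quantities and checks the inequality: if \(\operatorname{td}(\rho_{\lambda,0},\rho_{\lambda,1})\) is tiny (statistical hiding), the optimal binding-breaking fidelity is necessarily close to \(1\).

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(3)

def rand_pure(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

dR, dC = 4, 4
# |psi_b> on registers R (rows) and C (columns), stored as coefficient matrices
M0 = rand_pure(dR * dC).reshape(dR, dC)
M1 = rand_pure(dR * dC).reshape(dR, dC)

# reduced commitments rho_b = Tr_R |psi_b><psi_b| on register C
rho0, rho1 = M0.T @ M0.conj(), M1.T @ M1.conj()

# hiding: any adversary's advantage is at most td(rho_0, rho_1)
hiding = 0.5 * np.sum(np.linalg.svd(rho0 - rho1, compute_uv=False))

# binding: by Uhlmann's theorem, the optimal fidelity achievable by acting on
# register R alone equals F(rho_0, rho_1) of the reduced states on C
binding_fid = np.sum(np.linalg.svd(sqrtm(rho0) @ sqrtm(rho1), compute_uv=False)) ** 2

print(hiding, binding_fid)
print(binding_fid >= (1 - hiding) ** 2)    # Fuchs-van de Graaf: both cannot be small
```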
Thus at least one of the hiding or binding must be computationally secure. There are two commonly considered _flavors_ of quantum commitments: one with statistical hiding and computational binding, and the other with statistical binding and computational hiding. A remarkable fact about canonical quantum commitments is that there is a generic blackbox reduction between these two flavors [17, 18, 19, 20].
Commitment flavor switching. The reduction works as follows. Let \(\{C_{\lambda,b}\}_{\lambda,b}\) denote a commitment scheme. For every \(\lambda\in\mathbb{N}\) and \(b\in\{0,1\}\), define the circuit \(C^{\prime}_{\lambda,b}\) that acts on one more qubit than \(C_{\lambda,b}\) does, and produces the following state:
\[\left|\psi^{\prime}_{\lambda,b}\right\rangle_{\mathsf{C}^{\prime}\mathsf{R}^{ \prime}}\coloneqq C^{\prime}_{\lambda,b}\left|0\cdots 0\right\rangle=\frac{1}{ \sqrt{2}}\Big{(}\left|0\right\rangle_{\mathsf{A}}\left|\psi_{\lambda,0} \right\rangle_{\mathsf{RC}}+(-1)^{b}\left|1\right\rangle_{\mathsf{A}}\left| \psi_{\lambda,1}\right\rangle_{\mathsf{RC}}\Big{)}\,,\]
where \(|\psi_{\lambda,b}\rangle=C_{\lambda,b}\,|0\cdots 0\rangle\) and the registers are \(\mathsf{C}^{\prime}=\mathsf{RA}\) and \(\mathsf{R}^{\prime}=\mathsf{C}\) (i.e., the reveal and commitment registers are swapped, and the commitment register has an extra qubit). Clearly, the circuit \(C^{\prime}_{\lambda,b}\) is polynomial-size if \(C_{\lambda,b}\) is.
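A numpy sketch of this construction (our illustration on random pure commitments) makes the flavor switch quantitative: the statistical hiding advantage of the new scheme on \(\mathsf{C}^{\prime}=\mathsf{RA}\) equals \(\sqrt{\mathrm{F}(\rho_{\lambda,0},\rho_{\lambda,1})}\), the square root of the original scheme's optimal binding fidelity, matching the \(\sqrt{\delta}\) loss in Proposition 8.7 below.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(4)

def rand_pure(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

dR, dC = 4, 4
psi0, psi1 = rand_pure(dR * dC), rand_pure(dR * dC)   # original commitment states

def flipped(b):
    # |psi'_b> = (|0>_A |psi_0> + (-1)^b |1>_A |psi_1>) / sqrt(2)
    return np.concatenate([psi0, (-1) ** b * psi1]) / np.sqrt(2)

def reduced_on_AR(vec):
    # new commitment register C' = A R; trace out the new reveal register R' = C
    M = vec.reshape(2 * dR, dC)                       # rows index (a, r), columns index c
    return M @ M.conj().T

td = lambda a, b: 0.5 * np.sum(np.linalg.svd(a - b, compute_uv=False))
hiding_new = td(reduced_on_AR(flipped(0)), reduced_on_AR(flipped(1)))

# optimal binding fidelity of the original scheme = F(rho_0, rho_1) on C (Uhlmann)
M0, M1 = psi0.reshape(dR, dC), psi1.reshape(dR, dC)
rho0, rho1 = M0.T @ M0.conj(), M1.T @ M1.conj()
sqrt_fid = np.sum(np.linalg.svd(sqrtm(rho0) @ sqrtm(rho1), compute_uv=False))

print(np.isclose(hiding_new, sqrt_fid))               # hiding(C') = sqrt(binding(C))
```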
**Proposition 8.7** (Flavor switching for commitments [13, 14]).: _Let \(\epsilon(n),\delta(n)\) be functions. If \(\{C_{\lambda,b}\}_{\lambda,b}\) is an \(\epsilon\)-computationally (resp. statistical) hiding and \(\delta\)-statistical (resp. computational) binding commitment scheme, then \(\{C^{\prime}_{\lambda,b}\}_{\lambda,b}\) is a \(\sqrt{\delta}\)-statistical (resp. computational) hiding and \(2\epsilon^{2}\)-computationally (resp. statistical) binding commitment scheme._
Hardness amplification for commitments. In Section 6 we proved a hardness amplification result for the Uhlmann transformation problem (Theorem 6.8). The key lemma in the proof of this result, Lemma 6.9, also implies that the computational binding property of a commitment scheme can be amplified: roughly speaking, if there is a commitment scheme where it is hard for a malicious sender to map the \(0\)-commitment to have fidelity more than \(1-1/p(\lambda)\) with the \(1\)-commitment for some polynomial \(p(\lambda)\), then there exists another commitment scheme where it is hard for an adversary to map the \(0\)-commitment to have more than \(\frac{1}{q(\lambda)}\) overlap with the \(1\)-commitment for all polynomials \(q(\lambda)\). Flavor switching (Proposition 8.7) then implies hardness amplification for the hiding property (i.e., if it is somewhat hard to distinguish between commitments to \(0\) and \(1\), there is another commitment scheme for which it is much harder). This answers an open question of [16], who asked whether hardness amplification for commitments is possible.
**Theorem 8.8** (Amplification of quantum commitment schemes).: _The following are equivalent:_
1. _Strong statistical hiding and weak computational binding commitment schemes exist._
2. _Strong statistical binding and weak computational hiding commitment schemes exist._
3. _For every polynomial_ \(p(\lambda)\)_, strong statistical hiding and_ \(\frac{1}{p(\lambda)}\)_-computational binding commitment schemes exist._
4. _For every polynomial_ \(p(\lambda)\)_, strong statistical binding and_ \(\frac{1}{p(\lambda)}\)_-computational hiding commitment schemes exist._
Proof.: Proposition 8.7 shows that \((2)\implies(1)\) and \((3)\implies(4)\). Furthermore, \((4)\implies(2)\) is trivial, since \(\frac{1}{p(\lambda)}\)-computational hiding implies weak computational hiding. Thus it only remains to prove \((1)\implies(3)\).
Let \(C\coloneqq\{\,C_{\lambda,b}\,\}_{\lambda,b}\) be a strong statistical hiding, weak computational binding commitment scheme with corresponding commitment states \(\{|\psi_{\lambda,b}\rangle\}\), and let \(p(\lambda)\) be some polynomial. There exists a polynomial \(q(\lambda)\) such that for all non-uniform polynomial size quantum families of circuits \(\{\,R_{\lambda}\,\}_{\lambda}\) and for all sufficiently large \(\lambda\) it holds that
\[\mathrm{F}\left(\Big{(}R_{\lambda}\otimes\mathrm{id}_{\mathsf{C}}\Big{)}(\psi _{\lambda,0}),\psi_{\lambda,1}\right)\leq 1-\frac{1}{q(\lambda)}\.\]
Let \(\nu(\lambda)=1/p(\lambda)\). There exist polynomials \(k(\lambda),T(\lambda)\) such that
\[1-\left(2(1-\nu(\lambda))^{T(\lambda)}+\frac{32T(\lambda)}{\sqrt{k(\lambda)}} \right)\geq 1-\frac{1}{q(\lambda)}\]
for all sufficiently large \(\lambda\). For notational brevity we write \(k=k(\lambda),T=T(\lambda),\nu=\nu(\lambda)\).
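For concreteness (our choice of parameters; any polynomials satisfying the displayed inequality work equally well): since \(2(1-\nu)^{T}\leq 2e^{-\nu T}\), setting

\[T(\lambda)=p(\lambda)\ln\big{(}8q(\lambda)\big{)}\qquad\text{and}\qquad k(\lambda)=\big{(}128\,T(\lambda)\,q(\lambda)\big{)}^{2}\]

gives \(2(1-\nu)^{T}\leq\frac{1}{4q(\lambda)}\) and \(\frac{32T}{\sqrt{k}}=\frac{1}{4q(\lambda)}\), so the left-hand side of the displayed inequality is at least \(1-\frac{1}{2q(\lambda)}\geq 1-\frac{1}{q(\lambda)}\), as required.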
Consider the _amplified commitment scheme_\(C^{\otimes k}\coloneqq\{\,C^{\otimes k(\lambda)}_{\lambda,b}\,\}_{\lambda,b}\). This commitment scheme is clearly polynomial-time and uniform. Applying the contrapositive of Lemma 6.9 with the circuits \(C,D\) of the lemma set to \(C_{\lambda,0},C_{\lambda,1}\) respectively, it holds for all non-uniform polynomial-time algorithms \(\{\,A_{\lambda}\,\}_{\lambda}\), for all sufficiently large \(\lambda\),
\[\mathrm{F}\left(\Big{(}A_{\lambda}\otimes\mathrm{id}_{\mathsf{C}}\Big{)}(\psi _{\lambda,0}^{\otimes k}),\psi_{\lambda,1}^{\otimes k}\right)\leq\nu=\frac{1}{ p(\lambda)}\]
where the algorithm \(A_{\lambda}\) acts only on the part of the state \(|\psi_{\lambda,b}\rangle^{\otimes k}\) kept by the sender after the commitment phase. Thus the amplified commitment \(C^{\otimes k}\) is \(\frac{1}{p(\cdot)}\)-computationally binding.
The amplified commitment \(C^{\otimes k}\) is statistically hiding since by definition \(\rho_{\lambda,0}\) and \(\rho_{\lambda,1}\) (the reduced density matrices of the original commitment on register \(\mathsf{C}\)) have trace distance at most \(\mathrm{negl}(\lambda)\) for some negligible function \(\mathrm{negl}\). Thus \(\rho_{\lambda,0}^{\otimes k}\) and \(\rho_{\lambda,1}^{\otimes k}\) (the reduced density matrices of the amplified commitment) have trace distance at most \(\mathrm{negl}(\lambda)k(\lambda)\), which is still a negligible function as \(k(\lambda)\) is a polynomial.
Thus the amplified commitment scheme \(C^{\otimes k}\) has statistical hiding and \(\frac{1}{p(\cdot)}\)-computational binding, as required. This shows that \((1)\implies(3)\) and thus concludes the proof of the theorem.
Note that Theorem 8.8 is just shy of showing an equivalence between weak commitments and the standard notion of commitments in cryptography, where the adversaries can only break the hiding or binding properties with negligible advantage. To prove this stronger statement, we would need to show that weak commitments implies the existence of a _single_ commitment scheme for which an adversary cannot break the binding property with more than \(1/p(\lambda)\) for all polynomials \(p\) (whereas Theorem 8.8 implies the existence of a commitment scheme that depends on the polynomial \(p\)). We conjecture that this stronger amplification holds:
**Conjecture 8.9**.: _Strong statistical hiding and weak computational binding commitment schemes exist if and only if strong statistical hiding and strong computational binding commitment schemes exist._
We note that this conjecture would essentially be implied by Open Problem 11, in the same way in which Theorem 8.8 is implied by Theorem 6.8.
Commitments and the Uhlmann Transformation Problem. The main result of this section is a close connection between the existence of commitment schemes and the complexity-theoretic assumption that DistUhlmann\(\notin\mathsf{avgUnitaryBQP/poly}\).
**Theorem 8.10**.: _If for all negligible functions \(\nu\), DistUhlmann\({}_{1-\nu}\in\mathsf{avgUnitaryBQP/poly}\), then strong statistical hiding, weak computational binding commitments as well as strong statistical binding, weak computational hiding commitments do not exist._
_On the other hand, suppose there exists a negligible function \(\mu\) such that_
1. \(\textsc{DistUhlmann}_{1-\mu}\notin\mathsf{avgUnitaryBQP/poly}\)_, and_
2. _there exists a uniform polynomial-time computable_18 _family_ \(X=\{x_{\lambda}\}_{\lambda\in\mathbb{N}}\) _of Uhlmann_\({}_{1-\mu}\) _instances satisfying the following: there exists a polynomial_ \(q(\lambda)\) _such that for every non-uniform
polynomial-time algorithm_ \(A=(A_{\lambda})_{\lambda}\)_, \(A\) _implements the Uhlmann transformation corresponding to_ \(x_{\lambda}\) _with error greater than_ \(1/q(\lambda)\) _for all sufficiently large_ \(\lambda\)_._
_Then there exist quantum commitments with strong statistical hiding and weak computational binding as well as quantum commitments with weak computational hiding and strong statistical binding._
We note that in the second part of Theorem 8.10, technically speaking the assumption that \(\textsc{DistUhlmann}_{1-\mu}\notin\mathsf{avgUnitaryBQP/poly}\) is implied by the assumption about the existence of the uniform family \(X\) of "hard" instances. However we state it as such in order to highlight the close connection between quantum commitments and whether \(\textsc{DistUhlmann}_{1-\mathrm{negl}}\) is in \(\mathsf{avgUnitaryBQP/poly}\).
We also note that using hardness amplification for commitments (Theorem 8.8), the conclusion of the second part of Theorem 8.10 can be revised to imply the existence of quantum commitments where the computational hiding or computational binding property holds with inverse polynomial security.
Proof of Theorem 8.10.: We begin with the first part of the theorem. Suppose for contradiction that \(\textsc{DistUhlmann}_{1-\nu}\in\mathsf{avgUnitaryBQP/poly}\) for all negligible functions \(\nu\) and there exists a strong statistical hiding and weak computational binding commitment scheme \(C=\{C_{\lambda,b}\}\) (the proof for the other flavor follows from Proposition 8.7). Let \(p(\lambda)\) denote the polynomial from the weak binding property of \(C\), and let \(n(\lambda)\) denote the number of qubits of the commitment on security parameter \(\lambda\). The strong statistical hiding property implies that for some negligible function \(\epsilon(\lambda)\) we have
\[\mathrm{F}(\rho_{\lambda,0},\rho_{\lambda,1})\geq 1-\epsilon(\lambda)\,,\]
where \(\rho_{\lambda,b}\) is the reduced density matrix of the commitment state \(|\psi_{\lambda,b}\rangle=C_{\lambda,b}\,|0\cdots 0\rangle\) on register \(\mathsf{C}\) (the register sent by the sender in the commitment phase). Since \(C_{\lambda,b}\) are quantum polynomial size circuits, it follows that \(\Big{(}(1^{n(\lambda)},C_{\lambda,0},C_{\lambda,1}),|\psi_{\lambda,0}\rangle \,\Big{)}\) is a valid instance of \(\textsc{DistUhlmann}_{1-\epsilon}\) (by padding with zeroes we can assume that \(\{C_{\lambda,b}\}_{b\in\{0,1\}}\) output \(2n(\lambda)\) qubits).
Let \(\delta(\lambda)=\frac{1}{3p(\lambda)}\). By the assumption that \(\textsc{DistUhlmann}_{1-\epsilon}\in\mathsf{avgUnitaryBQP/poly}\) we get that there is a (non-uniform) family of \(\mathrm{poly}(\lambda)\)-size circuits \(A_{\lambda}\) only acting on register \(\mathsf{R}\) for which
\[\mathrm{td}\,\Big{(}\Big{(}A_{\lambda}\otimes\mathrm{id}_{\mathsf{C}}\Big{)}( \psi_{\lambda,0})\,,\,\psi_{\lambda,1}\Big{)}\leq\delta(\lambda)\.\]
By Fuchs-van de Graaf we get
\[\mathrm{F}\,\Big{(}\Big{(}A_{\lambda}\otimes\mathrm{id}_{\mathsf{C}}\Big{)}( \psi_{\lambda,0})\,,\,\psi_{\lambda,1}\Big{)}\geq 1-2\delta(\lambda)>1- \frac{1}{p(\lambda)}\,,\]
which breaks the \((1-1/p)\)-computational binding property of the commitment scheme, a contradiction.
We now prove the second part of the theorem. Let \(X=\{\,x_{\lambda}\,\}_{\lambda\in\mathbb{N}}\) be a family of \(\textsc{Uhlmann}_{1-\mu}\) instances satisfying the premise of the second part of the theorem. For each \(\lambda\), we have \(x_{\lambda}=(1^{n(\lambda)},D_{\lambda},E_{\lambda})\) for some polynomial \(n(\lambda)\) (since \(X\) is uniform polynomial-time computable). Consider the commitment scheme defined by \(X\), i.e. \(C\coloneqq\{\,C_{\lambda,b}\,\}_{\lambda,b}\) where for each \(\lambda\), \(C_{\lambda,0}\coloneqq D_{\lambda},C_{\lambda,1}\coloneqq E_{\lambda}\). By assumption, we have that \(\{\,C_{\lambda,b}\,\}_{\lambda,b}\) is a uniform polynomial-size family of circuits each acting on \(2n(\lambda)\) qubits. Since \(x_{\lambda}\) is a valid \(\textsc{Uhlmann}_{1-\mu}\) instance for all \(\lambda\), the reduced density
matrices \(\rho_{\lambda,0}\) and \(\rho_{\lambda,1}\) have fidelity at least \(1-\mu(\lambda)\) for some negligible function \(\mu\); applying Fuchs-van de Graaf, their trace distance is at most \(O(\sqrt{\mu(\lambda)})\), which is still a negligible function. Thus \(\left\{\,C_{\lambda,b}\,\right\}_{\lambda,b}\) satisfies strong statistical hiding.
To show the weak computational binding property, by assumption there exists a polynomial \(q(\lambda)\) such that for all non-uniform polynomial time algorithms \(A=(A_{\lambda})_{\lambda}\) and for all sufficiently large \(\lambda\) we have
\[\operatorname{td}\left(\Big{(}A_{\lambda}\otimes\operatorname{id}_{\mathsf{C }}\Big{)}(\psi_{\lambda,0})\,,\,\psi_{\lambda,1}\right)\geq\frac{1}{q(\lambda)}\.\]
Applying Fuchs-van de Graaf implies that for all sufficiently large \(\lambda\),
\[\operatorname{F}\left(\Big{(}A_{\lambda}\otimes\operatorname{id}_{\mathsf{C }}\Big{)}(\psi_{\lambda,0})\,,\,\psi_{\lambda,1}\right)\leq 1-\frac{1}{q( \lambda)^{2}}\.\]
Thus the commitment scheme \(\left\{\,C_{\lambda,b}\,\right\}_{\lambda,b}\) satisfies \((1-\frac{1}{q(\lambda)^{2}})\)-computational binding, and thus weak binding, as required.
As mentioned before, the proof for the other flavor follows from Proposition 8.7. This concludes the proof of the theorem.
### One-way state generators
Morimae and Yamakawa [14] introduced the _one-way state generator (OWSG)_ as a quantum analogue of one-way functions (OWFs). Intuitively speaking, an OWSG is an efficient algorithm that maps a classical key \(k\) to a quantum state \(\left|\phi_{k}\right\rangle\) that is, in a sense, hard to invert. OWSGs are intimately related [14] to other quantum cryptographic primitives such as pseudorandom states [13] and quantum money schemes [15, 1]. A fascinating open question is whether OWSGs could constitute a so-called _minimal assumption_ in quantum cryptography in the same way that OWFs are minimal in classical cryptography, meaning that OWFs are implied by nearly all classical cryptographic primitives.19
Footnote 19: Even properly formulating this question is a subtle task, as there exist _information-theoretically secure_ OWSGs (e.g., Wiesner’s quantum money scheme [15]).
The question of the minimality of OWSGs is related to questions about the complexity of breaking OWSGs. If OWSGs are indeed minimal (in some appropriate sense) for quantum cryptography, then the complexity of breaking most quantum cryptographic primitives would be upper-bounded by the complexity of breaking OWSGs. (The classical analogue is that if \(\mathsf{P}=\mathsf{NP}\), then most classical cryptography is impossible.) In this section we essentially prove that for a natural class of OWSGs, either they are information-theoretically secure or they can be efficiently cloned with an oracle for solving \(\textsc{Uhlmann}_{\kappa}\) for small \(\kappa\ll 1\).
We present the original definition of a OWSG, given by [14]. There are more general definitions given by [14], but we stick with the simpler one for now.
**Definition 8.11** (One-way state generator).: _Let \(t(\lambda)\) be a polynomial. A one-way state generator (OWSG) is a quantum polynomial-time algorithm \(G=(G_{\lambda})_{\lambda}\) that for all \(\lambda\in\mathbb{N}\) takes as input a computational basis state \(\left|k\right\rangle\) with \(k\in\{0,1\}^{\lambda}\) and outputs a pure state \(\left|\phi_{k}\right\rangle\)._
Next, we define security for OWSGs. Intuitively, it states that even when given multiple copies of \(\left|\phi_{k}\right\rangle\) (the output of the OWSG on key \(k\)), an adversary cannot output another key \(k^{\prime}\) such that \(\left|\phi_{k^{\prime}}\right\rangle\) has non-negligible overlap with \(\left|\phi_{k}\right\rangle\).
**Definition 8.12** (OWSG security).: _We say that a OWSG \(G\) has statistical (resp. computational) \(t\)-copy OWSG security if for all computationally unbounded (resp. polynomial-time) non-uniform algorithms \(A=(A_{\lambda})_{\lambda}\),_
\[\Pr\left(\text{measuring }|\phi_{k}\rangle\text{ with }|\phi_{k^{\prime}}\rangle\! \langle\phi_{k^{\prime}}|\text{ accepts}:\begin{array}{c}k\leftarrow\{0,1\}^{ \lambda}\\ k^{\prime}\gets A_{\lambda}(G_{\lambda}(k)^{\otimes t(\lambda)})\end{array} \right)\leq\operatorname{negl}(\lambda)\.\]
_Equivalently, this can also be written as_
\[\operatorname*{\mathbb{E}}_{\begin{subarray}{c}k\leftarrow\{0,1\}^{\lambda} \\ k^{\prime}\gets A_{\lambda}(G_{\lambda}(k)^{\otimes t(\lambda)})\end{subarray}} \left|\,\langle\phi_{k^{\prime}}|\phi_{k}\rangle\,\right|^{2}\leq\operatorname {negl}(\lambda)\,. \tag{8.3}\]
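To make the quantity in Equation (8.3) concrete, here is a minimal numpy sketch (purely illustrative and hypothetical: the "generator" is just a lookup table of random states, which is of course not one-way, and the adversary is trivial) that evaluates the left-hand side exactly for toy parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, dim = 3, 8  # 2**lam keys; output states of dimension dim

def rand_pure(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

# toy "generator" (NOT one-way): a fixed table mapping keys to random pure states
phi = {k: rand_pure(dim) for k in range(2 ** lam)}

def adversary(copies_of_phi_k):
    # a trivial adversary that ignores its copies and always guesses k' = 0
    return 0

# the left-hand side of Equation (8.3), averaged exactly over all keys k
advantage = np.mean([abs(np.vdot(phi[adversary(phi[k])], phi[k])) ** 2
                     for k in range(2 ** lam)])
print(advantage)  # roughly (1 + (2**lam - 1)/dim) / 2**lam; security demands negl(lam)
```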
Next we consider a potentially incomparable definition of OWSG security that we call _unclonability_. Intuitively it means that no efficient adversary, when given \(t\) copies of \(|\phi_{k}\rangle\), can produce a \((t+1)\)-partite state that has high overlap with \(|\phi_{k}\rangle^{\otimes t+1}\).
**Definition 8.13** (Unclonability).: _We say that a OWSG \(G\) is statistically (resp. computationally) \(t\)-copy unclonable if for all computationally unbounded (resp. polynomial-time) non-uniform algorithms \(A=(A_{\lambda})_{\lambda}\),_
\[\Pr\left(\text{measuring }\rho\text{ with }|\phi_{k}\rangle\!\langle\phi_{k}|^{ \otimes t(\lambda)+1}\text{ accepts}:\begin{array}{c}k\leftarrow\{0,1\}^{ \lambda}\\ \rho\gets A_{\lambda}(G_{\lambda}(k)^{\otimes t(\lambda)})\end{array} \right)\leq\operatorname{negl}(\lambda)\.\]
_Equivalently, this can also be written as_
\[\operatorname*{\mathbb{E}}_{\begin{subarray}{c}k\leftarrow\{0,1\}^{\lambda} \\ \rho\gets A_{\lambda}(G_{\lambda}(k)^{\otimes t(\lambda)})\end{subarray}} \operatorname{Tr}\Big{(}|\phi_{k}\rangle\!\langle\phi_{k}|^{\otimes t(\lambda)+1}\,\rho\Big{)}\leq \operatorname{negl}(\lambda)\,.\]
Note that when \(t\) is a constant (independent of the security parameter \(\lambda\)), \(t\)-copy unclonability implies \(t\)-copy OWSG security: suppose that there were an OWSG inverter that, given copies of \(|\phi_{k}\rangle\), outputs a key \(k^{\prime}\) where \(|\phi_{k^{\prime}}\rangle\) has inverse-polynomial squared overlap with \(|\phi_{k}\rangle\). Using \(k^{\prime}\), an adversary can efficiently generate the state \(|\phi_{k^{\prime}}\rangle^{\otimes t+1}\), which has \(|\langle\phi_{k^{\prime}}\mid\phi_{k}\rangle|^{2(t+1)}\) squared overlap with \(|\phi_{k}\rangle^{\otimes t+1}\). If \(t\) is constant, this is still an inverse polynomial, and thus violates the unclonability security condition.
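Quantitatively, if the inverter achieves \(\mathbb{E}_{k,k^{\prime}}|\langle\phi_{k^{\prime}}\mid\phi_{k}\rangle|^{2}\geq 1/p(\lambda)\) as in Equation (8.3), then since \(x\mapsto x^{t+1}\) is convex on \([0,1]\), Jensen's inequality gives

\[\mathbb{E}_{k,k^{\prime}}\,|\langle\phi_{k^{\prime}}\mid\phi_{k}\rangle|^{2(t+1)}\geq\Big{(}\mathbb{E}_{k,k^{\prime}}\,|\langle\phi_{k^{\prime}}\mid\phi_{k}\rangle|^{2}\Big{)}^{t+1}\geq p(\lambda)^{-(t+1)}\,,\]

which is inverse polynomial precisely when \(t\) is constant; for \(t\) growing with \(\lambda\) the bound degrades superpolynomially, which is exactly where this simple argument stops working.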
When \(t\) can grow with \(\lambda\), the connection between OWSG security and unclonability is less clear. However we can establish stronger connections for the following class of _real-valued, clean-output_ OWSGs. Intuitively, clean-output means that the state \(|\phi_{k}\rangle\) can be computed from \(k\) by a unitary that returns all its ancilla qubits to the zero-state.
**Definition 8.14** (Real-valued, clean-output OWSG).: _A OWSG \(G\) is clean-output if for all \(\lambda\) the generator \(G_{\lambda}\) is a unitary such that_
\[|k\rangle\otimes|0\cdots 0\rangle\mapsto|k\rangle\otimes|\phi_{k}\rangle \otimes|0\cdots 0\rangle\]
_where \(|0\cdots 0\rangle\) denotes some number of ancilla zeroes. Furthermore, we say that \(G\) is real-valued if for all \(\lambda\) and for all \(k\in\{0,1\}^{\lambda}\), the output state \(|\phi_{k}\rangle\) is a real-valued vector when expanded in the computational basis._
We claim that real-valued, clean-output OWSGs capture a natural class of OWSGs; here are two well-known examples.
1. _Pseudorandom states_: The canonical constructions of pseudorandom state generators [11, 12] map keys \(k\) to states of the form \[|\phi_{k}\rangle=2^{-n/2}\sum_{x\in\{0,1\}^{n}}(-1)^{f_{k}(x)}\,|x\rangle\,\] where \(\{f_{k}:\{0,1\}^{n}\to\{0,1\}\}_{k}\) is a post-quantum pseudorandom function family. Since the \(f_{k}\) are computable by deterministic classical circuits, the states \(|\phi_{k}\rangle\) are cleanly computable. Furthermore they are clearly real-valued as the amplitudes are all \(\pm 2^{-n/2}\).
2. _Quantum subspace states_: The constructions of quantum money from [1, 10] and other unclonable primitives [13] make use of generators that produce _subspace states_ (and their generalizations called _coset states_). A subspace state generator maps a key \(k\), which is interpreted as a description of linearly independent generators of a random subspace \(A\subset\mathbb{F}_{2}^{n}\) of dimension \(n/2\), to the following state: \[|\phi_{k}\rangle=2^{-n/4}\sum_{x\in A}|x\rangle\enspace.\] It is easy to see that this state can be cleanly computed in polynomial time given the key \(k\), and is clearly real-valued.
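Both constructions above are easy to instantiate at toy sizes. The following numpy sketch (illustrative only: the "PRF" below is an insecure linear stand-in, and the subspace generators are hard-coded) builds a binary phase state and a subspace state and checks that both are unit-norm and real-valued, as claimed.

```python
import itertools
import numpy as np

def phase_state(f, n):
    # |phi> = 2^{-n/2} sum_x (-1)^{f(x)} |x>
    return np.array([(-1) ** f(x) for x in range(2 ** n)]) / np.sqrt(2 ** n)

def subspace_state(gens):
    # uniform superposition over the F_2-span of the given generators
    n = len(gens[0])
    vec = np.zeros(2 ** n)
    for coeffs in itertools.product([0, 1], repeat=len(gens)):
        x = np.zeros(n, dtype=int)
        for c, g in zip(coeffs, gens):
            x = (x + c * np.array(g)) % 2
        vec[int("".join(map(str, x)), 2)] = 1.0
    return vec / np.linalg.norm(vec)  # amplitude 2^{-dim(A)/2} on each element of A

# toy stand-in for f_k (NOT a PRF): the inner product <x, k> over F_2
n, key = 4, 0b1011
phi = phase_state(lambda x: bin(x & key).count("1") % 2, n)
psi = subspace_state([(1, 0, 1, 0), (0, 1, 1, 0)])  # dim-2 subspace of F_2^4

for v in (phi, psi):
    assert np.isclose(np.linalg.norm(v), 1.0)  # unit norm
    assert np.isrealobj(v)                     # real-valued in the computational basis
```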
_Remark 8.15_.: We note that our proofs will only require a weaker condition than "real-valued": we will only need that the inner product \(\langle\phi_{k}|\phi_{k^{\prime}}\rangle\) is real for all choices of \(k\) and \(k^{\prime}\). However, as we have seen above, real-valued OWSGs are a natural class, so we stick to this stronger requirement for simplicity.
We show that for this class of OWSGs, unclonability security implies computational OWSG security for any number of copies.
**Proposition 8.16**.: _Let \(t(\lambda)\) be a polynomial. If a real-valued, clean-output OWSG \(G\) has \(t\)-copy computational (resp. statistical) unclonability security, then it has \(t\)-copy computational (resp. statistical) OWSG security._
Proof.: Suppose for contradiction that a \(t\)-copy unclonable OWSG \(G\) did not have \(t\)-copy OWSG security. Let \(A=(A_{\lambda})_{\lambda}\) denote an adversary that breaks OWSG security of \(G\). Let \(\tilde{A}_{\lambda}\) denote a unitary dilation of \(A_{\lambda}\). We can write its behavior as
\[\tilde{A}_{\lambda}\,|\phi_{k}\rangle^{\otimes t(\lambda)}\otimes|0\rangle= \sum_{k^{\prime}}\sqrt{\epsilon_{k,k^{\prime}}}\,|\text{aux}_{k,k^{\prime}} \rangle\otimes|k^{\prime}\rangle\]
for some auxiliary states \(\{|\text{aux}_{k,k^{\prime}}\rangle\}_{k,k^{\prime}\in\{0,1\}^{\lambda}}\) and for every \(k\in\{0,1\}^{\lambda}\) some probabilities \(\{\epsilon_{k,k^{\prime}}\}_{k^{\prime}\in\{0,1\}^{\lambda}}\). The condition that \(A\) breaks OWSG security means that there exists a polynomial \(p(\lambda)\) such that
\[2^{-\lambda}\sum_{k,k^{\prime}}\epsilon_{k,k^{\prime}}|\langle\phi_{k}\mid \phi_{k^{\prime}}\rangle|^{2}\geq 1/p(\lambda)\]
for infinitely many \(\lambda\). In words, the l.h.s. computes the expected overlap \(|\langle\phi_{k}\mid\phi_{k^{\prime}}\rangle|^{2}\) when \(k\) is sampled uniformly at random (which is why there is a normalisation \(2^{-\lambda}\)) and then \(k^{\prime}\) is sampled according to the distribution \(\{\epsilon_{k,k^{\prime}}\}_{k^{\prime}}\), which is exactly the quantity on the l.h.s. of Equation (8.3). Then consider the unitary \(V\) that first applies \(\tilde{A}\); then controlled on the state \(|k^{\prime}\rangle\), using the generator \(G\) twice, prepares \(|\phi_{k^{\prime}}\rangle^{\otimes 2}\) in an ancilla register; and finally applies the inverse unitary \(\tilde{A}^{\dagger}\). Note that the unitary \(V\) is efficient if \(\tilde{A}\) is efficient. Then consider applying \(V\) to \(t(\lambda)\) copies of \(|\phi_{k}\rangle\) and some ancillas:
\[V\,|\phi_{k}\rangle^{\otimes t(\lambda)}\otimes|0\rangle=\sum_{k^{\prime}}\sqrt {\epsilon_{k,k^{\prime}}}\,\tilde{A}^{\dagger}_{\lambda}\Big{(}\,|\mathrm{aux }_{k,k^{\prime}}\rangle\otimes|k^{\prime}\rangle\,\Big{)}\otimes|\phi_{k^{ \prime}}\rangle^{\otimes 2}\enspace.\]
We now calculate the average overlap between this state and \(|\phi_{k}\rangle^{\otimes t(\lambda)}\otimes|0\rangle\otimes|\phi_{k}\rangle^ {\otimes 2}\):
\[2^{-\lambda}\sum_{k}|\ \langle\phi_{k}|^{\otimes t(\lambda)} \otimes\langle 0|\otimes\langle\phi_{k}|^{\otimes 2}\,V\,|\phi_{k}\rangle^{ \otimes t(\lambda)}\otimes|0\rangle\ |^{2}\] \[\qquad=2^{-\lambda}\sum_{k}\Big{|}\Big{(}\sum_{k^{\prime}}\sqrt{ \epsilon_{k,k^{\prime}}}\ \langle\mathrm{aux}_{k,k^{\prime}}|\otimes\langle k^{\prime}|\otimes \langle\phi_{k}|^{\otimes 2}\,\Big{)}\Big{(}\sum_{k^{\prime\prime}}\sqrt{ \epsilon_{k,k^{\prime\prime}}}\ |\mathrm{aux}_{k,k^{\prime\prime}}\rangle \otimes|k^{\prime\prime}\rangle\otimes|\phi_{k^{\prime\prime}}\rangle^{ \otimes 2}\,\Big{)}\Big{|}^{2}\] \[\qquad=2^{-\lambda}\sum_{k}\Big{|}\sum_{k^{\prime}}\epsilon_{k,k ^{\prime}}\langle\phi_{k}\mid\phi_{k^{\prime}}\rangle^{2}\Big{|}^{2}\] \[\qquad\geq\Big{|}2^{-\lambda}\sum_{k,k^{\prime}}\epsilon_{k,k^{ \prime}}|\langle\phi_{k}\mid\phi_{k^{\prime}}\rangle|^{2}\Big{|}^{2}\geq 1/p( \lambda)^{2}\,,\]
where in the last line we used Cauchy-Schwarz and the premise that \(G\) is a real-valued OWSG so that \(\langle\phi_{k}\mid\phi_{k^{\prime}}\rangle^{2}=|\langle\phi_{k}\mid\phi_{k^{ \prime}}\rangle|^{2}\).
In other words, the unitary \(V\) maps \(t(\lambda)\) copies of \(|\phi_{k}\rangle\) to have inverse polynomial overlap with \(t(\lambda)+2\) copies of \(|\phi_{k}\rangle\), on average over the key \(k\). Since \(V\) is efficient if \(A\) is efficient, this breaks the \(t\)-copy unclonability security of \(G\).
We now give an upper bound on the complexity of breaking real-valued, clean-output OWSGs; we essentially show that either they have information-theoretic OWSG security, or can be efficiently cloned if \(\textsc{DistUhlmann}_{\kappa}\) is efficiently solvable for inverse polynomial \(\kappa\).
**Theorem 8.17**.: _Suppose for all polynomials \(q(n)\) there exists a non-uniform polynomial-time algorithm \(M=(M_{x})_{x}\) and a polynomial \(r(n)\) such that for all valid \(\textsc{Uhlmann}_{1/q(n)}\) instances \(x=(1^{n},C,D)\) we have_
\[\mathrm{F}\Big{(}(\mathrm{id}\otimes M_{x})(|C\rangle\!\langle C|),|D\rangle\! \langle D|\,\Big{)}\geq 1/r(n)\enspace.\]
_Then for all real-valued, clean-output OWSG \(G\) and for all polynomials \(t(\lambda)\) either:_
* \(G\) _has_ \(t\)_-copy statistical OWSG security, or_
* _the_ \(t\)_-copy unclonability security of_ \(G\) _can be broken in polynomial time._
The reader may wonder why the assumption of the theorem is not written as \(\textsc{DistUhlmann}_{1/p(n)}\in\textsc{avgUnitaryBQP}/\textsc{poly}\). This is an illustration of how the (distributional) \(\textsc{Uhlmann}_{\kappa}\) problem differs depending on \(\kappa\). When \(\kappa\) is very close to \(1\), then Proposition 5.8 shows that being able to locally map \(|C\rangle\) to have fidelity approximately \(\kappa\) with \(|D\rangle\) implies that one can solve \(\textsc{DistUhlmann}_{\kappa}\) with small error - and vice versa. However, when \(\kappa\) is small Proposition 5.8 no longer gives meaningful bounds.
Proof.: For simplicity we present the proof for \(t(\lambda)=1\); adapting the proof to general polynomials \(t(\lambda)\) is straightforward. Suppose the OWSG \(G\) does not satisfy \(1\)-copy statistical security. Then there exists a (possibly computationally unbounded) algorithm \(A=(A_{\lambda})_{\lambda}\) and a polynomial \(p(\lambda)\) such that for infinitely many \(\lambda\):
\[\Pr\left(\text{measuring }|\phi_{k}\rangle\text{ with }|\phi_{k^{\prime}}\rangle \!\langle\phi_{k^{\prime}}|\text{ accepts}:\begin{array}{c}k\leftarrow\{0,1\}^{ \lambda}\\ k^{\prime}\gets A_{\lambda}(G_{\lambda}(k))\end{array}\right)\geq 1/p( \lambda)\.\]
By the assumption that \(G\) computes its outputs cleanly, there exist polynomial-sized quantum circuits \(C,D\) that prepare the following states:
\[|C\rangle_{\mathsf{KSK^{\prime}T}} \coloneqq 2^{-\lambda/2}\sum_{k\in\{0,1\}^{\lambda}}|k\rangle_{ \mathsf{K}}\otimes|\phi_{k}\rangle_{\mathsf{S}}\otimes|0\rangle_{\mathsf{K^{ \prime}}}\otimes|0\rangle_{\mathsf{T}}\] \[|D\rangle_{\mathsf{KSK^{\prime}T}} \coloneqq 2^{-\lambda/2}\sum_{k\in\{0,1\}^{\lambda}}|k\rangle_{ \mathsf{K}}\otimes|\phi_{k}\rangle_{\mathsf{S}}\otimes|0\rangle_{\mathsf{K^{ \prime}}}\otimes|\phi_{k}\rangle_{\mathsf{T}}^{\otimes 2}\.\]
Let \(\rho,\sigma\) denote the reduced density matrices of \(|C\rangle\,,|D\rangle\) respectively on register \(\mathsf{K}\). We now show that \(\mathrm{F}(\rho,\sigma)\geq 1/p(\lambda)^{2}\) by exhibiting a unitary \(V\) acting on register \(\mathsf{SK^{\prime}T}\) such that
\[\mathrm{F}(\rho,\sigma)\geq|\,\langle D|\,(\mathrm{id}_{\mathsf{K}}\otimes V_ {\mathsf{SK^{\prime}T}})\,|C\rangle\,|^{2}\geq 1/p(\lambda)^{2}. \tag{8.4}\]
Consider the unitary purification \(\tilde{A}\) of the adversary \(A\); without loss of generality it can be expressed as follows. For all \(k\in\{0,1\}^{\lambda}\),
\[\tilde{A}_{\mathsf{SK^{\prime}}}\,|\phi_{k}\rangle_{\mathsf{S}}\otimes|0 \rangle_{\mathsf{K^{\prime}}}=\sum_{k^{\prime}}\sqrt{\epsilon_{k,k^{\prime}}} \,|\mathrm{aux}_{k,k^{\prime}}\rangle_{\mathsf{S}}\otimes|k^{\prime}\rangle_{ \mathsf{K^{\prime}}}\]
for some auxiliary states \(\{|\mathrm{aux}_{k,k^{\prime}}\rangle\}_{k,k^{\prime}\in\{0,1\}^{\lambda}}\) and for some probabilities \(\{\epsilon_{k,k^{\prime}}\}_{k,k^{\prime}\in\{0,1\}^{\lambda}}\) satisfying (by the assumption on the adversary)
\[2^{-\lambda}\sum_{k,k^{\prime}}\epsilon_{k,k^{\prime}}|\langle\phi_{k}\mid \phi_{k^{\prime}}\rangle|^{2}\geq 1/p(\lambda)\.\]
Now define the unitary \(V\) acting on register \(\mathsf{SK^{\prime}T}\) that first applies the unitary \(\tilde{A}\) to registers \(\mathsf{SK^{\prime}}\); then, controlled on the state \(|k^{\prime}\rangle\) in register \(\mathsf{K^{\prime}}\), prepares the state \(|\phi_{k^{\prime}}\rangle^{\otimes 2}\) in register \(\mathsf{T}\); and finally applies the inverse unitary \(\tilde{A}^{\dagger}\). Note that this unitary \(V\) is not necessarily efficient because it runs the (possibly unbounded) adversary \(\tilde{A}\). We now verify Equation (8.4):
\[|\,\langle D|\,(\mathrm{id}_{\mathsf{K}}\otimes V_{\mathsf{SK^{\prime}T}})\,|C\rangle\,|^{2}\] \[=\Big{|}2^{-\lambda}\sum_{k}\Big{(}\sum_{k^{\prime}}\sqrt{ \epsilon_{k,k^{\prime}}}\,\langle\mathrm{aux}_{k,k^{\prime}}|\otimes\langle k ^{\prime}|\otimes\langle\phi_{k}|^{\otimes 2}\,\Big{)}\Big{(}\sum_{k^{\prime \prime}}\sqrt{\epsilon_{k,k^{\prime\prime}}}\,|\mathrm{aux}_{k,k^{\prime \prime}}\rangle\otimes|k^{\prime\prime}\rangle\otimes|\phi_{k^{\prime\prime}} \rangle^{\otimes 2}\,\Big{)}\Big{|}^{2}\] \[=\Big{|}2^{-\lambda}\sum_{k,k^{\prime}}\epsilon_{k,k^{\prime}} \langle\phi_{k}\mid\phi_{k^{\prime}}\rangle^{2}\Big{|}^{2}\geq 1/p(\lambda)^{2}\]
where in the last line we used the premise that \(G\) is a real-valued OWSG so that \(\langle\phi_{k}\mid\phi_{k^{\prime}}\rangle^{2}=|\langle\phi_{k}\mid\phi_{k^{ \prime}}\rangle|^{2}\). Thus \((1^{n},C,D)\) is a valid Uhlmann\({}_{1/p(\lambda)^{2}}\) instance for some \(n=\mathrm{poly}(\lambda)\).
We have shown that the existence of _some_ adversary \(A\) breaking the \(t\)-copy OWSG security of \(G\) implies there is _some_ Uhlmann transformation that maps \(|C\rangle\) to a state with fidelity at least
\(1/p(\lambda)^{2}\) with \(|D\rangle\). Now we argue that _all_ algorithms \(M=(M_{x})_{x}\) that implement an Uhlmann transformation for Uhlmann\({}_{1/p(\lambda)^{2}}\) instances can be used to break the unclonability security of \(G\). In particular, letting \(M_{x}\) be the Uhlmann transformation for instance \(x=(1^{n},C,D)\), we have
\[\mathrm{F}\Big{(}(\mathrm{id}\otimes M_{x})(|C\rangle\!\langle C|),|D\rangle \!\langle D|\,\Big{)}\geq 1/r(\lambda)\]
for some polynomial \(r(\lambda)\). By measuring the \(\mathsf{K}\) register of both arguments, and using the joint concavity of the fidelity function, we have
\[2^{-\lambda}\sum_{k}\mathrm{F}\Big{(}(\mathrm{id}\otimes M_{x})(|\phi_{k} \rangle\!\langle\phi_{k}|\otimes|0\rangle\!\langle 0|),|\phi_{k}\rangle\! \langle\phi_{k}|^{\otimes 3}\otimes|0\rangle\!\langle 0|\,\Big{)}\geq 1/r( \lambda)\,,\]
where for notational convenience we have grouped the three copies of \(|\phi_{k}\rangle\) in \(|D\rangle\) together. This means that on average over the key \(k\), the algorithm \(M_{x}\) maps a single copy of \(|\phi_{k}\rangle\) (plus some zeroes) to three copies of \(|\phi_{k}\rangle\) (plus some zeroes) with fidelity at least \(1/r(\lambda)\). This implies that \(G\) does not have single-copy unclonability security.
### Falsifiable quantum cryptographic assumptions
In this section, we show an avgUnitaryPSPACE upper bound for breaking _falsifiable quantum cryptographic assumptions_, which can be seen as a quantum analogue of the notion of falsifiable assumption considered by Naor [100] as well as Gentry and Wichs [111]. Morally, having a falsifiable assumption means that the challenger in the security game must be efficient, so that if an adversary claims to break the security game, it is possible to verify that she has done so. Roughly speaking, we show that a falsifiable assumption is either _information-theoretically_ secure (in which case not even a computationally unbounded prover can win the security experiment beyond a certain threshold), or it can be reduced to DistSuccinctUhlmann, and hence it can be broken in avgUnitaryPSPACE (as shown in Section 7).
Our notion of a _falsifiable quantum cryptographic assumption_ captures most cryptographic assumptions in both classical and quantum cryptography. The definition is essentially a QIP protocol, albeit cast in a cryptographic language. Instead of a _verifier_, we have a _challenger_; instead of a _prover_, we have an _adversary_. We formally define falsifiable quantum cryptographic assumptions as follows. We refer the reader to Section 4 for the formal definitions of quantum verifiers and interactive protocols.
**Definition 8.18** (Falsifiable quantum cryptographic assumption).: _A falsifiable quantum cryptographic assumption (or falsifiable assumption for short) is a pair \((\mathcal{C},c)\) consisting of a polynomial-time quantum verifier \(\mathcal{C}=(\mathcal{C}_{x})_{x}\) (which we call the challenger) and a constant \(c\in[0,1]\). Given a string \(x\in\{0,1\}^{*}\),20 the challenger \(\mathcal{C}_{x}\) engages in an interaction with a prover \(\mathcal{A}\) (which also gets the input \(x\)) called the adversary. At the end of the protocol, the challenger accepts or rejects. If the challenger accepts, we say that the adversary wins._
Footnote 20: Here, \(x\) should be taken as the security parameter in unary \(1^{\lambda}\), and perhaps in addition the expected format of the interaction. This includes, for example, the number of queries that the adversary wishes to make (in a CCA security game for an encryption scheme, say), or an upper bound on the message length sent by the adversary (in a collision finding security game, say). The point of having \(x\) is so that the overall running time of the challenger is upper bounded by a _fixed_ polynomial in \(|x|\).
See Figure 3 for a depiction of an interaction between a challenger and adversary. We now describe the security property corresponding to a falsifiable assumption.
**Definition 8.19** (Security of a falsifiable assumption).: _A falsifiable assumption \((\mathcal{C},c)\) is computationally secure (resp. information-theoretically secure) if there exists a negligible function \(\nu\) such that for all polynomial-time (resp. computationally unbounded) adversaries \(\mathcal{A}\) and for all \(x\in\{0,1\}^{*}\), the probability that the adversary is accepted is at most \(c+\nu(|x|)\) over the randomness of the interaction \(\mathcal{C}_{x}\leftrightarrows\mathcal{A}\). We say that a (possibly inefficient) adversary \(\mathcal{A}\)_ breaks instance \(x\) of the assumption \((\mathcal{C},c)\) with advantage \(\delta\) _if \(\Pr\left(\mathcal{C}_{x}\leftrightarrows\mathcal{A}\text{ accepts}\right)\geq c+\delta\)._
Here are some (informally-described) examples of falsifiable quantum cryptographic assumptions.
1. (_Public-key quantum money_) Consider a candidate public-key quantum money scheme (see [1, Lectures 8 and 9] for a longer discussion of quantum money). The assumption here is the pair \((\mathcal{C}^{\$},0)\). The challenger \(\mathcal{C}^{\$}\) first generates a random money state along with the serial number and sends both to the adversary (while remembering the serial number). The adversary wins if it can send back two states (which may be entangled) that both pass the money scheme's verification procedure.
2. (_Pseudorandom states_) Consider a candidate pseudorandom state generator \(G\)[12]. The assumption here is \((\mathcal{C}^{\text{PRS}},\frac{1}{2})\) where the instances \(x\) specify the security parameter \(\lambda\) as well as a positive integer \(t\). The challenger \(\mathcal{C}^{\text{PRS}}\), given \(x=(1^{\lambda},1^{t})\), either sends to the adversary \(t\) copies of a pseudorandom state or \(t\) copies of a Haar-random state (which can be done efficiently using, e.g., \(t\)-designs [1]). The adversary wins if it can guess whether it was given pseudorandom states or Haar-random states.
3. (_Quantum EFI pairs_) Consider a candidate ensemble of EFI pairs \(\{(\rho_{\lambda,0},\rho_{\lambda,1})\}_{\lambda}\)[1]. The assumption here is \((\mathcal{C}^{\text{EFI}},\frac{1}{2})\). The challenger \(\mathcal{C}^{\text{EFI}}\) picks a random bit \(b\in\{0,1\}\) and sends \(\rho_{\lambda,b}\) to the adversary. The adversary wins if it can guess the bit \(b\).
Figure 3: Quantum circuit representation of a \(4\)-message interaction between an efficient challenger and an adversary who seeks to falsify a cryptographic assumption \((\mathcal{C},c)\).
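For intuition about the advantage term \(\delta\) in Definition 8.19, consider the EFI game above: an unbounded adversary's optimal winning probability is \(\frac{1}{2}+\frac{1}{2}\operatorname{td}(\rho_{\lambda,0},\rho_{\lambda,1})\) by the Helstrom bound, so its advantage is exactly half the trace distance. A small numpy sketch (illustrative; the placeholder states are ours, not from any candidate EFI construction):

```python
import numpy as np

def optimal_efi_advantage(rho0, rho1):
    # Helstrom bound: an unbounded distinguisher wins with probability
    # 1/2 + td/2, where td = ||rho0 - rho1||_1 / 2, so the advantage is td/2
    td = 0.5 * np.abs(np.linalg.eigvalsh(rho0 - rho1)).sum()
    return 0.5 * td

rho0 = np.diag([1.0, 0.0])  # |0><0|
rho1 = np.eye(2) / 2        # maximally mixed qubit
print(optimal_efi_advantage(rho0, rho1))  # 0.25, since td = 1/2
```

For EFI pairs this information-theoretic advantage is by definition large; the assumption is that _efficient_ adversaries nonetheless have negligible advantage.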
**Theorem 8.20**.: _A falsifiable quantum cryptographic assumption \((\mathcal{C},c)\) is either information-theoretically secure, or breaking the assumption \((\mathcal{C},c)\) can be reduced to DistSuccinctUhlmann\({}_{1}\)._
Formally what we mean by "breaking the assumption can be reduced to DistSuccinctUhlmann\({}_{1}\)" is the following: there exists an adversary \(A\) that is a polynomial time quantum query algorithm with access to a DistSuccinctUhlmann\({}_{1}\) oracle and breaks infinitely many instances \(x\) of the assumption \((\mathcal{C},c)\) with advantage \(1/p(|x|)\) for some polynomial \(p\).
The proof of Theorem 8.20 is very similar to that of Lemma 7.5: again, the idea is that if we are considering a quantum interactive protocol, we can implement the prover's (or in this case adversary's) actions as Uhlmann unitaries. Hence, if there is any adversary that can break the falsifiable assumption, we can implement that adversary using a DistSuccinctUhlmann\({}_{1}\) oracle, so breaking the assumption reduces to DistSuccinctUhlmann\({}_{1}\). To make the paper more modular, we nonetheless spell out the details.
Proof of Theorem 8.20.: Suppose that \((\mathcal{C},c)\) is not in fact information-theoretically secure and there exists a possibly inefficient adversary \(\mathcal{A}\) with at most \(r=\operatorname{poly}(n)\) many rounds of interaction and a polynomial \(p(n)\) such that
\[\Pr\Big{(}\mathcal{C}_{x}\leftrightarrows\mathcal{A}\text{ accepts}\Big{)}\geq c+1/p(n)\,,\]
where \(x\in\{0,1\}^{*}\) and \(n=|x|\). For each round \(j\in\{1,\ldots,r\}\), we let
* \(\rho^{(j)}_{\mathsf{M}^{j}_{x}\mathsf{W}^{j}_{x}}\) denote the state of the message register \(\mathsf{M}^{j}_{x}\) and the private workspace \(\mathsf{W}^{j}_{x}\) of the challenger \(\mathcal{C}_{x}\) at the beginning of the challenger's \(j\)'th turn
* \(\sigma^{(j)}_{\mathsf{M}^{j}_{x}\mathsf{W}^{j}_{x}}\) denote the state of the message register and the challenger's private workspace at the end of the challenger's \(j\)'th turn.
We now argue that the intermediate states on the message and challenger register in the interaction of \(\mathcal{C}_{x}\) with \(\mathcal{A}\) have purifications in statePSPACE. Let \(q(n)=p(n)/2\) be a polynomial. From [13, Lemma 7.5], it follows that, for all \(x\), there exists a prover \(\mathcal{P}_{x}\) that is accepted with probability at least \(c+1/2p(n)\) for which the following property holds: there are families of pure states
\[(|\psi_{x,j}\rangle_{\mathsf{M}^{j}_{x}\mathsf{W}^{j}_{x}\mathsf{P}^{j}_{x}})_{x,j},\,(|\varphi_{x,j}\rangle_{\mathsf{M}^{j}_{x}\mathsf{W}^{j}_{x}\mathsf{P}^{j}_{x}})_{x,j}\in\textsf{statePSPACE}_{1/q(n)}\]
for some purifying registers \(\mathsf{P}^{j}_{x}\) that are purifications of intermediate states \(\rho^{(j)}_{\mathsf{M}^{j}_{x}\mathsf{W}^{j}_{x}}\) and \(\sigma^{(j)}_{\mathsf{M}^{j}_{x}\mathsf{W}^{j}_{x}}\) of the challenger \(\mathcal{C}_{x}\) interacting with the prover \(\mathcal{P}_{x}\). Moreover, there are polynomial-time Turing machines that, given as input a description of the verifier's actions in the protocol, output succinct classical descriptions of the quantum polynomial-space circuits for preparing \(|\psi_{x,j}\rangle\) and \(|\varphi_{x,j}\rangle\). This holds because [13, Lemma 7.5] only relies on the block-encoding transformations implemented in [13, Theorems 5.5 and 6.1], which have efficient (and explicit) descriptions.
This means that for each round \(j\) of the protocol, there exist polynomial-space quantum circuits \(C^{j}\) and \(D^{j}\) with efficiently computable succinct classical descriptions \(\hat{C}^{j}\) and \(\hat{D}^{j}\) such that \(|\psi_{x,j}\rangle_{\mathsf{M}^{j}_{x}\mathsf{W}^{j}_{x}\mathsf{P}^{j}_{x}}=C^{j}\,|0\ldots 0\rangle\) and \(|\varphi_{x,j}\rangle_{\mathsf{M}^{j}_{x}\mathsf{W}^{j}_{x}\mathsf{P}^{j}_{x}}=D^{j}\,|0\ldots 0\rangle\) are purifications of the reduced state on the message register \(\mathsf{M}^{j}_{x}\) and challenger register \(\mathsf{W}^{j}_{x}\) of the interactive protocol right before and after
the prover's action in round \(j\). Notice that because the challenger register in the interactive protocol is not acted upon by the prover, the reduced states on the challenger register are unchanged, i.e.
\[\mathrm{Tr}_{\mathsf{M}^{j}_{x}\mathsf{P}^{j}_{x}}\!\left(|\psi_{x,j}\rangle\!\langle\psi_{x,j}|_{\mathsf{M}^{j}_{x}\mathsf{W}^{j}_{x}\mathsf{P}^{j}_{x}}\right)=\mathrm{Tr}_{\mathsf{M}^{j}_{x}\mathsf{P}^{j}_{x}}\!\left(|\varphi_{x,j}\rangle\!\langle\varphi_{x,j}|_{\mathsf{M}^{j}_{x}\mathsf{W}^{j}_{x}\mathsf{P}^{j}_{x}}\right).\]
We can therefore interpret the circuit pair \((C^{j},D^{j})\) as an instance of the \(\textsc{SuccinctUhlmann}_{1}\) problem, with \(\mathsf{W}^{j}_{x}\) taking the role of the register that cannot be acted upon by the Uhlmann unitary. Hence, with access to a \(\textsc{DistSuccinctUhlmann}_{1}\)-oracle, we can apply an Uhlmann transformation mapping \(|\psi_{x,j}\rangle_{\mathsf{M}^{j}_{x}\mathsf{W}^{j}_{x}\mathsf{P}^{j}_{x}}\) to \(|\varphi_{x,j}\rangle_{\mathsf{M}^{j}_{x}\mathsf{W}^{j}_{x}\mathsf{P}^{j}_{x}}\) by acting only on registers \(\mathsf{M}^{j}_{x}\mathsf{P}^{j}_{x}\). This means that with the \(\textsc{DistSuccinctUhlmann}_{1}\)-oracle, we can efficiently implement the actions of a successful prover in the interactive protocol.
### Open problems
In Section 8.2 we studied the complexity of breaking a special class of _real-valued_ OWSGs. A natural question is whether restricting to real-valued OWSGs is without loss of generality. For solving decision problems it is without loss of generality to use quantum circuits with only real-valued gates [21, 22], but it is _a priori_ possible that protocols where the parties compute with complex-valued gates may allow for stronger security guarantees than if they were restricted to only real-valued gates.
**Open Problem 17**.: Can all OWSGs be made real-valued without loss of generality? Are there quantum cryptographic primitives where the security depends on quantum computing with complex-valued gates?
It would also be desirable to obtain the following two stronger versions of Theorem 8.17.
**Open Problem 18**.: Can the conclusion of Theorem 8.17 be strengthened so that instead of breaking unclonability security, the OWSG security property is broken instead?
**Open Problem 19**.: Can Theorem 8.17 be extended to prove a complexity upper bound on all OWSGs (not just real-valued, clean-output ones)? Alternatively, can breaking real-valued, clean-output OWSGs (or another natural class of them) be reduced to implementing \(\textsc{DistUhlmann}_{1-\mathrm{negl}}\)? This would mean that this class of OWSGs implies the existence of EFI [1].
Morimae and Yamakawa [14, 15] asked whether OWSGs constitute a _minimal assumption_ in quantum cryptography. A natural question, in particular, is whether OWSGs are implied by so-called _unclonable cryptographic primitives_, such as quantum money [1, 2], quantum copy-protection [1, 2], or unclonable encryption [1, 3], which leverage the quantum no-cloning principle to achieve unforgeable banknotes, programs, and ciphertexts.
**Open Problem 20**.: Do unclonable cryptographic primitives, such as quantum money, copy-protection, or unclonable encryption imply the hardness of the Uhlmann Transformation Problem?
Finally, we ask the following question related to the results in this section.
**Open Problem 21**.: Can computationally secure OWSGs be constructed assuming the hardness of \(\textsc{DistUhlmann}_{1/p(n)}\) for some polynomial \(p\)?
## Applications to Quantum Shannon Theory
We now relate the Uhlmann Transformation Problem to two fundamental tasks in quantum Shannon theory: decoding the output of quantum channels and compressing quantum information. We show that both of these tasks can be performed in polynomial time if the Uhlmann transformation problem can be implemented in polynomial time. We also prove that channel decoding is as hard as solving the Uhlmann transformation problem for a range of parameters.
### Decoding channels
We discuss the task of decoding the output of a channel (i.e. recovering the input to the channel from its output). We focus on channels that are _decodable_:
**Definition 9.1** (Decodable channel).: _Let \(\epsilon>0\). A channel \(\mathcal{N}\) mapping register \(\mathsf{A}\) to \(\mathsf{B}\) is \(\epsilon\)-decodable if there exists a (not necessarily efficient) quantum algorithm \(D\) that takes as input register \(\mathsf{B}\) and outputs register \(\mathsf{A}^{\prime}\) isomorphic to \(\mathsf{A}\) such that_
\[\mathrm{F}\Big{(}(D_{\mathsf{B}\to\mathsf{A}^{\prime}}\circ\mathcal{N}_{ \mathsf{A}\to\mathsf{B}})(|\Phi\rangle\!\langle\Phi|_{\mathsf{A}\mathsf{R}}), \,|\Phi\rangle\!\langle\Phi|_{\mathsf{A}^{\prime}\mathsf{R}}\,\Big{)}\geq 1- \epsilon\,\]
_where \(|\Phi\rangle_{\mathsf{A}\mathsf{R}}\) is the maximally entangled state on registers \(\mathsf{A}\mathsf{R}\)._
_Remark 9.2_.: We could also consider a generalization of Definition 9.1 where we consider states other than the maximally entangled state. However we focus on the maximally entangled state for simplicity, and it already illustrates the key ideas of our complexity result. Furthermore, decodable channels most naturally arise in the context of error corrected communication: there, given any noisy channel, the goal is to find an encoding channel such that the concatenation of encoder and noisy channel is decodable. It is known that using the maximally entangled state as the input to a coding scheme for a noisy channel is without loss of generality (up to small changes in capacity, see e.g. [14, Chapter 15]).
We first show a sufficient and necessary condition for a channel \(\mathcal{N}:\mathsf{A}\to\mathsf{B}\) to be decodable. Recall the definition of a Stinespring dilation of a channel: this is an isometry \(V:\mathsf{A}\to\mathsf{BC}\) such that \(\mathcal{N}(X)=\mathrm{Tr}_{\mathsf{C}}(VXV^{*})\). We introduce a condition about the _complementary channel_\(\mathcal{N}^{c}(X)\coloneqq\mathrm{Tr}_{\mathsf{B}}(VXV^{*})\) defined relative to a Stinespring dilation \(V\):
**Definition 9.3** (Decoupling condition for channels).: _We say a channel \(\mathcal{N}_{\mathsf{A}\to\mathsf{B}}\) satisfies the decoupling condition with error \(\epsilon\) if_
\[\mathrm{F}\Big{(}\mathcal{N}_{\mathsf{A}\to\mathsf{C}}^{c}(|\Phi\rangle\! \langle\Phi|_{\mathsf{A}\mathsf{R}}),\,\mathcal{N}_{\mathsf{A}\to\mathsf{C}} ^{c}\Big{(}\frac{\mathrm{id}_{\mathsf{A}}}{\dim\mathsf{A}}\Big{)}\otimes \frac{\mathrm{id}_{\mathsf{R}}}{\dim\mathsf{R}}\Big{)}\geq 1-\epsilon\,,\]
_where \(\mathcal{N}^{c}\) is a complementary channel to \(\mathcal{N}\) relative to any Stinespring dilation._
**Proposition 9.4** (Necessary and sufficient conditions for decodability).: _If a channel \(\mathcal{N}\) satisfies the decoupling condition with error \(\epsilon\), then it is \(\epsilon\)-decodable. If it is \(\epsilon\)-decodable, then it satisfies the decoupling condition with error \(2\sqrt{\epsilon}\)._
In other words, a channel is decodable if and only if the output of the complementary channel is close to unentangled with the reference register \(\mathsf{R}\) of the maximally entangled state that was input to the channel.
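The decoupling condition can be checked numerically for small channels. Below is a self-contained numpy sketch (illustrative; the helper names and toy qubit channels are ours) that forms the canonical Stinespring dilation from a Kraus representation, applies the complementary channel to the \(\mathsf{A}\) half of \(|\Phi\rangle_{\mathsf{AR}}\), and evaluates the fidelity from Definition 9.3, using the squared-overlap convention \(\operatorname{F}(\rho,\sigma)=\|\sqrt{\rho}\sqrt{\sigma}\|_{1}^{2}\) consistent with the rest of this section.

```python
import numpy as np

def psd_sqrt(m):
    w, u = np.linalg.eigh(m)
    return u @ np.diag(np.sqrt(np.clip(w, 0, None))) @ u.conj().T

def fidelity(rho, sigma):
    # squared-overlap convention: F = (tr sqrt(sqrt(rho) sigma sqrt(rho)))^2
    s = psd_sqrt(rho)
    return float(np.real(np.trace(psd_sqrt(s @ sigma @ s))) ** 2)

def decoupling_fidelity(kraus, d):
    """F( N^c(Phi_AR), N^c(id/d) (x) id_R/d ) for the canonical Stinespring
    dilation V|psi> = sum_i (K_i|psi>) (x) |i>_C of the channel with input
    dimension d given by Kraus operators K_i."""
    k = len(kraus)
    v = np.eye(d).reshape(d * d) / np.sqrt(d)         # |Phi>_{AR}
    Phi = np.outer(v, v.conj())
    rho_CR = np.zeros((k * d, k * d), dtype=complex)  # N^c applied to the A half
    for i in range(k):
        for j in range(k):
            blk = (np.kron(kraus[i], np.eye(d)) @ Phi
                   @ np.kron(kraus[j], np.eye(d)).conj().T)
            rho_CR[i*d:(i+1)*d, j*d:(j+1)*d] = \
                np.einsum('aras->rs', blk.reshape(d, d, d, d))  # trace out A
    nc_mixed = np.array([[np.trace(kraus[i] @ kraus[j].conj().T) / d
                          for j in range(k)] for i in range(k)])
    return fidelity(rho_CR, np.kron(nc_mixed, np.eye(d) / d))

print(decoupling_fidelity([np.eye(2)], 2))  # 1.0: unitary channel, decodable
dephase = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
print(decoupling_fidelity(dephase, 2))      # 0.5: dephasing leaks to C, not decodable
```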
Proof.: The first direction we prove is the following: if a channel \(\mathcal{N}\) satisfies the decoupling condition, then it is decodable. Let \(V\) denote the Stinespring dilation of \(\mathcal{N}\) which defines the complementary channel \(\mathcal{N}^{c}\) satisfying the decoupling condition.
Let registers \(\mathsf{A}^{\prime},\mathsf{R}^{\prime}\) be isomorphic to \(\mathsf{A},\mathsf{R}\) respectively. Consider the following pure states:
\[\left|E\right\rangle_{\mathsf{RBCA^{\prime}R^{\prime}}} \coloneqq V_{\mathsf{A}\to\mathsf{BC}}\left|\Phi\right\rangle_{ \mathsf{RA}}\otimes\left|0\right\rangle_{\mathsf{A}^{\prime}\mathsf{R}^{ \prime}}\] \[\left|F\right\rangle_{\mathsf{RA^{\prime}BCR^{\prime}}} \coloneqq\left|\Phi\right\rangle_{\mathsf{RA}^{\prime}}\otimes V _{\mathsf{A}\to\mathsf{BC}}\left|\Phi\right\rangle_{\mathsf{AR}^{\prime}}\.\]
Note that the reduced density matrices of \(\left|E\right\rangle\) and \(\left|F\right\rangle\) on registers \(\mathsf{CR}\) are, respectively, \(\mathcal{N}^{c}_{\mathsf{A}\to\mathsf{C}}(\left|\Phi\right\rangle\!\!\left\langle \Phi\right|_{\mathsf{AR}})\) and \(\mathcal{N}^{c}_{\mathsf{A}\to\mathsf{C}}\!\left(\frac{\mathrm{id}_{\mathsf{ A}}}{\dim\mathsf{A}}\right)\otimes\frac{\mathrm{id}_{\mathsf{R}}}{\dim \mathsf{R}}\). Therefore by the decoupling condition and Uhlmann's theorem there exists a unitary \(U\) mapping registers \(\mathsf{BA}^{\prime}\mathsf{R}^{\prime}\) to registers \(\mathsf{A}^{\prime}\mathsf{BR}^{\prime}\) such that
\[\mathrm{F}\!\left((\mathrm{id}\otimes U)\left|E\right\rangle\!\!\left\langle E \right|(\mathrm{id}\otimes U^{\dagger}),\left|F\right\rangle\!\!\left\langle F \right|\,\right)\geq 1-\epsilon. \tag{9.1}\]
Define the decoding procedure \(D\) that maps register \(\mathsf{B}\) to register \(\mathsf{A}^{\prime}\) and behaves as follows: it appends registers \(\mathsf{A}^{\prime}\mathsf{R}^{\prime}\) in the \(\left|0\right\rangle\) state, applies the isometry \(U\) to registers \(\mathsf{BA}^{\prime}\mathsf{R}^{\prime}\), and then traces out registers \(\mathsf{BR}^{\prime}\) to obtain register \(\mathsf{A}^{\prime}\). Since \(\left|E\right\rangle\) is the result of applying the Stinespring dilation of \(\mathcal{N}\) to \(\left|\Phi\right\rangle\) and appending \(\left|0\right\rangle_{\mathsf{A}^{\prime}\mathsf{R}^{\prime}}\), and using the fact that tracing out registers \(\mathsf{BR}^{\prime}\) does not reduce the fidelity, Equation (9.1) implies that
\[\mathrm{F}\!\left((D_{\mathsf{B}\to\mathsf{A}^{\prime}}\circ\mathcal{N}_{ \mathsf{A}\to\mathsf{B}})(\left|\Phi\right\rangle\!\!\left\langle\Phi\right|_ {\mathsf{AR}}),\left|\Phi\right\rangle\!\!\left\langle\Phi\right|_{\mathsf{A}^ {\prime}\mathsf{R}}\,\right)\geq 1-\epsilon\,,\]
showing that \(\mathcal{N}\) is \(\epsilon\)-decodable, as desired.
Now we argue the other direction (if \(\mathcal{N}\) is decodable, then the decoupling condition holds). The fact that it is decodable is equivalent to
\[\mathrm{Tr}\!\left(\,\left|\Phi\right\rangle\!\!\left\langle\Phi\right|\, \left(D_{\mathsf{B}\to\mathsf{A}^{\prime}}\circ\mathcal{N}_{\mathsf{A}\to \mathsf{B}}\right)(\left|\Phi\right\rangle\!\!\left\langle\Phi\right|_{\mathsf{ AR}})\right)\geq 1-\epsilon\.\]
Considering the Stinespring dilation \(V:\mathsf{A}\to\mathsf{BC}\) of \(\mathcal{N}\) this is equivalent to
\[\mathrm{Tr}\!\left((\left|\Phi\right\rangle\!\!\left\langle\Phi\right|_{ \mathsf{A}^{\prime}\mathsf{R}}\otimes\mathrm{id}_{\mathsf{C}})\,D_{\mathsf{B }\to\mathsf{A}^{\prime}}(V\,\left|\Phi\right\rangle\!\!\left\langle\Phi\right| _{\mathsf{AR}}V^{\dagger})\right)\geq 1-\epsilon. \tag{9.2}\]
Suppose we measure \(D_{\mathsf{B}\to\mathsf{A}^{\prime}}\!\left(V\,\left|\Phi\right\rangle\!\! \left\langle\Phi\right|_{\mathsf{AR}}V^{\dagger}\right)\) with the projector \(\left|\Phi\right\rangle\!\!\left\langle\Phi\right|\) and succeed. The post-measurement state is thus \(\left|\Phi\right\rangle\!\!\left\langle\Phi\right|\otimes\rho_{\mathsf{C}}\) for some density matrix \(\rho\). Since the measurement succeeds with probability at least \(1-\epsilon\), by the Gentle Measurement Lemma we get
\[\mathrm{F}\!\left(D_{\mathsf{B}\to\mathsf{A}^{\prime}}(V\,\left|\Phi\right\rangle \!\!\left\langle\Phi\right|_{\mathsf{AR}}V^{\dagger}),\left|\Phi\right\rangle\! \!\left\langle\Phi\right|_{\mathsf{A}^{\prime}\mathsf{R}}\otimes\rho_{ \mathsf{C}}\right)\geq 1-\epsilon. \tag{9.3}\]
Tracing out register \(\mathsf{A}^{\prime}\) from both sides, which does not reduce the fidelity, yields
\[\mathrm{F}\!\left(\mathcal{N}^{c}_{\mathsf{A}\to\mathsf{C}}(\left|\Phi\right\rangle \!\!\left\langle\Phi\right|_{\mathsf{AR}}),\,\rho_{\mathsf{C}}\otimes\frac{ \mathrm{id}_{\mathsf{R}}}{\dim\mathsf{R}}\right)\geq 1-\epsilon. \tag{9.4}\]
On the other hand, tracing out registers \(\mathsf{A}^{\prime}\mathsf{R}\) in Equation (9.3) also yields
\[\mathrm{F}\!\left(\mathcal{N}^{c}_{\mathsf{A}\to\mathsf{C}}\!\left(\frac{ \mathrm{id}_{\mathsf{A}}}{\dim\mathsf{A}}\right),\,\rho_{\mathsf{C}}\right)\geq 1 -\epsilon. \tag{9.5}\]
Combining Equations (9.4) and (9.5) via the triangle inequality for the trace distance, and using Fuchs-van de Graaf twice, we get
\[\mathrm{F}\!\left(\mathcal{N}^{c}_{\mathsf{A}\to\mathsf{C}}(\left|\Phi\right\rangle \!\!\left\langle\Phi\right|_{\mathsf{AR}})\,,\,\mathcal{N}^{c}_{\mathsf{A}\to \mathsf{C}}\!\left(\frac{\mathrm{id}_{\mathsf{A}}}{\dim\mathsf{A}}\right)\otimes \frac{\mathrm{id}_{\mathsf{R}}}{\dim\mathsf{R}}\right)\geq 1-2\sqrt{\epsilon}\,,\]
which is the desired decoupling condition.
#### 9.1.1 Complexity of the Decodable Channel Problem
Previously we identified necessary and sufficient conditions for when a channel is information-theoretically decodable. Now we investigate when a decodable channel can be _efficiently_ decoded. First we define a computational problem corresponding to decoding a given channel.
**Definition 9.5** (\(\epsilon\)-Decodable Channel Problem).: _Let \(\epsilon,\delta:\mathbb{N}\to[0,1]\) be functions. Let \(D=(D_{x})_{x}\) be a quantum algorithm. Then we say that \(D\) solves the \(\epsilon\)-Decodable Channel Problem with error \(\delta\) if for all \(x=(1^{m},1^{r},C)\) where \(C\) is an explicit description of a quantum circuit that maps \(m\) qubits to \(r\) qubits and is an \(\epsilon\)-decodable channel, the circuit \(D_{x}\) takes as input \(r\) qubits and satisfies_
\[\mathrm{F}\Big{(}(D_{x}\circ C)(|\Phi\rangle\!\langle\Phi|),|\Phi\rangle\! \langle\Phi|\,\Big{)}\geq 1-\delta(|x|)\,,\]
_where \(|\Phi\rangle\) is the maximally entangled state on \(m\) qubits._
The main result of this section is to show that the complexity of the Decodable Channel Problem is equivalent to the complexity of the (distributional) Uhlmann Transformation Problem.
**Theorem 9.6**.: \(\textsc{DistUhlmann}_{1-\epsilon}\in\mathsf{avgUnitaryBQP}\) _for all negligible functions \(\epsilon(n)\) if and only if for every negligible function \(\epsilon(n)\) and for every polynomial \(q(n)\), the \(\epsilon\)-Decodable Channel Problem is solvable in uniform polynomial time with error \(O(1/q(n))\)._
Proof.: **Upper bound.** We start by proving the "only if" direction (if \(\textsc{DistUhlmann}_{1-\epsilon}\) is easy, then the Decodable Channel Problem is easy). Let \(\epsilon(n)\) be a negligible function and let \(q(n)\) be a polynomial. We present an algorithm \(D\) that solves the \(\epsilon\)-Decodable Channel Problem with error \(O(1/q(n))\), and is efficient under the assumption about DistUhlmann.
Let \(x=(1^{m},1^{r},C)\) be an instance of the \(\epsilon\)-Decodable Channel Problem such that \(C\) is a quantum circuit computing an \(\epsilon\)-decodable channel mapping \(m\) qubits (which we label as register \(\mathsf{A}\)) to \(r\) qubits (which we label as register \(\mathsf{B}\)). Let \(V\) denote the unitary purification (see Definition 2.7) of \(C\), which we view also as a Stinespring dilation of \(C\) that maps register \(\mathsf{A}\) to registers \(\mathsf{BC}\). Let \(\mathsf{A}^{\prime},\mathsf{R}^{\prime}\) denote registers isomorphic to \(\mathsf{A},\mathsf{R}\), respectively. Consider the pure states \(|E\rangle_{\mathsf{RBCA}^{\prime}\mathsf{R}^{\prime}}\) and \(|F\rangle_{\mathsf{RA}^{\prime}\mathsf{BCR}^{\prime}}\) defined in the proof of Proposition 9.4 with respect to the dilation \(V\). Note that these states can be computed by circuits \(E,F\) with size \(\mathrm{poly}(|C|)\). By padding we can assume without loss of generality that \(E,F\) act on \(2k\) qubits where \(k\geq|x|\).
Since the channel \(C\) is \(\epsilon\)-decodable, by Proposition 9.4 it satisfies the decoupling condition with error \(2\sqrt{\epsilon}\). Therefore it follows that \(y=(1^{k},E,F)\) is a valid \(\textsc{Uhlmann}_{1-2\sqrt{\epsilon}}\) instance (where the registers are divided into two groups \(\mathsf{CR}\) and \(\mathsf{BA}^{\prime}\mathsf{R}^{\prime}\)). Since \(\epsilon\) is negligible, so is \(2\sqrt{\epsilon}\). Therefore \(\textsc{DistUhlmann}_{1-2\sqrt{\epsilon}}\in\mathsf{avgUnitaryBQP}\) by assumption, and thus there exists a polynomial-time algorithm \(M=(M_{y})_{y}\) that implements \(\textsc{DistUhlmann}_{1-2\sqrt{\epsilon}}\) with average-case error \(1/q\). By Proposition 5.8, it follows that for \(y=(1^{k},E,F)\) with \(k=\mathrm{poly}(|x|)\), the algorithm \(M_{y}\) satisfies, for sufficiently large \(k\),
\[\mathrm{F}\Big{(}(\mathrm{id}\otimes M_{y})(|E\rangle\!\langle E|),|F\rangle\! \langle F|\,\Big{)}\geq\Big{(}1-\frac{1}{q(k)}-O(\epsilon(k)^{1/4})\Big{)}^{2} \geq 1-O(1/q(k)). \tag{9.6}\]
In the second inequality we used the fact that \(\epsilon\) is a negligible function.
The algorithm \(D=(D_{x})_{x}\) behaves as follows on instance \(x=(1^{m},1^{r},C)\) of the \(\epsilon\)-Decodable Channel Problem. It receives as input a register \(\mathsf{B}\). It first computes the description of the
Uhlmann\({}_{1-2\sqrt{\epsilon}}\) instance \(y=(1^{k},E,F)\) described above. It initializes ancilla registers \(\mathsf{A}^{\prime}\mathsf{R}^{\prime}\) in the zero state, and then applies the algorithm \(M_{y}\) to registers \(\mathsf{B}\mathsf{A}^{\prime}\mathsf{R}^{\prime}\). Finally, \(D_{x}\) traces out registers \(\mathsf{B}\mathsf{R}^{\prime}\) and outputs the remaining register \(\mathsf{A}^{\prime}\).
Now we analyze the behavior of the algorithm \(D_{x}\) when it receives the \(\mathsf{B}\) register of the state \(C_{\mathsf{A}\to\mathsf{B}}(|\Phi\rangle\!\langle\Phi|_{\mathsf{A}\mathsf{R}})\). Note that
\[\Big{(}(D_{x})_{\mathsf{B}\to\mathsf{A}^{\prime}}\circ C_{ \mathsf{A}\to\mathsf{B}}\Big{)}(|\Phi\rangle\!\langle\Phi|_{\mathsf{A}\mathsf{ R}}) =\operatorname{Tr}_{\mathsf{CBR}^{\prime}}\Bigl{(}(\operatorname{id} \otimes M_{y})(|E\rangle\!\langle E|)\Bigr{)}\] \[|\Phi\rangle\!\langle\Phi|_{\mathsf{R}\mathsf{A}^{\prime}} =\operatorname{Tr}_{\mathsf{BCR}^{\prime}} \Bigl{(}\,|F\rangle\!\langle F|\,\Bigr{)}\.\]
By Equation (9.6) and the fact that the fidelity does not decrease under partial trace we have
\[\operatorname{F}\Bigl{(}\Bigl{(}(D_{x})_{\mathsf{B}\to\mathsf{A}^{\prime}} \circ C_{\mathsf{A}\to\mathsf{B}}\Bigr{)}(|\Phi\rangle\!\langle\Phi|_{\mathsf{ A}\mathsf{A}})\,,\,|\Phi\rangle\!\langle\Phi|_{\mathsf{A}\mathsf{A}^{\prime}}\, \Bigr{)}\geq\operatorname{F}\Bigl{(}(\operatorname{id}\otimes M_{y})(|E \rangle\!\langle E|),|F\rangle\!\langle F|\,\Bigr{)}\geq 1-O(1/q(k))\.\]
Thus we have shown that \(D=(D_{x})_{x}\) solves the \(\epsilon\)-Decodable Channel Problem with error \(O(1/q(k))\), and since \(k\geq|x|\), this is at most \(O(1/q(|x|))\) for sufficiently large \(|x|\), as desired. This concludes the "only if" direction.
**Lower bound.** We now prove the "if" part of the theorem (if the Decodable Channel Problem is easy, then DistUhlmann is easy). The intuition behind the proof is as follows: we prove the contrapositive and argue that if DistUhlmann is hard, then we can construct a family of hard instances of the Decodable Channel Problem. These hard instances, intuitively, will be decodable channels \(\mathcal{N}\) that take as input \(b\in\{0,1\}\) and output an _encryption_\(\rho_{b}\). The states \(\rho_{0}\) and \(\rho_{1}\) are far from each other, but are computationally indistinguishable (this is also known as an _EFI pair_[2]). Thus no efficient decoder can correctly recover the bit \(b\), even though the channel \(\mathcal{N}\) is information-theoretically decodable by construction.
To construct such an encryption channel, we leverage quantum commitments, which we have already discussed in Section 8.1. Theorem 8.10 and Proposition 8.7 almost show that \(\textsc{DistUhlmann}_{1-\epsilon}\notin\mathsf{avgUnitaryBQP}\) for some negligible function \(\epsilon\) implies the existence of strong statistical binding, weak computational hiding commitments. By "almost", we mean that Theorem 8.10 assumes something stronger, namely that \(\textsc{DistUhlmann}_{1-\epsilon}\) is not in \(\mathsf{avgUnitaryBQP/poly}\), and furthermore that hard instances of DistUhlmann can be efficiently generated. This is needed in order to obtain a bona fide quantum commitment with the requisite properties. However, for this lower bound we use a slightly weaker primitive, where we do not need the hard instances to be uniformly generated and where security only needs to hold against uniform adversaries. We describe the primitive formally below; the proof of this implication follows along the same lines as the proof of Theorem 8.10.
Let \(\epsilon(n)\) be a negligible function and let \(\delta(n)=1/p(n)\) be an inverse polynomial for which DistUhlmann\({}_{1-\epsilon}\not\in\mathsf{avgUnitaryBQP}_{\delta}\), and assume towards a contradiction that for every negligible function \(\nu(n)\) and every polynomial \(q(n)\), the \(\nu\)-Decodable Channel Problem is solvable in polynomial time by an algorithm \(D=(D_{x})_{x}\) with error at most \(1/q(n)\).
The following lemma shows that the hardness of DistUhlmann implies the existence of families of circuits that can be interpreted as strong statistical binding, _infinitely often_ weak computational hiding commitments.
**Lemma 9.7**.: _Let \(\epsilon(n)\) be a negligible function. If \(\textsc{DistUhlmann}_{1-\epsilon}\notin\mathsf{avgUnitaryBQP}\), then there exists an inverse polynomial \(\delta(n)=1/p(n)\) and a family of circuits \(\{C_{x,b}\}_{x\in\{0,1\}^{*},b\in\{0,1\}}\) on registers
\(\mathsf{BE}\) where \(C_{x,b}\) acts on \(\operatorname{poly}(|x|)\) qubits satisfying the following properties: for all \(x\in\{0,1\}^{*}\), letting \(\rho_{x,b}\) denote the reduced density matrix of \(|C_{x,b}\rangle\) on register \(\mathsf{E}\),_
1. _(Always strong statistical binding)_ \(\operatorname{F}(\rho_{x,0},\rho_{x,1})\leq\epsilon(|x|)\)_._
2. _(Infinitely often weak computational hiding) For all uniform polynomial-time algorithms_ \(A=\{A_{x}\}_{x}\)_, there exist infinitely many_ \(x\) _such that_ \[|\Pr\left(A_{x}(\rho_{x,0})=1\right)-\Pr\left(A_{x}(\rho_{x,1})=1\right)|\leq \delta(|x|)\,.\]
We first show how this lemma implies the lower bound for Theorem 9.6. For all \(x\in\{0,1\}^{*},b\in\{0,1\}\) let \(|\psi_{x,b}\rangle\coloneqq C_{x,b}\,|0\cdots 0\rangle\). For every \(x\in\{0,1\}^{*}\) define the channel \(\mathcal{N}_{x}\) that does the following: given a qubit register \(\mathsf{A}\) in the state \(|b\rangle\) it prepares the state
\[|\theta_{b}\rangle_{\mathsf{AXBE}}\coloneqq\frac{1}{\sqrt{2}}\sum_{a}X^{a}\,|b\rangle _{\mathsf{A}}\otimes|a\rangle_{\mathsf{X}}\otimes|\psi_{x,a}\rangle_{\mathsf{ BE}}\]
and then traces out registers \(\mathsf{XB}\), and outputs registers \(\mathsf{AE}\). Note that this channel can be computed by a unitary circuit \(V_{x}\) of size \(\operatorname{poly}(|C_{x,0}|,|C_{x,1}|)\).
**Claim 9.8**.: _For all \(x\in\{0,1\}^{*}\) the channel \(\mathcal{N}_{x}\) is \(8\sqrt{\epsilon(|x|)}\)-decodable._
Proof.: Let \(\mathcal{N}_{x}^{c}\) denote the complementary channel that does the same thing as \(\mathcal{N}_{x}\) except it outputs registers \(\mathsf{XB}\) and traces out registers \(\mathsf{AE}\). Consider applying \(\mathcal{N}_{x}^{c}\) to qubit \(\mathsf{A}\) of the maximally entangled state \(|\Phi\rangle_{\mathsf{RA}}\). Then the state of registers \(\mathsf{RXB}\) is as follows:
\[\frac{1}{4}\sum_{b,c,a,a^{\prime}}|b\rangle\!\langle c|_{\mathsf{R}}\otimes|a \rangle\!\langle a^{\prime}|_{\mathsf{X}}\otimes\operatorname{Tr}_{\mathsf{ E}}(|\psi_{x,a}\rangle\!\langle\psi_{x,a^{\prime}}|)\otimes\,\langle c|\,X^{a^{ \prime}}X^{a}\,|b\rangle\enspace. \tag{9.7}\]
Fix \(a\neq a^{\prime}\). Then we claim that
\[\|\operatorname{Tr}_{\mathsf{E}}(|\psi_{x,a}\rangle\!\langle\psi_{x,a^{ \prime}}|)\|_{1}=\sqrt{\operatorname{F}(\rho_{x,a},\rho_{x,a^{\prime}})}\]
where \(\rho_{x,b}\) is the reduced density matrix of \(|\psi_{x,b}\rangle\) on register \(\mathsf{E}\). To see this, let \(|\psi_{x,a}\rangle=\sqrt{\rho_{x,a}}\otimes U_{a}\,|\Omega\rangle\) where \(U_{a}\) is some unitary on register \(\mathsf{B}\), and \(|\Omega\rangle_{\mathsf{BE}}\) is an unnormalized maximally entangled state between registers \(\mathsf{B}\) and \(\mathsf{E}\). Then
\[\|\operatorname{Tr}_{\mathsf{E}}(|\psi_{x,0}\rangle\!\langle\psi_ {x,1}|)\|_{1} =\|\operatorname{Tr}_{\mathsf{E}}\Bigl{(}\sqrt{\rho_{x,0}}\otimes U _{0}\,|\Omega\rangle\!\langle\Omega|\,\sqrt{\rho_{x,1}}\otimes U_{1}^{\dagger }\Bigr{)}\|_{1}\] \[=\|U_{0}\sqrt{\rho_{x,0}}^{\top}\sqrt{\rho_{x,1}}U_{1}^{\dagger} \|_{1}\] \[=\|\sqrt{\rho_{x,0}}^{\top}\sqrt{\rho_{x,1}}\|_{1}=\|\sqrt{\rho_{x,0}}\sqrt{\rho_{x,1}}\|_{1}=\sqrt{\operatorname{F}(\rho_{x,0},\rho_{x,1})}\]
as desired. Here, \({}^{\top}\) and \(\overline{\cdot}\) denote transpose and complex conjugate with respect to the standard basis, respectively. The third line follows from the unitary invariance of the trace norm, invariance of the trace norm by complex conjugation, and the definition of fidelity.
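This identity is easy to sanity-check numerically. The following sketch (our own illustration, not from the paper; the register dimensions are an arbitrary toy choice) represents a bipartite pure state on \(\mathsf{BE}\) by its coefficient matrix \(M_{b}\), so that \(\operatorname{Tr}_{\mathsf{E}}(|\psi_{x,0}\rangle\!\langle\psi_{x,1}|)=M_{0}M_{1}^{\dagger}\) and \(\rho_{x,b}=M_{b}^{\top}\overline{M_{b}}\), and compares both sides of the identity under the convention \(\operatorname{F}(\rho,\sigma)=\|\sqrt{\rho}\sqrt{\sigma}\|_{1}^{2}\).

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(7)
dB, dE = 3, 4  # dimensions of registers B and E (arbitrary toy choice)

def random_state(dB, dE):
    # a random pure state on B⊗E, stored as its dB x dE coefficient matrix
    M = rng.normal(size=(dB, dE)) + 1j * rng.normal(size=(dB, dE))
    return M / np.linalg.norm(M)

M0, M1 = random_state(dB, dE), random_state(dB, dE)

# Tr_E(|psi0><psi1|) is the operator M0 M1^dagger on register B
lhs = np.linalg.norm(M0 @ M1.conj().T, 'nuc')            # trace norm

# reduced states on register E
rho0, rho1 = M0.T @ M0.conj(), M1.T @ M1.conj()
rhs = np.linalg.norm(sqrtm(rho0) @ sqrtm(rho1), 'nuc')   # sqrt of the fidelity

print(abs(lhs - rhs))  # ~1e-15: the two sides agree to machine precision
```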
By the always strong statistical binding of the commitment \(\{C_{x,b}\}\), the fidelity \(\operatorname{F}(\rho_{x,0},\rho_{x,1})\) is at most \(\epsilon(|x|)\). Thus the cross-terms in the state in Equation (9.7) are small, and we get that the state in Equation (9.7) is within \(4\sqrt{\epsilon(|x|)}\) in trace distance (and thus, by Fuchs-van de Graaf, at fidelity at least \(1-8\sqrt{\epsilon(|x|)}\)) of
\[\frac{\mathrm{id}_{\mathsf{R}}}{2}\otimes\frac{1}{2}\sum_{a}|a\rangle\!\langle a |_{\mathsf{X}}\otimes\mathrm{Tr}_{\mathsf{E}}(|\psi_{x,a}\rangle\! \langle\psi_{x,a}|)=\frac{\mathrm{id}_{\mathsf{R}}}{2}\otimes\mathcal{N}_{x}^ {c}(\mathrm{id}_{\mathsf{A}}/2)\.\]
Therefore the channel \(\mathcal{N}_{x}\) satisfies the \(8\sqrt{\epsilon(|x|)}\)-decoupling condition, so by Proposition 9.4, the channel \(\mathcal{N}_{x}\) is \(8\sqrt{\epsilon(|x|)}\)-decodable as desired.
Suppose for contradiction that there existed a uniform polynomial-time quantum algorithm \(D=(D_{x})_{x}\) that solves the \(8\sqrt{\epsilon}\)-Decodable Channel Problem with error \(1/n\). For every \(x\in\{0,1\}^{*}\) define \(y_{x}\coloneqq(1^{1},1^{r_{x}},V_{x})\) where \(r_{x}\) is the number of output qubits and \(V_{x}\) is the circuit computing channel \(\mathcal{N}_{x}\). Then for all \(x\),
\[\mathrm{F}((D_{x}\circ\mathcal{N}_{x})(|\Phi\rangle\!\langle\Phi|_{\mathsf{ RA}}),|\Phi\rangle\!\langle\Phi|_{\mathsf{RA}})\geq 1-1/|x|\,.\]
Applying Fuchs-van de Graaf we get
\[\mathrm{td}((D_{x}\circ\mathcal{N}_{x})(|\Phi\rangle\!\langle\Phi|_{\mathsf{ RA}}),|\Phi\rangle\!\langle\Phi|_{\mathsf{RA}})\leq 1/\sqrt{|x|}\.\]
Measuring register \(\mathsf{R}\) of both arguments in the standard basis does not increase the trace distance. Using this and convexity we have
\[\frac{1}{2}\sum_{b}\mathrm{td}((D_{x}\circ\mathcal{N}_{x})(|b\rangle\!\langle b |_{\mathsf{A}}),|b\rangle\!\langle b|_{\mathsf{A}})\leq 1/\sqrt{|x|}. \tag{9.8}\]
We now perform a hybrid argument. Define the channel \(\mathcal{N}_{x}^{(0)}\coloneqq\mathcal{N}_{x}\). Define the channel \(\mathcal{N}_{x}^{(1)}\) that prepares the state \(|\theta_{b}^{(1)}\rangle\) that is the same as \(|\theta_{b}\rangle\) except the \(\mathsf{BE}\) register is prepared in the state \(|\psi_{x,0}\rangle\) (i.e., independently of \(a\)), and then traces out registers \(\mathsf{XB}\). Observe that the output of \(\mathcal{N}_{x}^{(1)}\) on \(|b\rangle\) is
\[\frac{1}{2}\sum_{a}X^{a}\,|b\rangle\!\langle b|\,X^{a}\otimes\rho_{x,0}=\frac {\mathrm{id}}{2}\otimes\rho_{x,0}\,,\]
i.e., independent of \(b\). Then
\[\frac{1}{2}\sum_{b}\mathrm{td}((D_{x}\circ\mathcal{N}_{x}^{(0)})( |b\rangle\!\langle b|),(D_{x}\circ\mathcal{N}_{x}^{(1)})(|b\rangle\!\langle b |))\] \[\qquad=\frac{1}{2}\sum_{b}\mathrm{td}\Big{(}D_{x}\Big{(}\frac{1}{ 2}\sum_{a}X^{a}\,|b\rangle\!\langle b|\,X^{a}\otimes\rho_{x,a}\Big{)}\,,\,D_{ x}\Big{(}\frac{\mathrm{id}}{2}\otimes\rho_{x,0}\Big{)}\Big{)}\] \[\qquad=\frac{1}{2}\sum_{b}\mathrm{td}\Big{(}D_{x}\Big{(}\frac{1}{ 2}\,|\bar{b}\rangle\!\langle\bar{b}|\otimes\rho_{x,1}\Big{)}\,,\,D_{x}\Big{(} \frac{1}{2}\,|\bar{b}\rangle\!\langle\bar{b}|\otimes\rho_{x,0}\Big{)}\Big{)}\.\]
By the computational hiding property of \(\{C_{x,b}\}\), for infinitely many \(x\in\{0,1\}^{*}\) this quantity is at most \(\delta(|x|)\) (otherwise \(D_{x}\) could be used to distinguish between \(\rho_{x,0}\) and \(\rho_{x,1}\) with bias better than \(\delta(|x|)\)).
Combined with Equation (9.8), we have that for infinitely many \(x\),
\[\frac{1}{2}\sum_{b}\mathrm{td}((D_{x}\circ\mathcal{N}_{x}^{(1)})(|b\rangle\! \langle b|),|b\rangle\!\langle b|)\leq 1/\sqrt{|x|}+\delta(|x|)\.\]
However since \((D_{x}\circ\mathcal{N}_{x}^{(1)})(|b\rangle\!\langle b|)\) is a density matrix independent of \(b\), this quantity is at least \(\frac{1}{2}\), which is a contradiction for sufficiently large \(|x|\) since \(\delta(|x|)\) is an inverse polynomial.
We finish by establishing the existence of the commitments promised by Lemma 9.7, assuming the hardness of DistUhlmann.
Proof of Lemma 9.7.: Assume that \(\textsc{DistUhlmann}_{1-\epsilon}\notin\mathsf{avgUnitaryBQP}\) for some negligible function \(\epsilon(n)\). Then by Theorem 6.8 we have that \(\textsc{DistUhlmann}_{1-\epsilon}\notin\mathsf{avgUnitaryBQP}_{1-\xi}\) for \(\xi(n)=n^{-1/16}\). If \(x=(1^{n},E,F)\) is a valid \(\textsc{Uhlmann}_{1-\epsilon}\) instance, then we define the circuits \(C^{\prime}_{x,0}\coloneqq E\) and \(C^{\prime}_{x,1}\coloneqq F\). Otherwise, define \(C^{\prime}_{x,0},C^{\prime}_{x,1}\) to be circuits such that \(\left|C^{\prime}_{x,0}\right\rangle_{\mathsf{BE}}=\left|0^{2|x|}\right\rangle _{\mathsf{BE}}\) and \(\left|C^{\prime}_{x,1}\right\rangle_{\mathsf{BE}}=\left|0^{|x|}\right\rangle _{\mathsf{B}}\otimes\left|1^{|x|}\right\rangle_{\mathsf{E}}\). By definition and Proposition 5.8 we have that the family of circuits \(\{(C^{\prime}_{x,0},C^{\prime}_{x,1})\}_{x\in\{0,1\}^{*}}\) satisfies
1. (_Always strong statistical hiding_) For all \(x\in\{0,1\}^{*}\), we have \(\mathrm{F}(\sigma_{x,0},\sigma_{x,1})\geq 1-\epsilon(|x|)\) where \(\sigma_{x,b}\) is the reduced density matrix of \(|C^{\prime}_{x,b}\rangle\) on register \(\mathsf{B}\). This is because either \(x\) is a valid \(\textsc{Uhlmann}_{1-\epsilon}\) instance or \(C^{\prime}_{x,0},C^{\prime}_{x,1}\) were set to be trivial circuits that satisfy this condition.
2. (_Infinitely often weak computational binding_) For all uniform polynomial-time algorithms \(A=(A_{x})_{x}\) there exists a polynomial \(p(n)\) such that the following holds for infinitely many \(x\): \[\mathrm{F}\left((\mathrm{id}_{\mathsf{B}}\otimes A_{x})\,|C^{\prime}_{x,0}\rangle\!\langle C^{\prime}_{x,0}|\,,\,|C^{\prime}_{x,1}\rangle\!\langle C^{\prime}_{x,1}|\right)\leq\frac{1}{p(|x|)}\.\]
Thus we can think of the collection of circuit pairs \(\{(C^{\prime}_{x,0},C^{\prime}_{x,1})\}_{x}\) as a "pseudo-commitment" that always has strong statistical hiding, and has weak computational binding infinitely often.
By performing the same flavor switching transformation (see Proposition 8.7) to each instance \((C^{\prime}_{x,0},C^{\prime}_{x,1})\) of this pseudo-commitment, we get another family of circuit pairs \(\{(C_{x,0},C_{x,1})\}_{x}\) satisfying
1. (_Always strong statistical binding_) For all \(x\in\{0,1\}^{*}\), for all quantum circuits \(A\) acting on \(\mathsf{B}\), \[\mathrm{F}\left(\left(A\otimes\mathrm{id}_{\mathsf{E}}\right)\left|C_{x,0} \right\rangle\!\langle C_{x,0}\right|,\left|C_{x,1}\right\rangle\!\langle C_{ x,1}|\right)\leq 2\epsilon(|x|)^{2}\.\]
2. (_Infinitely often weak computational hiding_) For all uniform polynomial-time algorithms \(A=(A_{x})_{x}\), the following holds for infinitely many \(x\): \[\left|\mathrm{Pr}\left(A_{x}(\rho_{x,0})=1\right)-\mathrm{Pr}\left(A_{x}(\rho _{x,1})=1\right)\right|\leq\sqrt{1/p(|x|)}\] where \(\rho_{x,b}\) is the reduced density matrix of \(|C_{x,b}\rangle\) on register \(\mathsf{E}\).
We make use of Uhlmann's theorem to rephrase the statistical binding property in terms of the fidelity between the reduced states on register \(\mathsf{E}\). Namely, by Uhlmann's theorem, we have that for all \(x\)
\[\mathrm{F}(\rho_{x,0},\rho_{x,1})=\sup_{A}\mathrm{F}\left(\left(A\otimes \mathrm{id}_{\mathsf{E}}\right)\left|C_{x,0}\right\rangle\!\langle C_{x,0} \right|,\left|C_{x,1}\right\rangle\!\langle C_{x,1}|\right).\]
The statistical binding property ensures that this is at most \(2\epsilon(|x|)^{2}\leq\epsilon(|x|)\). This concludes the proof.
### Compressing quantum information
In this section we show that the computational complexity of performing optimal compression of a quantum state (that can be efficiently prepared) is equivalent to the complexity of the Uhlmann Transformation Problem.
We consider the _one-shot_ version of the information compression task, where one is given just one copy of a density matrix \(\rho\) (rather than many copies) and the goal is to compress it to as few qubits as possible while being able to recover the original state within some error. The task is defined formally as follows:
**Definition 9.9** (Information compression task).: _Let \(\delta\geq 0\) and let \(\rho\) be an \(n\)-qubit density matrix. We say that a pair of (not necessarily efficient) quantum circuits \((E,D)\) compresses \(\rho\) to \(s\) qubits with error \(\delta\) if_
1. \(E\) _is a quantum circuit that takes as input_ \(n\) _qubits and outputs_ \(s\) _qubits,_
2. \(D\) _is a quantum circuit that takes as input_ \(s\) _qubits and outputs_ \(n\) _qubits,_
3. _For all purifications_ \(\ket{\psi}_{\mathsf{AR}}\) _of_ \(\rho\) _(where_ \(\mathsf{R}\) _is the purifying register), we have_ \[\operatorname{td}\Bigl{(}(D\circ E)(\psi),\psi\Bigr{)}\leq\delta\] _where the composite channel_ \(D\circ E\) _acts on register_ \(\mathsf{A}\) _of_ \(\ket{\psi}\)_._
_Define the \(\delta\)-error communication cost of \(\rho\), denoted by \(K^{\delta}(\rho)\), as the minimum integer \(s\) such that there exists a pair of quantum circuits \((E,D)\) that compresses \(\rho\) to \(s\) qubits with error \(\delta\)._
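For intuition (our own example, not part of the definition), a state of rank \(r\) admits zero-error compression to \(s=\lceil\log r\rceil\) qubits, so that \(K^{0}(\rho)\leq\lceil\log\operatorname{rank}(\rho)\rceil\): let \(U\) be a unitary on \(\mathsf{A}\) that rotates the support of \(\rho\) into \(|0^{n-s}\rangle\otimes(\mathbb{C}^{2})^{\otimes s}\); the encoder \(E\) applies \(U\) and discards the first \(n-s\) qubits (which are left exactly in the pure state \(|0^{n-s}\rangle\)), and the decoder \(D\) re-appends \(|0^{n-s}\rangle\) and applies \(U^{\dagger}\). Since both circuits act only on register \(\mathsf{A}\), this works simultaneously for every purification \(|\psi\rangle_{\mathsf{AR}}\), as required by Definition 9.9. Theorem 9.12 below shows that, up to smoothing and additive logarithmic terms, the max-entropy refines this bound.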
In this section, we first analyze what is information-theoretically achievable for one-shot compression. Then, we study the complexity of compressing quantum information to the information-theoretic limit; we will show that it is equivalent to the complexity of the Uhlmann Transformation Problem.
#### 9.2.1 Information-theoretic compression
In the one-shot setting the state \(\rho\) can be (information-theoretically) compressed to its _smoothed max entropy_ and no further. The smoothed max entropy is just one of a rich zoo of entropy measures that are used in the setting of non-asymptotic quantum information theory [103]. In this section we consider the following entropy measures:
**Definition 9.10** (Min-, max-, and Renyi \(2\)-entropy).: _Let \(\epsilon\geq 0\) and let \(\psi_{\mathsf{AB}}\) be a density matrix on registers \(\mathsf{AB}\)._
* _The_ min-entropy of register_ \(\mathsf{A}\) _conditioned on register_ \(\mathsf{B}\) _of the state_ \(\psi\) _is_ \[H_{\min}(\mathsf{A}|\mathsf{B})_{\psi}\coloneqq-\log\inf_{\sigma\in\operatorname {Pos}(\mathsf{B}):\psi_{\mathsf{AB}}\leq\operatorname{id}_{\mathsf{A}}\otimes \sigma_{\mathsf{B}}}\operatorname{Tr}(\sigma)\] _The_ \(\epsilon\)-smoothed conditional min-entropy _is_ \[H^{\epsilon}_{\min}(\mathsf{A}|\mathsf{B})_{\psi}\coloneqq\sup_{\sigma:P( \sigma,\psi)\leq\epsilon}H_{\min}(\mathsf{A}|\mathsf{B})_{\sigma}\,\] _where_ \(P(\sigma,\psi)\) _is the purified distance (whose definition need not concern us, see_ _[_103_, Definition 3.15]__)._
* _The_ max-entropy of register \(\mathsf{A}\) conditioned on register \(\mathsf{B}\) of the state \(\psi\)_is_ \[H_{\max}(\mathsf{A}|\mathsf{B})_{\psi}\coloneqq\sup_{\sigma\in\mathrm{Pos}( \mathsf{B}):\mathrm{Tr}(\sigma)\leq 1}\log\|\sqrt{\psi_{\mathsf{AB}}}\sqrt{ \mathrm{id}_{\mathsf{A}}\otimes\sigma_{\mathsf{B}}}\|_{1}^{2}\.\] _The_ \(\epsilon\)-smoothed conditional max-entropy _is_ \[H_{\max}^{\epsilon}(\mathsf{A}|\mathsf{B})_{\psi}\coloneqq\inf_{\sigma: \mathrm{td}(\sigma,\psi)\leq\epsilon}H_{\max}(\mathsf{A}|\mathsf{B})_{\sigma}\.\]
* _The_ Renyi \(2\)-entropy of register \(\mathsf{A}\) conditioned on register \(\mathsf{B}\) of the state \(\psi\)_is_ _[_10_, Definition 2.11]___ \[H_{2}(\mathsf{A}|\mathsf{B})_{\psi}\coloneqq-\log\inf_{\sigma>0}\mathrm{Tr} \Big{(}\Big{(}(\mathrm{id}_{\mathsf{A}}\otimes\sigma_{\mathsf{B}})^{-1/2}\psi_ {\mathsf{AB}}\Big{)}^{2}\Big{)}\] _where the infimum is over all positive definite density operators_ \(\sigma\) _acting on register_ \(\mathsf{B}\)_. The_ \(\epsilon\)_-smoothed conditional Renyi_ \(2\)_-entropy is_ \[H_{2}^{\epsilon}(\mathsf{A}|\mathsf{B})_{\psi}\coloneqq\sup_{\sigma: \mathrm{td}(\sigma,\psi)\leq\epsilon}H_{2}(\mathsf{A}|\mathsf{B})_{\sigma}\.\]
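Before moving on, a minimal numerical sketch (our own illustration with an arbitrary toy state, not from the paper) may help make these quantities concrete. In the unconditional case (trivial \(\mathsf{B}\) and \(\epsilon=0\)) the definitions collapse to closed forms, \(H_{\min}(\mathsf{A})_{\rho}=-\log\|\rho\|_{\infty}\) and \(H_{\max}(\mathsf{A})_{\rho}=2\log\operatorname{Tr}\sqrt{\rho}\), which sandwich the von Neumann entropy:

```python
import numpy as np

# toy 3-level density matrix (diagonal, i.e., classical)
p = np.array([0.7, 0.2, 0.1])
rho = np.diag(p)

eigs = np.linalg.eigvalsh(rho)
H_min = -np.log2(eigs.max())                 # -log ||rho||_inf
H_max = 2 * np.log2(np.sum(np.sqrt(eigs)))   # 2 log Tr sqrt(rho)
H_von = -np.sum(p * np.log2(p))              # von Neumann entropy, for comparison

print(H_min, H_von, H_max)   # H_min <= H(rho) <= H_max
# For this classical rho, 2**(-H_min) = max_a p(a) = 0.7, the optimal
# probability of guessing the value of A (cf. Proposition 9.11 below).
```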
We do not elaborate further on the meaning or motivation for the definitions of these entropy measures (we refer the reader to [10, 13] for deeper discussions); we will only use the following properties of them:
**Proposition 9.11** (Relations between the entropy measures).: _Let \(\epsilon\geq 0\) and let \(|\psi\rangle_{\mathsf{ABC}}\) be a tripartite pure state. The following relationships hold:_
* _(_Duality relation_)_ \(H_{\min}^{\epsilon}(\mathsf{A}|\mathsf{B})_{\psi}=-H_{\max}^{\epsilon}( \mathsf{A}|\mathsf{C})_{\psi}\)_. We note that this duality relation only holds when_ \(\psi\) _is a pure state on registers_ \(\mathsf{ABC}\)_._
* _(_Bounds for conditional min/max-entropy_) Both_ \(H_{\min}^{\epsilon}(\mathsf{A}|\mathsf{B})_{\psi}\) _and_ \(H_{\max}^{\epsilon}(\mathsf{A}|\mathsf{B})_{\psi}\) _are bounded below by_ \(-\log\mathrm{rank}(\psi_{\mathsf{A}})\)_, and bounded above by_ \(\log\mathrm{rank}(\psi_{\mathsf{A}})\)_._
* _(_Isometric invariance_) For all isometries_ \(V\) _mapping register_ \(\mathsf{A}\) _to_ \(\mathsf{A}^{\prime}\) _we have_ \(H_{\min}(\mathsf{A}|\mathsf{B})_{\psi}=H_{\min}(\mathsf{A}^{\prime}|\mathsf{B })_{V\psi V^{\dagger}}\)_._
* _(_Min- versus \(2\)-entropy_)_ \(H_{\min}(\mathsf{A}|\mathsf{B})_{\psi}\leq H_{2}(\mathsf{A}|\mathsf{B})_{\psi}\)_._
* _(_Operational interpretation of min-entropy_) When_ \(\psi_{\mathsf{AB}}\) _is diagonal (i.e., it corresponds to a bipartite probability distribution_ \(p(a,b)\)_),_ \(2^{-H_{\min}(\mathsf{A}|\mathsf{B})_{\psi}}=\sum_{b}p(b)\,\max_{a}p(a|b)\)_, i.e., the maximum probability of guessing the state of_ \(\mathsf{A}\) _given the state of_ \(\mathsf{B}\)_._
* _(_Max-entropy does not decrease after appending a state_) For all density matrices_ \(\sigma\in\mathrm{S}(\mathsf{D})\)_, we have_ \(H_{\max}^{\epsilon}(\mathsf{A})_{\psi}\leq H_{\max}^{\epsilon}(\mathsf{AD})_{ \psi\otimes\sigma}\)_._
Proof.: A proof of the duality relation can be found in [10, Theorem 5.4]. The bounds for the conditional min-entropy can be found in [10, Proposition 4.3]; the bounds on the conditional max-entropy follow via the duality relation. The isometric invariance property follows directly from the definition of the (smoothed) conditional min-entropy. The min- versus \(2\)-entropy bound is proved in [10, Lemma 2.3]. The operational interpretation of min-entropy is given in [10]. The fact that the max-entropy does not decrease after appending a state follows from [10, Theorem 5.7], which states that the smoothed max-entropy is non-decreasing under trace-preserving quantum operations; consider the quantum operation \(\psi_{\mathsf{A}}\mapsto\psi_{\mathsf{A}}\otimes\sigma_{\mathsf{D}}\), which is clearly trace-preserving.
Having established the definitions and properties of these entropy measures, we can now state and prove the characterization of the fundamental limits on one-shot compression for quantum states.
**Theorem 9.12** (Information-theoretic one-shot compression).: _For all \(\delta>0\) and all density matrices \(\rho\),_
\[H_{\max}^{\epsilon_{1}}(\rho)\leq K^{\delta}(\rho)\leq H_{\max}^{\epsilon_{2}}( \rho)+8\log\frac{4}{\delta}\]
_where \(\epsilon_{1}\coloneqq 2\delta^{1/4}\) and \(\epsilon_{2}\coloneqq(\delta/40)^{4}\)._
Proof.: **Lower bound.** We first prove the lower bound \(H_{\max}^{2\delta^{1/4}}(\rho)\leq K^{\delta}(\rho)\). Let \((E,D)\) denote a pair of quantum circuits that compresses \(\rho\) to \(s=K^{\delta}(\rho)\) qubits with error \(\delta\). Let \(\ket{\psi}_{\mathsf{AR}}\) denote a purification of \(\rho\). Then using the Fuchs-van de Graaf inequality we get that
\[\mathrm{F}\Big{(}(D\circ E)(\psi),\psi\Big{)}\geq 1-2\delta. \tag{9.9}\]
Let \(\hat{E}:\mathsf{A}\to\mathsf{CE},\hat{D}:\mathsf{C}\to\mathsf{AF}\) denote the unitary purifications of the channels corresponding to \(E\) and \(D\), respectively. Then by Uhlmann's theorem, since \((\hat{D}\hat{E}\otimes\mathrm{id}_{\mathsf{R}})\ket{\psi}_{\mathsf{RA}}\) is a purification of \((D\circ E)(\psi)\) and \(\ket{\psi}_{\mathsf{AR}}\) is pure, Equation (9.9) implies that there exists a pure state \(\ket{\theta}_{\mathsf{EF}}\) such that
\[1-2\delta\leq\mathrm{F}\Big{(}(D\circ E)(\psi),\psi\Big{)}=\mathrm{F}\Big{(}( \hat{D}\circ\hat{E})(\psi),\psi_{\mathsf{AR}}\otimes\theta_{\mathsf{EF}}\Big{)} \leq\mathrm{F}\Big{(}\mathrm{Tr}_{\mathsf{RE}}\Big{(}\hat{D}\circ\hat{E}(\psi )\Big{)},\rho_{\mathsf{A}}\otimes\theta_{\mathsf{F}}\Big{)}\,.\]
The last inequality follows from monotonicity of the fidelity under partial trace. By Fuchs-van de Graaf we have
\[\mathrm{td}\Big{(}\mathrm{Tr}_{\mathsf{RE}}(\hat{D}\circ\hat{E}(\psi)),\rho_{ \mathsf{A}}\otimes\theta_{\mathsf{F}}\Big{)}\leq\sqrt{2\delta}. \tag{9.10}\]
Next consider the following entropy bounds using the properties given by Proposition 9.11:
\[s=\log\dim(\mathsf{C}) \geq-H_{\min}(\mathsf{C}|\mathsf{RE})_{\hat{E}|\psi\rangle}\] \[=-H_{\min}(\mathsf{AF}|\mathsf{RE})_{\hat{D}\hat{E}|\psi\rangle}\] \[=H_{\max}(\mathsf{AF})_{\hat{D}\hat{E}|\psi\rangle}\] \[\geq H_{\max}^{2\delta^{1/4}}(\mathsf{AF})_{\rho_{\mathsf{A}} \otimes\theta_{\mathsf{F}}}\] \[\geq H_{\max}^{2\delta^{1/4}}(\mathsf{A})_{\rho}.\]
The first line follows from the bounds on the min-entropy. The second line follows from the isometric invariance of the min-entropy. The third line follows from the duality relation between min- and max-entropy. The fourth line follows from the definition of the smoothed max-entropy together with Equation (9.10) and the relationship between the purified distance and trace distance [13, Lemma 3.17]. The last line follows from the fact that the smoothed max-entropy does not decrease when appending a state. Putting everything together we have \(H_{\max}^{2\delta^{1/4}}(\rho)\leq s=K^{\delta}(\rho)\) as desired.
**Upper bound.** We now prove the upper bound, i.e., show that there exists a pair of circuits \((E,D)\) that compresses \(\rho\) to \(s\coloneqq H_{\max}^{\epsilon}(\rho)+8\log\frac{4}{\delta}\) qubits with error \(\delta\), where \(\epsilon=(\delta/40)^{4}\). Let \(\rho_{\mathsf{AR}}\) be an arbitrary purification of \(\rho\) (with purifying register \(\mathsf{R}\)).
We leverage the following _decoupling theorem_, which has been a ubiquitous tool in quantum information theory. Informally, a decoupling theorem states that applying a Haar-random unitary
to the \(\mathsf{A}\) system of a bipartite state \(\rho_{\mathsf{AR}}\) and then tracing out an appropriately large subsystem of \(\mathsf{A}\) will result in the remainder of \(\mathsf{A}\) being _decoupled_ (i.e., in tensor product) from the reference register \(\mathsf{R}\). There have been many decoupling theorems proved over the years (see, e.g., [10, 11, 12, 13]); we use the following one due to Dupuis (together with the standard fact that Clifford unitaries form a 2-design).
**Theorem 9.13** (Decoupling Theorem, Theorem 3.8 of [11]).: _Let \(\rho_{\mathsf{AB}}\) be a density matrix, \(\mathcal{T}:\mathrm{S}(\mathsf{A})\to\mathrm{S}(\mathsf{E})\) be a completely positive superoperator, \(\omega_{\mathsf{EA}^{\prime}}=(\mathcal{T}\otimes\mathrm{id}_{\mathsf{A}^{ \prime}})(\Phi_{\mathsf{AA}^{\prime}})\) (where \(\Phi\) denotes the maximally entangled state), and \(\epsilon\geq 0\). Then_
\[\int\,\|(\mathcal{T}\circ U)(\rho_{\mathsf{AB}})-\omega_{\mathsf{E}}\otimes \rho_{\mathsf{B}}\|_{1}\,\mathrm{d}U\leq 2^{-\frac{1}{2}H_{2}^{\epsilon}( \mathsf{A}^{\prime}|\mathsf{E})_{\omega}-\frac{1}{2}H_{2}^{\epsilon}( \mathsf{A}|\mathsf{B})_{\rho}}+8\epsilon\]
_where the integral is over the uniform measure on Clifford unitary matrices acting on \(\mathsf{A}\), and \(\mathcal{T}\circ U\) denotes the superoperator where the input state is conjugated by \(U\) first, and then \(\mathcal{T}\) is applied._
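Theorem 9.13 is stated abstractly, but the phenomenon is easy to observe numerically. The following sketch (our own illustration, not part of the proof) uses Haar-random unitaries in place of random Cliffords (both are 2-designs, so the same second-moment bound applies), applies them to register \(\mathsf{A}\) of a pure state on \(\mathsf{AR}\), traces out most of \(\mathsf{A}\), and reports how far the remainder is from the decoupled product state.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(d):
    # QR of a complex Ginibre matrix, phase-corrected (Mezzadri's recipe)
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def ptrace_first(rho, d1, d2):
    # trace out the first tensor factor (dimension d1) of a (d1*d2)-dim operator
    return np.einsum('ijik->jk', rho.reshape(d1, d2, d1, d2))

nA, nR, keep = 6, 1, 1                 # qubits in A, in R, and kept in A
dA, dR, dK = 2**nA, 2**nR, 2**keep

# a fixed random pure state on A⊗R (A is the first tensor factor)
v = rng.normal(size=dA * dR) + 1j * rng.normal(size=dA * dR)
v /= np.linalg.norm(v)
rho_AR = np.outer(v, v.conj())
rho_R = ptrace_first(rho_AR, dA, dR)   # reduced state on R

dists = []
for _ in range(20):
    U = np.kron(haar_unitary(dA), np.eye(dR))   # random unitary on A only
    sigma = ptrace_first(U @ rho_AR @ U.conj().T, dA // dK, dK * dR)
    target = np.kron(np.eye(dK) / dK, rho_R)    # decoupled state id/2^keep ⊗ rho_R
    dists.append(0.5 * np.linalg.norm(sigma - target, 'nuc'))

# the average trace distance is small: the kept qubit of A is nearly
# decoupled from the reference R once most of A has been traced out
print(np.mean(dists))
```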
Define the following channel \(\mathcal{T}\) that acts on \(\mathsf{A}\): it measures the first \(n-s\) qubits of \(\mathsf{A}\) in the standard basis to obtain a classical outcome \(y\in\{0,1\}^{n-s}\), traces out \(\mathsf{A}\), and outputs \(y\) in register \(\mathsf{E}\). We now evaluate the state \(\omega_{\mathsf{EA}^{\prime}}=(\mathcal{T}\otimes\mathrm{id}_{\mathsf{A}^{ \prime}})(\Phi_{\mathsf{AA}^{\prime}})\). This can be seen to be
\[\omega_{\mathsf{EA}^{\prime}}=2^{-(n-s)}\sum_{y\in\{0,1\}^{n-s}}|yy\rangle\!\langle yy|_ {\mathsf{EA}^{\prime}_{1}}\otimes 2^{-s}\,\mathrm{id}_{\mathsf{A}^{\prime}_{2}}\]
where \(\mathsf{A}^{\prime}\) is subdivided into two registers \(\mathsf{A}^{\prime}_{1}\mathsf{A}^{\prime}_{2}\) with \(\mathsf{A}^{\prime}_{1}\) isomorphic to \(\mathsf{E}\). The entropy \(H_{2}^{\epsilon}(\mathsf{A}^{\prime}|\mathsf{E})_{\omega}\) can be calculated as follows:
\[H_{2}^{\epsilon}(\mathsf{A}^{\prime}|\mathsf{E})_{\omega}\geq H_{2}(\mathsf{ A}^{\prime}|\mathsf{E})_{\omega}\geq H_{\min}(\mathsf{A}^{\prime}|\mathsf{E})_{ \omega}\.\]
The first inequality follows from the definition of the smoothed 2-entropy. The second inequality follows from Proposition 9.11. Note that \(\omega_{\mathsf{A}^{\prime}\mathsf{E}}\) is a classical state (i.e., it is diagonal in the standard basis); using the operational definition of the min-entropy in this case we see that \(H_{\min}(\mathsf{A}^{\prime}|\mathsf{E})=s\).
Now we bound the entropy \(H_{2}^{\epsilon}(\mathsf{A}|\mathsf{R})_{\rho}\). Since \(\rho_{\mathsf{AR}}\) is pure, Proposition 9.11 gives us
\[-H_{2}^{\epsilon}(\mathsf{A}|\mathsf{R})_{\rho}\leq-H_{\min}^{\epsilon}( \mathsf{A}|\mathsf{R})_{\rho}=H_{\max}^{\epsilon}(\mathsf{A})_{\rho}\.\]
By Theorem 9.13, by averaging there exists a Clifford unitary \(U\) such that
\[\|(\mathcal{T}\circ U)(\rho_{\mathsf{AR}})-\omega_{\mathsf{E}}\otimes\rho_{ \mathsf{R}}\|_{1}\leq 2^{-\frac{1}{2}(s-H_{\max}^{\epsilon}(\mathsf{A})_{\rho})}+8 \epsilon\eqqcolon\nu\.\]
Consider the following two purifications:
1. \(|\Phi\rangle_{\mathsf{EE}^{\prime}}\otimes|\rho\rangle_{\mathsf{AR}}\) where \(|\Phi\rangle_{\mathsf{EE}^{\prime}}\) denotes the maximally entangled state on two isomorphic registers \(\mathsf{E},\mathsf{E}^{\prime}\). This is a purification of the density matrix \(\omega_{\mathsf{E}}\otimes\rho_{\mathsf{R}}\).
2. \(|\theta\rangle_{\mathsf{EE}^{\prime}\mathsf{CRF}}\coloneqq\sum_{y}|y\rangle_{ \mathsf{E}}\otimes(\Pi_{y}U\otimes\mathrm{id}_{\mathsf{R}})\,|\rho\rangle_{ \mathsf{AR}}\otimes|0\rangle_{\mathsf{F}}\) where \(\Pi_{y}\) is the projection that maps \(\mathsf{A}\) into \(\mathsf{E}^{\prime}\mathsf{C}\) with \(\mathsf{C}\) being an \(s\) qubit register and \(\mathsf{E}^{\prime}\) being \(n-s\) qubit register, projecting the first \(n-s\) qubits of \(\mathsf{A}\) into the \(|y\rangle\) state. The register \(\mathsf{F}\) is isomorphic to \(\mathsf{E}\) and is used to ensure that the dimensions of both purifications are the same. This is a purification of \((\mathcal{T}\circ U)(\rho_{\mathsf{AR}})\).
By Fuchs-van de Graaf and Uhlmann's theorem there exists a partial isometry \(V\) mapping registers \(\mathsf{E}^{\prime}\mathsf{A}\) to \(\mathsf{C}\mathsf{E}^{\prime}\mathsf{F}\) such that
\[\operatorname{td}\Bigl{(}V(\Phi_{\mathsf{E}\mathsf{E}^{\prime}}\otimes\rho_{ \mathsf{A}\mathsf{R}})V^{\dagger}\,,\,\theta_{\mathsf{E}\mathsf{E}^{\prime} \mathsf{C}\mathsf{R}\mathsf{F}}\Bigr{)}\leq\sqrt{2\nu}\.\]
Let \(\Xi\) be an arbitrary channel completion of \(V\). We show that \(\Xi\) can be used in place of \(V\) with small error. Let \(P\) denote the projection onto the support of \(V\). Then we have
\[\Bigl{|}\operatorname{Tr}(P(\Phi_{\mathsf{E}\mathsf{E}^{\prime}}\otimes\rho_{ \mathsf{A}\mathsf{R}}))-1\Bigr{|}\leq\operatorname{td}\Bigl{(}P(\Phi_{\mathsf{E }\mathsf{E}^{\prime}}\otimes\rho_{\mathsf{A}\mathsf{R}})P,\theta_{\mathsf{E} \mathsf{E}^{\prime}\mathsf{C}\mathsf{R}\mathsf{F}}\Bigr{)}\leq\operatorname{ td}\Bigl{(}V(\Phi_{\mathsf{E}\mathsf{E}^{\prime}}\otimes\rho_{\mathsf{A} \mathsf{R}})V^{\dagger},\theta_{\mathsf{E}\mathsf{E}^{\prime}\mathsf{C} \mathsf{R}\mathsf{F}}\Bigr{)}\leq\sqrt{2\nu}\,.\]
Let \(\tau\) denote the post-measurement state of \(\Phi_{\mathsf{E}\mathsf{E}^{\prime}}\otimes\rho_{\mathsf{A}\mathsf{R}}\) after measuring the projector \(P\); by the Gentle Measurement Lemma [10] we have \(\operatorname{td}(\tau,\Phi_{\mathsf{E}\mathsf{E}^{\prime}}\otimes\rho_{ \mathsf{A}\mathsf{R}})\leq 4\nu^{1/4}\). Thus
\[\operatorname{td}\Bigl{(}\Xi(\Phi_{\mathsf{E}\mathsf{E}^{\prime} }\otimes\rho_{\mathsf{A}\mathsf{R}})\,,\,\theta_{\mathsf{E}\mathsf{E}^{ \prime}\mathsf{C}\mathsf{R}\mathsf{F}}\Bigr{)} \leq\operatorname{td}\Bigl{(}\Xi(\Phi_{\mathsf{E}\mathsf{E}^{ \prime}}\otimes\rho_{\mathsf{A}\mathsf{R}})\,,\,\Xi(\tau)\Bigr{)}+ \operatorname{td}\Bigl{(}\Xi(\tau),V\tau V^{\dagger}\Bigr{)}\] \[\qquad+\operatorname{td}\Bigl{(}V\tau V^{\dagger},V(\Phi_{ \mathsf{E}\mathsf{E}^{\prime}}\otimes\rho_{\mathsf{A}\mathsf{R}})V^{\dagger} \Bigr{)}+\operatorname{td}\Bigl{(}V(\Phi_{\mathsf{E}\mathsf{E}^{\prime}} \otimes\rho_{\mathsf{A}\mathsf{R}})V^{\dagger}\,,\,\theta_{\mathsf{E}\mathsf{ E}^{\prime}\mathsf{C}\mathsf{R}\mathsf{F}}\Bigr{)}\] \[\leq 4\nu^{1/4}+4\nu^{1/4}+\sqrt{2\nu}\leq 10\nu^{1/4}\,, \tag{9.11}\]
where we used that \(\Xi(\tau)=V\tau V^{\dagger}\) by definition of channel completion.
Similarly, let \(\Lambda\) be an arbitrary channel completion of the partial isometry \(V^{\dagger}\). A similar argument shows that
\[\operatorname{td}\Bigl{(}\Phi_{\mathsf{E}\mathsf{E}^{\prime}}\otimes\rho_{ \mathsf{A}\mathsf{R}}\,,\,\Lambda(\theta_{\mathsf{E}\mathsf{E}^{\prime}\mathsf{ C}\mathsf{R}\mathsf{F}})\Bigr{)}\leq 10\nu^{1/4}\.\]
We now continue with \(\Xi\) instead of \(V\) and \(\Lambda\) instead of \(V^{\dagger}\). Applying the channel that measures the register \(\mathsf{E}\) in the standard basis to both arguments of the left-hand side of Equation (9.11) and using that the trace distance is non-increasing under quantum operations we have
\[\sum_{y}2^{-(n-s)}\operatorname{td}\Bigl{(}\Xi(|y\rangle\!\langle y|_{\mathsf{ E}^{\prime}}\otimes|\rho\rangle\!\langle\rho|_{\mathsf{A}\mathsf{R}})\,,\,2^{n-s} \alpha_{y}\,|y\rangle\!\langle y|_{\mathsf{E}^{\prime}}\otimes|\rho_{U,y} \rangle\!\langle\rho_{U,y}|_{\mathsf{C}\mathsf{R}}\otimes|0\rangle\!\langle 0|_{ \mathsf{F}}\,\Bigr{)}\leq 10\nu^{1/4}\,\]
where \(\alpha_{y}\coloneqq\|\Pi_{y}U\,|\rho\rangle_{\mathsf{A}\mathsf{R}}\,\|^{2}\) and the pure state \(|\rho_{U,y}\rangle_{\mathsf{R}\mathsf{C}}\) is defined so that
\[\alpha_{y}^{-1/2}\,\Pi_{y}U\,|\rho\rangle_{\mathsf{A}\mathsf{R}}=|y\rangle_{ \mathsf{E}^{\prime}}\otimes|\rho_{U,y}\rangle_{\mathsf{C}\mathsf{R}}\.\]
By averaging, there exists a \(y^{*}\in\{0,1\}^{n-s}\) such that
\[\operatorname{td}\Bigl{(}\Xi(|y^{*}\rangle\!\langle y^{*}|_{\mathsf{E}^{\prime }}\otimes|\rho\rangle\!\langle\rho|_{\mathsf{A}\mathsf{R}})\,,\,2^{n-s}\alpha_ {y^{*}}\,|y^{*}\rangle\!\langle y^{*}|_{\mathsf{E}^{\prime}}\otimes|\rho_{U,y^ {*}}\rangle\!\langle\rho_{U,y^{*}}|_{\mathsf{C}\mathsf{R}}\otimes|0\rangle\! \langle 0|_{\mathsf{F}}\,\Bigr{)}\leq 10\nu^{1/4}\.\]
This also implies that \(|2^{n-s}\alpha_{y^{*}}-1|\leq 10\nu^{1/4}\), and thus
\[\operatorname{td}\Bigl{(}\Xi(|y^{*}\rangle\!\langle y^{*}|_{\mathsf{E}^{\prime }}\otimes|\rho\rangle\!\langle\rho|_{\mathsf{A}\mathsf{R}})\,,\,|y^{*}\rangle \!\langle y^{*}|_{\mathsf{E}^{\prime}}\otimes|\rho_{U,y^{*}}\rangle\!\langle \rho_{U,y^{*}}|_{\mathsf{C}\mathsf{R}}\otimes|0\rangle\!\langle 0|_{ \mathsf{F}}\,\Bigr{)}\leq 10\nu^{1/4}. \tag{9.12}\]
Define the following quantum circuits:
1. The circuit \(E\) acts on register \(\mathsf{A}\) and behaves as follows: it appends the state \(|y^{*}\rangle\) in register \(\mathsf{E}^{\prime}\), applies the channel \(\Xi\), and then traces out registers \(\mathsf{E}^{\prime}\mathsf{F}\). In other words, it implements the following channel: \[E(\sigma_{\mathsf{A}})=\operatorname{Tr}_{\mathsf{E}^{\prime}\mathsf{F}}\Bigl{(} \Xi(|y^{*}\rangle\!\langle y^{*}|_{\mathsf{E}^{\prime}}\otimes\sigma_{\mathsf{A} })\Bigr{)}\.\]
2. The circuit \(D\) takes as input register \(\mathsf{C}\) and behaves as follows: it appends the state \(\left|y^{*}\right\rangle\) in register \(\mathsf{E}^{\prime}\) and \(\left|0\right\rangle\) in register \(\mathsf{F}\), applies the channel \(\Lambda\), and then traces out register \(\mathsf{E}^{\prime}\). In other words, it implements the following channel: \[D(\tau_{\mathsf{C}})=\operatorname{Tr}_{\mathsf{E}^{\prime}}\Bigl{(}\Lambda( \left|y^{*}\right\rangle\!\!\left\langle y^{*}\right|_{\mathsf{E}^{\prime}} \otimes\tau_{\mathsf{C}}\otimes\left|0\right\rangle\!\!\left\langle 0\right|_{ \mathsf{F}})\Bigr{)}\.\]
Then Equation (9.12) implies that
\[\operatorname{td}\Bigl{(}E(\left|\rho\right\rangle\!\!\left\langle \rho\right|_{\mathsf{AR}})\,,\,\left|\rho_{U,y^{*}}\right\rangle\!\!\left\langle \rho_{U,y^{*}}\right|_{\mathsf{CR}}\Bigr{)} \leq 10\nu^{1/4}\] \[\operatorname{td}\Bigl{(}\left|\rho\right\rangle\!\!\left\langle \rho\right|_{\mathsf{AR}}\,,\,D(\left|\rho_{U,y^{*}}\right\rangle\!\!\left\langle \rho_{U,y^{*}}\right|_{\mathsf{CR}})\Bigr{)} \leq 10\nu^{1/4}\.\]
Put together this means
\[\operatorname{td}\Bigl{(}(D\circ E)(\left|\rho\right\rangle\!\!\left\langle \rho\right|_{\mathsf{AR}}),\left|\rho\right\rangle\!\!\left\langle\rho\right| _{\mathsf{AR}}\,\Bigr{)}\leq 20\nu^{1/4}\.\]
Although we have defined the circuits \(E,D\) in terms of the purification \(\left|\rho\right\rangle_{\mathsf{AR}}\), observe that Uhlmann's theorem implies that the same circuits work for _all_ purifications of \(\rho_{\mathsf{A}}\). Thus, since the output of channel \(E\) is register \(\mathsf{C}\) which has size \(s\) qubits, this shows that \((E,D)\) compresses \(\rho\) to \(s\) qubits with error \(20\nu^{1/4}\). By our choice of \(s=H_{\max}^{\epsilon}(\mathsf{A})_{\rho}+8\log\frac{4}{\delta}\) and \(\epsilon=(\delta/40)^{4}\), this error is at most \(\delta\).
We note that for tensor product states \(\rho^{\otimes k}\), the smoothed max-entropy converges to the well-known von Neumann entropy:
\[\lim_{\epsilon\to 0}\lim_{k\to\infty}\frac{1}{k}H_{\max}^{\epsilon}(\rho^{ \otimes k})=H(\rho)\.\]
This is an instance of the _quantum asymptotic equipartition property_, which roughly states that the min, max, and Renyi entropies approach the von Neumann entropy in the limit of many copies of a state [10].21 Thus Theorem 9.12 applied to tensor product states \(\rho^{\otimes k}\) recovers Schumacher compression [11], using a proof that does not appeal to typical subspaces and the method of types.
Footnote 21: In fact, one can give stronger quantitative bounds on the convergence to the von Neumann entropy as a function of the number of copies \(k\) and the error \(\epsilon\).
#### 9.2.2 Complexity of near-optimal compression
We now initiate the study of the computational complexity of compressing to the information-theoretic limit, i.e., to the smoothed max-entropy of a state. We begin by defining compression as a computational task.
**Definition 9.14** (Compression as a computational task).: _Let \(\epsilon,\eta:\mathbb{N}\to[0,1]\) be functions. Let \(E=(E_{x})_{x}\) and \(D=(D_{x})_{x}\) be quantum algorithms. We say that \((E,D)\) compresses to the \(\epsilon\)-smoothed max-entropy with error \(\eta\) if for all \(x=(1^{n},C)\) where \(C\) is a quantum circuit that outputs \(n\) qubits, we have that \((E_{x},D_{x})\) compresses \(\rho_{x}\coloneqq C(\left|0\ldots 0\right\rangle\!\!\left\langle 0\ldots 0 \right|)\) to at most \(H_{\max}^{\epsilon(n)}(\rho_{x})+O(\log\frac{1}{\epsilon(n)})\) qubits with error at most \(\eta(n)\)._
This brings us to the main results of the section, which are upper and lower bounds on the complexity of the compression task.
**Theorem 9.15** (Near-optimal compression via Uhlmann transformations).: _Let \(\epsilon(n)\) be a negligible function. If \(\textsc{DistUhlmann}_{1-\epsilon}\in\mathsf{avgUnitaryBQP/poly}\), then for all polynomials \(q(n)\) there exists a pair of non-uniform polynomial-time algorithms \((E,D)\) that compresses to the \(\epsilon\)-smoothed max-entropy with error \(\eta(n)=1/q(n)\)._
Proof.: Let \(x=(1^{n},C)\) where \(C\) is a quantum circuit that outputs \(n\) qubits, and let \(\rho_{x}=C(|0\ldots 0\rangle\!\langle 0\ldots 0|)\). Let \(\epsilon=\epsilon(n)\). The proof of the upper bound of Theorem 9.12 involves the following two states:
\[\left|F\right\rangle\coloneqq\left|\Phi\right\rangle_{\mathsf{EE}^ {\prime}}\otimes\left|\rho\right\rangle_{\mathsf{AR}}\,,\] \[\left|G\right\rangle\coloneqq\sum_{y}\left|y\right\rangle_{ \mathsf{E}}\otimes(\Pi_{y}U\otimes\mathrm{id}_{\mathsf{R}})\left|\rho\right \rangle_{\mathsf{AR}}\otimes\left|0\right\rangle_{\mathsf{F}}\,.\]
(The state \(\left|G\right\rangle\) was called \(\left|\theta\right\rangle\) in Theorem 9.12). Here, \(\left|\Phi\right\rangle_{\mathsf{EE}^{\prime}}\) denotes the maximally entangled state on \(\mathsf{EE}^{\prime}\), \(\left|\rho\right\rangle_{\mathsf{AR}}\) is the pure state resulting from evaluating a purification of the circuit \(C\) on the all zeroes input, the projector \(\Pi_{y}\) denotes projecting the first \(n-s\) qubits of register \(\mathsf{A}\) onto \(\left|y\right\rangle\), and \(U\) is a Clifford unitary. Note that \(\left|F\right\rangle,\left|G\right\rangle\) can be prepared by circuits \(F,G\) whose sizes are polynomial in \(n\) and in the size of \(C\); this uses the fact that Clifford unitaries can be computed by a circuit of size \(O(n^{2})\)[1].
The proof of Theorem 9.12 shows that the reduced density matrices of \(\left|F\right\rangle,\left|G\right\rangle\) on registers \(\mathsf{ER}\) have fidelity at least \(1-2\nu=1-\epsilon^{2}/16\geq 1-\epsilon\). Thus \((1^{m},F,G)\) is a valid \(\textsc{Uhlmann}_{1-\epsilon}\) instance. Since \(\textsc{DistUhlmann}_{1-\epsilon}\in\mathsf{avgUnitaryBQP/poly}\) by assumption, there exists a \(\mathrm{poly}(n,\left|C\right|)\)-size circuit \(L\) mapping registers \(\mathsf{E}^{\prime}\mathsf{A}\) to \(\mathsf{E}^{\prime}\mathsf{CF}\) and a channel completion \(\Xi\) of the canonical Uhlmann transformation \(V\) corresponding to \((\left|F\right\rangle,\left|G\right\rangle)\) such that
\[\mathrm{td}\Big{(}(\mathrm{id}\otimes L)(\left|F\right\rangle\!\langle F|),\ ( \mathrm{id}\otimes\Xi)(\left|F\right\rangle\!\langle F|)\Big{)}\leq\frac{1}{r(n)}\]
where \(r(n)\) is a polynomial such that \(2/r(n)+\epsilon(n)\leq 1/q(n)\), which is possible because \(\epsilon(n)\) is a negligible function. Similarly there exists a \(\mathrm{poly}(n,\left|C\right|)\)-size circuit \(M\) and a channel completion \(\Lambda\) of the Uhlmann transformation \(V^{\dagger}\) corresponding to \((\left|G\right\rangle,\left|F\right\rangle)\) such that
\[\mathrm{td}\Big{(}(\mathrm{id}\otimes M)(\left|G\right\rangle\!\langle G|),\ ( \mathrm{id}\otimes\Lambda)(\left|G\right\rangle\!\langle G|)\Big{)}\leq\frac{1}{r( n)}\,\,.\]
The proof of Theorem 9.12 shows that there exists a pair of circuits \((E_{x}^{*},D_{x}^{*})\) that compresses \(\rho_{x}\) to \(s=H_{\max}^{\epsilon}(\rho_{x})+O(\log\frac{1}{\epsilon})\) qubits with error \(\epsilon\). Notice that the circuits \(E_{x}^{*},D_{x}^{*}\) are \(\mathrm{poly}(n)\)-size circuits that make one call to the channels \(\Xi,\Lambda\), respectively. Now the idea is to "plug in" the circuits \(L,M\) to implement the calls to the channels \(\Xi,\Lambda\), respectively. Let \(E_{x},D_{x}\) denote the resulting \(\mathrm{poly}(n,\left|C\right|)\)-sized circuits. Using \(L,M\) instead of the channels \(\Xi,\Lambda\) incurs at most \(O(1/r(n))\) error, i.e., \(\mathrm{td}\Big{(}(D_{x}\circ E_{x})(\left|\rho\right\rangle\!\langle\rho|_{ \mathsf{AR}}),(D_{x}^{*}\circ E_{x}^{*})(\left|\rho\right\rangle\!\langle \rho|_{\mathsf{AR}})\Big{)}\leq 2/r(n)\). Therefore
\[\mathrm{td}\Big{(}(D_{x}\circ E_{x})(\left|\rho\right\rangle\!\langle\rho|_{ \mathsf{AR}}),\left|\rho\right\rangle\!\langle\rho|_{\mathsf{AR}}\,\Big{)} \leq 2/r(n)+\epsilon(n)\leq 1/q(n)\,.\]
Letting \(E=(E_{x})_{x}\) and \(D=(D_{x})_{x}\) we get the desired pair of non-uniform polynomial-time algorithms that compresses to the \(\epsilon\)-smoothed max entropy with inverse polynomial error.
We now turn to proving a hardness result for near-optimal compression; it cannot be performed in polynomial-time if _stretch pseudorandom state (PRS) generators_ exist. Pseudorandom state generators are a quantum analogue of classical pseudorandom generators (PRGs) and in fact can be constructed from post-quantum pseudorandom generators [10], but there is evidence that the assumption of PRS is less stringent than the assumption of post-quantum PRGs [11]. We first recall the definition of a PRS generator:
**Definition 9.16** (Pseudorandom state generator [10, Definition 3]).: _We say that a (uniform) polynomial-time algorithm \(G=(G_{\lambda})_{\lambda}\) is a pseudorandom state (PRS) generator if the following holds._
1. _(_State generation_). For all_ \(\lambda\)_, on input_ \(k\in\{0,1\}^{\lambda}\) _the algorithm_ \(G\) _outputs_ \[G_{\lambda}(k)=|\psi_{k}\rangle\!\langle\psi_{k}|\] _for some_ \(m(\lambda)\)_-qubit pure state_ \(|\psi_{k}\rangle\)_._
2. _(_Strong pseudorandomness_). For all polynomials_ \(t(\lambda)\) _and non-uniform polynomial-time distinguishers_ \(A=(A_{\lambda})_{\lambda}\) _there exists a negligible function_ \(\epsilon(\lambda)\) _such that for all_ \(\lambda\)_, we have_ \[\left|\Pr_{k\leftarrow\{0,1\}^{\lambda}}\left[A_{\lambda}^{O_{\psi_{k}}}(G_{ \lambda}(k)^{\otimes t(\lambda)})=1\right]-\Pr_{|\vartheta\rangle\leftarrow \operatorname{Haar}_{m(\lambda)}}\left[A_{\lambda}^{O_{\vartheta}}(|\vartheta \rangle\!\langle\vartheta|^{\otimes t(\lambda)})=1\right]\right|\leq\epsilon( \lambda),\] _where_ \(O_{\psi}\coloneqq\operatorname{id}-2\,|\psi\rangle\!\langle\psi|\) _is the reflection oracle for_ \(|\psi\rangle\)_._
_We say that \(G\) is a stretch PRS generator if \(m(\lambda)>\lambda\)._
Here we use the strong pseudorandomness guarantee, which is known to be equivalent to the weaker (standard) pseudorandomness guarantee where the adversary does not get access to the reflection oracle [10, Theorem 4]. We also note that PRS generators do not necessarily provide any _stretch_; there are nontrivial PRS generators where the output length \(m(\lambda)\) can be smaller than the key length \(\lambda\). Furthermore, unlike classical PRGs, it is not known whether PRS can be generically stretched (or shrunk); see [1] for a longer discussion of this.
We now state our hardness result.
**Theorem 9.17** (Hardness of near-optimal compression).: _Let \(\epsilon(n)\) be a function. Let \(m(\lambda)\) be a function satisfying_
\[m(\lambda)>\lambda+O\Big{(}\log\frac{1}{\epsilon(m(\lambda))}\Big{)}+2\]
_for all sufficiently large \(\lambda\). If stretch pseudorandom state generators that output \(m(\lambda)\) qubits exist, then there is no non-uniform polynomial-time algorithm \((E,D)\) that compresses to the \(\epsilon\)-smoothed max-entropy with error \(\frac{1}{2}\)._
Proof.: Let \(G\) be a PRS generator that outputs \(m(\lambda)\)-qubit states for \(m(\lambda)\) satisfying the conditions stated in Theorem 9.17, and fix a sufficiently large \(\lambda\in\mathbb{N}\) for which the condition is satisfied. Define the pure state \(|\varphi_{\lambda}\rangle\) that represents running a unitary purification of the generator \(G\) coherently with the keys \(k\) in superposition:
\[|\varphi_{\lambda}\rangle_{\mathsf{KQA}}\coloneqq 2^{-\lambda/2}\sum_{k\in\{0,1 \}^{\lambda}}\left|k\right\rangle_{\mathsf{K}}\otimes|\tau_{k}\rangle _{\mathsf{Q}}\otimes|\psi_{k}\rangle_{\mathsf{A}}\]
where \(|\psi_{k}\rangle\) denotes the pseudorandom state output by \(G\) on key \(k\), and \(|\tau_{k}\rangle\) denotes the state of the ancilla qubits of \(G\). Let \(\mathsf{R}\coloneqq\mathsf{K}\mathsf{Q}\). The reduced density matrix of \(|\varphi_{\lambda}\rangle\) on register \(\mathsf{A}\) is the following mixed state:
\[\rho_{\lambda}\coloneqq 2^{-\lambda}\sum_{k\in\{0,1\}^{\lambda}}|\psi_{k}\rangle \!\langle\psi_{k}|\enspace.\]
By the second item of Proposition 9.11 we have \(H^{\epsilon}_{\max}(\rho_{\lambda})\leq\lambda\).
Assume for contradiction that there exists a polynomial-time pair of quantum algorithms \((E,D)\) that compresses to the \(\epsilon\)-smoothed max-entropy with error \(\frac{1}{2}\). Let \(x=(1^{n},C)\) where \(C\) outputs the state \(\rho_{\lambda}\) by first synthesizing the state \(|\varphi_{\lambda}\rangle\) and then tracing out register \(\mathsf{R}\). Clearly \(C\) is a \(\operatorname{poly}(\lambda)\)-sized circuit. Therefore \((E_{x},D_{x})\) runs in \(\operatorname{poly}(\lambda)\) time and compresses \(\rho_{\lambda}\) to \(r_{\lambda}\coloneqq H^{\epsilon}_{\max}(\rho_{\lambda})+O\Big{(}\log\frac{1}{ \epsilon(m(\lambda))}\Big{)}\leq\lambda+O\Big{(}\log\frac{1}{\epsilon(m( \lambda))}\Big{)}\) qubits. By assumption we have
\[\operatorname{td}\Bigl{(}(D_{x}\circ E_{x})(|\varphi_{\lambda}\rangle\! \langle\varphi_{\lambda}|),\,|\varphi_{\lambda}\rangle\!\langle\varphi_{ \lambda}|\,\Bigr{)}\leq\frac{1}{2}\enspace.\]
By measuring register \(\mathsf{K}\) and tracing out register \(\mathsf{Q}\) on both arguments (which does not increase the trace distance), we have that
\[\operatorname*{\mathbb{E}}_{k}\operatorname{td}\Bigl{(}(D_{x}\circ E_{x})(| \psi_{k}\rangle\!\langle\psi_{k}|)\,,\,|\psi_{k}\rangle\!\langle\psi_{k}|\, \Bigr{)}\leq\frac{1}{2}\enspace. \tag{9.13}\]
Now consider the following distinguisher \(A=(A_{\lambda})_{\lambda}\): it gets as input \(|\theta\rangle\) where \(|\theta\rangle\) is either \(|\psi_{k}\rangle\) for a randomly sampled \(k\) or \(|\vartheta\rangle\) sampled from the Haar measure; it also gets access to a (controlled) reflection oracle \(O_{\theta}=\operatorname{id}-2\,|\theta\rangle\!\langle\theta|\). It then
1. applies the channel \(D_{x}\circ E_{x}\) to input \(|\theta\rangle\);
2. measures \(\{|\theta\rangle\!\langle\theta|\,,\operatorname{id}-|\theta\rangle\!\langle \theta|\}\) using the reflection oracle, and accept if measurement accepts.
From Equation (9.13) we have that, since the measurement step with respect to \(O_{\psi_{k}}\) accepts on \(|\psi_{k}\rangle\) with probability \(1\), the algorithm \(A_{\lambda}\) with oracle access to \(O_{\psi_{k}}\) accepts \(|\psi_{k}\rangle\) with probability at least \(\frac{1}{2}\) on average over the choice of key \(k\) and the randomness of \(A_{\lambda}\).
Now consider what happens when we run \(A_{\lambda}\) with \(|\vartheta\rangle\) as input where \(|\vartheta\rangle\) is sampled from the Haar measure, as well as with the reflection oracle \(O_{\vartheta}\). Since \(A\) runs in \(\operatorname{poly}(\lambda)\) time, by the pseudorandomness property of \(G\) the probability that \(A_{\lambda}\) accepts \(|\vartheta\rangle\) is at least \(\frac{1}{2}-\operatorname{negl}(\lambda)\).
On the other hand we show that, since a Haar-random state cannot be compressed, \(A_{\lambda}\) cannot accept with high probability. Let \(R\coloneqq 2^{r_{\lambda}}\) denote the dimensionality of the output of \(E_{x}\), and let \(M=2^{m(\lambda)}\) denote the dimensionality of register \(\mathsf{A}\). For brevity we abbreviate \(E_{x},D_{x}\) as \(E,D\) respectively. The success probability of \(A_{\lambda}\) given a Haar-random state \(|\vartheta\rangle\) and the reflection oracle \(O_{\vartheta}\) can be calculated as follows. First, observe that
\[\int_{\vartheta}\operatorname{Tr}\Bigl{(}(D\circ E)(|\vartheta\rangle\! \langle\vartheta|)\ |\vartheta\rangle\!\langle\vartheta|\,\Bigr{)}\operatorname{d}\! \vartheta=\int_{\vartheta}\operatorname{Tr}\Bigl{(}E(|\vartheta\rangle\! \langle\vartheta|)\,D^{*}(|\vartheta\rangle\!\langle\vartheta|)\Bigr{)} \operatorname{d}\!\vartheta\]
where \(D^{*}\) denotes the _adjoint channel_ corresponding to \(D\); it is the unique superoperator mapping register \(\mathsf{A}\) to \(\mathsf{B}\) satisfying \(\operatorname{Tr}(XD(Y))=\operatorname{Tr}(D^{*}(X)Y)\) for all operators \(X,Y\). Viewing \(E\otimes D^{*}\) as a superoperator mapping registers \(\mathsf{A}_{1}\mathsf{A}_{2}\) to \(\mathsf{B}_{1}\mathsf{B}_{2}\) and letting \(S_{\mathsf{B}_{1}\mathsf{B}_{2}}\) denote the swap operator on registers \(\mathsf{B}_{1}\mathsf{B}_{2}\) the above is equal to
\[\operatorname{Tr}\Bigl{(}S_{\mathsf{B}_{1}\mathsf{B}_{2}}(E\otimes D^{*})(\int_ {\vartheta}|\vartheta\rangle\!\langle\vartheta|^{\otimes 2}\operatorname{d}\! \vartheta)\Bigr{)}\enspace.\]
Now, it is well-known [11] that the integral over two copies of an \(m(\lambda)\)-qubit Haar-random state is proportional to the projector \(\frac{1}{2}(\mathrm{id}+S)\) onto the _symmetric subspace_ of \((\mathbb{C}^{M})^{\otimes 2}\). The dimension of the projector is \(M(M+1)/2\). Thus the above is equal to
\[\frac{1}{M(M+1)}\mathrm{Tr}\Big{(}S_{\mathsf{B}_{1}\mathsf{B}_{2} }\left(E\otimes D^{*}\right)(\mathrm{id}_{\mathsf{A}_{1}\mathsf{A}_{2}}+S_{ \mathsf{A}_{1}\mathsf{A}_{2}})\Big{)}\] \[\qquad\leq\frac{1}{M(M+1)}\mathrm{Tr}\Big{(}(E\otimes D^{*})( \mathrm{id}_{\mathsf{A}_{1}\mathsf{A}_{2}}+S_{\mathsf{A}_{1}\mathsf{A}_{2}}) \Big{)}\] \[\qquad=\frac{1}{M(M+1)}\Big{[}\mathrm{Tr}\Big{(}(E\otimes D^{*})( \mathrm{id}_{\mathsf{A}_{1}\mathsf{A}_{2}})\Big{)}+\mathrm{Tr}\Big{(}(E\otimes D ^{*})(S_{\mathsf{A}_{1}\mathsf{A}_{2}})\Big{)}\Big{]}\] \[\qquad=\frac{1}{M(M+1)}\Big{[}\mathrm{Tr}\Big{(}\mathrm{id}_{ \mathsf{A}_{1}}\otimes D^{*}(\mathrm{id}_{\mathsf{A}_{2}})\Big{)}+\mathrm{Tr }\Big{(}(\mathrm{id}_{\mathsf{A}_{1}}\otimes D^{*})(S_{\mathsf{A}_{1}\mathsf{ A}_{2}})\Big{)}\Big{]}\] \[\qquad=\frac{1}{M(M+1)}\Big{[}\mathrm{Tr}\Big{(}\mathrm{id}_{ \mathsf{A}_{1}}\Big{)}\,\mathrm{Tr}\Big{(}D^{*}(\mathrm{id}_{\mathsf{A}_{2}}) \Big{)}+\mathrm{Tr}\Big{(}D^{*}(\mathrm{id}_{\mathsf{A}_{2}})\Big{)}\Big{]}\] \[\qquad=\frac{1}{M(M+1)}\Big{[}RM+R\Big{]}\] \[\qquad=R/M=2^{-(m(\lambda)-\lambda-O(\log 1/\epsilon))}\leq\frac{1 }{4}\.\]
The second line follows from the fact that \(|\mathrm{Tr}(A^{\dagger}B)|\leq\|A\|_{\infty}\|B\|_{1}\) for all operators \(A,B\) and \(\|S\|_{\infty}\leq 1\). The fourth line follows from the fact that \(E\) is a trace-preserving superoperator. The sixth line follows from the fact that since \(D\) is a channel that takes as input \(\mathsf{B}\), \(\mathrm{Tr}(D^{*}(\mathrm{id}_{\mathsf{A}_{2}}))=\mathrm{Tr}(\mathrm{id}_{ \mathsf{B}})=R\). The last line follows from our assumption about the stretch of the PRS. This shows that the acceptance probability of \(A_{\lambda}\) given a Haar-random state and access to its reflection oracle is at most \(\frac{1}{4}\), which is less than \(\frac{1}{2}-\mathrm{negl}(\lambda)\) for sufficiently large \(\lambda\).
Thus we have arrived at a contradiction. There is no polynomial-time pair of algorithms that compresses to the \(\epsilon\)-smoothed max entropy.
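As an aside, the symmetric-subspace fact used in the computation above, \(\int|\vartheta\rangle\!\langle\vartheta|^{\otimes 2}\,\mathrm{d}\vartheta=(\mathrm{id}+S)/(M(M+1))\), can itself be checked numerically; below is a minimal Monte Carlo sketch (our own illustration, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 4, 20000            # local dimension and number of Haar samples

acc = np.zeros((M * M, M * M), dtype=complex)
for _ in range(N):
    v = rng.normal(size=M) + 1j * rng.normal(size=M)
    v /= np.linalg.norm(v)  # a normalized complex Gaussian vector is Haar-random
    w = np.kron(v, v)
    acc += np.outer(w, w.conj())
acc /= N

S = np.zeros((M * M, M * M))  # swap operator on C^M ⊗ C^M
for i in range(M):
    for j in range(M):
        S[i * M + j, j * M + i] = 1

target = (np.eye(M * M) + S) / (M * (M + 1))
print(np.linalg.norm(acc - target))  # Monte Carlo error, shrinking like 1/sqrt(N)
```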
We compare our hardness result with the upper bound proved in Theorem 9.15. As an example, let \(\epsilon(n)=2^{-\log^{2}(n)}\), which is a negligible function. Then roughly, if \(\textsc{DistUhlmann}_{1-\epsilon}\) is easy, then compressing to \(H^{\epsilon}_{\max}(\rho)+O(\log 1/\epsilon)=H^{\epsilon}_{\max}(\rho)+O(\log^{2 }(n))\) is easy. On the other hand, the lower bound shows that if PRS generators with output length \(m(\lambda)\geq\lambda+\Omega(\log^{2}(\lambda))\) exist, then compressing to \(H^{\epsilon}_{\max}(\rho)+O(\log^{2}(n))\) is not easy.
We remark that it should be possible to base the lower bound on seemingly weaker assumptions, such as one-way state generators [10]. Ideally, however, we would be able to base the hardness on an assumption such as the existence of quantum commitments or the hardness of the Uhlmann transformation problem, which would give a true converse to the upper bound of Theorem 9.15. The main issue is _verifiability_: with pseudorandom states or one-way state generators (with pure-state outputs), one can check whether the state has been compressed and decompressed correctly; it is not clear whether this is possible with quantum commitments. We leave it as an open problem to prove matching upper and lower complexity bounds on compression.
**Open Problem 22**.: Is the complexity of optimal compression equivalent to the complexity of the Uhlmann Transformation Problem?
### Complexity of classical Shannon tasks?
Given the results in this section, the reader may naturally wonder about the complexity of _classical_ Shannon tasks. For example, one can consider the problems of decoding noisy classical channels and optimally compressing classical information. Both of these tasks appear to be computationally hard if and only if one-way functions exist, which provides some evidence that the hardness of the Uhlmann Transformation Problem could be regarded as the natural quantum analogue of the existence of one-way functions.
We sketch this equivalence for these two Shannon theory problems. The classical analogue of the Decodable Channel Problem is as follows. A decodable classical channel \(N\) is a classical circuit that takes as input two strings \((x,r)\) where both \(x\) and \(r\) are sampled from the uniform distribution, and outputs a string \(y\) such that with high probability over \((x,r)\), the original message \(x\) is information-theoretically recoverable. The task is to recover the original message \(x\) given the output \(y\) of the channel.
Impagliazzo and Levin [11] showed that if all one-way functions can be inverted in polynomial time with high probability, then there exists a _distributional inverter_ that, given an output \(y\) of the channel \(N\), finds a uniformly random preimage \((x,r)\). The decodability of \(N\) ensures that the computed \(x\) is the original message with high probability. Conversely, if one-way functions exist then pseudorandom generators exist [10], and the channel \(N\) that applies a pseudorandom generator to its input \(x\) is decodable but cannot be decoded in polynomial time.
We now turn to compression. Interestingly, the complexity of compression - and other Shannon theory tasks - was already discussed in Yao's seminal 1982 paper introducing the theory of pseudorandomness [13]. In modern day terms, Yao argued that the existence of pseudorandom generators (which follows from the existence of one-way functions [10]) gives rise to efficiently sampleable distributions \(X\) that cannot be efficiently compressed to their Shannon entropy \(H(X)\). Conversely, a recent work of [12] shows that if one-way functions do not exist, every efficiently sampleable distribution \(X\) can be compressed to a prefix-free encoding of at most \(H(X)+2\) bits.
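For intuition about the classical target (our own illustration, not from the works cited above), when the symbol probabilities are explicitly known, classical Huffman coding already compresses to within one bit of the Shannon entropy per symbol in expectation; the cryptographic obstructions above concern efficiently sampleable distributions whose probabilities need not be efficiently computable.

```python
import heapq
import itertools
import math

# toy source with known symbol probabilities
p = {'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}

tie = itertools.count()  # unique tiebreaker so the heap never compares dicts
heap = [(q, next(tie), {s: ''}) for s, q in p.items()]
heapq.heapify(heap)
while len(heap) > 1:
    q0, _, t0 = heapq.heappop(heap)   # merge the two least likely subtrees
    q1, _, t1 = heapq.heappop(heap)
    merged = {s: '0' + c for s, c in t0.items()}
    merged.update({s: '1' + c for s, c in t1.items()})
    heapq.heappush(heap, (q0 + q1, next(tie), merged))
code = heap[0][2]

H = -sum(q * math.log2(q) for q in p.values())
avg = sum(p[s] * len(code[s]) for s in p)
print(code)
print(H, avg)   # H(X) <= avg < H(X) + 1; here both are 1.75 (dyadic source)
```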
These two examples motivate asking the broader question: what is the complexity of other fundamental classical Shannon theory tasks, such as obtaining capacity-achieving encoders and decoders for a given classical channel (which is provided in the form of a randomized circuit), or performing distributed source coding? Are the complexities of all of these tasks equivalent to the existence of one-way functions? To our knowledge there has not been a systematic study of the complexity of classical Shannon theory tasks, aside from a few isolated discussions [13, 14].
**Open Problem 23**.: Can the complexity of _classical_ Shannon theory tasks be characterized?
### Open problems
We end this section with some additional open questions. First, the complexity result about compression is stated in terms of the non-uniform complexity class avgUnitaryBQP/poly. The main reason for this is that the upper bound (i.e., if DistUhlmann is easy, then compression is also easy) involves hardcoding some information that depends on the instance of the problem.
**Open Problem 24**.: Can the assumptions in the upper bound result for compression (Theorem 9.15) be improved to be about uniform unitary complexity classes (namely, avgUnitaryBQP)?
This may require finding a new proof approach for the upper bound.
In this section we considered two basic quantum Shannon theory tasks. There are many more that have been studied information-theoretically (including a whole family tree of them [1]), and one can ask about the complexity of each of these tasks.
**Open Problem 25**.: What is the complexity of other quantum Shannon theory tasks, such as achieving capacity over a noisy channel, entanglement distillation, or quantum state redistribution?
We remark that the problem of proving complexity lower bounds on entanglement distillation appears to be conceptually challenging as it requires reasoning about LOCC protocols.
## 10 Applications to Computational Tasks in High-Energy Physics
In this section, we discuss connections between the Uhlmann Transformation Problem and computational tasks motivated by questions in high-energy physics. We first discuss the _black hole radiation decoding task_, which was introduced by Harlow and Hayden [13]. We argue that the complexity of this task is characterized by the complexity of the distributional Uhlmann Transformation Problem. Then, we discuss the _interference detection task_ as formalized by Aaronson, Atia, and Susskind [1]: this is the problem of detecting the interference between two orthogonal states \(\ket{\psi}\) and \(\ket{\varphi}\), i.e., distinguishing the equal superpositions \(\frac{1}{\sqrt{2}}(\ket{\psi}+\ket{\varphi})\) and \(\frac{1}{\sqrt{2}}(\ket{\psi}-\ket{\varphi})\). One of the motivations for considering this problem is the task of physically distinguishing between superpositions of spacetime geometries in the AdS/CFT correspondence [1]. We show that solving the interference detection problem between two orthogonal statePSPACE states reduces to \(\textsc{SuccinctUhlmann}_{1}\) in polynomial time.
### Black hole radiation decoding
The black hole radiation decoding task is motivated by the following thought experiment of Almheiri, Marolf, Polchinski, and Sully [1]: imagine that Alice creates a maximally entangled pair of qubits \(|\text{EPR}\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)\) and throws one half into a newly-formed black hole. After a long time, Alice could potentially decode the Hawking radiation of the black hole and recover the qubit she threw in. However, Alice could then jump into the black hole and find another qubit that is supposed to be maximally entangled with the qubit that was not thrown in -- witnessing a violation of the monogamy of entanglement. These conclusions were derived assuming supposedly uncontroversial principles of quantum field theory and general relativity.
Harlow and Hayden proposed a resolution to this paradox via a computational complexity argument [13]: it may not be _feasible_ for Alice to decode the black hole's Hawking radiation in any reasonable amount of time -- by the time she decodes the qubit that she threw in, the black hole may have evaporated anyway! They argued that, assuming \(\mathsf{SZK}\not\subseteq\mathsf{BQP}\) -- note that these are classes of _decision_ problems -- a formulation of the black hole radiation decoding task cannot be done in polynomial time.
What about the converse? That is, does a traditional complexity class statement such as \(\mathsf{SZK}\subseteq\mathsf{BQP}\) imply that the black hole radiation decoding task is solvable in polynomial time? As pointed out by Aaronson [1], it is not clear that the black hole radiation decoding task is easy even if we assume \(\mathsf{P}=\mathsf{PSPACE}\). As with all the other "fully quantum" tasks considered in this paper, it appears difficult to characterize the complexity of the black hole decoding problem in terms of traditional notions from complexity theory.
Brakerski recently gave a characterization of the hardness of the black hole radiation decoding task in terms of the existence of a cryptographic primitive known as _quantum EFI pairs_[1], which are in turn equivalent to quantum commitments (as well as many other quantum cryptographic primitives, see [1] for an in-depth discussion). Given the discussion in Section 8 that connects quantum commitments with the Uhlmann Transformation Problem, one would then expect an equivalence between black hole radiation decoding and the Uhlmann Transformation Problem.
We spell out this equivalence by showing that the complexity of the black hole radiation decoding task is the same as the complexity of the Decodable Channel Problem, which we showed to be equivalent to the (distributional) Uhlmann Transformation Problem in Section 9.1. We believe that the direct reduction to and from the Decodable Channel Problem is natural, and may be useful to those who are more comfortable with quantum Shannon theory.
We first describe a formulation of the black hole radiation decoding task, which is an adaptation of the formulations of [14, 1].
**Definition 10.1** (Decodable black hole states).: _Let \(P\) denote a unitary quantum circuit mapping registers \(\mathsf{AG}\) to \(\mathsf{HR}\) where \(\mathsf{A}\) is a single qubit register. Consider the state_
\[\left|\psi\right\rangle_{\mathsf{BHR}}\coloneqq\left(\mathrm{id}_{\mathsf{B} }\otimes P_{\mathsf{AG}\rightarrow\mathsf{HR}}\right)\left|\mathrm{EPR} \right\rangle_{\mathsf{BA}}\otimes\left|0\right\rangle_{\mathsf{G}}\.\]
_We say that \(\left|\psi\right\rangle\) is an \(\epsilon\)-decodable black hole state if there exists a quantum circuit \(D\) that takes as input register \(\mathsf{R}\) and outputs a qubit labelled \(\mathsf{A}\), such that letting \(\rho_{\mathsf{HBA}}\) denote the state \((\mathrm{id}\otimes D)(\left|\psi\right\rangle\!\!\left\langle\psi\right|)\), we have_
\[\mathrm{F}\Big{(}\left|\mathrm{EPR}\right\rangle\!\!\left\langle\mathrm{EPR} \right|_{\mathsf{AB}}\,,\,\rho_{\mathsf{AB}}\Big{)}\geq 1-\epsilon\]
_i.e., measuring the registers \(\mathsf{BA}\) in the Bell basis yields the state \(\left|\mathrm{EPR}\right\rangle\coloneqq\frac{1}{\sqrt{2}}(\left|00\right\rangle+\left|11\right\rangle)\) with probability at least \(1-\epsilon\). We say that the circuit \(D\) is an \(\epsilon\)-decoder for the state \(\left|\psi\right\rangle\)._
The circuit \(P\) generating the decodable black hole state can be thought of as a unitary that encodes the laws of black hole evolution: given a qubit in register \(\mathsf{A}\) and a fixed number of ancilla qubits, it forms a black hole in register \(\mathsf{H}\) as well as the outgoing Hawking radiation in register \(\mathsf{R}\). The decodability condition implies that, by acting on the radiation only, it is information-theoretically possible to decode the original qubit that was input. See Figure 4 for an illustration of black hole radiation decoding.

Figure 4: Decoding black hole radiation. (a) Qubit \(A\), maximally entangled with qubit \(B\), falls into an early black hole \(H_{\mathrm{early}}\), which is entangled with some early Hawking radiation \(R_{\mathrm{early}}\). (b) After evaporating much of its mass, the old black hole \(H_{\mathrm{old}}\) is entangled with the radiation \(R_{\mathrm{old}}\), which is entangled with the qubit \(B\). (c) By performing a computation on the radiation only, the partner qubit \(A\) can be decoded.

We formalize black hole radiation decoding as a computational task.
**Definition 10.2** (Black hole radiation decoding task).: _Let \(\epsilon(n),\delta(n)\) be functions. We say that a quantum algorithm \(D=(D_{x})_{x}\) solves the \(\epsilon\)-black hole radiation decoding task with error \(\delta\) if for all \(x=(1^{n},P)\), where \(P\) is a unitary quantum circuit acting on \(n\) qubits that gives rise to an \(\epsilon(n)\)-decodable black hole state \(|\psi\rangle\), the circuit \(D_{x}\) is a \(\delta(n)\)-decoder for \(|\psi\rangle\)._
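To build intuition for these definitions, the following toy Julia sketch (ours, not from the source; the choice of \(P\) is deliberately trivial) constructs a \(0\)-decodable black hole state: \(P\) swaps the infalling qubit \(\mathsf{A}\) with the ancilla \(\mathsf{G}\), so the radiation register \(\mathsf{R}\) carries the infalling qubit and the identity map on \(\mathsf{R}\) is a perfect decoder.

```julia
# Toy instance of Definition 10.1: P swaps the infalling qubit A with the
# ancilla G, so the radiation register R carries A and the identity decodes it.
using LinearAlgebra

ket(b) = b == 0 ? ComplexF64[1, 0] : ComplexF64[0, 1]
EPR = (kron(ket(0), ket(0)) + kron(ket(1), ket(1))) / sqrt(2)    # on registers B, A

SWAP = ComplexF64[1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1]           # P: AG → HR
ψ = kron(Matrix{ComplexF64}(I, 2, 2), SWAP) * kron(EPR, ket(0))  # registers B, H, R

# Ideal decoder output: |EPR⟩ on (B, R) with the black hole register H in |0⟩;
# ψ is a product state across H, so the fidelity reduces to a pure overlap.
target = (kron(kron(ket(0), ket(0)), ket(0)) +
          kron(kron(ket(1), ket(0)), ket(1))) / sqrt(2)          # orders B, H, R
println(abs2(target' * ψ))   # 1.0: the state is 0-decodable
```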
We now prove that the task of black hole radiation decoding in Definition 10.2 is equivalent to the Decodable Channel Problem in Definition 9.5, which results in the following theorem.
**Theorem 10.3**.: \(\textsc{DistUhlmann}_{1-\epsilon}\in\mathsf{avgUnitaryBQP}\) _for all negligible functions \(\epsilon(n)\) if and only if the \(\epsilon(n)\)-black hole radiation decoding task is solvable in polynomial time for all inverse polynomials \(\delta(n)\)._
Proof.: We prove this via reduction to the Decodable Channel Problem described in Section 9.1. First, observe (from the proof) that the statement in Theorem 9.6 still holds when considering instances of the \(\epsilon\)-Decodable Channel Problem of the form \(y=(1^{1},1^{r},C)\), i.e., where we restrict \(C\) to single-qubit inputs only. Define the following bijection \(\varphi\): for every \(x=(1^{n},P)\), where \(P:\mathsf{AG}\rightarrow\mathsf{HR}\) is a unitary quantum circuit acting on \(n\) qubits and where \(r\) is the size of the register \(\mathsf{R}\), define \(\varphi(x)=(1^{1},1^{r},\tilde{P})\), where \(\tilde{P}\) is the quantum circuit that first appends \(n-1\) qubits initialized to \(|0\rangle\) to its input and then runs \(P\).
It is clear that \(x\) corresponds to an \(\epsilon\)-decodable black hole state if and only if \(\varphi(x)\) corresponds to an \(\epsilon\)-decodable channel: the channel can be viewed as taking the input qubit, dumping it into the black hole, and then outputting the radiation emitted by the black hole. Decoding the EPR pair from the channel associated with \(\tilde{P}\) exactly corresponds to decoding the EPR pair from the black hole associated with \(P\). Therefore, the claim follows from Theorem 9.6, which shows that the complexity of the Decodable Channel Problem is equivalent to the complexity of DistUhlmann.
_Remark 10.4_.: We remark that Brakerski proved a stronger theorem by relating the black hole radiation task to EFI [1]. For simplicity, we focus on the task of decoding the EPR pair with fidelity \(1-\epsilon\), for a small \(\epsilon\), whereas Brakerski [1] used amplification to boost weak decoders that succeed with fidelity much smaller than \(1\).
### Interference detection
In this section, we consider the computational task of _interference detection_ between orthogonal \(\mathsf{PSPACE}\) states. Aaronson, Atia, and Susskind [1] recently proved the following folklore observation, sometimes called the _swapping-distinguishing equivalence_: if one can detect the interference between two orthogonal states \(|\psi\rangle\) and \(|\varphi\rangle\), i.e., whether the states are in an equal superposition
\[\frac{|\psi\rangle+|\varphi\rangle}{\sqrt{2}}\qquad\text{ and }\qquad\frac{|\psi \rangle-|\varphi\rangle}{\sqrt{2}},\]
then one can also _swap_ between \(|\psi\rangle\) and \(|\varphi\rangle\), and vice versa. We first review the swapping-distinguishing equivalence shown by Aaronson, Atia, and Susskind [1].
**Theorem 10.5** ([1], Theorem 1).: _Suppose that \(|\psi\rangle\) and \(|\varphi\rangle\) are \(n\)-qubit orthogonal states. Then, the following two statements are equivalent:_
* _There exists a "swapping" unitary_ \(U\) _such that_ \[U\ket{\psi}=\ket{\varphi}\qquad\text{ and }\qquad U\ket{\varphi}=\ket{\psi}\,.\]
* _There exists an "interference detector" unitary_ \(V\) _that perfectly distinguishes between_ \[\frac{\ket{\psi}+\ket{\varphi}}{\sqrt{2}}\qquad\text{ and }\qquad\frac{\ket{\psi}-\ket{\varphi}}{\sqrt{2}}\,.\] _Specifically, by "distinguish" we mean that_ \(V\) _takes one of the two states as input and stores its guess for which state it received as the first qubit of its output._
_Moreover, constructing \(V\) from \(U\) (and vice versa) only incurs a constant multiplicative factor in terms of circuit complexity. The conversion uses the following circuits:_
[Circuit diagrams: the conversion between the swapping unitary \(U\) and the interference detector \(V\).]
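To illustrate the swapping-to-distinguishing direction numerically, here is a minimal Julia sketch (ours; the two-qubit states and the swapping unitary are illustrative). It uses the standard Hadamard-test construction \(V=(H\otimes\mathrm{id})\,cU\,(H\otimes\mathrm{id})\): since \((\ket{\psi}\pm\ket{\varphi})/\sqrt{2}\) are \(\pm 1\) eigenvectors of the swapping unitary \(U\), measuring the ancilla reveals the sign perfectly.

```julia
# Hadamard-test check of Theorem 10.5: V = (H ⊗ id)·cU·(H ⊗ id) sends the
# ancilla to |0⟩ on the +1 eigenvector (|ψ⟩+|φ⟩)/√2 of the swapping unitary U,
# and to |1⟩ on the −1 eigenvector (|ψ⟩−|φ⟩)/√2.
using LinearAlgebra

ψ = ComplexF64[1, 0, 0, 0]                            # |00⟩ (illustrative)
φ = ComplexF64[0, 0, 0, 1]                            # |11⟩, orthogonal to ψ
U = ComplexF64[0 0 0 1; 0 1 0 0; 0 0 1 0; 1 0 0 0]    # swaps |00⟩ ↔ |11⟩

d = length(ψ)
Id = Matrix{ComplexF64}(I, d, d)
H = ComplexF64[1 1; 1 -1] / sqrt(2)
cU = [Id zeros(ComplexF64, d, d); zeros(ComplexF64, d, d) U]   # ancilla first
V = kron(H, Id) * cU * kron(H, Id)

p1(x) = sum(abs2, (V * kron(ComplexF64[1, 0], x))[d+1:end])    # Pr[ancilla = 1]
println(p1((ψ + φ) / sqrt(2)))   # 0.0 → "+" superposition
println(p1((ψ - φ) / sqrt(2)))   # 1.0 → "−" superposition
```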
One motivation for this equivalence comes from the AdS/CFT correspondence [1]: the orthogonal states \(\ket{\psi}\) and \(\ket{\varphi}\) could represent distinct spacetime geometries that were produced by a complex physical process (such as black hole formation) after a long amount of time. What is the complexity of detecting whether a given state is a _superposition_ of the two spacetime geometries? Theorem 10.5 shows that this is the same complexity as mapping from one spacetime to another.
Because such geometries arise from complex physical processes, the corresponding states can themselves have high complexity. Thus in our definition of InterferenceDetection we incorporate the high complexity of the states \(\ket{C},\ket{D}\) by allowing them to be generated by a polynomial space computation: an instance consists of succinct descriptions of unitary circuits preparing two orthogonal statePSPACE states \(\ket{C}\) and \(\ket{D}\), and the task is to distinguish \(\frac{\ket{C}+\ket{D}}{\sqrt{2}}\) from \(\frac{\ket{C}-\ket{D}}{\sqrt{2}}\). One could also consider the interference detection problem for other classes of states; we leave this for future work.
We now upper bound the complexity of solving InterferenceDetection (for statePSPACE states). We show that InterferenceDetection polynomial-time reduces to DistSuccinctUhlmann\({}_{1}\), in a sense made precise in the following theorem.
**Theorem 10.8**.: _There exists a polynomial-time query algorithm \(A\) with access to a DistSuccinctUhlmann\({}_{1}\) oracle that solves InterferenceDetection._
Proof.: Consider an instance \(x=(1^{n},\hat{C},\hat{D})\), where \(\hat{C},\hat{D}\) are succinct descriptions of unitary quantum circuits \(C,D\) such that \(\ket{C},\ket{D}\) are orthogonal \(n\)-qubit states. First, we show how to construct circuits \(C^{\prime},D^{\prime}\) to obtain a swapping unitary \(U\) with
\[U\ket{C}=\ket{D}\qquad\text{ and }\qquad U\ket{D}=\ket{C}\]
with a single call to the oracle for DistSuccinctUhlmann\({}_{1}\). Next, we show how to modify \(C^{\prime}\) and \(D^{\prime}\) in order to obtain a controlled-\(U\) unitary instead, which suffices for interference detection according to the swapping-distinguishing equivalence from Theorem 10.5.
Let \(\mathsf{A}\) be a single-qubit register initialized to \(\ket{0}\), and let \(\mathsf{B}\) be an \(n\)-qubit register initialized to \(\ket{0^{n}}\). We first construct circuits \(\bar{C},\bar{D}\) acting on \(n+1\) qubits as follows:
[Circuit diagrams defining \(\bar{C}\) and \(\bar{D}\), and the controlled circuits \(\tilde{C}\) and \(\tilde{D}\) built from them.]
We now consider the pure states generated by \(\tilde{C}\) and \(\tilde{D}\) when applied to \(n+2\) qubits initialized to \(|0\rangle\). Let \(\mathsf{A}^{\prime}\) and \(\mathsf{B}^{\prime}\) be single-qubit registers initialized to \(|0\rangle\). First, by applying \(\tilde{C}\) to \(|0^{n+2}\rangle\), we obtain the following state
\[|\tilde{C}\rangle_{\mathsf{A}\mathsf{A}^{\prime}\mathsf{B}\mathsf{B}^{\prime}} =\frac{1}{\sqrt{2}}(|0\rangle_{\mathsf{A}}\otimes|\mathrm{EPR} \rangle_{\mathsf{A}^{\prime}\mathsf{B}^{\prime}}\otimes|C\rangle_{\mathsf{B}} +|1\rangle_{\mathsf{A}}\otimes|\mathrm{EPR}\rangle_{\mathsf{A}^{\prime}\mathsf{ B}^{\prime}}\otimes|D\rangle_{\mathsf{B}}).\]
Moreover, by applying \(\tilde{D}\) to \(|0^{n+2}\rangle\), we obtain the state
\[|\tilde{D}\rangle_{\mathsf{A}\mathsf{A}^{\prime}\mathsf{B}\mathsf{B}^{\prime}} =\left(\sum_{c\in\{0,1\}}|c\rangle\!\langle c|_{\mathsf{B}^{\prime}} \otimes\bar{D}^{c}_{\mathsf{A}\mathsf{B}}\right)\,\left(\sum_{b\in\{0,1\}}|b \rangle\!\langle b|_{\mathsf{B}^{\prime}}\otimes(\bar{C}^{\dagger})^{b}_{ \mathsf{A}\mathsf{B}}\right)|\tilde{C}\rangle_{\mathsf{A}\mathsf{A}^{\prime} \mathsf{B}\mathsf{B}^{\prime}}\.\]
Let us now define density operators
\[\rho_{\mathsf{A}\mathsf{A}^{\prime}\mathsf{B}\mathsf{B}^{\prime}}=|\tilde{C}\rangle\!\langle\tilde{C}|_{\mathsf{A}\mathsf{A}^{\prime}\mathsf{B}\mathsf{B}^{\prime}}\qquad\text{ and }\qquad\sigma_{\mathsf{A}\mathsf{A}^{\prime}\mathsf{B}\mathsf{B}^{\prime}}=|\tilde{D}\rangle\!\langle\tilde{D}|_{\mathsf{A}\mathsf{A}^{\prime}\mathsf{B}\mathsf{B}^{\prime}}\.\]
Because \(|C\rangle\) and \(|D\rangle\) are orthogonal, the reduced states \(\rho_{\mathsf{A}\mathsf{A}^{\prime}}\) and \(\sigma_{\mathsf{A}\mathsf{A}^{\prime}}\) satisfy
\[\rho_{\mathsf{A}\mathsf{A}^{\prime}}=\sigma_{\mathsf{A}\mathsf{A}^{\prime}} =\frac{\mathrm{id}_{\mathsf{A}\mathsf{A}^{\prime}}}{4}\,.\]
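For completeness, the first of these identities can be verified directly: writing \(|\psi_{0}\rangle=|C\rangle\) and \(|\psi_{1}\rangle=|D\rangle\), so that \(\langle\psi_{b}|\psi_{a}\rangle=\delta_{ab}\) by orthogonality, we have

\[\rho_{\mathsf{A}\mathsf{A}^{\prime}}=\operatorname{Tr}_{\mathsf{B}\mathsf{B}^{\prime}}|\tilde{C}\rangle\!\langle\tilde{C}|=\frac{1}{2}\sum_{a,b\in\{0,1\}}\langle\psi_{b}|\psi_{a}\rangle\,|a\rangle\!\langle b|_{\mathsf{A}}\otimes\operatorname{Tr}_{\mathsf{B}^{\prime}}\Big{[}|\mathrm{EPR}\rangle\!\langle\mathrm{EPR}|_{\mathsf{A}^{\prime}\mathsf{B}^{\prime}}\Big{]}=\frac{\mathrm{id}_{\mathsf{A}}}{2}\otimes\frac{\mathrm{id}_{\mathsf{A}^{\prime}}}{2}\,,\]

and the computation for \(\sigma_{\mathsf{A}\mathsf{A}^{\prime}}\) is analogous, since the controlled operations defining \(|\tilde{D}\rangle\) map the four mutually orthogonal branches of \(|\tilde{C}\rangle\) to mutually orthogonal states.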
By Uhlmann's theorem there exists a unitary \(\tilde{U}\) acting on registers \(\mathsf{B}^{\prime}\mathsf{B}\) such that \(|\tilde{D}\rangle=(\mathrm{id}_{\mathsf{A}\mathsf{A}^{\prime}}\otimes\tilde{ U}_{\mathsf{B}^{\prime}\mathsf{B}})\,|\tilde{C}\rangle\). In particular, the unitary \(\tilde{U}\) satisfies
\[\tilde{U}\,|0\rangle_{\mathsf{B}^{\prime}}\,|C\rangle_{\mathsf{B}} =|0\rangle_{\mathsf{B}^{\prime}}\,|C\rangle_{\mathsf{B}}\] \[\tilde{U}\,|0\rangle_{\mathsf{B}^{\prime}}\,|D\rangle_{\mathsf{B}} =|0\rangle_{\mathsf{B}^{\prime}}\,|D\rangle_{\mathsf{B}}\] \[\tilde{U}\,|1\rangle_{\mathsf{B}^{\prime}}\,|C\rangle_{\mathsf{B}} =|1\rangle_{\mathsf{B}^{\prime}}\,|D\rangle_{\mathsf{B}}\] \[\tilde{U}\,|1\rangle_{\mathsf{B}^{\prime}}\,|D\rangle_{\mathsf{B}} =|1\rangle_{\mathsf{B}^{\prime}}\,|C\rangle_{\mathsf{B}}\.\]
Hence, \(\tilde{U}\) acts as the controlled-\(U\) operator of the form
\[\tilde{U}=\sum_{c\in\{0,1\}}|c\rangle\!\langle c|_{\mathsf{B}^{\prime}} \otimes U^{c}_{\mathsf{B}}\,,\]
where \(U\) is the swapping unitary from before. Therefore, using the distinguishing circuit from Theorem 10.5, we can perfectly distinguish between \(\frac{|C\rangle+|D\rangle}{\sqrt{2}}\) and \(\frac{|C\rangle-|D\rangle}{\sqrt{2}}\) with a single call to the oracle for DistSuccinctUhlmann\({}_{1}\) with respect to the circuits \(\tilde{C}\) and \(\tilde{D}\), applied to the state \(|+\rangle\otimes\frac{|C\rangle\pm|D\rangle}{\sqrt{2}}\).
### Open problems
We conclude with some open problems related to the physics-inspired applications considered in this section.
**Open Problem 26**.: Does the complexity of any of the information processing tasks discussed in this paper (e.g., compression or state merging) have any ramifications for holography or models of quantum gravity? May [14] has recently suggested that information tasks performable in the bulk are also performable on the boundary of the AdS/CFT correspondence. Does this correspondence also preserve the complexity of the task?
**Open Problem 27**.: What is the complexity of InterferenceDetection? Can we argue that it is hard for some unitary complexity class? For example, can we use the equivalence in Theorem 10.5 to argue that DistSuccinctUhlmann\({}_{1}\) reduces to InterferenceDetection, thereby rendering the two tasks equivalent?
**Open Problem 28**.: What is the complexity of InterferenceDetection with states drawn from some other state complexity class (e.g., a state complexity analogue of QMA or SZK)?
|
2310.07938 | Discrete and continuous mathematical models of sharp-fronted collective
cell migration and invasion | Mathematical models describing the spatial spreading and invasion of
populations of biological cells are often developed in a continuum modelling
framework using reaction-diffusion equations. While continuum models based on
linear diffusion are routinely employed and known to capture key experimental
observations, linear diffusion fails to predict well-defined sharp fronts that
are often observed experimentally. This observation has motivated the use of
nonlinear degenerate diffusion, however these nonlinear models and the
associated parameters lack a clear biological motivation and interpretation.
Here we take a different approach by developing a stochastic discrete
lattice-based model incorporating biologically-inspired mechanisms and then
deriving the reaction-diffusion continuum limit. Inspired by experimental
observations, agents in the simulation deposit extracellular material, that we
call a substrate, locally onto the lattice, and the motility of agents is taken
to be proportional to the substrate density. Discrete simulations that mimic a
two-dimensional circular barrier assay illustrate how the discrete model
supports both smooth and sharp-fronted density profiles depending on the rate
of substrate deposition. Coarse-graining the discrete model leads to a novel
partial differential equation (PDE) model whose solution accurately
approximates averaged data from the discrete model. The new discrete model and
PDE approximation provides a simple, biologically motivated framework for
modelling the spreading, growth and invasion of cell populations with
well-defined sharp fronts | Matthew J Simpson, Keeley M Murphy, Scott W McCue, Pascal R Buenzli | 2023-10-11T23:20:38Z | http://arxiv.org/abs/2310.07938v3 | # Discrete and continuous mathematical models of sharp-fronted collective cell migration and invasion
###### Abstract
Mathematical models describing the spatial spreading and invasion of populations of biological cells are often developed in a continuum modelling framework using reaction-diffusion equations. While continuum models based on linear diffusion are routinely employed and known to capture key experimental observations, linear diffusion fails to predict well-defined sharp fronts that are often observed experimentally. This observation has motivated the use of nonlinear degenerate diffusion, however these nonlinear models and the associated parameters lack a clear biological motivation and interpretation. Here we take a different approach by developing a stochastic discrete lattice-based model incorporating biologically-inspired mechanisms and then deriving the reaction-diffusion continuum limit. Inspired by experimental observations, agents in the simulation deposit extracellular
material, that we call a _substrate_, locally onto the lattice, and the motility of agents is taken to be proportional to the substrate density. Discrete simulations that mimic a two-dimensional circular barrier assay illustrate how the discrete model supports both smooth and sharp-fronted density profiles depending on the rate of substrate deposition. Coarse-graining the discrete model leads to a novel partial differential equation (PDE) model whose solution accurately approximates averaged data from the discrete model. The new discrete model and PDE approximation provide a simple, biologically motivated framework for modelling the spreading, growth and invasion of cell populations with well-defined sharp fronts. Open source Julia code to replicate all results in this work is available on GitHub.
## Introduction
Continuum partial differential equation (PDE) models have been used for over 40 years to model and interpret the spatial spreading, growth and invasion of populations of cells [1, 2, 3]. PDE models have been used to improve our understanding of various biological processes including wound healing [4, 5, 6, 7, 8], embryonic development [9, 10, 11], tissue growth [12, 13, 14] as well as disease progression, such as cancer [15, 16, 17]. For a homogeneous population of cells with density \(u\geq 0\), a typical PDE model can be written as
\[\frac{\partial u}{\partial t}=-\nabla\cdot\mathbf{\mathcal{J}}+\mathcal{S}, \tag{1}\]
where \(\mathbf{\mathcal{J}}\) is the flux of cells and \(\mathcal{S}\) is a source term that can be used to model proliferation and/or cell death. Different PDE models are specified by choosing different forms of \(\mathbf{\mathcal{J}}\) and \(\mathcal{S}\). Within the context of modelling homogenous cell populations, the most common choice for the flux term is based on the assumption that cells move randomly [18], giving rise to linear diffusion with a flux term given by Fick's law, \(\mathbf{\mathcal{J}}=-D\nabla u\), where \(D>0\) is the cell diffusivity [3, 7, 8]. A standard choice for the source term is to specify a logistic term to represent carrying capacity-limited proliferation, \(\mathcal{S}=\lambda u(1-u/K)\) where \(\lambda>0\) is the proliferation rate and \(K>0\) is the carrying capacity density [3, 7, 8]. These choices of \(\mathbf{\mathcal{J}}\) and \(\mathcal{S}\) mean that Equation 1 is a multi-dimensional generalisation of the well-known Fisher-Kolmogorov model [19, 20, 21, 22], which has been successfully used to interpret a number of applications including _in vivo_ tumour progression [16], _in vivo_ embryonic development [9], _in vitro_ wound healing [7, 8] and tissue growth [13, 14].
Figure 1(a) shows experimental images of a simple two-dimensional _in vitro_ cell migration experiment, called a _barrier assay_[23, 24]. These experiments are initiated by uniformly placing approximately 30,000 fibroblast cells as a monolayer inside a circular barrier of radius 3 mm. In these experiments cells are pre-treated with an anti-mitotic drug that prevents proliferation [25], and there is no observed cell death [23]. Accordingly, we model this experiment by setting \(\mathcal{S}=0\) in Equation 1. The experiment proceeds by lifting the barrier at \(t=0\) and observing how the population of cells spreads over time, with the right-most image in Figure 1(a) showing the extent to which the population has spread after \(t=3\) days. Two key features of this experiment are immediately clear from
these images: (i) the population of cells spreads symmetrically with time; and (ii) the experimental image at \(t=3\) days shows a clear well-defined sharp front at the leading edge of the population as it spreads. Images in Figure 1(b) show a numerical solution of Equation 1 with \(\mathcal{S}=0\) and the standard choice of linear diffusion, \(\boldsymbol{\mathcal{J}}=-D\nabla u\), for a typical choice of \(D\)[23]. Consistent with the experiments in Figure 1(a) we see that the simulated population spreads symmetrically, but plotting the density along the line \(y=0\), in the right-most panel of Figure 1(b) shows that we have \(u>0\) for all \(x\) which is inconsistent with the well-defined sharp fronts at the leading edge in the experimental images. This property of having \(u>0\) for all \(x\) persists for all \(t>0\) which is a well-known deficiency of linear diffusion [26]. Figure 1(c) shows a numerical solution of Equation 1 with \(\mathcal{S}=0\) and a nonlinear degenerate diffusive flux, \(\boldsymbol{\mathcal{J}}=-Du\nabla u\), for a typical choice of \(D\) in this model [7, 8, 14, 27, 28]. Consistent with the experiments we see that the simulated population spreads symmetrically, and plotting the solution along the line \(y=0\) in the right-most panel of Figure 1(c) shows that we have a well-defined sharp front; \(u>0\) for \(|x|<X(t)\), and \(u=0\) for \(|x|\geq X(t)\), where \(X(t)\) is the front location at time \(t\). Full details of our numerical method for solving Equation 1 are given in the Appendix.
The qualitative comparison between the solution of the linear diffusion equation, the nonlinear degenerate diffusion equation and the experimental images in Figure 1 has been made with \(\mathcal{S}=0\) so that the continuum PDE model is consistent with the experiments where proliferation is suppressed. However, the difference between spreading cell fronts having sharp or smooth fronts is also relevant for models with \(\mathcal{S}\neq 0\)[3, 7, 8]. Throughout the first part of this work we set \(\mathcal{S}=0\), noting that the difference between smooth and sharp-fronted solutions of Equation 1 is, in general, determined by the choice of \(\boldsymbol{\mathcal{J}}\) rather than \(\mathcal{S}\). We will come back to this point in Section 2.5 and provide evidence to support this claim.
Figure 1: (a) Experimental images showing a population of non-proliferative fibroblast cells spreading in a two-dimensional barrier assay. The image at \(t=0\) shows the population just as the barrier is lifted, and the image at \(t=3\) days shows the population of migrating cells spreading symmetrically with a sharp front. Images reproduced from Simpson et al. [23] with permission. (b)–(c) Numerical solutions of Equation 1 with \(\boldsymbol{\mathcal{J}}=-D\nabla u\) and \(\boldsymbol{\mathcal{J}}=-Du\nabla u\), respectively. Both numerical solutions have \(\mathcal{S}=0\) and \(u(x,y,0)=1\) inside a disc of radius \(3\) mm, and \(u(x,y,0)=0\) elsewhere to match the initial distribution of cells in the experiments shown in (a). The numerical domain is a square of side length \(10\), and Equation 1 is discretised on a \(201\times 201\) uniform mesh. The numerical solution of Equation 1 at \(t=3\) days is given in the middle panel of (b)–(c), and the details of the density profile are shown in the right-most panels where \(u(x,0,t)\) is plotted at \(t=0\) (red) and \(t=3\) days (blue). Details of the leading edge of the profiles are highlighted in the green rectangle near \(x=4\) illustrating that the density profile in (b) has \(u>0\) at all locations, whereas the density profile in (c) has compact support, which is consistent with the experimental images in (a). All values of \(x\) and \(y\) in (b)–(c) measure location in terms of mm to be consistent with the experimental images in (a). The numerical solution in (b) corresponds to a typical value of \(D=D_{1}=2100\)\(\mu\)m\({}^{2}\)/hour for linear diffusion [23], and in (c) we set \(D=D_{2}=4200\)\(\mu\)m\({}^{2}\)/hour to satisfy \(\int_{0}^{1}D_{1}\,\mathrm{d}u=\int_{0}^{1}D_{2}u\,\mathrm{d}u\), to ensure that both the linear and nonlinear diffusion models lead to a similar amount of spreading over the experimental timescale [29].
Many continuum models of homogeneous cell populations adopt a simple linear diffusive flux, \(\mathbf{\mathcal{J}}=-D\nabla u\), and this approximation is often made with the implicit or explicit acknowledgment that solutions of this PDE model fail to predict a well-defined front as observed experimentally. In contrast, working with the degenerate nonlinear diffusion model by setting \(\mathbf{\mathcal{J}}=-Du\nabla u\) can lead to a better match with experimental data with well-defined sharp fronts [7, 8, 14, 28, 30, 31]. With this choice of flux and \(\mathcal{S}=0\), Equation 1 is also known as the _porous medium equation_[32, 33, 34, 35, 36]. Working with the degenerate nonlinear diffusion model is complicated by the fact that this model is one member of a family of models obtained by setting \(\mathbf{\mathcal{J}}=-Du^{n}\nabla u\), where \(n>0\) is some constant. Solutions of Equation 1 with this more general choice of nonlinear flux also lead to symmetric spreading with a well-defined sharp front, like we saw in Figure 1(c), for all values of \(n>0\). These sharp-fronted solutions with compact support are similar to moving boundary problems in the sense that there is a well-defined front location with zero density, and the position of this front evolves with time, which we can interpret as a model of the position of the cell front in an experiment [32, 33, 34, 35, 36]. The question of how to choose the value of the exponent \(n\) remains unclear. For example, Sherratt and Murray [2] studied an _in vivo_ wound healing experiment with \(n=0,1\) and \(4\), and showed that all three choices of exponent could be used to make their reaction-diffusion PDE model match their experimental data. Later, Jin et al. [28] studied a series of _in vitro_ scratch assays by setting \(n=0,0.5,1,2,3\) and \(4\) and concluded that \(n=1\) led to the best match to their experimental data without attempting to provide a biological motivation or interpretation of this choice of \(n\). Similarly, McCue et al. [37] studied a series of two-dimensional _in vitro_ wound closure experiments and also found that \(n=1\) provided the best match to their experimental data. Other continuum modelling studies have simply worked with \(n=1\) without explicitly considering other choices of the exponent [12, 14, 38]. In summary, a key challenge in using continuum PDE models with this generalised nonlinear degenerate diffusivity is that the exponent \(n\) often acts as a fitting parameter [30], and lacks a clear biological interpretation. In addition to using these kinds of degenerate diffusion models to interpret biological observations, there is also a great deal of inherent mathematical interest in these models and their solutions [27, 39].
An alternative to working with a continuum model to understand the collective spatial spreading, growth and invasion of cell populations is to work with a discrete modelling
framework that considers the stochastic motion of individual cells [17, 18]. Many kinds of discrete models of cell populations have been implemented to interpret experimental observations ranging from simple lattice-based models [40, 41] to more complicated lattice-free [42, 43] and vertex-based models [44, 45]. An attractive feature of working with discrete models is that experimental images and time-lapse movies showing individual cellular-level behaviours can be translated into a set of individual _rules_ that can be implemented with a stochastic framework to provide a high fidelity simulation-based model capturing the key biological processes of interest [46, 47]. Discrete models can be implemented to visualise snapshots of the spreading population in a way that is directly analogous to performing and imaging an experiment to reveal the positions of individual cells within the population. Another advantage of working with discrete stochastic models is that the discrete mechanism can be coarse-grained into an approximate continuum model, which means that we can encode different individual-level _rules_ into a simulation-based model, and then convert these rules into approximate continuum PDE models, and the solution of these coarse-grained models can be compared with averaged discrete data obtained by repeated simulation [40, 48]. As described previously, there has been a great deal of effort devoted to understanding how different forms of continuum PDE models predict smooth or sharp-fronted solution profiles, however far less attention has been devoted to understanding what individual-level mechanisms lead to smooth or sharp fronts in discrete models of cell migration.
All mathematical models discussed so far are simple in the sense that they involve a single PDE or a single population of agents in a discrete framework that can be used to describe the spreading of a homogeneous population of cells. Of course, there are many other more complicated models of collective cell spreading that can lead to sharp-fronted solution profiles. These models include coupled reaction-diffusion models of multiple interacting cell populations [49] as well as discrete models describing multiple populations [50]. Other families of mathematical models include models that describe cell migration that involves biased movement along chemical gradients, such as chemotaxis or haptotaxis [51, 52]. Here we will focus on more fundamental mathematical models of simple homogeneous populations composed of one cell type only, and we do not explicitly consider any biased migration mechanism, such as chemotaxis or haptotaxis.
In this work we propose a simple, biologically-motivated, lattice-based discrete model of collective cell migration and proliferation. The discrete model explicitly models how individual cells in a two-dimensional _in vitro_ experiment produce a biological substrate (e.g. biological macromolecules, extracellular material) that is deposited onto the surface of the tissue culture plate [53, 54, 12]. Substrate is produced at a particular rate, and deposited locally by individuals within the simulated population. Individual agents within the stochastic model undergo an unbiased random walk at a rate that is proportional to local substrate concentration, and crowding effects are incorporated by ensuring that each lattice site can be occupied by no more than a single agent. As we will demonstrate, this simple biologically-inspired mechanism allows us to simulate cell spreading experiments similar to those in Figure 1(a). Through simulation, we first show that altering the rate of substrate deposition visually impacts the sharpness of the agent density front. A deeper mathematical understanding of these observations is obtained by coarse-graining the discrete mechanism to give a novel PDE model whose solution describes the average behaviour of the stochastic model. One way to interpret this new PDE model is that it naturally describes a linear diffusion mechanism at spatial locations well-behind the leading edge of the population, as well as a more complicated transport mechanisms at the leading edge of the spreading population that gives rise to sharp-fronted solution profiles consistent with experimental observations. We show that averaged data from the discrete model can be very well approximated by numerical solutions of the new continuum-limit PDE. In particular, both the continuum and discrete models predict the formation of sharp-fronted density profiles. A careful examination of the new continuum limit PDE model allows us to interpret how the different terms in the model lead to the formation of sharp, sometimes non-monotone fronts. We conclude this study by incorporating a minimal model of cell proliferation into the discrete model, coarse-graining the proliferative discrete mechanism and comparing averaged data from the discrete model with proliferation to numerical solutions of the new PDE model.
## Results and Discussion
### Stochastic model and simulations
To account for crowding effects, we implement a lattice-based exclusion process where each lattice site can be either vacant or occupied by, at most, a single agent [48, 40]. From this point forward we will use the word _agent_ to refer to individuals within the simulated population and the word _cell_ to refer to individuals within an experimental population of biological cells. For simplicity we implement the model on a two-dimensional square lattice with lattice spacing \(\Delta\). Each site is indexed by \((i,j)\), where \(i,j\in\mathbb{Z}_{+}\), and each site has position \((x,y)=(i\Delta,j\Delta)\). The lattice spacing is taken to be the size of a typical cell diameter [40, 23]. In any single realisation of the stochastic model the occupancy of each site \((i,j)\) is a binary variable \(U_{i,j}\), with \(U_{i,j}=1\) if the site is occupied, and \(U_{i,j}=0\) if the site is vacant. Each site is also associated with a substrate concentration, which is a continuous function of time, \(\bar{S}_{i,j}(t)\in[0,\bar{S}_{\max}]\), where \(\bar{S}_{\max}\) is the maximum amount of substrate that can be accommodated at each lattice site. For simplicity we write \(S_{i,j}(t)\in[0,1]\), where \(S_{i,j}(t)=\bar{S}_{i,j}(t)/\bar{S}_{\max}\) is the non-dimensional substrate density.
A random sequential update method is used to advance the stochastic simulations through time. If there are \(N\) agents on the lattice, during the next time step of duration \(\tau\), \(N\) agents are selected independently, at random, one at a time with replacement, and given the opportunity to move. If the chosen agent is at site \((i,j)\), the agent will attempt to move with probability \(PS_{i,j}\), where \(P\in[0,1]\) is the probability that an isolated agent will attempt to move during a time interval of duration \(\tau\). The target site for all potential motility events is selected at random from one of the four nearest neighbour lattice sites, and the potential motility event will be successful if the target site is vacant [40, 23]. Once \(N\) potential movement events have been attempted, the density of substrate is updated by assuming that agents deposit substrate at a rate of \(\Gamma\) per time step, so that the amount of substrate at each occupied lattice site is increased by an amount \(\Gamma\), taking care to ensure that the maximum non-dimensional substrate density at each site is one. In addition to specifying initial conditions for the distribution of agents and the initial density of substrate, we must specify values of two parameters to implement the
stochastic simulation algorithm: \(P\in[0,1]\) which determines the motility of agents, and \(\Gamma>0\) which determines the rate of substrate deposition. With this framework a typical cell diffusivity is given by \(D=P\Delta^{2}/(4\tau)\)[40].
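To make the update rule concrete, the following is a minimal Julia sketch of one time step of the random sequential update (our own simplification, not the GitHub implementation; the function name `step!` and the boundary handling are ours):

```julia
# One time step of the discrete model: N agents are sampled with replacement;
# an agent at (i,j) attempts a move with probability P*S[i,j] to a randomly
# chosen nearest-neighbour site, succeeding only if the target site is vacant;
# occupied sites then deposit substrate, capped at the nondimensional maximum 1.
function step!(U::Matrix{Bool}, S::Matrix{Float64}, P::Float64, Γ::Float64)
    nx, ny = size(U)
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))
    agents = findall(U)                      # positions of the N agents
    N = length(agents)
    for _ in 1:N
        k = rand(1:N)                        # select an agent, with replacement
        i, j = Tuple(agents[k])
        rand() < P * S[i, j] || continue     # motility ∝ local substrate density
        di, dj = nbrs[rand(1:4)]
        ti, tj = i + di, j + dj
        # exclusion: at most one agent per site; moves off-lattice are aborted
        if 1 <= ti <= nx && 1 <= tj <= ny && !U[ti, tj]
            U[i, j] = false
            U[ti, tj] = true
            agents[k] = CartesianIndex(ti, tj)
        end
    end
    for idx in findall(U)                    # deposition on occupied sites only
        S[idx] = min(S[idx] + Γ, 1.0)
    end
    return nothing
end
```

With \(\Delta=20\)\(\mu\)m and \(\tau=24/500\) hours, calling `step!` 500 times with \(P=1\) advances a simulation such as those in Figure 2 by one day.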
To illustrate how the discrete model can be used to model the barrier assay in Figure 1(a) we perform a suite of simulations summarised in Figure 2. The radius of the barrier assay is 3 mm, and a typical cell diameter is approximately 20 \(\mu\)m [23, 40]. This means we can simulate the initial placement of cells within the barrier by taking a circular region of radius \(3000/20=150\) lattice sites to represent the disc enclosed by the barrier. The experiments in Figure 1(a) are initiated by placing approximately 30,000 cells uniformly, as a monolayer, within the circular barrier. In the simulations we have \(\lfloor\pi 150^{2}\rceil=70,686\) lattice sites within the simulated barrier, and we initialise the simulations by randomly populating each lattice site within the barrier with probability \(30,000/70,686\approx 0.42\). With the discrete model we can simulate a population of cells with a typical cell diffusivity of \(D=2100\)\(\mu\)m\({}^{2}\)/hour [23] by choosing \(P=1\) and \(\tau=24/500\) hours. This means that simulating 500 time steps of duration \(\tau=24/500\) hours is equivalent to one day in the experiment. Results in Figure 2(a) show a preliminary simulation with this initial condition where we set \(S_{i,j}(0)=1\) at all lattice sites at the beginning of the simulation. This first simulation corresponds to the simplest possible case where all lattice sites have the maximum amount of substrate present at the beginning of the experiment, which means that the simulation does not depend upon the rate of deposition, \(\Gamma\). In Figure 2(a) we see that the population of agents spreads symmetrically, and after 3 days we have a symmetric distribution of individuals without any clear front at the leading edge of the population. In fact, by \(t=3\) days we see that some agents within the simulated population become completely isolated, having spread far away from the bulk of the population as a result of chance alone. This situation is inconsistent with the experimental images in Figure 1(a) where we see a clear front at the leading edge of the population and a complete absence of individuals that become separated from the bulk population. Open source Julia code to replicate these stochastic simulations is available on GitHub.
Figure 2: Discrete simulations illustrating the role of the substrate deposition rate \(\Gamma\). All simulations are performed on a \(500\times 500\) square lattice where the lattice spacing corresponds to 20 \(\mu\)m making the diameter of the simulated population distribution in the left column equal to the diameter of the barrier assay at \(t=0\) in Figure 1(a). Simulations are initiated by randomly occupying sites within a circular region of radius 150 lattice sites so that the expected number of agents at the beginning of the simulation is 30,000. Simulations are performed by setting \(P=1\) and \(\tau=24/500\) hours, with values of \(\Gamma\) as indicated. Results in (a) correspond to initialising \(S_{i,j}(0)=1\) at all lattice sites, whereas results in (b)–(d) correspond to initialising \(S_{i,j}(0)=0\) at all lattice sites. Each day of simulation corresponds to 500 time steps in the discrete model, and snapshots are reported in terms of the \((i,j)\) index of the lattice, which can be re-scaled to give the dimensional coordinates noting that \((x,y)=(i\Delta,j\Delta)\).
Additional simulation results in Figure 2(b)-(d) involve setting up the same initial distribution of agents as in Figure 2(a) except that we set \(S_{i,j}(0)=0\) at all lattice sites at the beginning of the simulation. These simulations in Figure 2(b)-(d) are more biologically realistic than the simulations in Figure 2(a) because in the real experiment cells are placed into the barriers at the beginning of the experiment without having had any chance to deposit significant amounts of substrate onto the surface of the tissue culture plate before the barrier is lifted. Simulations in Figure 2(b)-(d) are shown for different substrate deposition rates, \(\Gamma\). If the substrate is deposited sufficiently fast, as in Figure 2(b), the distribution of individual agents at \(t=3\) days is visually indistinguishable from the case in Figure 2(a) where we do not observe a clear front in the spreading population. As \(\Gamma\) is reduced, results in Figure 2(c)-(d) show that the populations spread symmetrically with time, and now we see an increasingly well-defined sharp front as the population of agents spreads. The snapshot of individuals in Figure 2(d) shows that after \(t=3\) days there are very few individual agents that are isolated away from the bulk of the population, and this distribution is consistent with the experimental observations in Figure 1(a).
### Continuum limit partial differential equation model
We now provide greater mathematical understanding and interpretation of the discrete simulation results in Figure 2 by coarse-graining the discrete mechanism to give an approximate continuum limit description in terms of a PDE model in the form of Equation 1. We begin by considering the average occupancy of site \((i,j)\), where the average is constructed by considering a suite of \(M\) identically-prepared simulations to give
\[\langle U_{i,j}\rangle(t)=\frac{1}{M}\sum_{m=1}^{M}U_{i,j}^{m}(t), \tag{2}\]
where \(U_{i,j}^{m}(t)\) is the binary occupancy of lattice site \((i,j)\) at time \(t\) in the \(m\)th identically-prepared realisation. With this definition we treat \(\langle U_{i,j}\rangle(t)\in[0,1]\) as a smooth function of time, and for notational convenience we will simply refer to this quantity as \(\langle U_{i,j}\rangle\). Under these conditions we can write down an approximate conservation statement describing the change in average occupancy of site \((i,j)\) during the time interval from time \(t\) to time
\(t+\tau\)[40, 48],
\[\delta\langle U_{i,j}\rangle=\frac{P}{4}\left[\underbrace{(1-\langle U_{i,j}\rangle)\sum\left[S_{i,j}\langle U_{i,j}\rangle\right]}_{\text{migration onto site }(i,j)}-\underbrace{S_{i,j}\langle U_{i,j}\rangle\left(4-\sum\langle U_{i,j}\rangle\right)}_{\text{migration out of site }(i,j)}\right], \tag{3}\]
\[\delta S_{i,j}=\begin{cases}\Gamma\langle U_{i,j}\rangle&\text{for}\quad S_{i,j}<1,\\ 0&\text{for}\quad S_{i,j}=1,\end{cases} \tag{4}\]
where, for notational convenience, we write
\[\sum \left[S_{i,j}\langle U_{i,j}\rangle\right]=S_{i+1,j}\langle U_{i +1,j}\rangle+S_{i-1,j}\langle U_{i-1,j}\rangle+S_{i,j+1}\langle U_{i,j+1} \rangle+S_{i,j-1}\langle U_{i,j-1}\rangle, \tag{5}\] \[\sum \langle U_{i,j}\rangle=\langle U_{i+1,j}\rangle+\langle U_{i-1,j }\rangle+\langle U_{i,j+1}\rangle+\langle U_{i,j-1}\rangle. \tag{6}\]
The first term on the right of Equation 3 approximately describes the increase in expected occupancy of site \((i,j)\) owing to motility events that would place agents on that site. Similarly, the second term on the right of Equation 3 approximately describes the decrease in expected occupancy of site \((i,j)\) owing to motility events associated with agents leaving site \((i,j)\). We describe these terms as _approximate_ as we have invoked the mean field assumption that the average occupancies of lattice sites are independent [48]. While this assumption is clearly questionable for any particular realisation of a discrete model, when we consider the expected behaviour of an ensemble of identically-prepared simulations this approximation turns out to be quite accurate [40, 48]. Note that setting \(S_{i,j}=1\) at all lattice sites means that this conservation statement simplifies to previous discrete conservation statements that have neglected the role of the substrate [40].
To proceed to the continuum limit we identify \(\langle U_{i,j}\rangle\) and \(S_{i,j}\) with smooth functions \(u(x,y,t)\) and \(s(x,y,t)\), respectively. Throughout this work we associate uppercase variables with the stochastic model and lowercase variables with the continuum limit model. We expand all terms in Equation 3 in a Taylor series about \((x,y)=(i\Delta,j\Delta)\), neglecting terms of \(\mathcal{O}(\Delta^{3})\) and smaller. Dividing the resulting expressions by \(\tau\), we take limits as \(\Delta\to 0\) and
\(\tau\to 0\), with the ratio \(\Delta^{2}/\tau\) held constant [18] to give
\[\frac{\partial u}{\partial t}= D\nabla\cdot\left[s\nabla u+u\left(1-u\right)\nabla s\right], \tag{7}\] \[\frac{\partial s}{\partial t}= \begin{cases}\gamma u&\text{for}\quad s<1,\\ 0&\text{for}\quad s=1,\end{cases} \tag{8}\]
where
\[D=\lim_{\begin{subarray}{c}\Delta\to 0\\ \tau\to 0\end{subarray}}\left(\frac{P\Delta^{2}}{4\tau}\right),\quad\gamma= \lim_{\begin{subarray}{c}\Delta\to 0\\ \tau\to 0\end{subarray}}\left(\frac{\Gamma}{\tau}\right), \tag{9}\]
which relates parameters in the discrete model: \(\Delta,\tau,P\) and \(\Gamma\), to parameters in the continuum model: \(D\) and \(\gamma\).
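As a quick numerical check of Equation 9 (a sketch, using the parameter values quoted earlier):

```julia
# Recover the quoted cell diffusivity from the discrete parameters (Equation 9).
Δ = 20.0                # lattice spacing [μm], a typical cell diameter
τ = 24 / 500            # duration of one time step [hours]
P = 1.0                 # motility probability per time step
D = P * Δ^2 / (4τ)      # ≈ 2083 μm²/hour, close to the quoted D = 2100 μm²/hour
γ = 1e-2 / τ            # continuum deposition rate when Γ = 10⁻² per step
println((D, γ))         # (2083.33..., 0.2083...)
```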
The evolution equation for \(s\), Equation 8, arises directly from our discrete model where we assume that each lattice site can occupy a maximum amount of substrate. This leads to a mechanism that is very similar to an approach that has been recently adopted to study a generalisation of the well-known Fisher-KPP model where the nonlinear logistic source term is replaced with a linear _saturation_ mechanism [55, 56, 57]. Solutions of these saturation-type models of invasion involve moving boundaries that form as a result of the saturation mechanism, since this mechanism provides a natural moving boundary between regions where \(s=1\) and \(s<1\). Later, in Sections 2.3 and 2.4, we will show that Equations 7-8 can also be interpreted as a moving boundary problem in exactly the same way as [55, 56, 57].
The form of Equations 7-8 provides insight into the population-level mechanisms encoded with the discrete model. To see this we write the flux encoded within Equation 7 as,
\[\boldsymbol{\mathcal{J}}=\underbrace{-Ds\nabla u}_{\text{diffusive flux}}- \underbrace{Du(1-u)\nabla s}_{\text{advective flux}}. \tag{10}\]
Written in this way we can now interpret how these two components of the cell flux impact the population-level outcomes. One way to interpret these terms is to note that the first term on the right of Equation 10 is proportional to \(\nabla u\) which is similar to a diffusive flux, and the second term on the right of Equation 10 is proportional to \(u(1-u)\) which acts like a non-linear advective flux. In particular, this non-linear advective flux is similar to fluxes often encountered in mathematical models of traffic flow [58].
We can also interpret how the two components of \(\boldsymbol{\mathcal{J}}\) in Equation 10 give rise to different features in the solution of the model depending on the location within a spreading population of individuals, such as the discrete populations shown in Figure 2. For example, in regions that have been occupied by agents for a sufficiently long period of time, such as regions near the centre of the spreading populations in Figure 2 where \(u>0\), locally we will eventually have \(s=1\) and \(\nabla s=\mathbf{0}\). This means that Equations 7-8 simplify to the linear diffusion equation since the nonlinear advective flux vanishes and the diffusive-like flux simplifies to Fick's law of diffusion. In contrast, in regions that have been recently occupied by agents, such as near the leading edge of a population, we have \(s<1\) and \(\nabla s\neq\mathbf{0}\). Under these conditions the diffusive flux is similar to a nonlinear diffusion term where the diffusive flux of \(u\) is proportional to \(s\), which reflects the fact that agent motility in the discrete model is directly proportional to the local density of substrate. The advective-like component of the flux acts like a nonlinear advection term since the flux is proportional to \(u(1-u)\)[58], meaning that the advective flux vanishes when \(u=0\) and \(u=1\), and is a maximum when \(u=1/2\). The direction of the nonlinear advective flux is opposite to \(\nabla s\). The nonlinear advective flux explicitly includes crowding effects encoded into the discrete model by enforcing that each lattice site can be occupied by, at most, a single agent.
### Continuum-discrete comparison
We now examine how well the numerical solution of Equations 7-8 matches averaged data from the discrete model. The experimental images and stochastic simulations in Figures 1-2 correspond to a radially-symmetric polar coordinate system, which can be described by writing Equations 7-8 in terms of a radial coordinate system. Instead, we consider a second set of discrete simulations, shown in Figure 3, performed on a rectangular domain with a width of 300 lattice sites and a height of 20 lattice sites. Simulations are initialised by setting \(S_{i,j}(0)=0\) at all lattice sites, and uniformly occupying all sites within \(i\leq 150\) with agents. Reflecting boundary conditions are imposed along all boundaries, and simulations are performed for \(\Gamma=10^{2},10^{1},10^{0},10^{-1}\) and \(10^{-2}\) per time step, as shown in Figure 3. Simulation results are consistent with previous results in Figure 2: simulations performed with sufficiently large substrate deposition rates lead to population spreading with a smooth front, without any obvious well-defined front position, whereas simulations with smaller \(\Gamma\) lead to population spreading with a visually noticeable well-defined front. Our main motivation for performing simulations on a rectangular-shaped lattice is that we can work with Equations 7-8 in a one-dimensional Cartesian coordinate system [40].
Figure 3: Discrete simulations with \(P=1\), \(\tau=24/500\) hours, and various values of \(\Gamma\), as indicated. All simulations are performed on a rectangular lattice of width \(W=300\) and height \(H=20\). Simulations are initialised by setting \(S_{i,j}=0\) at all lattice sites, and all sites with \(i\leq 150\) are occupied by agents. Snapshots are shown at \(t=1,2,3\) and \(4\) days. Each day of simulation corresponds to \(500\) time steps in the discrete model, and snapshots are reported in terms of the \((i,j)\) index of the lattice, which can be re-scaled to give the dimensional coordinates noting that \((x,y)=(i\Delta,j\Delta)\).
Averaged agent density data are extracted from the simulations illustrated in Figure 3 by considering \(M\) identically prepared realisations of the discrete model, averaging the occupancy of each lattice site across these realisations and then further averaging the occupancy along each column of the lattice to give [40],
\[\langle U_{i}\rangle=\frac{1}{HM}\sum_{m=1}^{M}\sum_{j=1}^{H}U_{i,j}^{m}, \tag{11}\]
where \(H\) is the height of the lattice. Numerical solutions of Equations 7-8 in a one-dimensional Cartesian coordinate system are obtained for parameter values and initial data consistent with the discrete simulations. Details of the numerical method used to solve the continuum PDE model are given in the Appendix. Results in Figure 4 compare numerical solutions of Equations 7-8 with averaged data from the discrete simulations, given by Equation 11 for various values of \(\Gamma\), as indicated.
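A direct Julia transcription of Equation 11 (a sketch of ours; it assumes the \(M\) occupancy matrices are stored as \(W\times H\) Bool arrays):

```julia
# Column-averaged agent density ⟨U_i⟩ across M realisations (Equation 11);
# sims is a Vector of W×H Bool occupancy matrices, one per realisation.
function column_density(sims::Vector{Matrix{Bool}})
    M = length(sims)
    W, H = size(first(sims))
    return [sum(sim[i, j] for sim in sims, j in 1:H) / (H * M) for i in 1:W]
end
```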
Results in Figure 4 indicate that the quality of the continuum-discrete match is very good for all values of \(\Gamma\) considered. For sufficiently large values of the substrate deposition rate in Figure 4(a)-(b) we see that the density profiles are smooth, with no clear well-defined front location at the low density leading edge. These results are consistent with the preliminary numerical results in Figure 1(a)-(b) for the barrier assay geometry. In
Figure 4: Averaged discrete data (dots) superimposed on numerical solutions of Equations 7–8 (solid). Each subfigure compares averaged discrete data, constructed using Equation 11 with \(H=20\) and \(M=100\), with a numerical solution of Equations 7–8. Four sets of solutions are shown for \(\Gamma=10^{1},10^{0},10^{-1}\) and \(10^{-2}\) per time step, as indicated. Discrete simulations are initialised by occupying all lattice sites with \(i\leq 150\), and with \(P=1\) and \(\tau=24/500\) hours. Within each subfigure a comparison is made at \(t=0,1,2,3\) and 4 days shown in blue, green orange and yellow, respectively, as indicated. Each day of simulation corresponds to 500 time steps in the discrete model, and snapshots are reported in terms of the \((i,j)\) index of the lattice, which can be re-scaled to give the dimensional coordinates noting that \((x,y)=(i\Delta,j\Delta)\). The arrows within each subfigure show the direction of increasing time.
contrast, for sufficiently small values of the substrate deposition rate, density profiles in Figure 4(c)-(d) show that we have a well-defined sharp front at the leading edge of the spreading populations. The density profiles in Figure 4(d) indicate that the solution of Equations 7-8 for \(u(x,t)\) has compact support, and the density profiles are non-monotone with a small dip in density just behind the leading edge. Interestingly, we see the same small dip in density behind the leading edge in the averaged discrete data. This indicates that the continuum limit PDE model provides an accurate approximation of the average densities from the discrete simulations. Open source Julia code to solve the continuum limit PDE model is available on GitHub.
### Front structure
Now that we have confirmed that averaged data from the discrete model can be approximated by numerical solutions of Equations 7-8, we will briefly describe and summarise the general features of the front structure in a simple one-dimensional Cartesian geometry analogous to the results in Figure 4. This discussion of the front structure is relevant for initial conditions of the form \(s(x,0)=0\) for all \(x\), and \(u(x,0)=1-\mathrm{H}(x-X)\), where \(\mathrm{H}\) is the usual Heaviside step function, so that initially we have \(u=1\) for \(x<X\) and \(u=0\) for \(x>X\). In all cases considered we impose zero flux boundaries on \(u(x,t)\) at both boundaries of the one-dimensional domain. Figure 5 shows a typical solution of Equations 7-8. This schematic solution corresponds to the most interesting case, with \(\gamma\) sufficiently small that we see a clear sharp-fronted solution, in which both \(\partial u/\partial x\) and \(\partial s/\partial x\) are discontinuous at some moving location \(x=\eta(t)\). As discussed in Section 2.2, the moving boundary at \(x=\eta(t)\) arises because of the saturation mechanism governing the dynamics of \(s\) in Equation 8. This kind of moving boundary problem has been previously studied in the case of a generalised Fisher-KPP model [55, 56, 57], except that these previous investigations have not involved any discrete stochastic models, or any kind of coarse-graining to arrive at an approximate PDE model.
The schematic showing \(u(x,t)\) and \(s(x,t)\) in Figure 5(a) motivates us to consider two regions within the solution:
* Region 1: \(x<\eta(t)\) where \(s(x,t)=1\), and
* Region 2: \(\eta(t)<x<\xi(t)\), where \(0<s(x,t)<1\).
Ahead of Region 2 where \(x>\xi(t)\) we have \(u=s=0\), and so we consider \(x=\xi(t)\) to be the _front_ of the solution. In Region 1 we have \(s(x,t)=1\) and \(\partial s/\partial x=0\), which means that the evolution of \(u(x,t)\) in Region 1 is governed by the linear diffusion equation and the flux of \(u\) simplifies to \(\mathbf{\mathcal{J}}=-D\partial u/\partial x\). This simplification explains why, for
this initial condition, \(u(x,t)\) is a monotonically decreasing function of \(x\) within Region 1 because solutions of the linear diffusion equation obey a maximum principle [59].
Region 2 is characterised by having \(s(x,t)<1\) with \(\partial s/\partial x<0\). The interface between Region 1 and Region 2 has \(s(\eta(t),t)=1\) and \(u(\eta(t),t)=u^{*}\), for some value \(0<u^{*}<1\). Within Region 2 the flux of \(u\) is given by \(\mathbf{\mathcal{J}}=-Ds\partial u/\partial x-Du(1-u)\partial s/\partial x\). The advective component of the flux, \(-Du(1-u)\partial s/\partial x\), is directed in the positive \(x\) direction, which means that the flux of \(u\) entering Region 2 across the interface at \(x=\eta(t)\) is partly advected in the positive \(x\) direction due to the advective flux term that acts within Region 2 only. This additional advective flux in the positive \(x\) direction within Region 2 explains why there can be a local minimum in \(u\) at \(x=\eta(t)\). The diffusive component of the flux in Region 2, \(-Ds\partial u/\partial x\), can act in either the positive \(x\)-direction when \(\partial u/\partial x<0\) or in the negative \(x\)-direction when \(\partial u/\partial x>0\). The schematic in Figure 5(a) shows \(u(x,t)\) and \(s(x,t)\) across Regions 1 and 2. The associated schematic in Figure 5(b) shows \(\mathbf{\mathcal{J}}_{d}=-Ds\partial u/\partial x\) and \(\mathbf{\mathcal{J}}_{a}=-Du(1-u)\partial s/\partial x\), and the schematic in Figure 5(c) shows \(\mathbf{\mathcal{J}}=\mathbf{\mathcal{J}}_{d}+\mathbf{\mathcal{J}}_{a}\) for the \(u\) and \(s\) profiles in Figure 5(a). These plots of the fluxes show that while the total flux \(\mathbf{\mathcal{J}}>0\) across both Regions 1 and 2, \(\mathbf{\mathcal{J}}_{a}\) vanishes everywhere except within Region 2, and \(\mathbf{\mathcal{J}}_{d}>0\) within Region 1, but \(\mathbf{\mathcal{J}}_{d}\) changes sign within Region 2 in this case.
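The two flux components plotted in Figure 5(b)-(c) can be evaluated from any numerical solution; the following is a minimal sketch using central differences on a uniform grid (the function name and interface are ours):

```julia
# Diffusive and advective flux components on a uniform 1D grid with spacing h:
# J_d = -D s ∂u/∂x and J_a = -D u(1-u) ∂s/∂x, so the total flux is Jd .+ Ja.
function fluxes(u::Vector{Float64}, s::Vector{Float64}, D, h)
    n = length(u)
    Jd = zeros(n)
    Ja = zeros(n)
    for i in 2:n-1
        dudx = (u[i+1] - u[i-1]) / (2h)    # central difference for ∂u/∂x
        dsdx = (s[i+1] - s[i-1]) / (2h)    # central difference for ∂s/∂x
        Jd[i] = -D * s[i] * dudx
        Ja[i] = -D * u[i] * (1 - u[i]) * dsdx
    end
    return Jd, Ja
end
```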
Exploring numerical solutions of Equations 7-8 indicates that the width of Region 2, \(w(t)=\xi(t)-\eta(t)\), decreases as \(\gamma\) increases. This is both intuitively reasonable and consistent with the observations in Figure 2 regarding how the structure of the front appeared to vary with the deposition rate in the discrete model. Numerical solutions of Equations 7-8 indicate that as \(\gamma\to\infty\) we have \(s(x,t)\to 1-\mathrm{H}(x-\eta(t))\) and \(w(t)\to 0^{+}\). Since the width of Region 2 vanishes for sufficiently large \(\gamma\), the solution of Equations 7-8 can be accurately approximated by the solution of the linear diffusion equation, which is independent of \(\gamma\). Again, this outcome is consistent with the discrete simulations in Figure 2, where we observed that simulations with large deposition rates were visually indistinguishable from simulations where all lattice sites were initialised with the maximum substrate concentration, in which case the continuum limit of the discrete model is the linear diffusion equation [40].
The schematic profiles of \(u(x,t)\) and \(s(x,t)\) in Figure 5 can also be interpreted in
terms of the mechanisms acting in the discrete model. When an agent within Region 2, close to the front, moves in the positive \(x\)-direction to a lattice site that has never been previously occupied, that agent will experience \(S_{i,j}=0\) at the new site. This means that the agent will be stationary for a period of time until it deposits substrate, so that \(S_{i,j}\) increases. While there is empty space behind that agent, for example at site \((i-1,j)\) where it was previously located, the agent cannot easily move back until a sufficient amount of time has passed to build up the amount of substrate. Therefore, Region 2 within the discrete model acts as a low-motility zone where agents become momentarily stationary until sufficient substrate is produced to enable them to continue to move.
### Proliferation
The experimental image in Figure 1(a) shows a barrier assay describing the spatial spreading of a population of fibroblast cells that are pre-treated to prevent proliferation [23, 25]. All discrete and continuum modelling in this work so far has focused on conservative populations without any death or proliferation mechanisms so that these simulations are consistent with the preliminary experimental observations in Figure 1. In the discrete model this is achieved by simulating a population of \(N\) agents, where \(N\) is a constant. In the continuum model this is achieved by working with PDE models like Equation 1 with \(\mathcal{S}=0\). To conclude this study we now re-examine all discrete and continuum models by incorporating a minimal proliferation mechanism motivated by the additional experimental results summarised in Figure 6. The left-most image in Figure 6 shows a barrier assay initialised with approximately 30,000 fibroblast cells just after the barrier is lifted at \(t=0\). The central image in Figure 6 shows the outcome of a barrier assay where the fibroblast cells are pre-treated to suppress proliferation [23, 25], and the right-most image shows the outcome of a barrier assay that is initialised in the same way except that the fibroblast cells are not pre-treated to suppress proliferation. This means that the right-most image in Figure 6 shows the outcome of a barrier assay in which fibroblast cells are free to move and proliferate [23]. The motile and proliferative population expands symmetrically, and the leading edge of the population remains sharp. The main difference between the barrier assay outcomes for the motile and proliferative population and those for the population where proliferation is suppressed is that cell proliferation leads to more rapid spatial expansion of the population.
A minimal model of proliferation is now incorporated into the discrete model described previously in Section 2.1. The key difference is that previously the number of agents \(N\) remained fixed during the stochastic simulations, whereas now \(N(t)\) is a non-decreasing function of time. Within each time step of the discrete model, after giving \(N(t)\) randomly-selected agents an opportunity to move, we then select another \(N(t)\) agents at random, one at a time with replacement, and give the selected agents an opportunity to proliferate with probability \(Q\in[0,1]\). We take a simple approach and assume that proliferation is independent of the local substrate density. If a selected agent is going to attempt to proliferate, the target site for the placement of the daughter agent is randomly selected from one of the four nearest neighbour lattice sites [60]. If the target site is occupied then the proliferation event is aborted owing to crowding effects, whereas if the target site is vacant a new daughter agent is placed on the target site. At the end of every time step we update \(N(t)\) to reflect the change in total population owing to proliferation during that time step [60, 40]. A set of preliminary simulations comparing results with \(Q=0\) and \(Q>0\) is given in Figure 7. In these simulations we compare the spatial spreading of 30,000 agents uniformly distributed within a circular region of diameter
Figure 6: Circular barrier assay images comparing the spatial spreading of motile and non-proliferative population with the spatial spreading of a motile and proliferative population of fibroblast cells. The left–most image shows a barrier assay at \(t=0\) days just after the circular barrier is lifted. This experiment is initiated by placing approximately 30,000 fibroblast cells uniformly inside a barrier of radius 3 mm. The central image shows the spatial extent of the population at \(t=3\) days where the cell are pre-treated to suppress proliferation. The right–most image shows the spatial extent of the population at \(t=3\) days where the cells are motile and proliferative. All images reproduced from Simpson et al. [23] with permission.
3 mm. Results in Figure 7(a) correspond to the case where we set \(S_{i,j}(0)=1\) at all lattice sites at the beginning of the experiment, and we repeat the comparison for simulations with \(S_{i,j}(0)=0\) and \(\Gamma=10^{0},10^{-1}\) and \(10^{-2}\) in Figure 7(b)-(d), respectively.
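The proliferation mechanism described above can be sketched in Julia as follows (a minimal illustration with our own naming, not the authors' implementation):

```julia
# Proliferation step: N(t) agents are selected at random, one at a time with
# replacement, and each selected agent attempts to proliferate with probability Q,
# placing a daughter agent on a randomly chosen nearest-neighbour site.
function proliferate!(U::Matrix{Bool}, Q)
    W, H = size(U)
    agents = findall(U)                          # the N(t) agents at the start of the step
    for _ in eachindex(agents)
        I = rand(agents)                         # selection with replacement
        rand() < Q || continue                   # proliferation is substrate-independent
        di, dj = rand(((1, 0), (-1, 0), (0, 1), (0, -1)))
        ti, tj = I[1] + di, I[2] + dj
        (1 ≤ ti ≤ W && 1 ≤ tj ≤ H) || continue
        if !U[ti, tj]                            # aborted if the target site is occupied
            U[ti, tj] = true                     # daughter placed on the vacant target site
        end
    end
    return U
end
```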
Figure 7: Discrete simulations illustrating the role of the substrate deposition rate \(\Gamma\) in the spatial spreading of a population of motile agents without proliferation with the spatial spreading of a population of motile and proliferative agents. All simulations are performed on a \(500\times 500\) square lattice where the lattice spacing corresponds to 20 \(\mu\)m making the diameter of the simulated populations in the left column equal to the diameter of the populations at \(t=0\) in Figures 1 and 6. Simulations are initiated by randomly occupying sites within a circular region of radius 150 lattice sites so that the expected number of agents at the beginning of the simulation is 30,000. Simulations of motile and non-proliferative populations correspond to \(P=1\), \(Q=0\) and \(\tau=24/500\) days, with values of \(\Gamma\) as indicated. Simulations of motile and proliferative populations correspond to \(P=1\), \(Q=1/500\) and \(\tau=24/500\) days, with values of \(\Gamma\) as indicated. Results in (a) correspond to initialising \(S_{i,j}(0)=1\) at all lattice sites, whereas results in (b)–(d) correspond to initialising \(S_{i,j}(0)=0\) at all lattice sites. Each day of simulation corresponds to 500 time steps in the discrete model, and snapshots are reported in terms of the \((i,j)\) index of the lattice, which can be re-scaled to give the dimensional coordinates noting that \((x,y)=(i\Delta,j\Delta)\).
Similar to the experiments in Figure 6, our simulations in Figure 7 show that incorporating proliferation increases the rate at which the growing populations spread and invade the surrounding area. As for the non-proliferative simulations in Figure 2, we see that the front of the spreading populations is poorly defined when \(\Gamma\) is sufficiently large, with a relatively diffuse distribution of agents that includes many isolated individuals that have migrated well ahead of the bulk population. In contrast, reducing \(\Gamma\) leads to visually well-defined sharp fronts with a clearer boundary at the leading edge of the proliferative population. These sharper fronts contain very few isolated individual agents. Visual comparison of the proliferative and non-proliferative snapshots in Figure 7(c)-(d) indicates that incorporating proliferation leads to an increasingly sharp and well-defined front. These simulations indicate that having substrate-dependent motility and substrate-independent proliferation is sufficient to produce sharp and well-defined fronts in the discrete simulations.
To interpret the differences between the motile populations and the motile and proliferative populations in Figure 7, we coarse-grain the discrete model by following a similar approach to that taken in Section 2.2. To proceed, we write down an approximate conservation statement describing the change in the average occupancy of site \((i,j)\) during the time interval from time \(t\) to time \(t+\tau\),
\[\delta\langle U_{i,j}\rangle=\frac{P}{4}\left[\underbrace{(1-\langle U_{i,j}\rangle)\sum S\langle U\rangle}_{\text{migration onto site }(i,j)}-\underbrace{S_{i,j}\langle U_{i,j}\rangle\left(4-\sum\langle U\rangle\right)}_{\text{migration out of site }(i,j)}\right]+\frac{Q}{4}\underbrace{(1-\langle U_{i,j}\rangle)\sum\langle U\rangle}_{\text{proliferation onto site }(i,j)}, \tag{12}\]
\[\delta S_{i,j}=\begin{cases}\Gamma\langle U_{i,j}\rangle&\text{for}\quad S_{i,j}<1,\\ 0&\text{for}\quad S_{i,j}=1,\end{cases} \tag{13}\]
where each sum is taken over the four nearest-neighbour sites of site \((i,j)\).
The new term on the right of Equation 12 approximately describes the increase in the expected density of site \((i,j)\) owing to proliferation events that would place an agent on that site provided that the target site is vacant [40]. To proceed to the continuum limit we again identify \(\langle U_{i,j}\rangle\) and \(S_{i,j}\) with smooth functions \(u(x,y,t)\) and \(s(x,y,t)\), respectively, and expand all terms in Equation 12 in a Taylor series about \((x,y)=(i\Delta,j\Delta)\), neglecting terms of
\(\mathcal{O}(\Delta^{3})\) and smaller. Dividing the resulting expression by \(\tau\), we take limits as \(\Delta\to 0\) and \(\tau\to 0\), with the ratio \(\Delta^{2}/\tau\) held constant [18] to give
\[\frac{\partial u}{\partial t}= D\nabla\cdot\left[s\nabla u+u\left(1-u\right)\nabla s\right]+ \lambda u(1-u), \tag{14}\] \[\frac{\partial s}{\partial t}= \begin{cases}\gamma u&\text{for}\quad s<1\\ 0&\text{for}\quad s=1,\end{cases} \tag{15}\]
where
\[D=\lim_{\begin{subarray}{c}\Delta\to 0\\ \tau\to 0\end{subarray}}\left(\frac{P\Delta^{2}}{4\tau}\right),\quad\lambda= \lim_{\begin{subarray}{c}\Delta\to 0\\ \tau\to 0\end{subarray}}\left(\frac{Q}{\tau}\right),\quad\gamma=\lim_{ \begin{subarray}{c}\Delta\to 0\\ \tau\to 0\end{subarray}}\left(\frac{\Gamma}{\tau}\right), \tag{16}\]
which provides relationships between the parameters in the discrete model, \(\Delta,\tau,P,Q\) and \(\Gamma\), and the parameters in the continuum model, \(D\), \(\lambda\) and \(\gamma\). The additional term in Equation 14 is simply a logistic source term with carrying capacity of unity, which reflects the fact that the occupancy of any lattice site is limited to a single agent. The numerical method we use to solve Equations 14-15 is given in the Appendix.
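To see explicitly how the proliferation mechanism gives rise to the logistic source term, note that the sum over the four nearest-neighbour sites in Equation 12 expands as \(\sum\langle U\rangle=4u+\Delta^{2}\nabla^{2}u+\mathcal{O}(\Delta^{4})\), so that
\[\frac{Q}{4}(1-u)\sum\langle U\rangle=Qu(1-u)+\frac{Q\Delta^{2}}{4}(1-u)\nabla^{2}u+\mathcal{O}(\Delta^{4}).\]
After dividing by \(\tau\), the first term converges to \(\lambda u(1-u)\), while the second term vanishes in the limit because \(Q=\mathcal{O}(\tau)\to 0\) while \(\Delta^{2}/\tau\) is held constant.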
It is straightforward to choose parameters to mimic known biological observations. An important parameter for applying these models to biological experiments is the ratio \(P/Q\), which compares the relative frequency of motility to proliferation events for isolated agents in regions where \(S_{i,j}=1\). Key parameters in an experiment are the cell diameter \(\Delta\), the cell diffusivity \(D\), and the proliferation rate \(\lambda\), which is related to the cell doubling time, \(t_{\text{d}}\), by \(\lambda=\log_{\text{e}}2/t_{\text{d}}\). Using Equation 16 we have \((P/Q)^{-1}=\Delta^{2}\log_{\text{e}}2/(4Dt_{\text{d}})\), noting that this ratio is independent of \(\tau\). With typical values of \(\Delta=20\)\(\mu\)m, \(D=2100\)\(\mu\)m\({}^{2}\)/hour and \(t_{\text{d}}=16\) hours we have \((P/Q)^{-1}\approx 1/500\), which means that setting \(P=1\) and \(Q=1/500\) corresponds to biologically relevant parameter values of the discrete model. One way of interpreting this choice of parameters is that the average time between proliferation events for an isolated agent is 500 times longer than the average time between motility events in regions where \(S_{i,j}=1\).
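This parameter estimate can be checked directly (a two-line sketch, with the values quoted above):

```julia
# Quick check of (P/Q)⁻¹ = Δ² logₑ2 / (4 D t_d) for the values quoted in the text.
Δ, D, td = 20.0, 2100.0, 16.0          # cell diameter (μm), diffusivity (μm²/hour), doubling time (hours)
ratio = Δ^2 * log(2) / (4 * D * td)    # ≈ 0.00206, so (P/Q)⁻¹ ≈ 1/500
```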
We now repeat the comparison of averaged discrete data with the solution of Equations 14-15 in a one-dimensional Cartesian coordinate system for the same domain, initial conditions and parameter values considered previously in Figure 4, except that now we consider simulations that include proliferation with \(Q=1/500\). The quality of the continuum-discrete match in Figure 8 is very good for all values of \(\gamma\) considered. Comparing the solution profiles in Figures 4 and 8 shows that the presence of proliferation over a period of four days increases the distance that the population front moves in the positive \(x\)-direction, just as we demonstrated using the stochastic model in Figure 7. In addition to noting that the numerical solution of Equations 14-15 provides a reasonable match to averaged discrete data, it is important to note that the presence of proliferation in Figure 8 does not alter the trends established previously in Figure 4 regarding how \(\Gamma\) affects the sharpness of the front, namely that sufficiently large substrate deposition rates lead to smooth-fronted profiles whereas reduced substrate deposition rates lead to sharp-fronted profiles, with the possibility of having a non-monotone shape.
## 3 Conclusion and Outlook
In this work we have revisited the question of using continuum PDE models to study spatial spreading and invasion of populations of cells. While many continuum PDE models involve linear diffusion, solutions of these models do not have compact support, and do not replicate clearly defined fronts that are often observed experimentally. Previously,
Figure 8: Averaged discrete data (dots) superimposed on numerical solutions of Equations 14–15 (solid). Each subfigure compares averaged discrete data, constructed using Equation 11 with \(H=20\), \(M=100\), \(P=1\) and \(Q=1/500\) with a numerical solution of Equations 14–15. Four sets of solutions are shown for \(\Gamma=10^{1},10^{0},10^{-1}\) and \(10^{-2}\), as indicated. Within each subfigure a comparison is made at \(t=0,1,2,3\) and \(4\) days shown in blue, green orange and yellow, respectively, as indicated. Each day of simulation corresponds to \(500\) time steps in the discrete model, and snapshots are reported in terms of the \((i,j)\) index of the lattice, which can be re-scaled to give the dimensional coordinates noting that \((x,y)=(i\Delta,j\Delta)\).
this issue has been addressed by generalising the linear diffusion flux, \(\mathbf{\mathcal{J}}=-D\nabla u\), to a degenerate nonlinear diffusion flux, \(\mathbf{\mathcal{J}}=-Du^{n}\nabla u\) where \(n>0\). The motivation for working with degenerate nonlinear diffusion is that the flux vanishes when \(u=0\) and the solution of the PDE model has a well-defined sharp front that can match experimental observations [32, 33, 34, 35]. While PDE models with this kind of degenerate nonlinear diffusion flux lead to solutions with well-defined sharp fronts, the biological motivation for these models and a biological interpretation of the exponent \(n\) remain unclear. In this work we have revisited the question of modelling spatial spreading and cellular invasion from the point of view of developing a simple lattice-based discrete model. In the discrete model we assume that agents produce an external substrate (e.g. biomacromolecules, extracellular matrix) that is deposited locally on the lattice, and the rate of randomly-directed agent migration is taken to be proportional to the density of substrate at each lattice site. We explicitly incorporate crowding effects in the discrete model by allowing each lattice site to be occupied by, at most, one single agent. This simple, biologically-motivated mechanism allows us to model collective spreading and invasion with well-defined sharp fronts provided that the rate of substrate deposition is sufficiently small. Stochastic simulations that mimic the spatial spreading of cells in a two-dimensional circular barrier assay illustrate that our discrete model is capable of replicating key features of the experiment, namely symmetric spreading of the population with a well-defined sharp front at the leading edge of the population.
Coarse-graining the discrete mechanisms leads to a PDE model with a novel flux term that simplifies to linear diffusion in the bulk of the population, and has features similar to a degenerate nonlinear diffusion flux at the leading edge of the population. Importantly, these features arise within the context of a simple, biologically-motivated discrete mechanism that is capable of replicating sharp-fronted density profiles, and our approach does not involve specifying a degenerate nonlinear diffusivity function that is difficult to relate to biological mechanisms. Numerical solutions of the new PDE model provide us with a computationally efficient, accurate approximation of averaged data from the stochastic model. Careful examination of the solutions of the PDE indicates that the structure of the leading edge depends upon the rate of substrate deposition. For sufficiently fast substrate deposition the substrate profile approaches a step function at the leading edge of the spreading population, and the nonlinear PDE model simplifies to
the linear diffusion equation. In contrast, for sufficiently slow substrate deposition the leading edge of the population behaves like a moving boundary problem where the density profile has compact support, and the shape of the density profile at the leading edge can be non-monotone. The first set of stochastic simulations and coarse-grained PDE models presented in this work focus on conservative populations where cell proliferation and cell death are absent. To understand how the shape of the front could change when considering a proliferative population we present a second set of simulations and coarse-grained PDE models that incorporate a minimal proliferation mechanism. In this case the coarse-grained PDE model takes the form of a reaction-diffusion model. We solve the new PDE numerically using biologically motivated parameter values; the numerical solutions of the PDE model match averaged data from the discrete simulations very well, confirming that sharp-fronted density profiles occur in the presence of proliferation. In fact, for the biologically motivated parameter values considered in this work, we find that incorporating proliferation tends to sharpen the density fronts at the leading edge relative to non-proliferative stochastic simulations.
There are many options for extending the work presented in this study. One obvious avenue for exploration is to introduce additional details into the discrete model since our approach in this work is to introduce very simple mechanisms only. An interesting option for further examination would be to generalise the transition probabilities in the following way. In the current model the transition probability for an agent undergoing a motility event from site \((i,j)\) to site \((i+1,j)\) is proportional to \(S_{i,j}\langle U_{i,j}\rangle\left(1-\langle U_{i+1,j}\rangle\right)\), which indicates that the transition probability is a linearly increasing function of local substrate density \(S_{i,j}\). An interesting extension would be to generalise the transition probability to be proportional to \(g(S_{i,j})\langle U_{i,j}\rangle\left(1-\langle U_{i+1,j}\rangle\right)\), where \(0\leq g(S)\leq 1\) is a smooth function describing how the motility probability for individual agents depends upon the substrate density. In the context of modelling cell migration it is natural to assume that \(g(S)\) is an increasing function. Taking the continuum limit of the discrete mechanism under these
circumstances leads to
\[\frac{\partial u}{\partial t}= D\nabla\cdot\left[g(s)\nabla u+\frac{\mathrm{d}g(s)}{\mathrm{d}s}u \left(1-u\right)\nabla s\right]+\lambda u(1-u), \tag{17}\] \[\frac{\partial s}{\partial t}= \begin{cases}\gamma u&\text{for}\quad s<1\\ 0&\text{for}\quad s=1,\end{cases} \tag{18}\]
which is a generalisation of setting \(g(s)=s\). Returning to our initial discussions in the Introduction, choosing \(g(s)=s^{n}\) for \(n>0\) means that the diffusive flux term in Equations 17-18 is analogous to the flux term in the generalised porous medium equation [32, 33, 36]. All results presented in this study involve working with the simple choice of \(g(s)=s\); however, generating and comparing averaged discrete data with numerical solutions of Equations 17-18 would be a very interesting way to explore how different choices of \(g(s)\) might impact the quality of the discrete-continuum match and the shape of the front. Another extension would be to couple the probability of proliferation in the discrete model to the substrate density. This would, in effect, introduce a substrate-dependent proliferation rate \(\lambda(s)\) into Equation 17. Again, the question of generating and comparing averaged discrete density data for this generalisation would be interesting and a relatively straightforward extension of the current discrete and continuum modelling frameworks established in this work.
Another extension would be to examine long-time travelling wave solutions of Equations 14-15 [3]. In the current work we have limited our examination of this model to relatively short-time simulations of the discrete model and relatively short-time numerical solutions of the continuum-limit PDE, which is relevant when using these models to mimic standard experimental protocols. Standard experimental protocols examining collective cell migration and proliferation are typically limited to durations of 24 or 48 hours [24, 28]. This means that for a typical cell line with a doubling time of 12-24 hours, these standard experimental protocols last for approximately one to four times the cell doubling time. Standard experimental protocols are therefore perfectly suited to examining the effects of proliferation that will be evident over these typical timescales. Our theoretical comparison of averaged discrete data and the solution of the continuum-limit PDE in Figure 8 is relevant for such typical experimental durations since we compare
the evolution of the front position over four days for a population with a doubling time of 18 hours, which is just over five times the doubling time. Despite the fact that we have considered numerical solutions of Equations 14-15 over time scales that are five times the doubling time, it is clear that the numerical solutions in Figure 8 have not had sufficient time to approach a constant speed, constant shape travelling wave solution [3]. Therefore, taking a more theoretical point of view, it would be mathematically interesting to examine time-dependent numerical solutions of Equations 14-15 over much longer time scales and study the resulting travelling wave behaviour as \(t\to\infty\). This could be achieved by transforming the time-dependent PDE model into the travelling wave coordinate, \(z=x-ct\), where \(c\) is the long-time asymptotic speed of the travelling wave solutions. Properties of the solution of the resulting dynamical system could then be studied in the phase space to provide information about the relationship between parameters in the continuum PDE model and the travelling wave speed \(c\) and the shape of the travelling wave profile [3, 54]. We leave both these potential extensions for future consideration.
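As a sketch of the transformation mentioned above, seeking solutions of the form \(u(x,t)=U(z)\) and \(s(x,t)=S(z)\) with \(z=x-ct\) converts Equations 14-15 into the coupled system
\[-c\frac{\mathrm{d}U}{\mathrm{d}z}=D\frac{\mathrm{d}}{\mathrm{d}z}\left[S\frac{\mathrm{d}U}{\mathrm{d}z}+U(1-U)\frac{\mathrm{d}S}{\mathrm{d}z}\right]+\lambda U(1-U),\qquad-c\frac{\mathrm{d}S}{\mathrm{d}z}=\gamma U\quad\text{for}\quad S<1,\]
whose trajectories could then be studied in the phase space to relate the wave speed \(c\) and the front shape to \(D\), \(\lambda\) and \(\gamma\).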
**Data Accessibility** Open source Julia implementations of all computations are available on GitHub [https://github.com/ProfMJSimpson/DiscreteSubstrate](https://github.com/ProfMJSimpson/DiscreteSubstrate).
**Authors' Contributions** MJS: Conceptualisation, Formal analysis, Investigation, Methodology, Software, Validation, Writing - original draft. KMM: Conceptualisation, Formal Analysis, Methodology, Software, Validation, Writing - review & editing. SWM: Investigation, Writing - review & editing. PRB: Investigation, Methodology, Writing - review & editing.
**Competing Interests** We declare we have no competing interests.
**Funding** MJS and PRB are supported by the Australian Research Council (DP230100025).
**Acknowledgements** We thank the Faculty of Science at QUT for providing KMM with
a mid-year research fellowship to support this project.
## Appendix: Numerical Methods
Results in Figure 1 involve generating numerical solutions of
\[\frac{\partial u}{\partial t}=\frac{\partial}{\partial x}\left[\mathcal{D}(u) \frac{\partial u}{\partial x}\right]+\frac{\partial}{\partial y}\left[\mathcal{D} (u)\frac{\partial u}{\partial y}\right], \tag{19}\]
on a square domain centered at the origin with side length \(L\). To solve Equation 19 we discretise all spatial derivative terms on a uniform square mesh with mesh spacing \(h\) so that the mesh point with index \((i,j)\) is associated with location \((-L/2+(i-1)h,\,-L/2+(j-1)h)\). Applying a standard central difference approximation to the spatial derivative terms in Equation 19 at the interior nodes leads to
\[\frac{\mathrm{d}u_{i,j}}{\mathrm{d}t}=\frac{1}{2h^{2}}\left[\left( \mathcal{D}(u_{i,j})+\mathcal{D}(u_{i+1,j})\right)(u_{i+1,j}-u_{i,j})-\left( \mathcal{D}(u_{i,j})+\mathcal{D}(u_{i-1,j})\right)(u_{i,j}-u_{i-1,j})\right.\] \[+\left.\left(\mathcal{D}(u_{i,j})+\mathcal{D}(u_{i,j+1})\right)( u_{i,j+1}-u_{i,j})-\left(\mathcal{D}(u_{i,j})+\mathcal{D}(u_{i,j-1})\right)(u_{i,j}- u_{i,j-1})\right]. \tag{20}\]
This central difference formula is adjusted along the domain boundaries to enforce no-flux boundaries. When we apply this discretisation to simulate linear diffusion we set \(\mathcal{D}(u)=D\), and when we simulate nonlinear degenerate diffusion we set \(\mathcal{D}(u)=Du^{n}\). This system of coupled nonlinear ordinary differential equations is solved using the DifferentialEquations.jl package in Julia, which uses automatic time stepping routines to minimise truncation error. Results in Figure 1 are obtained with \(h=0.05\), which is sufficiently small to ensure that these numerical results are grid-independent.
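The following is a minimal method-of-lines sketch of this approach; it implements Equation 20 on the interior nodes only and is our own illustration rather than the authors' code (which is available in the linked repository):

```julia
using DifferentialEquations

# Right-hand side of the spatially discretised Equation 20; 𝒟 is the diffusivity
# function, e.g. 𝒟(u) = D for linear diffusion or 𝒟(u) = D * u^n for degenerate diffusion.
function rhs!(du, u, p, t)
    𝒟, h = p
    fill!(du, 0.0)
    nx, ny = size(u)
    for j in 2:ny-1, i in 2:nx-1
        du[i, j] = ((𝒟(u[i, j]) + 𝒟(u[i+1, j])) * (u[i+1, j] - u[i, j]) -
                    (𝒟(u[i, j]) + 𝒟(u[i-1, j])) * (u[i, j] - u[i-1, j]) +
                    (𝒟(u[i, j]) + 𝒟(u[i, j+1])) * (u[i, j+1] - u[i, j]) -
                    (𝒟(u[i, j]) + 𝒟(u[i, j-1])) * (u[i, j] - u[i, j-1])) / (2h^2)
    end
    # the boundary rows and columns would be adjusted here to enforce no-flux conditions
end

# Example usage (u0 is the initial condition on the mesh; h = 0.05 as in the text):
# prob = ODEProblem(rhs!, u0, (0.0, 4.0), (u -> u, 0.05))
# sol  = solve(prob)   # automatic time stepping controls the truncation error
```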
Results in the main document include numerical solutions of Equations 7-8 in a one-dimensional Cartesian geometry,
\[\frac{\partial u}{\partial t}= D\frac{\partial}{\partial x}\left[s\frac{\partial u}{\partial x}+u(1-u) \frac{\partial s}{\partial x}\right]+\lambda u(1-u), \tag{21}\] \[\frac{\partial s}{\partial t}= \begin{cases}\gamma u&\text{for}\quad s<1\\ 0&\text{for}\quad s\geq 1,\end{cases} \tag{22}\]
on \(0<x<L\). To solve Equations 21-22 we discretise all spatial derivative terms on a uniform mesh with mesh spacing \(h\) so that the \(i\)th mesh point is associated with position \(x_{i}=(i-1)h\). Applying a standard central difference approximation to the
spatial derivative terms in Equation 21 gives the following system of coupled nonlinear ordinary differential equations at the \(i\)th node,
\[\frac{\mathrm{d}u_{i}}{\mathrm{d}t}=\frac{D}{2h^{2}}\left[\left(s_{i +1}+s_{i}\right)\left(u_{i+1}-u_{i}\right)-\left(s_{i-1}+s_{i}\right)\left(u_{i }-u_{i-1}\right)\right. \tag{23}\] \[+\left.\left(u_{i+1}[1-u_{i+1}]+u_{i}[1-u_{i}]\right)\left(s_{i+1 }-s_{i}\right)-\left(u_{i-1}[1-u_{i-1}]+u_{i}[1-u_{i}]\right)\left(s_{i}-s_{i- 1}\right)\right]\] \[+\lambda u_{i}(1-u_{i})\] \[\frac{\mathrm{d}s_{i}}{\mathrm{d}t}=\begin{cases}\gamma u_{i}& \text{for}\quad s_{i}<1\\ 0&\text{for}\quad s_{i}\geq 1.\end{cases} \tag{24}\]
The discrete equation for \(s\), Equation 24, holds for all mesh points \(i=1,2,\ldots,I\) because there are no spatial derivative terms in Equation 22 and no boundary conditions need to be imposed. In contrast, the discrete equation for \(u\), Equation 23, holds only at the interior mesh points \(i=2,3,\ldots,I-1\). Applying no-flux boundary conditions at \(i=1\) and \(i=I\) means that we impose the constraints \(u_{1}=u_{2}\) and \(u_{I-1}=u_{I}\), respectively. This system of coupled nonlinear ordinary differential equations is solved using the DifferentialEquations.jl package in Julia, which implements automatic time stepping routines to control temporal truncation error. All numerical results in this work correspond to \(h=0.1\), which is sufficiently small to ensure that our numerical results are grid-independent for the problems that we consider. Open source Julia code to solve Equations 23-24 is available on GitHub |
2303.07800 | Structure and Rank of Cyclic codes over a class of non-chain rings | The rings $Z_{4}+\nu Z_{4}$ have been classified into chain rings and
non-chain rings on the basis of the values of $\nu^{2} \in Z_{4}+\nu Z_{4}.$ In
this paper, the structure of cyclic codes of arbitrary length over the rings
$Z_{4}+\nu Z_{4}$ for those values of $\nu^{2}$ for which these are non-chain
rings has been established. A unique form of generators of these codes has also
been obtained. Further, rank and cardinality of these codes have been
established by finding minimal spanning sets for these codes. | Nikita Jain, Sucheta Dutt, Ranjeet Sehmi | 2023-03-14T11:18:00Z | http://arxiv.org/abs/2303.07800v1 | # Structure and rank of cyclic codes over a class of non-chain rings
###### Abstract.
The rings \(Z_{4}+\nu Z_{4}\) have been classified into chain rings and non-chain rings on the basis of the values of \(\nu^{2}\in Z_{4}+\nu Z_{4}.\) In this paper, the structure of cyclic codes of arbitrary length over the rings \(Z_{4}+\nu Z_{4}\) for those values of \(\nu^{2}\) for which these are non-chain rings has been established. A unique form of generators of these codes has also been obtained. Further, rank and cardinality of these codes have been established by finding minimal spanning sets for these codes.
Key words and phrases: Cyclic code, Generator, Rank, Cardinality, Rings

2020 Mathematics Subject Classification: Primary: 94B15, 20M05, 15A03, 54A25, 13C12

Nikita Jain would like to thank the Council of Scientific and Industrial Research (CSIR), India, for providing a fellowship in support of this research.
Introduction
Let \(\mathfrak{C}\) be a cyclic code of arbitrary length over \(Z_{4}\). Let \(\mathfrak{C}=\langle g(z)+2p(z),2a(z)\rangle\,,\) where \(g(z),a(z)\) and \(p(z)\) are binary polynomials such that \(a(z)|g(z)|z^{n}-1\) and either \(p(z)=0\) or \(a(z)|p(z)\frac{z^{n}-1}{g(z)}\) with \(\deg a(z)>\deg p(z)\).
**Theorem 3.1**. _Let \(\mathtt{C}_{{}_{\theta}}\) be a cyclic code of arbitrary length \(n\) over the ring \(\mathtt{R}_{{}_{\theta}}\), \(\theta\in\mathtt{S}\). Then \(\mathtt{C}_{{}_{\theta}}=\langle f_{{}_{\theta_{1}}}(z),f_{{}_{\theta_{2}}}(z),f_{{}_{\theta_{3}}}(z),f_{{}_{\theta_{4}}}(z)\rangle\), where \(f_{{}_{\theta_{1}}}(z)=f_{{}_{11}}(z)+2f_{{}_{12}}(z)+k_{{}_{\theta}}f_{{}_{13}}(z)+2k_{{}_{\theta}}f_{{}_{14}}(z)\), \(f_{{}_{\theta_{2}}}(z)=2f_{{}_{22}}(z)+k_{{}_{\theta}}f_{{}_{23}}(z)+2k_{{}_{\theta}}f_{{}_{24}}(z)\), \(f_{{}_{\theta_{3}}}(z)=k_{{}_{\theta}}f_{{}_{33}}(z)+2k_{{}_{\theta}}f_{{}_{34}}(z)\), \(f_{{}_{\theta_{4}}}(z)=2k_{{}_{\theta}}f_{{}_{44}}(z)\) such that the polynomials \(f_{{}_{ij}}(z)\) are in \(Z_{2}[z]/\langle z^{n}-1\rangle\) for \(1\leq i\leq 4,i\leq j\leq 4\). Further,_
\[f_{{}_{22}}(z)|f_{{}_{11}}(z)|z^{n}-1, \tag{3.1}\]
\[\text{either }f_{{}_{12}}(z)=0\text{ or }f_{{}_{22}}(z)|f_{{}_{12}}(z) \frac{z^{n}-1}{f_{{}_{11}}(z)}\text{ with deg }f_{{}_{22}}(z)>\text{ deg }f_{{}_{12}}(z), \tag{3.2}\]
\[f_{{}_{44}}(z)|f_{{}_{33}}(z)|z^{n}-1, \tag{3.3}\]
\[\text{either }f_{{}_{34}}(z)=0\text{ or }f_{{}_{44}}(z)|f_{{}_{34}}(z) \frac{z^{n}-1}{f_{{}_{33}}(z)}\text{ with deg }f_{{}_{44}}(z)>\text{ deg }f_{{}_{34}}(z). \tag{3.4}\]
Proof.: Let \(\mathtt{C}_{{}_{\theta}}\) be a cyclic code of length \(n\) over \(\mathtt{R}_{{}_{\theta}}\), \(\theta\in\mathtt{S}\). Define \(\phi_{{}_{\theta}}:\mathtt{R}_{{}_{\theta}}\to Z_{4}\) by \(\phi_{{}_{\theta}}(x)=x\pmod{k_{{}_{\theta}}}\). It is easy to see that the maps \(\phi_{{}_{\theta}}\), \(\theta\in\mathtt{S}\) are ring homomorphisms. Let \(ker_{{}_{\theta}}=\{x\in\mathtt{C}_{{}_{\theta}}\) such that \(\phi_{{}_{\theta}}(x)=0\}\). Clearly, \(\phi_{{}_{\theta}}(\mathtt{C}_{{}_{\theta}})\) is a cyclic code of length \(n\) over \(Z_{4}\). Using Lemma 2.1, we get
\(\phi_{{}_{\theta}}(\mathtt{C}_{{}_{\theta}})=\langle f_{{}_{11}}(z)+2f_{{}_{1 2}}(z),2f_{{}_{22}}(z)\rangle\), where \(f_{{}_{22}}(z)|f_{{}_{11}}(z)|z^{n}-1\) and
either \(f_{{}_{12}}(z)=0\) or \(f_{{}_{22}}(z)|f_{{}_{12}}(z)\frac{z^{n}-1}{f_{{}_{11}}(z)}\) with \(\deg f_{{}_{22}}(z)>\deg f_{{}_{12}}(z)\).
Also, \(ker_{{}_{\theta}}\) is \(k_{{}_{\theta}}\) times a cyclic code of length \(n\) over \(Z_{4}\). Again using Lemma 2.1, we get \(ker_{{}_{\theta}}=k_{{}_{\theta}}\langle f_{{}_{33}}(z)+2f_{{}_{34}}(z),2f_{{} _{44}}(z)\rangle\), where \(f_{{}_{44}}(z)|f_{{}_{33}}(z)|z^{n}-1\) and either \(f_{{}_{34}}(z)=0\) or \(f_{{}_{44}}(z)|f_{{}_{34}}(z)\frac{z^{n}-1}{f_{{}_{33}}(z)}\) with \(\deg f_{{}_{44}}(z)>\deg f_{{}_{34}}(z)\).
It follows that \(\mathtt{C}_{{}_{\theta}}=\langle f_{{}_{\theta_{1}}}(z),f_{{}_{\theta_{2}}}(z),f_{{}_{\theta_{3}}}(z),f_{{}_{\theta_{4}}}(z)\rangle\), where \(f_{{}_{\theta_{1}}}(z)=f_{{}_{11}}(z)+2f_{{}_{12}}(z)+k_{{}_{\theta}}f_{{}_{13} }(z)+2k_{{}_{\theta}}f_{{}_{14}}(z)\), \(f_{{}_{\theta_{2}}}(z)=2f_{{}_{22}}(z)+k_{{}_{\theta}}f_{{}_{23}}(z)+2k_{{}_{ \theta}}f_{{}_{24}}(z)\), \(f_{{}_{\theta_{3}}}(z)=k_{{}_{\theta}}f_{{}_{33}}(z)+2k_{{}_{\theta}}f_{{}_{34} }(z)\), \(f_{{}_{\theta_{4}}}(z)=2k_{{}_{\theta}}f_{{}_{44}}(z)\) such that the polynomials \(f_{{}_{ij}}(z)\) are in \(Z_{2}[z]/\langle z^{n}-1\rangle\) for \(1\leq i\leq 4,i\leq j\leq 4\) and satisfy the conditions (3.1)-(3.4).
Let \(\mathtt{C}_{{}_{\theta}}\) be a cyclic code of length \(n\) over \(\mathtt{R}_{{}_{\theta}}\), \(\theta\in\mathtt{S}\), generated by the polynomials \(f_{{}_{\theta_{1}}}(z),f_{{}_{\theta_{2}}}(z),f_{{}_{\theta_{3}}}(z),f_{{}_{\theta_{4}}}(z)\) as obtained in Theorem 3.1. Define the Residue and Torsion of \(\mathtt{C}_{{}_{\theta}}\) as
\[\text{Res}(\mathtt{C}_{{}_{\theta}}) =\left\{a(z)\in\frac{Z_{4}[z]}{\langle z^{n}-1\rangle}:a(z)+k_{{}_ {\theta}}b(z)\in\mathtt{C}_{{}_{\theta}}\text{ for some }b(z)\in\frac{Z_{4}[z]}{\langle z^{n}-1 \rangle}\right\}\] \[\text{Tor}(\mathtt{C}_{{}_{\theta}}) =\left\{a(z)\in\frac{Z_{4}[z]}{\langle z^{n}-1\rangle}:k_{{}_{ \theta}}a(z)\in\mathtt{C}_{{}_{\theta}}\right\}\]
Clearly, \(\text{Res}(\mathtt{C}_{{}_{\theta}})\) and \(\text{Tor}(\mathtt{C}_{{}_{\theta}})\) are the ideals of the ring \(\frac{Z_{4}[z]}{\langle z^{n}-1\rangle}\).
Also, define
\(\mathtt{C}_{{}_{\theta_{1}}}=\text{Res}(\text{Res}(\mathtt{C}_{{}_{\theta}}))=\mathtt{C}_{{}_{\theta}}\bmod(2,k_{{}_{\theta}})\),
\(\mathtt{C}_{{}_{\theta_{2}}}=\text{Tor}(\text{Res}(\mathtt{C}_{{}_{\theta}}))=\{a(z)\in Z_{2}[z]:2a(z)\in\mathtt{C}_{{}_{\theta}}\bmod k_{{}_{\theta}}\}\),
\(\mathtt{C}_{{}_{\theta_{3}}}=\text{Res}(\text{Tor}(\mathtt{C}_{{}_{\theta}}))=\{a(z)\in Z_{2}[z]:k_{{}_{\theta}}a(z)\in\mathtt{C}_{{}_{\theta}}\bmod 2k_{{}_{\theta}}\}\),
\(\mathtt{C}_{{}_{\theta_{4}}}=\text{Tor}(\text{Tor}(\mathtt{C}_{{}_{\theta}}))=\{a(z)\in Z_{2}[z]:2k_{{}_{\theta}}a(z)\in\mathtt{C}_{{}_{\theta}}\}\).
It is easy to see that \(\mathtt{C}_{{}_{\theta_{1}}}\),\(\mathtt{C}_{{}_{\theta_{2}}}\),\(\mathtt{C}_{{}_{\theta_{3}}}\),\(\mathtt{C}_{{}_{\theta_{4}}}\) are ideals of the ring \(Z_{2}[z]/\left\langle z^{n}-1\right\rangle\) generated by the unique minimal degree polynomials \(f_{{}_{11}}(z),f_{{}_{22}}(z),f_{{}_{33}}(z),f_{{}_{44}}(z)\) respectively as defined in Theorem 3.1.
**Theorem 3.2**.: _Let \(\mathtt{C}_{{}_{\theta}}=\langle f_{{}_{\theta_{1}}}(z),f_{{}_{\theta_{2}}}(z),f_{{}_{ \theta_{3}}}(z),f_{{}_{\theta_{4}}}(z)\rangle\) be a cyclic code of arbitrary length \(n\) over the ring \(\mathtt{R}_{{}_{\theta}},\theta\in\mathtt{S};\) where \(f_{{}_{\theta_{i}}}(z),\)\(1\leq i\leq 4\) are polynomials as defined in Theorem 3.1. Then there exists a set of generators \(\{g_{{}_{\theta_{1}}}(z),g_{{}_{\theta_{2}}}(z),g_{{}_{\theta_{3}}}(z),g_{{}_{ \theta_{4}}}(z)\}\)
of \(\mathtt{C}_{{}_{\theta}}\), where \(g_{{}_{\theta_{1}}}(z)=g_{{}_{11}}(z)+2g_{{}_{12}}(z)+k_{{}_{\theta}}g_{{}_{13}}(z)+2k_{{}_{\theta}}g_{{}_{14}}(z)\), \(g_{{}_{\theta_{2}}}(z)=2g_{{}_{22}}(z)+k_{{}_{\theta}}g_{{}_{23}}(z)+2k_{{}_{\theta}}g_{{}_{24}}(z)\), \(g_{{}_{\theta_{3}}}(z)=k_{{}_{\theta}}g_{{}_{33}}(z)+2k_{{}_{\theta}}g_{{}_{34}}(z)\), \(g_{{}_{\theta_{4}}}(z)=2k_{{}_{\theta}}g_{{}_{44}}(z)\) such that the polynomials \(g_{{}_{ij}}(z)\) are in \(Z_{2}[z]/\langle z^{n}-1\rangle\), satisfy the conditions (3.1)-(3.4) as defined in Theorem 3.1, and \(g_{{}_{ii}}(z)\) are the unique minimal degree polynomial generators of \(\mathtt{C}_{{}_{\theta_{i}}},1\leq i\leq 4.\) Also, either \(g_{{}_{ij}}(z)=0\) or deg \(g_{{}_{ij}}(z)<\) deg \(g_{{}_{jj}}(z)\) for \(1\leq i\leq 3,i<j\leq 4.\)_
Proof.: Clearly, \(f_{{}_{\theta_{1}}}(z)=f_{{}_{11}}(z)+2f_{{}_{12}}(z)+k_{{}_{\theta}}f_{{}_{13}}(z)+2k_{{}_{\theta}}f_{{}_{14}}(z)\), \(f_{{}_{\theta_{2}}}(z)=2f_{{}_{22}}(z)+k_{{}_{\theta}}f_{{}_{23}}(z)+2k_{{}_{\theta}}f_{{}_{24}}(z)\), \(f_{{}_{\theta_{3}}}(z)=k_{{}_{\theta}}f_{{}_{33}}(z)+2k_{{}_{\theta}}f_{{}_{34}}(z)\), \(f_{{}_{\theta_{4}}}(z)=2k_{{}_{\theta}}f_{{}_{44}}(z)\) are the generators of \(\mathtt{C}_{{}_{\theta}}\) such that either \(f_{{}_{12}}=0\) or deg \(f_{{}_{12}}<\) deg \(f_{{}_{22}}\) and either \(f_{{}_{34}}=0\) or deg \(f_{{}_{34}}<\) deg \(f_{{}_{44}}.\) Further, if either \(f_{{}_{ij}}=0\) or deg \(f_{{}_{ij}}<\) deg \(f_{{}_{jj}}\) for all \(1\leq i\leq 2,i<j\leq 4,\) then we get the required result. Otherwise, let us suppose that deg \(f_{{}_{ij}}\geq\) deg \(f_{{}_{jj}}\) for some \(i=1,2\) and \(j=3,4.\) Assume that deg \(f_{{}_{ij}}\geq\) deg \(f_{{}_{jj}}\) for (say) \(i=1\) and \(j=3\), i.e., deg \(f_{{}_{13}}\geq\) deg \(f_{{}_{33}}.\) Thus by the division algorithm, there exist some \(q_{{}_{13}}(z)\) and \(g_{{}_{13}}(z)\in Z_{2}[z]\) such that \(f_{{}_{13}}(z)=q_{{}_{13}}(z)f_{{}_{33}}(z)+g_{{}_{13}}(z),\) where either \(g_{{}_{13}}(z)=0\) or deg \(g_{{}_{13}}(z)<\) deg \(f_{{}_{33}}(z).\) Consider \(f_{{}_{\theta_{1}}}(z)-q_{{}_{13}}(z)f_{{}_{\theta_{3}}}(z)=f_{{}_{11}}(z)+2f_{{}_{12}}(z)+k_{{}_{\theta}}g_{{}_{13}}(z)+2k_{{}_{\theta}}(f_{{}_{14}}(z)-q_{{}_{13}}(z)f_{{}_{34}}(z)).\) Further, if deg \((f_{{}_{14}}(z)-q_{{}_{13}}(z)f_{{}_{34}}(z))\geq\) deg \(f_{{}_{44}}(z),\) then again by the division algorithm, there exist some \(q_{{}_{14}}(z)\) and \(g_{{}_{14}}(z)\) such that \(f_{{}_{14}}(z)-q_{{}_{13}}(z)f_{{}_{34}}(z)=f_{{}_{44}}(z)q_{{}_{14}}(z)+g_{{}_{14}}(z),\) where either \(g_{{}_{14}}(z)=0\) or deg \(g_{{}_{14}}(z)<\) deg \(f_{{}_{44}}(z).\) Now consider \(f_{{}_{\theta_{1}}}(z)-q_{{}_{13}}(z)f_{{}_{\theta_{3}}}(z)-q_{{}_{14}}(z)f_{{}_{\theta_{4}}}(z)=f_{{}_{11}}(z)+2f_{{}_{12}}(z)+k_{{}_{\theta}}g_{{}_{13}}(z)+2k_{{}_{\theta}}g_{{}_{14}}(z).\) Therefore, there exists a polynomial \(g_{{}_{\theta_{1}}}(z)=f_{{}_{11}}(z)+2f_{{}_{12}}(z)+k_{{}_{\theta}}g_{{}_{13}}(z)+2k_{{}_{\theta}}g_{{}_{14}}(z)\in\mathtt{C}_{{}_{\theta}}\) such that either \(g_{{}_{13}}(z)=0\) or deg \(g_{{}_{13}}(z)<\) deg \(f_{{}_{33}}(z)\) and either \(g_{{}_{14}}(z)=0\) or deg \(g_{{}_{14}}(z)<\) deg \(f_{{}_{44}}(z).\) Also, since \(g_{{}_{\theta_{1}}}(z)\) is a linear combination of \(f_{{}_{\theta_{1}}}(z),f_{{}_{\theta_{3}}}(z),f_{{}_{\theta_{4}}}(z)\), we have \(\mathtt{C}_{{}_{\theta}}=\left\langle f_{{}_{\theta_{1}}}(z),f_{{}_{\theta_{2}}}(z),f_{{}_{\theta_{3}}}(z),f_{{}_{\theta_{4}}}(z)\right\rangle=\left\langle g_{{}_{\theta_{1}}}(z),f_{{}_{\theta_{2}}}(z),f_{{}_{\theta_{3}}}(z),f_{{}_{\theta_{4}}}(z)\right\rangle.\) Further, if deg \(f_{{}_{ij}}(z)\geq\) deg \(f_{{}_{jj}}(z)\) for other values of \(i\) and \(j\) also, then we obtain the required set of generators by using the same arguments as above.
In the following theorem, a unique form of the generators of a cyclic code \(\mathsf{C}_{{}_{\theta}}\) of arbitrary length \(n\) over \(\mathtt{R}_{{}_{\theta}},\theta\in\mathsf{S}\), has been determined.
**Theorem 3.3**. _Let \(\mathtt{C}_{{}_{\theta}}=\left\langle g_{{}_{\theta_{1}}}(z),g_{{}_{\theta_{2}}}(z),g_{{}_{\theta_{3}}}(z),g_{{}_{\theta_{4}}}(z)\right\rangle\) be a cyclic code of arbitrary length \(n\) over the ring \(\mathtt{R}_{{}_{\theta}},\theta\in\mathtt{S}\), where \(g_{{}_{\theta_{1}}}(z)=g_{{}_{11}}(z)+2g_{{}_{12}}(z)+k_{{}_{\theta}}g_{{}_{13}}(z)+2k_{{}_{\theta}}g_{{}_{14}}(z)\), \(g_{{}_{\theta_{2}}}(z)=2g_{{}_{22}}(z)+k_{{}_{\theta}}g_{{}_{23}}(z)+2k_{{}_{\theta}}g_{{}_{24}}(z)\), \(g_{{}_{\theta_{3}}}(z)=k_{{}_{\theta}}g_{{}_{33}}(z)+2k_{{}_{\theta}}g_{{}_{34}}(z)\), \(g_{{}_{\theta_{4}}}(z)=2k_{{}_{\theta}}g_{{}_{44}}(z)\) such that the polynomials \(g_{{}_{ij}}(z)\) are in \(Z_{2}[z]/\langle z^{n}-1\rangle\) and satisfy the conditions (3.1)-(3.4) as defined in Theorem 3.1 with either \(g_{{}_{ij}}(z)=0\) or deg \(g_{{}_{ij}}(z)<\) deg \(g_{{}_{jj}}(z)\) for \(1\leq i\leq 3,i<j\leq 4\) and \(g_{{}_{ii}}(z)\) are the unique minimal degree polynomial generators of \(\mathtt{C}_{{}_{\theta_{i}}},1\leq i\leq 4.\) Then the polynomials \(g_{{}_{\theta_{1}}}(z),g_{{}_{\theta_{2}}}(z),g_{{}_{\theta_{3}}}(z),g_{{}_{\theta_{4}}}(z)\) are uniquely determined by \(\mathtt{C}_{{}_{\theta}}\)._

Proof.: Let \(h_{{}_{\theta_{1}}}(z)=h_{{}_{11}}(z)+2h_{{}_{12}}(z)+k_{{}_{\theta}}h_{{}_{13}}(z)+2k_{{}_{\theta}}h_{{}_{14}}(z)\), \(h_{{}_{\theta_{2}}}(z)=2h_{{}_{22}}(z)+k_{{}_{\theta}}h_{{}_{23}}(z)+2k_{{}_{\theta}}h_{{}_{24}}(z)\), \(h_{{}_{\theta_{3}}}(z)=k_{{}_{\theta}}h_{{}_{33}}(z)+2k_{{}_{\theta}}h_{{}_{34}}(z)\), \(h_{{}_{\theta_{4}}}(z)=2k_{{}_{\theta}}h_{{}_{44}}(z)\) be another set of generators of \(\mathtt{C}_{{}_{\theta}}\) of the same form. Since \(g_{{}_{ii}}(z)\) and \(h_{{}_{ii}}(z)\) are both the unique minimal degree polynomial generators of \(\mathtt{C}_{{}_{\theta_{i}}}\), we have \(g_{{}_{ii}}(z)=h_{{}_{ii}}(z)\) for \(1\leq i\leq 4\). Now \(g_{{}_{\theta_{1}}}(z)-h_{{}_{\theta_{1}}}(z)=2(g_{{}_{12}}(z)-h_{{}_{12}}(z))+k_{{}_{\theta}}(g_{{}_{13}}(z)-h_{{}_{13}}(z))+2k_{{}_{\theta}}(g_{{}_{14}}(z)-h_{{}_{14}}(z))\in\mathtt{C}_{{}_{\theta}}\), which implies that \(g_{{}_{12}}(z)-\)
\(h_{{}_{12}}(z)\in\mathtt{C}_{{}_{\theta_{2}}}=\left\langle g_{{}_{22}}(z)\right\rangle.\) Also \(\deg\)\((g_{{}_{12}}(z)-h_{{}_{12}}(z))<\deg\)\(g_{{}_{22}}(z),\) which is a contradiction because \(g_{{}_{22}}(z)\) is a minimal degree polynomial in \(\mathtt{C}_{{}_{\theta_{2}}}\). Hence, \(g_{{}_{12}}(z)=h_{{}_{12}}(z).\) It follows that \(g_{{}_{\theta_{1}}}(z)-h_{{}_{\theta_{1}}}(z)=k_{{}_{\theta}}(g_{{}_{13}}(z)-h_{{}_{13}}(z))+2k_{{}_{\theta}}(g_{{}_{14}}(z)-h_{{}_{14}}(z))\in\mathtt{C}_{{}_{\theta}}\) which implies that \(g_{{}_{13}}(z)-h_{{}_{13}}(z)\in\mathtt{C}_{{}_{\theta_{3}}}=\left\langle g_{{}_{33}}(z)\right\rangle.\) As \(\deg\)\((g_{{}_{13}}(z)-h_{{}_{13}}(z))<\deg\)\(g_{{}_{33}}(z),\) we must have \(g_{{}_{13}}(z)=h_{{}_{13}}(z).\)
Subsequently, \(g_{{}_{\theta_{1}}}(z)-h_{{}_{\theta_{1}}}(z)=2k_{{}_{\theta}}(g_{{}_{14}}(z) -h_{{}_{14}}(z))\in\mathtt{C}_{{}_{\theta}}\) implying that \(g_{{}_{14}}(z)-h_{{}_{14}}(z)\in\mathtt{C}_{{}_{\theta_{4}}}=\left\langle g_{{ }_{44}}(z)\right\rangle.\) This together with the fact that \(\deg\)\((g_{{}_{14}}(z)-h_{{}_{14}}(z))<\deg\)\(g_{{}_{44}}(z),\) implies that \(g_{{}_{14}}(z)=h_{{}_{14}}(z).\)
In a similar manner, we can prove that \(g_{{}_{23}}(z)=h_{{}_{23}}(z),\)\(g_{{}_{24}}(z)=h_{{}_{24}}(z)\) and \(g_{{}_{34}}(z)=h_{{}_{34}}(z)\). This proves the uniqueness of the polynomials \(g_{{}_{\theta_{1}}}(z),g_{{}_{\theta_{2}}}(z),g_{{}_{\theta_{3}}}(z),g_{{}_{ \theta_{4}}}(z).\)\(\square\)
**Theorem 3.4**.: _Let \(\mathtt{C}_{{}_{\theta}}=\left\langle g_{{}_{\theta_{1}}}(z),g_{{}_{\theta_{2} }}(z),g_{{}_{\theta_{3}}}(z),g_{{}_{\theta_{4}}}(z)\right\rangle,\) be a cyclic code of arbitrary length \(n\) over the ring \(\mathtt{R}_{{}_{\theta}},\theta\in\mathtt{S},\) where the generators \(g_{{}_{\theta_{1}}}(z)=g_{{}_{11}}(z)+2g_{{}_{12}}(z)+k_{{}_{\theta}}g_{{}_{13 }}(z)+2k_{{}_{\theta}}g_{{}_{14}}(z)\), \(g_{{}_{\theta_{2}}}(z)=2g_{{}_{22}}(z)+k_{{}_{\theta}}g_{{}_{23}}(z)+2k_{{}_{ \theta}}g_{{}_{24}}(z)\), \(g_{{}_{\theta_{3}}}(z)=k_{{}_{\theta}}g_{{}_{33}}(z)+2k_{{}_{\theta}}g_{{}_{34 }}(z)\), \(g_{{}_{\theta_{4}}}(z)=2k_{{}_{\theta}}g_{{}_{44}}(z)\) are in the unique form as given by Theorem 3.3. Then the following relations hold for \(g_{{}_{ij}}(z),\)\(1\leq i\leq 4,i\leq j\leq 4\) in \(Z_{2}[z]/\left\langle z^{n}-1\right\rangle.\)_
* \(g_{{}_{33}}(z)|\frac{z^{n}-1}{g_{{}_{11}}(z)}\Big{(}g_{{}_{13}}(z)- \frac{g_{{}_{12}}(z)}{g_{{}_{22}}(z)}g_{{}_{23}}(z)\Big{)},\)__
* \(g_{{}_{44}}(z)|g_{{}_{23}}(z),\)__
* \(g_{{}_{33}}(z)|\frac{g_{{}_{11}}(z)}{g_{{}_{22}}(z)}g_{{}_{23}}(z),\)__
* \(g_{{}_{44}}(z)|\frac{z^{n}-1}{g_{{}_{22}}(z)}\Big{(}g_{{}_{24}}(z)- \frac{g_{{}_{23}}(z)}{g_{{}_{33}}(z)}g_{{}_{34}}(z)\Big{)},\)__
* \(g_{{}_{44}}(z)|g_{{}_{13}}(z)-\frac{g_{{}_{11}}(z)}{g_{{}_{22}}(z)}g_{{}_{24}}(z )+\frac{g_{{}_{11}}(z)}{g_{{}_{22}}(z)g_{{}_{33}}(z)}g_{{}_{23}}(z)g_{{}_{34}}(z),\)__
* \(g_{{}_{44}}(z)|\frac{z^{n}-1}{g_{{}_{11}}(z)}\Big{(}g_{{}_{14}}(z)- \frac{g_{{}_{12}}(z)}{g_{{}_{22}}(z)}g_{{}_{24}}(z)+\frac{-g_{{}_{13}}(z)+ \frac{g_{{}_{12}}(z)g_{{}_{23}}(z)}{g_{{}_{33}}(z)}}g_{{}_{34}}(z)\Big{)},\)__
* \(g_{{}_{33}}(z)|g_{{}_{11}}(z)\) _for_ \(\theta\in\{0,1,2\nu,3+2\nu\},\)__ \(g_{{}_{44}}(z)|g_{{}_{11}}(z)\) _for_ \(\theta\in\{0,1,2\nu,3+2\nu\},\)__ \(g_{{}_{44}}(z)|g_{{}_{22}}(z)\) _for_ \(\theta\in\{0,3+2\nu\},\)__ \(g_{{}_{44}}(z)|g_{{}_{22}}(z)+g_{{}_{23}}(z)\) _for_ \(\theta\in\{1,2\nu\},\)__
* \(g_{{}_{44}}(z)|g_{{}_{12}}(z)+g_{{}_{13}}(z)-\frac{g_{{}_{11}}(z)}{g_{{}_{33}}(z)}g _{{}_{34}}(z)\) _for_ \(\theta\in\{1,2\nu\},\)__ \(g_{{}_{44}}(z)|g_{{}_{12}}(z)-\frac{g_{{}_{11}}(z)}{g_{{}_{33}}(z)}g _{{}_{34}}(z)\) _for_ \(\theta\in\{0,3+2\nu\},\)__ \(g_{{}_{44}}(z)|g_{{}_{13}}(z)\) _for_ \(\theta\in\{\nu,3\nu,2+\nu,2+3\nu\}.\)__
Proof.:
* Since \(\mathtt{C}_{{}_{\theta}}\) is an ideal in \(\frac{\mathtt{R}_{{}_{\theta}}[z]}{\langle z^{n}-1\rangle},\) we have \(\frac{z^{n}-1}{g_{{}_{11}}(z)}\Big{(}g_{{}_{11}}(z)+2g_{{}_{12}}(z)+k_{{}_{\theta}}g_{{}_{13}}(z)+2k_{{}_{\theta}}g_{{}_{14}}(z)\Big{)}-\frac{z^{n}-1}{g_{{}_{11}}(z)}\frac{g_{{}_{12}}(z)}{g_{{}_{22}}(z)}\Big{(}2g_{{}_{22}}(z)+k_{{}_{\theta}}g_{{}_{23}}(z)+2k_{{}_{\theta}}g_{{}_{24}}(z)\Big{)}\) belongs
to \(\mathtt{C}_{{}_{\theta}}.\) It follows that \(k_{\theta}\dfrac{z^{n}-1}{g_{{}_{11}}(z)}\Big{(}g_{{}_{13}}(z)-\dfrac{g_{{}_{12}}(z)}{g_{{}_{22}}(z)}g_{{}_{23}}(z)\Big{)}+2k_{\theta}\dfrac{z^{n}-1}{g_{{}_{11}}(z)}\Big{(}g_{{}_{14}}(z)-\dfrac{g_{{}_{12}}(z)}{g_{{}_{22}}(z)}g_{{}_{24}}(z)\Big{)}\in\mathtt{C}_{{}_{\theta}},\) which implies that \(k_{\theta}\dfrac{z^{n}-1}{g_{{}_{11}}(z)}\Big{(}g_{{}_{13}}(z)-\dfrac{g_{{}_{12}}(z)}{g_{{}_{22}}(z)}g_{{}_{23}}(z)\Big{)}\) belongs to \(\mathtt{C}_{{}_{\theta}}\pmod{2k_{\theta}}.\) Hence \(\dfrac{z^{n}-1}{g_{{}_{11}}(z)}\Big{(}g_{{}_{13}}(z)-\dfrac{g_{{}_{12}}(z)}{g_{{}_{22}}(z)}g_{{}_{23}}(z)\Big{)}\in\mathtt{C}_{{}_{\theta_{3}}}=\langle g_{{}_{33}}(z)\rangle\). Therefore, \(g_{{}_{33}}(z)|\dfrac{z^{n}-1}{g_{{}_{11}}(z)}\Big{(}g_{{}_{13}}(z)-\dfrac{g_{{}_{12}}(z)}{g_{{}_{22}}(z)}g_{{}_{23}}(z)\Big{)}.\)
* Since \(2\Big{(}2g_{{}_{22}}(z)+k_{\theta}g_{{}_{23}}(z)+2k_{\theta}g_{{}_{24}}(z)\Big{)}\in\mathtt{C}_{{}_{\theta}},\) we have \(2k_{\theta}g_{{}_{23}}(z)\in\mathtt{C}_{{}_{\theta}}\). It follows that \(g_{{}_{23}}(z)\in\mathtt{C}_{{}_{\theta_{4}}}=\langle g_{{}_{44}}(z)\rangle\,,\) and therefore \(g_{{}_{44}}(z)|g_{{}_{23}}(z).\)
* As \(2\Big{(}g_{{}_{11}}(z)+2g_{{}_{12}}(z)+k_{\theta}g_{{}_{13}}(z)+2k_{\theta}g_{{}_{14}}(z)\Big{)}-\dfrac{g_{{}_{11}}(z)}{g_{{}_{22}}(z)}\Big{(}2g_{{}_{22}}(z)+k_{\theta}g_{{}_{23}}(z)+2k_{\theta}g_{{}_{24}}(z)\Big{)}\) belongs to \(\mathtt{C}_{{}_{\theta}}\), it follows that \(-k_{\theta}\dfrac{g_{{}_{11}}(z)}{g_{{}_{22}}(z)}g_{{}_{23}}(z)\in\mathtt{C}_{{}_{\theta}}\pmod{2k_{\theta}},\) which implies that \(\dfrac{g_{{}_{11}}(z)}{g_{{}_{22}}(z)}g_{{}_{23}}(z)\in\mathtt{C}_{{}_{\theta_{3}}}=\langle g_{{}_{33}}(z)\rangle\,.\) Therefore, \(g_{{}_{33}}(z)|\dfrac{g_{{}_{11}}(z)}{g_{{}_{22}}(z)}g_{{}_{23}}(z).\)
* Since \(\dfrac{z^{n}-1}{g_{{}_{22}}(z)}\Big{(}2g_{{}_{22}}(z)+k_{\theta}g_{{}_{23}}(z)+2k_{\theta}g_{{}_{24}}(z)\Big{)}-\dfrac{z^{n}-1}{g_{{}_{22}}(z)}\dfrac{g_{{}_{23}}(z)}{g_{{}_{33}}(z)}\Big{(}k_{\theta}g_{{}_{33}}(z)+2k_{\theta}g_{{}_{34}}(z)\Big{)}\) belongs to \(\mathtt{C}_{{}_{\theta}}\), it follows that \(2k_{\theta}\dfrac{z^{n}-1}{g_{{}_{22}}(z)}\Big{(}g_{{}_{24}}(z)-\dfrac{g_{{}_{23}}(z)}{g_{{}_{33}}(z)}g_{{}_{34}}(z)\Big{)}\in\mathtt{C}_{{}_{\theta}},\) which implies that \(\dfrac{z^{n}-1}{g_{{}_{22}}(z)}\Big{(}g_{{}_{24}}(z)-\dfrac{g_{{}_{23}}(z)}{g_{{}_{33}}(z)}g_{{}_{34}}(z)\Big{)}\in\mathtt{C}_{{}_{\theta_{4}}}=\langle g_{{}_{44}}(z)\rangle\,.\) Therefore, \(g_{{}_{44}}(z)|\dfrac{z^{n}-1}{g_{{}_{22}}(z)}\Big{(}g_{{}_{24}}(z)-\dfrac{g_{{}_{23}}(z)}{g_{{}_{33}}(z)}g_{{}_{34}}(z)\Big{)}.\)
* Since \(2\Big{(}g_{{}_{11}}(z)+2g_{{}_{12}}(z)+k_{\theta}g_{{}_{13}}(z)+2k_{\theta}g_{{}_{14}}(z)\Big{)}-\dfrac{g_{{}_{11}}(z)}{g_{{}_{22}}(z)}\Big{(}2g_{{}_{22}}(z)+k_{\theta}g_{{}_{23}}(z)+2k_{\theta}g_{{}_{24}}(z)\Big{)}+\dfrac{g_{{}_{11}}(z)}{g_{{}_{22}}(z)}\dfrac{g_{{}_{23}}(z)}{g_{{}_{33}}(z)}\Big{(}k_{\theta}(g_{{}_{33}}(z)+2g_{{}_{34}}(z))\Big{)}\in\mathtt{C}_{{}_{\theta}},\) it follows that \(2k_{\theta}\Big{(}g_{{}_{13}}(z)-\dfrac{g_{{}_{11}}(z)}{g_{{}_{22}}(z)}g_{{}_{24}}(z)+\dfrac{g_{{}_{11}}(z)}{g_{{}_{22}}(z)}\dfrac{g_{{}_{23}}(z)}{g_{{}_{33}}(z)}g_{{}_{34}}(z)\Big{)}\in\mathtt{C}_{{}_{\theta}},\) which implies that \(g_{{}_{13}}(z)-\dfrac{g_{{}_{11}}(z)}{g_{{}_{22}}(z)}g_{{}_{24}}(z)+\dfrac{g_{{}_{11}}(z)}{g_{{}_{22}}(z)}\dfrac{g_{{}_{23}}(z)}{g_{{}_{33}}(z)}g_{{}_{34}}(z)\in\mathtt{C}_{{}_{\theta_{4}}}=\langle g_{{}_{44}}(z)\rangle\,.\) Therefore, \(g_{{}_{44}}(z)|g_{{}_{13}}(z)-\dfrac{g_{{}_{11}}(z)}{g_{{}_{22}}(z)}g_{{}_{24}}(z)+\dfrac{g_{{}_{11}}(z)}{g_{{}_{22}}(z)g_{{}_{33}}(z)}g_{{}_{23}}(z)g_{{}_{34}}(z).\)
* Since \(\dfrac{z^{n}-1}{g_{{}_{11}}(z)}\Big{(}g_{{}_{11}}(z)+2g_{{}_{12}}(z)+k_{\theta}g_{{}_{13}}(z)+2k_{\theta}g_{{}_{14}}(z)\Big{)}-\dfrac{z^{n}-1}{g_{{}_{11}}(z)}\dfrac{g_{{}_{12}}(z)}{g_{{}_{22}}(z)}\Big{(}2g_{{}_{22}}(z)+k_{\theta}g_{{}_{23}}(z)+2k_{\theta}g_{{}_{24}}(z)\Big{)}+\dfrac{z^{n}-1}{g_{{}_{11}}(z)}\Big{(}\dfrac{-g_{{}_{13}}(z)+\dfrac{g_{{}_{12}}(z)}{g_{{}_{22}}(z)}g_{{}_{23}}(z)}{g_{{}_{33}}(z)}\Big{)}\Big{(}k_{\theta}(g_{{}_{33}}(z)+2g_{{}_{34}}(z))\Big{)}\) belongs to \(\mathtt{C}_{{}_{\theta}}\), it follows that \(2k_{\theta}\dfrac{z^{n}-1}{g_{{}_{11}}(z)}\Big{(}g_{{}_{14}}(z)-\dfrac{g_{{}_{12}}(z)}{g_{{}_{22}}(z)}g_{{}_{24}}(z)+\dfrac{-g_{{}_{13}}(z)+\dfrac{g_{{}_{12}}(z)g_{{}_{23}}(z)}{g_{{}_{22}}(z)}}{g_{{}_{33}}(z)}g_{{}_{34}}(z)\Big{)}\in\mathtt{C}_{{}_{\theta}},\) which
implies that \(\dfrac{z^{n}-1}{g_{{}_{11}}(z)}\Big{(}g_{{}_{14}}(z)-\dfrac{g_{{}_{12}}(z)}{g_{{}_ {22}}(z)}g_{{}_{24}}(z)+\dfrac{-g_{{}_{13}}(z)+\dfrac{g_{{}_{12}}(z)g_{{}_{23}}(z )}{g_{{}_{33}}(z)}}g_{{}_{34}}(z)\Big{)}\) belongs to \(\mathtt{C}_{{}_{\theta_{4}}}\). Therefore, \(g_{{}_{44}}(z)|\dfrac{z^{n}-1}{g_{{}_{11}}(z)}\Big{(}g_{{}_{14}}(z)-\dfrac{g_{{} _{12}}(z)}{g_{{}_{22}}(z)}g_{{}_{24}}(z)+\dfrac{-g_{{}_{13}}(z)+\dfrac{g_{{}_{12 }}(z)g_{{}_{23}}(z)}{g_{{}_{22}}(z)}}{g_{{}_{33}}(z)}g_{{}_{34}}(z)\Big{)}\).
* Since \(\mathtt{C}_{{}_{\theta_{1}}}\subseteq\mathtt{C}_{{}_{\theta_{3}}},\mathtt{C}_{{}_{\theta_{1}}}\subseteq\mathtt{C}_{{}_{\theta_{4}}}\) for \(\theta\in\{0,1,2\nu,3+2\nu\}\) and \(\mathtt{C}_{{}_{\theta_{2}}}\subseteq\mathtt{C}_{{}_{\theta_{4}}}\) for \(\theta\in\{0,3+2\nu\}\), it follows that \(g_{{}_{33}}(z)|g_{{}_{11}}(z),g_{{}_{44}}(z)|g_{{}_{11}}(z)\) for \(\theta\in\{0,1,2\nu,3+2\nu\}\) and \(g_{{}_{44}}(z)|g_{{}_{22}}(z)\) for \(\theta\in\{0,3+2\nu\}\). Also, since \(k_{{}_{\theta}}\Big{(}2g_{{}_{22}}(z)+k_{{}_{\theta}}g_{{}_{23}}(z)+2k_{{}_{\theta}}g_{{}_{24}}(z)\Big{)}\) belongs to \(\mathtt{C}_{{}_{\theta}}\), it follows that \(2k_{{}_{\theta}}\big{(}g_{{}_{22}}(z)+g_{{}_{23}}(z)\big{)}\) belongs to \(\mathtt{C}_{{}_{\theta}}\) for \(\theta\in\{1,2\nu\}\), which implies that \(\big{(}g_{{}_{22}}(z)+g_{{}_{23}}(z)\big{)}\) belongs to \(\mathtt{C}_{{}_{\theta_{4}}}\). Therefore, \(g_{{}_{44}}(z)|g_{{}_{22}}(z)+g_{{}_{23}}(z)\) for \(\theta\in\{1,2\nu\}\).
* Since \(k_{{}_{\theta}}\Big{(}g_{{}_{11}}(z)+2g_{{}_{12}}(z)+k_{{}_{\theta}}g_{{}_{13}}(z)+2k_{{}_{\theta}}g_{{}_{14}}(z)\Big{)}-\dfrac{g_{{}_{11}}(z)}{g_{{}_{33}}(z)}\Big{(}k_{{}_{\theta}}g_{{}_{33}}(z)+2k_{{}_{\theta}}g_{{}_{34}}(z)\Big{)}\) belongs to \(\mathtt{C}_{{}_{\theta}}\), it follows that \(2k_{{}_{\theta}}\Big{(}g_{{}_{12}}(z)-\dfrac{g_{{}_{11}}(z)}{g_{{}_{33}}(z)}g_{{}_{34}}(z)\Big{)}+k_{{}_{\theta}}^{2}g_{{}_{13}}(z)+2k_{{}_{\theta}}^{2}g_{{}_{14}}(z)\) belongs to \(\mathtt{C}_{{}_{\theta}}\). Therefore, \(2k_{{}_{\theta}}\Big{(}g_{{}_{12}}(z)+g_{{}_{13}}(z)-\dfrac{g_{{}_{11}}(z)}{g_{{}_{33}}(z)}g_{{}_{34}}(z)\Big{)}\in\mathtt{C}_{{}_{\theta}}\) for \(\theta\in\{1,2\nu\}\) and \(2k_{{}_{\theta}}\Big{(}g_{{}_{12}}(z)-\dfrac{g_{{}_{11}}(z)}{g_{{}_{33}}(z)}g_{{}_{34}}(z)\Big{)}\in\mathtt{C}_{{}_{\theta}}\) for \(\theta\in\{0,3+2\nu\}\), which implies that \(g_{{}_{12}}(z)+g_{{}_{13}}(z)-\dfrac{g_{{}_{11}}(z)}{g_{{}_{33}}(z)}g_{{}_{34}}(z)\in\mathtt{C}_{{}_{\theta_{4}}}\) for \(\theta\in\{1,2\nu\}\) and \(g_{{}_{12}}(z)-\dfrac{g_{{}_{11}}(z)}{g_{{}_{33}}(z)}g_{{}_{34}}(z)\in\mathtt{C}_{{}_{\theta_{4}}}\) for \(\theta\in\{0,3+2\nu\}\). Hence, \(g_{{}_{44}}(z)|g_{{}_{12}}(z)+g_{{}_{13}}(z)-\dfrac{g_{{}_{11}}(z)}{g_{{}_{33}}(z)}g_{{}_{34}}(z)\) for \(\theta\in\{1,2\nu\}\) and \(g_{{}_{44}}(z)|g_{{}_{12}}(z)-\dfrac{g_{{}_{11}}(z)}{g_{{}_{33}}(z)}g_{{}_{34}}(z)\) for \(\theta\in\{0,3+2\nu\}\). Also, \(2k_{{}_{\theta}}\Big{(}g_{{}_{11}}(z)+2g_{{}_{12}}(z)+k_{{}_{\theta}}g_{{}_{13}}(z)+2k_{{}_{\theta}}g_{{}_{14}}(z)\Big{)}-2\dfrac{g_{{}_{11}}(z)}{g_{{}_{33}}(z)}\Big{(}k_{{}_{\theta}}g_{{}_{33}}(z)+2k_{{}_{\theta}}g_{{}_{34}}(z)\Big{)}\in\mathtt{C}_{{}_{\theta}}\) implies that \(2k_{{}_{\theta}}g_{{}_{13}}(z)\in\mathtt{C}_{{}_{\theta}}\) for \(\theta\in\{\nu,3\nu,2+\nu,2+3\nu\}\), and hence \(g_{{}_{13}}(z)\in\mathtt{C}_{{}_{\theta_{4}}}\) for \(\theta\in\{\nu,3\nu,2+\nu,2+3\nu\}\). Thus \(g_{{}_{44}}(z)|g_{{}_{13}}(z)\) for \(\theta\in\{\nu,3\nu,2+\nu,2+3\nu\}\).
## 4. Rank and Cardinality of cyclic codes of arbitrary length over \(\mathtt{R}_{\theta},\theta\in\mathtt{S}\)
In this section, the rank and cardinality of cyclic codes of arbitrary length over \(\mathtt{R}_{\theta},\theta\in\mathtt{S}\), have been obtained by determining a minimal spanning set of a cyclic code over \(\mathtt{R}_{\theta}\).
**Theorem 4.1**.: _Let \(\mathtt{C}_{\theta}=\left\langle g_{{}_{\theta_{1}}}(z),g_{{}_{\theta_{2}}}(z), g_{{}_{\theta_{3}}}(z),g_{{}_{\theta_{4}}}(z)\right\rangle\) be a cyclic code of arbitrary length \(n\) over the ring \(\mathtt{R}_{\theta},\theta\in\mathtt{S}\), where the generators \(g_{{}_{\theta_{1}}}(z)=g_{{}_{11}}(z)+2g_{{}_{12}}(z)+k_{{}_{\theta}}g_{{}_{1 3}}(z)+2k_{{}_{\theta}}g_{{}_{14}}(z)\), \(g_{{}_{\theta_{2}}}(z)=2g_{{}_{22}}(z)+k_{{}_{\theta}}g_{{}_{23}}(z)+2k_{{}_{ \theta}}g_{{}_{24}}(z)\), \(g_{{}_{\theta_{3}}}(z)=k_{{}_{\theta}}g_{{}_{33}}(z)+2k_{{}_{\theta}}g_{{}_{34 }}(z)\), \(g_{{}_{\theta_{4}}}(z)=2k_{{}_{\theta}}g_{{}_{44}}(z)\) are in the unique form as given in Theorem 3.3. Then \(rank(\mathtt{C}_{\theta})\) is \(n+s_{{}_{1}}+\tilde{s}-s_{{}_{2}}-s_{{}_{3}}-s_{{}_{4}},\) where \(s_{{}_{i}}=\) deg \(g_{{}_{ii}}(z)\) for \(1\leq i\leq 4\) and \(\tilde{s}=min\{s_{{}_{2}},s_{{}_{3}}\}.\)_
Proof.: It can be easily seen that the set \(\mathtt{A}_{\theta}=\{g_{{}_{\theta_{1}}}(z),zg_{{}_{\theta_{1}}}(z),\cdots,z ^{n-s_{{}_{1}}-1}g_{{}_{\theta_{1}}}(z),\\ g_{{}_{\theta_{2}}}(z),zg_{{}_{\theta_{2}}}(z),\cdots,z^{n-s_{{}_{2}}-1}g_{{}_{ \theta_{2}}}(z),g_{{}_{\theta_{3}}}(z),zg_{{}_{\theta_{3}}}(z),\cdots,z^{n-s_{{ }_{3}}-1}g_{{}_{\theta_{3}}}(z),g_{{}_{\theta_{4}}}(z),zg_{{}_{\theta_{4}}}(z), \\ \cdots,z^{n-s_{{}_{4}}-1}g_{{}_{\theta_{4}}}(z)\}\) is a spanning set of \(\mathtt{C}_{\theta}\).
To prove that \(rank\left(\mathtt{C}_{\theta}\right)\) is \(n+s_{{}_{1}}+\tilde{s}-s_{{}_{2}}-s_{{}_{3}}-s_{{}_{4}},\) it is sufficient to show that the set \(\mathtt{B}_{\theta}=\{g_{{}_{\theta_{1}}}(z),zg_{{}_{\theta_{1}}}(z),\cdots,z ^{n-s_{{}_{1}}-1}g_{{}_{\theta_{1}}}(z),g_{{}_{\theta_{2}}}(z),zg_{{}_{\theta_ {2}}}(z),\cdots,z^{s_{{}_{1}}-s_{{}_{2}}-1}g_{{}_{\theta_{2}}}(z),g_{{}_{\theta_ {3}}}(z),\\ zg_{{}_{\theta_{3}}}(z),\cdots,z^{s_{{}_{1}}-s_{{}_{3}}-1}g_{{}_{\theta_{3}}}(z), g_{{}_{\theta_{4}}}(z),zg_{{}_{\theta_{4}}}(z),\cdots,z^{\tilde{s}-s_{{}_{4}}-1}g_{{}_{ \theta_{4}}}(z)\}\) is a minimal spanning set of \(\mathtt{C}_{\theta}\), where \(\tilde{s}=min\{s_{{}_{2}},s_{{}_{3}}\}.\)
In order to prove that the set \(\mathtt{B}_{\theta}\) spans \(\mathtt{C}_{\theta}\), it is enough to show that \(z^{\tilde{s}-s_{{}_{4}}}g_{{}_{\theta_{4}}}(z)\), \(z^{s_{{}_{1}}-s_{{}_{3}}}g_{{}_{\theta_{3}}}(z),z^{s_{{}_{1}}-s_{{}_{2}}}g_{{}_{\theta_{2}}}(z)\in span(\mathtt{B}_{\theta})\). First, let us suppose that \(\tilde{s}=s_{{}_{3}}.\) As \(g_{{}_{44}}(z)|g_{{}_{33}}(z)\) in \(Z_{2}[z]/\left\langle z^{n}-1\right\rangle,\) there exists some \(m(z)\in Z_{2}[z]\) with deg \(m(z)=s_{{}_{3}}-s_{{}_{4}}\) such that \(g_{{}_{33}}(z)=g_{{}_{44}}(z)m(z)=g_{{}_{44}}(z)\big{(}m_{{}_{0}}+zm_{{}_{1}}+\cdots+z^{s_{{}_{3}}-s_{{}_{4}}-1}m_{{}_{s_{{}_{3}}-s_{{}_{4}}-1}}+z^{s_{{}_{3}}-s_{{}_{4}}}\big{)},m_{{}_{i}}\in Z_{2}.\) Multiplying both sides by \(2k_{{}_{\theta}}\), we get
\[2g_{{}_{\theta_{3}}}(z)=\big{(}m_{{}_{0}}+zm_{{}_{1}}+\cdots+z^{s_{{}_{3}}-s_{ {}_{4}}-1}m_{{}_{{}_{3}}-s_{{}_{4}}-1}\big{)}g_{{}_{\theta_{4}}}(z)+z^{s_{{}_{3}}- s_{{}_{4}}}g_{{}_{\theta_{4}}}(z)\]
which implies that \(z^{s_{{}_{3}}-s_{{}_{4}}}g_{{}_{\theta_{4}}}(z)\in span(\mathtt{B}_{\theta}).\) Next, suppose that \(\tilde{s}=s_{{}_{2}}.\) Using the divisibilities \(g_{{}_{44}}(z)|g_{{}_{22}}(z)\) for \(\theta\in\{0,3+2\nu\},g_{{}_{44}}(z)|g_{{}_{22}}(z)+g_{{}_{23}}(z)\) for \(\theta\in\{1,2\nu\}\) and \(g_{{}_{44}}(z)|g_{{}_{23}}(z)\) for \(\theta\in\{\nu,3\nu,2+\nu,2+3\nu\}\), it can be proved that \(z^{s_{{}_{2}}-s_{{}_{4}}}g_{{}_{\theta_{4}}}(z)\in span(\mathtt{B}_{\theta})\) along the same lines as above. Thus, we have \(z^{\tilde{s}-s_{{}_{4}}}g_{{}_{\theta_{4}}}(z)\in span(\mathtt{B}_{\theta}),\) where \(\tilde{s}=min\{s_{{}_{2}},s_{{}_{3}}\}.\)
Now, we proceed to prove that \(z^{s_{{}_{1}}-s_{{}_{3}}}g_{{}_{\theta_{3}}}(z)\in span(\mathtt{B}_{\theta}).\) Since deg \(z^{s_{{}_{1}}-s_{{}_{3}}}g_{{}_{\theta_{3}}}(z)=\) deg \(g_{{}_{\theta_{1}}}(z)=s_{{}_{1}},\) there exists a polynomial \(r_{{}_{1}}(z)\) such that
\[r_{{}_{1}}(z)=z^{s_{{}_{1}}-s_{{}_{3}}}g_{{}_{\theta_{3}}}(z)-k_{{}_{\theta}}g_{{}_{ \theta_{1}}}(z). \tag{4.1}\]
Clearly, \(r_{{}_{1}}(z)\in\mathtt{C}_{\theta}.\) Moreover, either \(r_{{}_{1}}(z)=0\) or deg \(r_{{}_{1}}(z)<s_{{}_{1}}.\) If \(r_{{}_{1}}(z)=0,\) then \(z^{s_{{}_{1}}-s_{{}_{3}}}g_{{}_{\theta_{3}}}(z)\in span(\mathtt{B}_{\theta}).\) If deg \(r_{{}_{1}}(z)<s_{{}_{1}},\) then it is easy to see that \(r_{{}_{1}}(z)\) is of the type \(g_{{}_{\theta_{3}}}(z)\) or \(g_{{}_{\theta_{4}}}(z).\)
If \(r_{{}_{1}}(z)\) is of the type \(g_{{}_{\theta_{4}}}(z),\) then due to the minimality of degree of \(g_{{}_{\theta_{4}}}(z),\) we have deg \(r_{{}_{1}}(z)\geq s_{{}_{4}}.\) Consider \(r_{{}_{2}}(z)=r_{{}_{1}}(z)-z^{\,\deg\,r_{{}_{1}}(z)-s_{{}_{4}}}g_{{}_{\theta_{4}}}(z).\)
It is easy to see that \(r_{{}_{2}}(z)\in\mathsf{C}_{\theta}\) and it is of the type \(g_{{}_{\theta_{4}}}(z)\). Also, either \(r_{{}_{2}}(z)=0\) or \(\deg\,r_{{}_{2}}(z)<\deg\,r_{{}_{1}}(z)\). If \(r_{{}_{2}}(z)=0\), then \(r_{{}_{1}}(z)=z^{\,\deg\,r_{{}_{1}}(z)-s_{{}_{4}}}g_{{}_{\theta_{4}}}(z)\). Substituting the value of \(r_{{}_{1}}(z)\) in (4.1), we see that \(z^{s_{{}_{1}}-s_{{}_{3}}}g_{{}_{\theta_{3}}}(z)\in span(\mathsf{B}_{\theta})\). If \(\deg\,r_{{}_{2}}(z)<\deg\,r_{{}_{1}}(z)\), then after repeating the argument a finite number of times we obtain a polynomial \(r_{{}_{l}}(z)=r_{{}_{l-1}}(z)-z^{\,\deg\,r_{{}_{l-1}}(z)-s_{{}_{4}}}g_{{}_{ \theta_{4}}}(z)\) such that \(r_{{}_{l}}(z)\in\mathsf{C}_{\theta}\) and it is of the type \(g_{{}_{\theta_{4}}}(z)\). Moreover, \(r_{{}_{l}}(z)=0\) or \(\deg\,r_{{}_{l}}(z)<s_{{}_{4}}\). Since \(r_{{}_{l}}(z)\) is of the type \(g_{{}_{\theta_{4}}}(z)\), \(\deg\,r_{{}_{l}}(z)\) cannot be less than \(s_{{}_{4}}\). Therefore, \(r_{{}_{l}}(z)=0\). Hence, from equation (4.1), we have,
\(z^{s_{{}_{1}}-s_{{}_{3}}}g_{{}_{\theta_{3}}}(z)=k_{{}_{\theta}}g_{{}_{\theta_{ 1}}}(z)+r_{{}_{1}}(z)=k_{{}_{\theta}}g_{{}_{\theta_{1}}}(z)+z^{\,\deg\,r_{{}_{ 1}}(z)-s_{{}_{4}}}g_{{}_{\theta_{4}}}(z)+r_{{}_{2}}(z)\)
\(=k_{{}_{\theta}}g_{{}_{\theta_{1}}}(z)+z^{\,\deg\,r_{{}_{1}}(z)-s_{{}_{4}}}g_{ {}_{\theta_{4}}}(z)+z^{\,\deg\,r_{{}_{2}}(z)-s_{{}_{4}}}g_{{}_{\theta_{4}}}(z)+ \cdots+z^{\,\deg\,r_{{}_{l-1}}(z)-s_{{}_{4}}}g_{{}_{\theta_{4}}}(z)\).
It follows that \(z^{s_{{}_{1}}-s_{{}_{3}}}g_{{}_{\theta_{3}}}(z)\in span(\mathtt{B}_{\theta})\), in case \(r_{{}_{1}}(z)\) is of the type \(g_{{}_{\theta_{4}}}(z)\). A similar argument can be used to prove that \(z^{s_{{}_{1}}-s_{{}_{3}}}g_{{}_{\theta_{3}}}(z)\in span(\mathtt{B}_{\theta})\) in case \(r_{{}_{1}}(z)\) is of the type \(g_{{}_{\theta_{3}}}(z)\).
By using a similar argument as above, it can be proved that \(z^{s_{{}_{1}}-s_{{}_{2}}}g_{{}_{\theta_{2}}}(z)\in span(\mathsf{B}_{\theta})\). Thus, \(\mathsf{B}_{\theta}\) is a spanning set of \(\mathsf{C}_{\theta}\).
To prove that the set \(\mathtt{B}_{\theta}\) is a minimal spanning set, it is enough to show that none of \(z^{n-s_{{}_{1}}-1}g_{{}_{\theta_{1}}}(z),z^{s_{{}_{1}}-s_{{}_{2}}-1}g_{{}_{\theta_{2}}}(z),z^{s_{{}_{1}}-s_{{}_{3}}-1}g_{{}_{\theta_{3}}}(z)\) and \(z^{\tilde{s}-s_{{}_{4}}-1}g_{{}_{\theta_{4}}}(z)\) can be written as a linear combination of other elements of \(\mathtt{B}_{\theta}\). Suppose, if possible, that \(z^{n-s_{{}_{1}}-1}g_{{}_{\theta_{1}}}(z)\) can be written as a linear combination of other elements of \(\mathtt{B}_{\theta}\), i.e.,
\[z^{n-s_{{}_{1}}-1}g_{{}_{\theta_{1}}}(z)=a(z)g_{{}_{\theta_{1}}}(z)+b(z)g_{{}_{ \theta_{2}}}(z)+c(z)g_{{}_{\theta_{3}}}(z)+d(z)g_{{}_{\theta_{4}}}(z), \tag{4.2}\]
where \(\deg\,a(z)<n-s_{{}_{1}}-1\), \(\deg\,b(z)<s_{{}_{1}}-s_{{}_{2}}\), \(\deg\,c(z)<s_{{}_{1}}-s_{{}_{3}}\) and \(\deg\,d(z)<\tilde{s}-s_{{}_{4}}\). On multiplying equation (4.2) on both sides by \(2k_{{}_{\theta}}\) for \(\theta\in\{0,1,2\nu,3+2\nu\}\), we get
\[2k_{{}_{\theta}}z^{n-s_{{}_{1}}-1}g_{{}_{11}}(z)=2k_{{}_{\theta}}a(z)g_{{}_{11}} (z),\ \theta\in\{0,1,2\nu,3+2\nu\}. \tag{4.3}\]
On multiplying equation (4.2) on both sides by \(2(k_{{}_{\theta}}-1)\) for \(\theta\in\{\nu,3\nu,2+\nu,2+3\nu\}\), we get
\[2(k_{{}_{\theta}}-1)z^{n-s_{{}_{1}}-1}g_{{}_{11}}(z)=2(k_{{}_{\theta}}-1)a(z)g_{ {}_{11}}(z),\ \theta\in\{\nu,3\nu,2+\nu,2+3\nu\}. \tag{4.4}\]
The equations (4.3) and (4.4) are not possible as degrees of left hand side and right hand side in each of these equations do not match. Thus, \(z^{n-s_{{}_{1}}-1}g_{{}_{\theta_{1}}}(z)\) can not be written as a linear combination of other elements of \(\mathsf{B}_{\theta}\). Using a similar argument, it can be shown that none of \(z^{s_{{}_{1}}-s_{{}_{2}}-1}g_{{}_{\theta_{2}}}(z),z^{s_{{}_{1}}-s_{{}_{3}}-1}g_{{} _{\theta_{3}}}(z)\) and \(z^{\tilde{s}-s_{{}_{4}}-1}g_{{}_{\theta_{4}}}(z)\) can be written as a linear combination of other elements of \(\mathsf{B}_{\theta}\). Hence, \(\mathsf{B}_{\theta}\) is a minimal spanning set of \(\mathsf{C}_{\theta}\).
Further, \(rank(\mathsf{C}_{\theta})=\) Number of elements in \(\mathsf{B}_{\theta}=(n-s_{{}_{1}})+(s_{{}_{1}}-s_{{}_{2}})+(s_{{}_{1}}-s_{{}_{3}}) +(\tilde{s}-s_{{}_{4}})=n+s_{{}_{1}}+\tilde{s}-s_{{}_{2}}-s_{{}_{3}}-s_{{}_{4}}\), where \(\tilde{s}=min\{s_{{}_{2}},s_{{}_{3}}\}\).
Corollary 1 below follows immediately from the above theorem.
**Corollary 1**.: Let \(\mathtt{C}_{\theta}=\langle g_{{}_{\theta_{1}}}(z),g_{{}_{\theta_{2}}}(z),g_{{}_{\theta_{3}}}(z),g_{{}_{\theta_{4}}}(z)\rangle\) be a cyclic code of arbitrary length \(n\) over the ring \(\mathtt{R}_{\theta},\theta\in\mathtt{S}\), where the generators \(g_{{}_{\theta_{1}}}(z)=g_{{}_{11}}(z)+2g_{{}_{12}}(z)+k_{{}_{\theta}}g_{{}_{13}}(z)+2k_{{}_{\theta}}g_{{}_{14}}(z)\), \(g_{{}_{\theta_{2}}}(z)=2g_{{}_{22}}(z)+k_{{}_{\theta}}g_{{}_{23}}(z)+2k_{{}_{\theta}}g_{{}_{24}}(z)\), \(g_{{}_{\theta_{3}}}(z)=k_{{}_{\theta}}g_{{}_{33}}(z)+2k_{{}_{\theta}}g_{{}_{34}}(z)\), \(g_{{}_{\theta_{4}}}(z)=2k_{{}_{\theta}}g_{{}_{44}}(z)\). Then the cardinality of \(\mathtt{C}_{\theta}\) is
\[|\mathtt{C}_{\theta}|=\begin{cases}2^{4n+s_{{}_{1}}+\tilde{s}-3s_{{}_{2}}-2s_{{}_{3}}-s_{{}_{4}}}&;g_{{}_{23}}(z)\neq 0\\ 2^{4n+\tilde{s}-2s_{{}_{2}}-2s_{{}_{3}}-s_{{}_{4}}}&;g_{{}_{23}}(z)=0\end{cases},\]
where \(s_{{}_{i}}=\) deg \(g_{{}_{ii}}(z)\) for \(1\leq i\leq 4\) and \(\tilde{s}=min\{s_{{}_{2}},s_{{}_{3}}\}\).
The following examples illustrate some of our results.
**Example 4.2**.: Let \(\mathtt{C}_{\theta}=\langle z^{3}+z^{2}+z+1+\nu(z+3),2(z^{2}+1)+2\nu,\nu(z^{2}+1),2\nu(z+1)\rangle\) be a cyclic code of length \(4\) over the ring \(\mathtt{R}_{\theta}\) for \(\theta=2\nu\). Here \(s_{{}_{1}}=3,s_{{}_{2}}=2,s_{{}_{3}}=2,s_{{}_{4}}=1\). Using Theorem 4.1, the minimal spanning set of \(\mathtt{C}_{\theta}\) is \(\{z^{3}+z^{2}+z+1+\nu(z+3),2(z^{2}+1)+2\nu,\nu(z^{2}+1),2\nu(z+1)\}\). Hence \(\operatorname{rank}(\mathtt{C}_{\theta})=4\) and \(|\mathtt{C}_{\theta}|=2^{9}\).
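For instance, the values in Example 4.2 follow directly by substituting \(n=4\), \(s_{{}_{1}}=3\), \(s_{{}_{2}}=s_{{}_{3}}=2\), \(s_{{}_{4}}=1\) and \(\tilde{s}=min\{s_{{}_{2}},s_{{}_{3}}\}=2\) into Theorem 4.1 and Corollary 1 (the stated cardinality corresponds to the case \(g_{{}_{23}}(z)=0\)):
\[\operatorname{rank}(\mathtt{C}_{\theta})=n+s_{{}_{1}}+\tilde{s}-s_{{}_{2}}-s_{{}_{3}}-s_{{}_{4}}=4+3+2-2-2-1=4,\]
\[|\mathtt{C}_{\theta}|=2^{4n+\tilde{s}-2s_{{}_{2}}-2s_{{}_{3}}-s_{{}_{4}}}=2^{16+2-4-4-1}=2^{9}.\]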
**Example 4.3**.: Let \(\mathtt{C}_{\theta}=\langle z^{3}+z^{2}+z+1+(1+\nu),2(z^{2}+1),(1+\nu)(z+1),2(1+\nu)\rangle\) be a cyclic code of length \(4\) over the ring \(\mathtt{R}_{\theta}\) for \(\theta=3+2\nu\). Here \(s_{{}_{1}}=3,s_{{}_{2}}=2,s_{{}_{3}}=1,s_{{}_{4}}=0\). Using Theorem 4.1, the minimal spanning set of \(\mathtt{C}_{\theta}\) is \(\{z^{3}+z^{2}+z+1+(1+\nu),2(z^{2}+1),(1+\nu)(z+1),z(1+\nu)(z+1),2(1+\nu)\}\). Hence \(\operatorname{rank}(\mathtt{C}_{\theta})=5\) and \(|\mathtt{C}_{\theta}|=2^{11}\).
**Example 4.4**.: Let \(\mathtt{C}_{\theta}=\langle z^{5}+z^{4}+z^{3}+z^{2}+z+1+\nu(z^{4}+z^{2}+1),2(z+1)+\nu(z+1),\nu(z^{5}+z^{4}+z^{3}+z^{2}+z+1),2\nu\rangle\) be a cyclic code of length \(6\) over the ring \(\mathtt{R}_{\theta}\) for \(\theta=\nu\). Here \(s_{{}_{1}}=5,s_{{}_{2}}=1,s_{{}_{3}}=5,s_{{}_{4}}=0\). Using Theorem 4.1, the minimal spanning set of \(\mathtt{C}_{\theta}\) is \(\{z^{5}+z^{4}+z^{3}+z^{2}+z+1+\nu(z^{4}+z^{2}+1),2(z+1)+\nu(z+1),2z(z+1)+\nu z(z+1),2z^{2}(z+1)+\nu z^{2}(z+1),2z^{3}(z+1)+\nu z^{3}(z+1),2\nu\}\). Hence \(\operatorname{rank}(\mathtt{C}_{\theta})=6\) and \(|\mathtt{C}_{\theta}|=2^{17}\).
**Example 4.5**.: Let \(\mathtt{C}_{\theta}=\langle z^{5}+z^{4}+z^{3}+z^{2}+z+1+\nu(z^{2}+z+1)+2\nu z,2(z^{4}+z^{2}+1),\nu(z^{3}+3),2\nu(z^{2}+z+1)\rangle\) be a cyclic code of length \(6\) over the ring \(\mathtt{R}_{\theta}\) for \(\theta=0\). Here \(s_{{}_{1}}=5,s_{{}_{2}}=4,s_{{}_{3}}=3,s_{{}_{4}}=2\). Using Theorem 4.1, the minimal spanning set of \(\mathtt{C}_{\theta}\) is \(\{z^{5}+z^{4}+z^{3}+z^{2}+z+1+\nu(z^{2}+z+1)+2\nu z,2(z^{4}+z^{2}+1),\nu(z^{3}+3),z\nu(z^{3}+3),2\nu(z^{2}+z+1)\}\). Hence \(\operatorname{rank}(\mathtt{C}_{\theta})=5\) and \(|\mathtt{C}_{\theta}|=2^{11}\).
## 5. Conclusion
In this paper, the structure of cyclic codes of arbitrary length over the rings \(Z_{4}+\nu Z_{4}\) for those values of \(\nu^{2}\) for which these are non-chain rings has been established. A unique form of the generators of these codes has been obtained. Further, formulae for rank and cardinality of these codes have been established by finding minimal spanning sets for these codes.
|
2304.12514 | On encounter rates in star clusters | Close encounters between stars in star forming regions are important as they
can perturb or destroy protoplanetary discs, young planetary systems, and
stellar multiple systems. We simulate simple, viralised, equal-mass $N$-body
star clusters and find that both the rate and total number of encounters
between stars varies by factors of several in statistically identical clusters
due to the stochastic/chaotic details of orbits and stellar dynamics.
Encounters tend to rapidly `saturate' in the core of a cluster, with stars
there each having many encounters, while more distant stars have none. However,
we find that the fraction of stars that have had at least one encounter within
a particular distance grows in the same way (scaling with crossing time and
half-mass radius) in all clusters, and we present a new (empirical) way of
estimating the fraction of stars that have had at least one encounter at a
particular distance. | Krisada Rawiraswattana, Simon P. Goodwin | 2023-04-25T01:50:26Z | http://arxiv.org/abs/2304.12514v1 | # On encounter rates in star clusters
###### Abstract
Close encounters between stars in star forming regions are important as they can perturb or destroy protoplanetary discs, young planetary systems, and stellar multiple systems. We simulate simple, virialised, equal-mass \(N\)-body star clusters and find that both the rate and total number of encounters between stars vary by factors of several in statistically identical clusters due to the stochastic/chaotic details of orbits and stellar dynamics. Encounters tend to rapidly 'saturate' in the core of a cluster, with stars there each having many encounters, while more distant stars have none. However, we find that the fraction of stars that have had at least one encounter within a particular distance grows in the same way (scaling with crossing time and half-mass radius) in all clusters, and we present a new (empirical) way of estimating the fraction of stars that have had at least one encounter at a particular distance.
methods: numerical -- stars: kinematics and dynamics -- open clusters and associations: general
## 1 Introduction
Young stars are commonly found with circumstellar discs (e.g. Hillenbrand et al., 1998; Lada et al., 2000; Haisch et al., 2000, 2001), and these discs are thought to be where planet formation occurs. Since most stars are formed in relatively dense environments (e.g. Lada & Lada, 2003), it is possible for the discs, and the on going planet formation process within, to be affected by close encounters between stars.
Simulations have shown that the effect of tidal perturbation from a stellar fly-by can range from slightly changing the density distribution in the disc to truncating or even destroying it (e.g. Clarke & Pringle, 1993; Cuello et al., 2022), depending on how close the encounter is. This dynamical truncation, as well as photoevaporation (e.g. Concha-Ramirez et al., 2022), and face-on accretion (Wijnen et al., 2017), can significantly affect the population of young stars with discs in the early stages of star formation. Perturbations can also trigger disc instabilities (e.g. Thies et al., 2005, 2010) and may determine the population of planets forming in the disc (Ndugu et al., 2022). Another interesting effect of encounters on the disc is the misalignment between the rotational planes of the disc and the host star due to a non-coplanar encounter (e.g. Heller, 1993; Larwood, 1997). Encounters may alter already formed planetary systems, changing orbits (e.g. Breslau & Pfalzner, 2019), or disrupting them (e.g. Parker & Quanz, 2012). And encounters can similarly alter or destroy multiple stellar systems (e.g. Goodwin, 2010; Reipurth et al., 2014).
Young stars with masses \(\lesssim 1\) M\({}_{\odot}\) typically have discs with radii of a few hundreds of au (e.g. Andrews & Williams, 2007). For the discs of those stars to be significantly perturbed in encounters, the periastron distance between the encountering stars needs to be less than \(\sim 1000\) au. Therefore, to understand how important encounters are in affecting discs/planet formation, a key question is how many young stars have encounters within 1000 au?
There are two approaches one might take to finding the rates and numbers of encounters in some star cluster of interest. The first is to perform \(N\)-body simulations of a variety of similar systems, which is time consuming and computationally expensive (e.g. Parker & Quanz, 2012; Craig & Krumholz, 2013); the second is to use some (ideally analytic) estimate to quickly get at least a 'feel' for the expected values.
In this paper, we examine the encounter rate in a number of \(N\)-body simulations of bound star clusters. We show that encounter rates can vary by up to an order of magnitude between statistically identical clusters, but the fraction of stars that have had an encounter remains statistically the same. We present an empirical way of estimating the fraction of stars in a cluster that have had at least one encounter within a particular distance.
## 2 The encounter rate
The number of encounters per unit time (\(\varepsilon\)) for a star seems like it should depend on several factors: the encounter distance of interest, some average number density of stars, and the typical velocity of stars. The velocity of stars will affect the encounter rate by both changing how often stars encounter other stars, and also changing how effective gravitational focusing is.
### 2.1 The standard method
The most common method of calculating encounter times is based on the fundamental assumptions that a star is travelling through an effectively infinite, uniform density medium at a constant speed (see e.g. the derivation in Binney & Tremaine, 2008; note that these assumptions are perfectly adequate if one is interested in e.g. the Galactic disc).
The encounter rate for any individual star is typically given by
\[\varepsilon=4\sqrt{\pi}n\sigma\left(r_{\rm e}^{2}+\frac{Gm}{\sigma^{2}}r_{ \rm e}\right), \tag{1}\]
where \(n\) is the number density of the stars, \(\sigma\) is the velocity dispersion, \(r_{\rm e}\) is the closest distance during the encounter, \(G\) is the gravitational constant, and \(m\) is the typical mass of the stars (Binney & Tremaine, 2008). The second term in the brackets is associated with the gravitational focusing effect which deflects the trajectories and decreases the distance of closest approach for slow encounters or encounters between more massive stars.
For an ensemble of \(N\) stars (e.g. a cluster), it seems reasonable to assume that the total encounter rate (total number of encounters per unit time, \(\mathcal{E}\)) scales with the total number of stars. Since each encounter involves two stars, the encounter rate should scale with \(N/2\), i.e. \(\mathcal{E}\simeq N\varepsilon/2\).
In convenient units where \(r_{\rm e}\) is in au, \(\sigma\) in km s\({}^{-1}\), \(n\) in pc\({}^{-3}\), and \(m\) in M\({}_{\odot}\), the encounter rate in a cluster of \(N\) stars is then
\[\mathcal{E}=8.5\times 10^{-11}Nn\sigma\left(r_{\rm e}^{2}+886\frac{m}{\sigma^{ 2}}r_{\rm e}\right)\;{\rm Myr}^{-1}. \tag{2}\]
Therefore, it would seem that to calculate the rate of encounters, \(\mathcal{E}\), at a particular distance of interest, \(r_{\rm e}\), in an ensemble of \(N\) stars, the correct values of (a) number density, \(n\), and (b) velocity dispersion, \(\sigma\), are required. If there is a distribution of stellar masses, an appropriate value for \(m\) must be taken.
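As an illustration, equation (2) can be evaluated with a few lines of code. The following Python sketch is our own illustration (the function name and example values are ours, not from any existing package); it returns the total encounter rate in Myr\({}^{-1}\) for the convenient units given above:

```python
def encounter_rate(N, n, sigma, r_e, m=1.0):
    """Total encounter rate (Myr^-1) from equation (2).

    N     : number of stars in the cluster
    n     : number density of stars (pc^-3)
    sigma : velocity dispersion (km s^-1)
    r_e   : encounter distance of interest (au)
    m     : typical stellar mass (Msun)
    """
    geometric = r_e**2                      # geometric cross-section term
    focusing = 886.0 * m / sigma**2 * r_e   # gravitational focusing term
    return 8.5e-11 * N * n * sigma * (geometric + focusing)

# Values similar to the N = 300, r_h = 0.5 pc clusters considered below
# (n ~ 570 pc^-3, sigma ~ 0.5 km s^-1) give a few tens of encounters
# per Myr at r_e = 1000 au:
print(encounter_rate(300, 570.0, 0.5, 1000.0))  # ~33 Myr^-1
```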
While various assumptions that go into this simple calculation are clearly wrong for star clusters (e.g. moving through an effectively infinite uniform density medium at a constant speed), one might think that some simple variation on this approach would work (e.g. taking some appropriate average speed and density). However, we will show that in star clusters this approach gives an often wrong, and always misleading, 'answer'.
### 2.2 What do we want to know?
It is important to clarify what we want to know about encounters in a cluster. In most cases, what we would like is an estimate of _what fraction of stars have had a close encounter_ as this tells us the relative levels of disc/planetary system/multiple system perturbation/destruction. It is important to remember that this is _not_ what an estimate of an encounter rate gives without a further assumption of how the encounters are distributed between stars.
As an example let us take a cluster that we shall examine in detail later: an \(N=300\), \(M=300\) M\({}_{\odot}\), equal-mass (so \(m=1\) M\({}_{\odot}\)) virialised Plummer sphere cluster with half-mass radius \(r_{\rm h}=0.5\) pc. If we want to know the number of stars that have had an encounter within, e.g. \(r_{\rm e}=1000\) au, we can calculate that \(\mathcal{E}\sim 25\) Myr\({}^{-1}\) by taking the values for the velocity dispersion and half-mass density of this cluster and putting them into equation (2).
If we assume this encounter rate estimate is correct, then to calculate how many stars have had an encounter after some time we need to make the further assumption that encounters are random, so that after 2 Myr there will be 50 stars that have had an encounter, and after 10 Myr 250 stars (i.e. \(>80\) per cent) will have had an encounter. (One could be somewhat more sophisticated and, as the encounter fraction starts to approach unity, estimate how many stars have had zero, one, two, etc. encounters.)
In assuming encounters are random, this calculation ignores that encounters are much more likely to occur in the core, and that after a few crossing times some stars in the core are likely to have had multiple encounters,
while those in the halo may have had none. Indeed, when stated like this, this approach does seem extremely naive and it would be surprising if it gave the correct answer.
## 3 Simulations
We investigate encounter rates in clusters by performing \(N\)-body simulations in which we can record individual encounters, the stars involved, and their distances of closest approach. The simulations we report here are of the simplest bound systems: virialised Plummer spheres of equal-mass stars.
We simulate \(N=300\) and \(N=600\) virialised Plummer spheres (Plummer, 1911) initialised by the method described in Aarseth et al. (1974) with initial half-mass radii of 0.5, 0.75 and 1 pc. Simulations are run only with equal-mass stars so that we can ignore any complicating effects of mass spectra.
The number of stars (\(N\)), the stellar mass (\(m\)), the initial half-mass radius (\(r_{\rm h}\)), and the label of simulated clusters are shown in Table 1. Clusters with \(r_{\rm h}=0.5\) pc are truncated at 3 pc, those with \(r_{\rm h}=0.75\) pc are truncated at 5 pc, and \(r_{\rm h}=1\) pc clusters are truncated at 7 pc so that they have approximately the same relative sizes.
All simulations are run for 10 Myr using our own \(N\)-body code. The code uses a fourth-order Hermite scheme (Makino & Aarseth, 1992) as the integrator. We keep the energy error well below \(10^{-4}\) by employing an adaptive timestep, i.e. using equation (7) from Makino & Aarseth (1992) with parameter \(\eta=4\times 10^{-4}\) for \(N=300\) clusters and \(\eta=1\times 10^{-4}\) for \(N=600\) clusters. We also use block timestepping for \(N=600\) runs to speed up the calculations.
The separation between any pair of stars is monitored at every timestep. Once two stars are closer to each other than 1000 au, whether they are bound or unbound, they are considered as having a close encounter. During this period, the closest separation is recorded and taken as the encounter distance once the stars move away from each other beyond 1000 au. In binaries multiple 'encounters' will occur; if the separation stays below 1000 au this is only counted as one encounter. This is to prevent hard binaries inflating the close encounter rate; however, as we discuss below, hard binaries form extremely rarely in our simulations.
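A minimal sketch of this bookkeeping in Python (our own illustration, not the actual simulation code) is given below; a pair is 'active' while its separation is below 1000 au, the minimum separation is updated while the pair remains inside, and a single encounter is logged when the pair separates again:

```python
import itertools
import numpy as np

R_ENC = 1000.0  # encounter threshold in au

def update_encounters(pos, time, active, log):
    """Update the encounter bookkeeping for one timestep.

    pos    : (N, 3) array of stellar positions in au
    time   : current simulation time
    active : dict mapping a pair (i, j) to its closest separation so far
    log    : list of completed encounters (i, j, closest separation, time)
    """
    n_stars = len(pos)
    for i, j in itertools.combinations(range(n_stars), 2):
        d = np.linalg.norm(pos[i] - pos[j])
        if d < R_ENC:
            # pair is inside the threshold: start or update an encounter
            active[(i, j)] = min(active.get((i, j), d), d)
        elif (i, j) in active:
            # pair has moved back beyond 1000 au: log a single encounter,
            # so a bound pair oscillating inside 1000 au is counted once
            log.append((i, j, active.pop((i, j)), time))
```

This brute-force check is \(O(N^{2})\) per timestep, which is perfectly affordable for the \(N=300\) and \(N=600\) clusters simulated here.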
Simulations are run with a gravitational softening length of 0.01 au to avoid collisions or computationally expensive very close encounters. This is only of importance for the details of extremely close encounters at \(\ll 10\) au which is much closer than the vast majority of encounters and at a distance that would completely disrupt discs or planetary systems.
### 3.1 Number density and velocity dispersion
It would seem reasonable to assume that encounter rates and fractions should depend in some way on both number density and velocity dispersion. Below we go into some detail on how the number density and velocity dispersion of a cluster might be quantified. This is important to show that the encounter rates we measure in simulations disagree with simple calculations because the assumptions that underlie them are wrong for clusters, rather than that we are using the 'wrong' values for number density or velocity dispersion, or not accounting for how they change with time. A reader happy to take our word for this can skip the details below.
#### 3.1.1 Number density
The derivation of equation (2) assumes that stars are uniformly distributed and so \(n\) is constant in time and space. This is a reasonable assumption for e.g. encounters in the Galactic disc, but not for encounters in a cluster (or any region where the number density varies on short length-scales).
There are a number of ways one could quantify some average number density, and they can result in very different values for the estimated encounter rate.
The first average number density we use is simply calculated from the half-mass radius of the cluster (\(r_{\rm h}\)):
\[n_{\rm h}=\frac{3N}{4\pi r_{\rm h}^{3}}. \tag{3}\]
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Cluster IDs & \(N\) & \(m\) (M\({}_{\odot}\)) & \(r_{\rm h}\) (pc) & \(t_{\rm cr}\) (Myr) \\ \hline N3SMR050-A to J & & & 0.5 & 1.0 \\ N3SMR075-A to J & 300 & 1 & 0.75 & 1.9 \\ N3SMR100-A to J & & & 1 & 3.1 \\ \hline N6SMR050-A to J & 600 & 1 & 0.5 & 0.74 \\ \hline \end{tabular}
\end{table}
Table 1: The properties of cluster ensembles. Each ensemble contains 10 clusters with different random number seeds (labelled A to J) with the same number of stars (\(N\)), stellar masses (\(m\)), half-mass radii (\(r_{\rm h}\)), and crossing time (\(t_{\rm cr}\)). The ID of a simulation contains information on the initial conditions: N3 or N6 are \(N=300\) and \(N=600\) respectively, SM stands for single-mass, and after R is the initial half-mass radius of the cluster.
The second average number density is the mean number density, defined by
\[n_{\rm m}=\frac{\int_{0}^{\infty}nf{\rm d}r}{\int_{0}^{\infty}f{\rm d}r}=\int_{0}^{\infty}nf{\rm d}r, \tag{4}\]
where \(n\) and \(f\) are the number density and the probability density function of the distance of stars from the centre of mass of the cluster (\(r\)). In practice, we can approximate the integral by
\[n_{\rm m}\simeq\Delta r\sum_{i=1}^{N_{\rm bin}}n_{i}f_{i}, \tag{5}\]
where \(\Delta r\) and \(N_{\rm bin}\) are the size and the number of bins of stellar distances from the centre of mass of the cluster.
Theoretically, the probability density function is defined as \(f={\rm d}P/{\rm d}r\), where \({\rm d}P\) is the probability of finding stars at distance between \(r\) and \(r+{\rm d}r\) from the centre of mass of the cluster. But practically, the value of \(f_{i}\) in equation (5) may be obtained from
\[f_{i}=\left(\frac{\Delta P}{\Delta r}\right)_{i}=\frac{N_{i}}{N\Delta r}, \tag{6}\]
where \(N_{i}\) is the number of stars in the \(i^{\rm th}\)-bin and \(N\) is the number of stars in the cluster. The number density \(n_{i}\) in equation (5) is related to \(f_{i}\) via
\[n_{i}=\left(\frac{\Delta N}{\Delta V}\right)_{i}=\frac{N}{4\pi r_{i}^{2}} \left(\frac{\Delta P}{\Delta r}\right)_{i}=\frac{Nf_{i}}{4\pi r_{i}^{2}}, \tag{7}\]
where \(\Delta V\) is a spherical volume element containing \(\Delta N\) stars. Substituting equations (6) and (7) in (5) gives
\[n_{\rm m}\simeq\frac{1}{4\pi N\Delta r}\sum_{i=1}^{N_{\rm bin}}\frac{N_{i}^{2} }{r_{i}^{2}}. \tag{8}\]
The half-mass number density is often used as it is simple to calculate; the more complex mean number density has the advantage of including information on the full density distribution of the cluster. We also note here that the half-mass radius is often used as a characteristic radius as it remains roughly constant over the long-term evolution of a cluster (Aarseth et al., 1974); however, the half-mass radius does fluctuate, especially at early times, and so even the half-mass density changes (sometimes by factors of several).
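For example, both estimates can be computed from a snapshot of stellar positions as in the following sketch (our own illustration of equations (3) and (8); the number of bins is an arbitrary choice):

```python
import numpy as np

def densities(pos, n_bins=30):
    """Half-mass (eq. 3) and mean (eq. 8) number densities in pc^-3.

    pos : (N, 3) array of stellar positions in pc; for equal-mass stars
          the half-mass radius is the median distance from the centre
          of mass.
    """
    N = len(pos)
    r = np.linalg.norm(pos - pos.mean(axis=0), axis=1)
    r_h = np.median(r)
    n_h = 3.0 * N / (4.0 * np.pi * r_h**3)              # equation (3)

    counts, edges = np.histogram(r, bins=n_bins)         # N_i per radial bin
    r_mid = 0.5 * (edges[:-1] + edges[1:])               # bin centres r_i
    dr = edges[1] - edges[0]                             # bin width
    n_m = np.sum(counts**2 / r_mid**2) / (4.0 * np.pi * N * dr)  # equation (8)
    return n_h, n_m
```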
#### 3.1.2 Velocity dispersion
The velocity dispersion (\(\sigma\)) in equations (1) and (2) comes from the assumption that the velocity distribution of the stars in the cluster is Maxwellian. From the Maxwell-Boltzmann distribution, the velocity dispersion is related to the mode of the distribution (\(v_{\rm m}\)) by \(\sigma=v_{\rm m}/\sqrt{2}\). The mode can simply be determined by constructing the velocity histogram and then fitting it with a polynomial regression to find the position of the peak.
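A sketch of this procedure (the number of bins and the polynomial degree are arbitrary choices of ours):

```python
import numpy as np

def velocity_dispersion(vel, n_bins=25, deg=4):
    """Sigma from the mode of the speed distribution (sigma = v_mode/sqrt(2))."""
    speeds = np.linalg.norm(vel, axis=1)                # 3D speeds (km/s)
    counts, edges = np.histogram(speeds, bins=n_bins)   # speed histogram
    mids = 0.5 * (edges[:-1] + edges[1:])
    coeffs = np.polyfit(mids, counts, deg)              # polynomial regression
    fine = np.linspace(mids[0], mids[-1], 1000)
    v_mode = fine[np.argmax(np.polyval(coeffs, fine))]  # peak of the fit
    return v_mode / np.sqrt(2.0)
```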
It should be noted that this way of finding the velocity dispersion is only possible when all (3D) velocities are well known. In any observation the value of \(\sigma\) is either 'guessed' by assuming virial equilibrium, or observed in either 1D (radial velocities) or 2D (proper motions) with usually quite significant errors and biases (e.g. binary inflation, see Cottaar et al. (2012)).
#### 3.1.3 Time-averaging
There are two ways to calculate the number density and velocity dispersion to use in equation (2). One is to take the values of \(n\) or \(\sigma\) calculated instantaneously at the end of the simulation; the other is to take a time average.
In a simulation it is possible to calculate full 3D time-averaged values for various quantities. However, to estimate encounter rates in an observed region or for a single snapshot only the current values for any quantity can be calculated, and even they might be uncertain/guesstimated (e.g. no velocity data is available and only 2D positions).
For later reference, Table 2 shows the initial, time-averaged and final (i.e. those at 10 Myr) values of \(\sigma\), \(n_{\rm h}\), and \(n_{\rm m}\) for all simulations.
## 4 Results
In our simulations we follow all encounters at distances of \(<1000\) au. We record when the encounter occurred, which two stars were involved, and the distance of closest approach. This allows us to find the encounter rate within a particular distance (i.e. what equation (2) attempts to estimate), and the number of stars that have had such an encounter - something equation (2) cannot tell us without further assumptions, but is often what we wish to know.
### 4.1 Comparing an \(N=300\) and an \(N=600\) cluster
We start by comparing encounter rates at various distances in two clusters with \(N=300\) and \(N=600\) equal-mass stars. Both are initially virialised Plummer Spheres with \(r_{\rm h}=0.5\) pc, with stars each of mass 1 M\({}_{\odot}\).
Figure 1 shows the cumulative number of encounters over 10 Myr in the \(N=300\) (top panel) and \(N=600\) (bottom panel) clusters. In each panel the lines from bottom-to-top are the cumulative numbers of encounters at \(r_{\rm e}<50\) (orange), 100 (green), 500 (magenta), and 1000 (blue) au respectively. At the top left of each sub-figure are three numbers for each of the encounter distances: the first is the actual encounter rate (Myr\({}^{-1}\)) as found in the simulation, the next two are time-averaged estimates that we will return to later, but note for now that all three numbers are often quite different. The two simulations shown are N3SMR050-A (top) and N6SMR050-A (bottom).
Figure 1 shows a number of features one might expect.
1. The total number of encounters grows roughly linearly with time (in these two clusters at least).
2. The number of encounters at different distances scales very roughly with \(r_{\rm e}^{2}\) (e.g. for \(N=300\) after 10 Myr there have been 317 encounters at \(r_{\rm e}<500\) au, and 27 at \(<50\) au).
3. Increasing both \(N\) (and therefore also \(n\)) by a factor of two results in about 4 times more encounters (e.g. 317 when \(N=300\), and 1165 when \(N=600\) at \(r_{\rm e}<500\) au).
The second and third numbers in the top left are the estimates of encounter rate as calculated from equation (2) using the time-averaged values of the half-mass number density (\(n_{\rm h}\)) and the mean number density (\(n_{\rm m}\)) respectively.
In both cases using \(n_{\rm h}\) under-estimates the number of encounters by a factor of \(\sim 2\). Using \(n_{\rm m}\) seems better, often giving a reasonable estimate (but sometimes being off by a factor of \(\sim 2\)). So, at first glance at just these two simulations, one might conclude that using \(n_{\rm m}\) often provides a reasonable estimate of the encounter rate in a cluster.
However, we have only compared two simulations which just happened to be those labelled A in our ensembles. As we show below, when we look at the whole ensemble the picture becomes _much_ more complicated, and this emphasises the importance of looking at ensembles of simulations when dealing with \(N\)-body systems.
### 4.2 An ensemble of statistically identical clusters
We now consider all ten clusters in our \(N=300\) equal-mass stars, and a half-mass radius of \(r_{\rm h}=0.5\) pc ensemble. The only difference between these clusters is the random number seed used to generate the initial positions and velocities. Therefore one would expect that the encounter rates in each would be similar - and ideally be close to an analytic estimate.
The top panel of Fig. 2 shows the final encounter rates (\(\mathcal{E}_{\rm sim}\)) for each of our identical clusters measured after 10 Myr from \(r_{\rm e}=0\) to 1000 au.1 The simulation shown in the top panel of Fig. 1 is A which is the black line towards the middle-bottom of all the lines.
Footnote 1: The exact values of encounter rates at separations of less than a few au may be affected by our softening; a close encounter did happen, but we might not be able to trust the distance of closest approach too precisely.
The most important thing to note about the top panel of Fig. 2 is that the total encounter rate after 10 Myr varies very significantly between clusters, with a difference of almost an order of magnitude. There does not appear to be a set of 'typical' clusters plus some outliers - just a seemingly random spread in encounter rates between clusters that are initially statistically identical.
Figure 1: Cumulative numbers of encounters in runs N3SMR050-A (top panel, a) and N6SMR050-A (bottom panel, b). The four curves in each panel are for encounter distances \(r_{\rm e}<50\) (orange), 100 (green), 500 (magenta), and 1000 (blue) au, from bottom to top. Numbers in the square brackets in the top left are the encounter rates [\(\mathcal{E}_{\rm sim}/\mathcal{E}_{\rm est}(n_{\rm h})/\mathcal{E}_{\rm est}(n_{\rm m})\)] (see text).
Interestingly, all of the curves in the top panel of Fig. 2 follow a distribution that goes roughly as \(r_{\rm e}^{2}\) suggesting that the distribution of encounter distances is what would be expected for unbound encounters. However, the distribution can slightly deviate from \(r_{\rm e}^{2}\), often due to three-body encounters between a single star and a binary (which has formed during the simulation). These encounters can cause an increase in the encounter rate at \(r_{\rm e}\sim 1000\) au, as can be most obviously seen in cluster I (dark red line at the top of the figure).
The other panels of Fig. 2 show the difference between the analytically estimated encounter rates and the actual encounter rates for each cluster. In the middle panel the half-mass density (\(n_{\rm h}\)) is used for the estimate, and in the bottom panel it is the mean number density (\(n_{\rm m}\)). A good match to the analytic estimate is the black dashed line at a ratio of unity.
Using the half-mass density never gives a good estimate, and can be wrong by an order of magnitude. Using the mean density is _slightly_ better - three or four clusters stay reasonably close to unity, but most clusters are always wrong by factors of several.
One obvious explanation would be that different clusters have changed significantly (some expanding and some contracting?).
In Table 2 we give the initial, time averaged and final values of \(\sigma\), \(n_{\rm h}\), and \(n_{\rm m}\) for all ten clusters, as well as the final cumulative encounter rate. The averages and variance of each quantity over all the clusters are given in the bottom line.
The velocity dispersions (\(\sigma\)) are very similar between all clusters, but measures of density can change significantly over time with quite different time averaged and final values, and between different measures (half mass or mean).
However, these variations do not seem to correlate with the vastly different encounter rates. The last three columns show the encounter rates estimated with \(n_{\rm h}\) and \(n_{\rm m}\) and then the actual encounter rates from the simulations. Only once (cluster F) does \(n_{\rm h}\) get close to predicting the actual encounter rate. Estimates using \(n_{\rm m}\) are reasonable for 5/10 of the clusters, but very wrong for 5/10. We think this was seen by Craig & Krumholz (2013) who note that their simulations had far more encounters than one might expect, but substructure complicated their analysis.
What is particularly interesting is that there is no systematic change in encounter rates with any measure of density or how they evolve. Cluster F has the lowest final mean density (484 pc\({}^{-3}\)) and the lowest encounter rate (30 Myr\({}^{-1}\)), but cluster I has the second lowest final mean density (531 pc\({}^{-3}\)), but the highest encounter rate (208 Myr\({}^{-1}\)). Clusters B and E have almost the same encounter rate, but final mean densities that are different by a factor of over two (735 and 1813 pc\({}^{-3}\)).
One might think that maybe there was some extreme deviation in density at some point in time that the time-averaged densities do not properly include; however, this is not the case. In Fig. 3 we show the number of encounters with \(r_{\rm e}<1000\) au (top panel), half-mass density (middle panel), and mean density (lower panel) for clusters D and I (cf. Fig. 1).
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline Cluster & & \multicolumn{2}{c}{\(\sigma\) (km s\({}^{-1}\))} & \multicolumn{4}{c}{\(n_{\rm h}\) (pc\({}^{-3}\))} & \multicolumn{4}{c}{\(n_{\rm m}\) (pc\({}^{-3}\))} & \multicolumn{4}{c}{\(\mathcal{E}\) at \(r_{\rm e}<1\) kau (Myr\({}^{-1}\))} \\ \cline{2-13} & Ini. & Avg. & End & Ini. & Avg. & End & Ini. & Avg. & End & \(\mathcal{E}_{\rm est}(n_{\rm h})\) & \(\mathcal{E}_{\rm est}(n_{\rm m})\) & \(\mathcal{E}_{\rm sim}\) \\ \hline N3SMR050-A & 0.57 & \(0.50\pm 0.04\) & 0.55 & 541 & \(662\pm 144\) & 786 & 1925 & \(1320\pm 475\) & 1650 & \(38\pm 8\) & \(76\pm 28\) & 77 \\ N3SMR050-B & 0.50 & \(0.51\pm 0.05\) & 0.43 & 543 & \(634\pm 112\) & 693 & 1735 & \(1018\pm 482\) & 735 & \(36\pm 7\) & \(58\pm 28\) & 58 \\ N3SMR050-C & 0.55 & \(0.49\pm 0.04\) & 0.43 & 557 & \(382\pm 66\) & 247 & 1535 & \(950\pm 780\) & 2402 & \(22\pm 4\) & \(56\pm 46\) & 131 \\ N3SMR050-D & 0.57 & \(0.54\pm 0.04\) & 0.57 & 571 & \(507\pm 92\) & 474 & 1877 & \(1396\pm 948\) & 1313 & \(28\pm 5\) & \(78\pm 53\) & 143 \\ N3SMR050-E & 0.57 & \(0.53\pm 0.06\) & 0.49 & 577 & \(647\pm 120\) & 475 & 964 & \(1232\pm 413\) & 1813 & \(36\pm 7\) & \(69\pm 24\) & 60 \\ N3SMR050-F & 0.53 & \(0.53\pm 0.05\) & 0.42 & 571 & \(682\pm 102\) & 532 & 766 & \(669\pm 174\) & 484 & \(38\pm 6\) & \(38\pm 10\) & 30 \\ N3SMR050-G & 0.49 & \(0.50\pm 0.05\) & 0.50 & 577 & \(669\pm 130\) & 480 & 1906 & \(1291\pm 537\) & 1158 & \(39\pm 8\) & \(75\pm 31\) & 90 \\ N3SMR050-H & 0.58 & \(0.51\pm 0.04\) & 0.48 & 555 & \(582\pm 125\) & 551 & 1435 & \(1710\pm 723\) & 1100 & \(33\pm 7\) & \(98\pm 42\) & 112 \\ N3SMR050-I & 0.57 & \(0.50\pm 0.04\) & 0.43 & 549 & \(437\pm 110\) & 228 & 1029 & \(956\pm 497\) & 531 & \(25\pm 6\) & \(55\pm 29\) & 208 \\ N3SMR050-J & 0.50 & \(0.50\pm 0.05\) & 0.46 & 577 & \(441\pm 103\) & 375 & 2359 & \(1583\pm 917\) & 978 & \(26\pm 6\) & \(92\pm 53\) & 198 \\ N3SMR050-A..J & \(0.54\pm 0.03\) & \(0.51\pm 0.05\) & \(0.48\pm 0.05\) & \(562\pm 14\) & \(564\pm 110\) & \(484\pm 175\) & \(1553\pm 506\) & \(1213\pm 595\) & \(1216\pm 603\) & \(32\pm 7\) & \(69\pm 34\) & \(111\pm 60\) \\ \hline \end{tabular}
\end{table}
Table 2: The first three triple columns are the initial (Ini.), time averaged (Avg.) and final (End) values of the velocity dispersion (\(\sigma\)), the half-mass number density (\(n_{\rm h}\)) and the mean number density (\(n_{\rm m}\)) of clusters N3SMR050-A..J (initial conditions in Table 1). In the last triple column are the analytic estimates of the encounter rates at \(r_{\rm e}<1000\) au, using the average half-mass number density (\(\mathcal{E}_{\rm est}(n_{\rm h})\)), and the average mean number density (\(\mathcal{E}_{\rm est}(n_{\rm m})\)), compared with the actual values of the encounter rate measured in the simulations (\(\mathcal{E}_{\rm sim}\)). Note that the encounter rate can also be calculated from the initial or final values of \(\sigma\), \(n_{\rm h}\) and \(n_{\rm m}\).
Cluster D (black line) shows a relatively linear increase in encounter numbers to end with \(\sim 1400\) encounters within 10 Myr. Cluster I (red line) is similar to cluster D until a period between 6-7 Myr when the encounter rate increases significantly.
There is no reason to think that the increased encounter rate in cluster I is due to density variations however. The middle panel shows that both cluster's half-mass densities are very similar, and both fairly constant and declining slightly. The bottom panel shows more variation in the mean densities with short-lived fluctuations of factors of a few, but both clusters show this behaviour, and, if anything, cluster D has higher densities. There are fluctuations in the mean density of cluster I around when the encounter rate increases significantly, but there are others when it does not.
Examination of the data shows that the large number of encounters in cluster I at 6-7 Myr is due to a few pairs of stars having multiple self-encounters in weakly bound pairs (cf. Moeckel & Clarke, 2011).
In all the ensembles of statistically identical clusters we have run we find no systematic relationship between any measure of cluster density and encounter rates (apart from occasionally in just a few of the clusters, but these could be chance given that many fluctuations do not correlate).
#### 4.2.1 Binaries
All our simulations start with no binaries. An interesting question is how many binaries can form, and how they might alter the evolution.
Soft binaries are extremely easy to form (see Moeckel & Clarke, 2011), and can inflate the encounter rate. Any wide binary with periastron below 1000 au and apastron above 1000 au will be included as multiple encounters; however, such binaries are extremely soft and short lived in our simulations (this was seen in cluster I).
Hard binaries, however, are _much_ more difficult to form. The soft binaries we find are very weakly bound and can appear due to fluctuations in the global potential. However, to form a hard, long-lived, binary system requires a three-body encounter as the third body is needed to carry away the excess energy (see Goodman & Hut, 1993). A back-of-the-envelope calculation suggests hard binary formation should be rare in our clusters, and an examination of the simulations finds only a few hard binaries have managed to form (one every few simulations, and never more than one in a simulation).
### 4.3 The number of stars having had an encounter
We clearly see that the _total number of encounters_ between stars at any particular distance can be different by maybe an order of magnitude in initially statistically identical clusters, in a way that cannot be explained by density fluctuations.
The encounter rates we have shown in Fig. 2 and Table 2 are the number of times two stars come closer together than a particular distance. However, this measure does not include information on if a particular star, or a particular pair of stars, have had multiple encounters.
Star clusters have a density distribution with a high density core and increasingly lower density as one moves outwards, and a Plummer profile is a reasonable approximation to young, bound clusters.
Figure 2: Top panel: the final encounter rates (\(\mathcal{E}_{\rm sim}\)) against the encounter distance (\(r_{\rm e}\)) from each cluster in the \(N=300\), \(r_{\rm h}=0.5\) pc ensemble. Each cluster has a different colour as shown in the top left. Middle and bottom panels: the ratio of the analytic estimate to the actual encounter rate using the half-mass density (middle panel), and mean density (bottom panel).
Within this density distribution stars can have a variety of orbits (which can change after encounters). Some stars will spend a significant amount of their time in the high density core, some will spend most of their time in the low density halo, and various combinations in between (orbits can be radial or circular etc. and can change over time).
This means that each individual star will have a unique encounter history. Those that spend a lot of time in the core may have multiple encounters, while those in the halo may have none. In addition (as seen above), some stars may get into loosely bound multiples (cf. Moeckel & Clarke, 2011) and potentially have numerous self-encounters which can inflate the encounter rate significantly (see above).
Despite the large variation in encounter rates, when we measure the encounter _fraction_ in each of our ten clusters we find that this value is statistically the same. In Table 3 we show the number (\(N_{\rm s}\)) and fraction (\(f_{\rm s}\)) of stars in each of the ten statistically identical clusters from Fig. 2 and Table 2 that have had an encounter within 1000 au after 10 Myr. This number is between 143 and 170 (157\(\pm\)9) - statistically consistent with being the same number, and a little over half the stars in the cluster at \(f_{\rm s}=0.52\pm 0.03\).
This tells us that in all clusters in this ensemble _the same fraction of stars are having very different numbers of encounters_.
We can also examine other ensembles of statistically identical clusters and we find that the encounter fraction in different clusters in the same ensemble is statistically the same.
The means and variances of encounter fractions for each ensemble are given in Table 4, but to summarise for \(r_{\rm e}<1000\) au: for \(N=300\) clusters with \(r_{\rm h}=0.75\) pc, the encounter fraction is \(0.38\pm 0.02\); for \(N=300\) clusters with \(r_{\rm h}=1\) pc, the encounter fraction is \(0.28\pm 0.03\); and for \(N=600\) clusters with \(r_{\rm h}=0.5\) pc, the encounter fraction is \(0.60\pm 0.02\).
We do not present the data here in detail, but the same is true for different encounter distances: i.e. the encounter fraction is lower when the distance is e.g. 500 au, but the encounter fraction is statistically the same within each ensemble. (It is difficult to say anything about extremely close encounters as we are into small-\(N\) statistics.)
That the encounter fraction is constant is extremely interesting, as the most useful measure of encounters is often _how many_ stars have had at least one encounter closer than a particular distance over a particular timescale.
#### 4.3.1 Encounter fractions in different clusters
As we saw above, within an ensemble of statistically identical clusters the fraction of stars that have at least one encounter within 1000 au within 10 Myr is statistically the same, but it is different between different ensembles.
The top panel of Fig. 4 shows how the encounter fraction, \(f_{\rm s}\), increases with (absolute) time for all of our ensembles. The blue line and shaded region which shows the variance at the top are for the \(N=600\) clusters with \(r_{\rm h}=0.5\) pc. The red line and shaded region are for the \(N=300\) clusters with \(r_{\rm h}=0.5\) pc. The purple line and shaded region are for the \(N=300\) clusters with \(r_{\rm h}=0.75\) pc. And at the bottom the green line and shaded region are for the \(N=300\) clusters with \(r_{\rm h}=1\) pc.
As can be seen, in each case the encounter fraction within ensembles evolves in the same general way - rising rapidly and then flattening - but different ensembles seem to evolve at different rates.
Figure 3: The evolution of encounter rates and density for cluster N3SMR050-D (black lines) and cluster N3SMR050-I (red lines). Top panel (a): the number of encounters with time for \(r_{\rm e}<1000\) au. Middle panel (b): the half-mass density \(n_{\rm h}\). Bottom panel (c): the mean density \(n_{\rm m}\).
That encounters occur at different rates in these different ensembles should not be a surprise as each of the clusters has a different internal dynamical timescale set by its crossing time. In the middle panel of Fig. 4 we show the encounter fractions by crossing time, rather than by physical time, and the differences between the different ensembles become less pronounced. That the (green) \(N=300\) clusters with \(r_{\rm h}=1\) pc have had the fewest encounters is clearly to a large extent because these clusters are dynamically much younger.
However, it is clearly not just dynamical age that is important as the lines are still somewhat different. The reason for this is that the cluster size plays a role. In all of these simulations we are counting encounters within 1000 au which is a more significant fraction of the distances between stars in an \(r_{\rm h}=0.5\) pc cluster than in an \(r_{\rm h}=1\) pc cluster. So, we would expect the encounter timescale to also be sensitive to the relative impact parameter \((r_{\rm h}/r_{\rm e})^{2}\).
In the bottom panel of Fig. 4, we plot encounter fraction against an 'encounter crossing time', \(t_{\rm cr}^{*}\), defined as
\[t_{\rm cr}^{*}=t_{\rm cr}\left(\frac{r_{\rm h}/{\rm pc}}{r_{\rm e}/1000\,{\rm au }}\right)^{2}. \tag{9}\]
Now in the bottom panel we appear to have found a timescale on which all clusters show extremely similar behaviour. For encounters within 1000 au there is a sharp rise in \(f_{\rm s}\) in the first 5 \(t_{\rm cr}^{*}\) to a point where roughly a third of all stars have had an encounter. It then takes another \(\sim 50\)\(t_{\rm cr}^{*}\) for the next third of stars to have an encounter.
To test the scaling with \((r_{\rm h}/r_{\rm e})^{2}\), in Fig. 5 we compare encounters within 500 au in an \(r_{\rm h}=0.5\) pc cluster (red) with encounters within 1000 au in an \(r_{\rm h}=1\) pc cluster (green). Here \((r_{\rm h}/r_{\rm e})^{2}\) is the same in both clusters (half the encounter distance, but half the half-mass radius), therefore we would expect the growth of the different
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Cluster & \(N\) & \(r_{\rm h}\) & \(t_{\rm cr}\) & \(N_{\rm s}\) & \(f_{\rm s}\) \\ \hline N3SMR050-A..J & 300 & \(0.503\pm 0.004\) & \(1.04\pm 0.01\) & \(157\pm 9\) & \(0.52\pm 0.03\) \\ N3SMR075-A..J & 300 & \(0.751\pm 0.002\) & \(1.90\pm 0.01\) & \(113\pm 5\) & \(0.38\pm 0.02\) \\ N3SMR100-A..J & 300 & \(1.012\pm 0.021\) & \(2.98\pm 0.09\) & \(84\pm 8\) & \(0.28\pm 0.03\) \\ N6SMR050-A..J & 600 & \(0.503\pm 0.004\) & \(0.74\pm 0.01\) & \(361\pm 13\) & \(0.60\pm 0.02\) \\ \hline \end{tabular}
\end{table}
Table 4: For each ensemble (column 1) with \(N\) stars (column 2), we show the mean and variance of the half-mass radii \(r_{\rm h}\) (column 3), crossing times \(t_{\rm cr}\) (column 4), and the total number \(N_{\rm s}\) (column 5) and fraction \(f_{\rm s}\) (column 5) of stars that have had an encounter within 1000 au in 10 Myr. Half-mass radii and crossing times are calculated explicitly for each cluster.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Cluster & \(N\) & \(r_{\rm h}\) & \(t_{\rm cr}\) & \(N_{\rm s}\) & \(f_{\rm s}\) \\ \hline N3SMR050-A & 300 & \(0.510\) & \(1.06\) & \(162\) & \(0.54\) \\ N3SMR050-B & 300 & \(0.509\) & \(1.06\) & \(163\) & \(0.54\) \\ N3SMR050-C & 300 & \(0.505\) & \(1.05\) & \(143\) & \(0.48\) \\ N3SMR050-D & 300 & \(0.500\) & \(1.03\) & \(155\) & \(0.52\) \\ N3SMR050-E & 300 & \(0.499\) & \(1.03\) & \(160\) & \(0.53\) \\ N3SMR050-F & 300 & \(0.501\) & \(1.03\) & \(164\) & \(0.55\) \\ N3SMR050-G & 300 & \(0.499\) & \(1.03\) & \(170\) & \(0.57\) \\ N3SMR050-H & 300 & \(0.505\) & \(1.05\) & \(153\) & \(0.51\) \\ N3SMR050-I & 300 & \(0.507\) & \(1.05\) & \(144\) & \(0.48\) \\ N3SMR050-J & 300 & \(0.499\) & \(1.03\) & \(157\) & \(0.52\) \\ \hline N3SMR050-A..J & 300 & \(0.503\pm 0.004\) & \(1.04\pm 0.01\) & \(157\pm 9\) & \(0.52\pm 0.03\) \\ \hline \end{tabular}
\end{table}
Table 3: For each cluster in the \(N=300\), \(r_{\rm h}=0.5\) pc ensemble (column 1) with \(N\) stars (column 2), we show the half-mass radius \(r_{\rm h}\) (column 3), crossing time \(t_{\rm cr}\) (column 4), and the total number \(N_{\rm s}\) (column 5) and fraction \(f_{\rm s}\) (column 6) of stars that have had an encounter within 1000 au in 10 Myr. Half-mass radii and crossing times are calculated explicitly for each cluster. The final row shows the means and variances of each quantity over the ensemble.
Note that there are various subtleties at play, such as different velocity dispersions causing the effect of gravitational focusing to differ, and we are dealing with small-\(N\) stochastic systems. But overall, the agreement in the evolution between different clusters within ensembles, and between very different ensembles, is impressive when scaled by crossing time and relative encounter cross section.
#### 4.3.2 An empirical relationship
The bottom panel of Fig. 4 provides a way of getting a rough estimate of the encounter fraction of stars, \(f_{\rm s}\), at a particular encounter distance, \(r_{\rm e}\), after some time, \(t\), in any cluster, if one knows the crossing time, \(t_{\rm cr}\), and half-mass radius, \(r_{\rm h}\). From equation (9) one can then calculate \(t_{\rm cr}^{*}\), and so \(t/t_{\rm cr}^{*}\).
It is worth pointing out that the curve in the bottom panel looks as though it should be described by a fairly simple function. However, we have struggled to find a simple (2- or 3-parameter) function that fits (the problem is that the initial rise is much steeper than, e.g., an exponential can match). Therefore we suggest simply reading off the value of \(f_{\rm s}\) from the bottom panel of Fig. 4 for whatever value of \(t/t_{\rm cr}^{*}\) is of interest.
We stress that this is a rough estimate. However, rough-and-ready as it may be, it will still almost certainly provide a _much_ better feeling for how many stars have had an encounter than any attempt to use equation (2) and then extrapolate to an encounter fraction.
#### 4.3.3 An example
We can take a roughly Orion Nebula-like cluster with \(M=1000\) M\({}_{\odot}\), \(N=2500\), a half-mass radius \(r_{\rm h}=0.7\) pc, and age 3 Myr, and attempt to estimate what fraction of stars have had an encounter at \(<1000\) au. For such a cluster, \(n=1800\) pc\({}^{-3}\) and, assuming virial equilibrium, \(\sigma=2.6\) km s\({}^{-1}\), so \(t_{\rm cr}=0.27\) Myr.
From equation (9) if \(r_{\rm e}=1000\) au, then \(t_{\rm cr}^{*}=0.5t_{\rm cr}=0.14\) Myr. Therefore this cluster has an 'encounter age' of \(t/t_{\rm cr}^{*}\sim 20\). From the bottom panel of Fig. 4 this would suggest around 40 per cent of stars will have had an encounter within 1000 au.
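For readers who want to reproduce this arithmetic, the short Python sketch below evaluates equation (9) with the values quoted above; the small difference from the quoted \(t/t_{\rm cr}^{*}\sim 20\) comes only from rounding \((0.7)^{2}\) to \(0.5\).

```python
# Worked example from the text: an Orion-like cluster with r_h = 0.7 pc,
# t_cr = 0.27 Myr, age 3 Myr, and encounters within r_e = 1000 au.
r_h, r_e = 0.7, 1000.0        # pc, au
t_cr, age = 0.27, 3.0         # Myr

t_cr_star = t_cr * (r_h / (r_e / 1000.0)) ** 2   # equation (9): ~0.13 Myr
print(age / t_cr_star)        # 'encounter age' t/t_cr* ~ 23 (text quotes ~20)
# Reading f_s off the bottom panel of Fig. 4 at this value gives ~40 per cent.
```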
Figure 4: Encounter fractions for each ensemble against the absolute time \(t\) (top panel), crossing time \(t_{\rm cr}\) (middle panel), and the encounter crossing times \(t_{\rm cr}^{*}\) (bottom panel, see text). The top panel contains the legend for the IDs of each ensemble. The line is the mean value, and the shaded region shows the variance for \(N=600\) clusters with \(r_{\rm h}=0.5\) pc (blue), \(N=300\) clusters with \(r_{\rm h}=0.5\) pc (red), \(N=300\) clusters with \(r_{\rm h}=0.75\) pc (purple), and \(N=300\) clusters with \(r_{\rm h}=1\) pc (green).

Figure 5: The encounter fractions against crossing time with \(r_{\rm e}<1000\) au for \(N=300\), \(r_{\rm h}=1\) pc (green shaded region), and with \(r_{\rm e}<500\) au for \(N=300\), \(r_{\rm h}=0.5\) pc (red shaded region).

If we use equation (2), we find \({\cal E}\sim 1000\) Myr\({}^{-1}\). For an age of 3 Myr this suggests 3000 encounters among the \(N=2500\) stars. An extremely naive extrapolation might suggest that therefore all stars have had an encounter within 1000 au. One can be a little more sophisticated and assume that encounters pick stars at random, which gives \(\sim 90\) per cent of stars having had an encounter (each of the 3000 encounters involves two stars, so even if partners are chosen randomly, most stars will have been involved in at least one). But even if the value of 3000 encounters in 3 Myr happened, by luck, to be right, the extension of this encounter number to the number of stars involved in encounters is certainly not random.
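One plausible way to reproduce the \(\sim 90\) per cent figure (the text does not spell out the calculation, so this is our reading) is to assume each of the \(N_{\rm enc}=3000\) encounters selects a pair of stars uniformly at random, giving

\[f_{\rm s}\approx 1-\left(1-\frac{2}{N}\right)^{N_{\rm enc}}\approx 1-e^{-2N_{\rm enc}/N}=1-e^{-2.4}\approx 0.91.\]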
## 5 Conclusions
We have performed \(N\)-body simulations of small star clusters to investigate stellar encounters with separations \(r_{\rm e}<1000\) au. This is the regime in which discs, planetary systems, and multiple stellar systems can be significantly perturbed or destroyed.
We find that the encounter _rates_ vary by up to an order of magnitude between statistically identical clusters. However, we find that the _fraction_ of stars that have had an encounter is statistically the same within statistically identical clusters.
The fraction of stars that have had an encounter increases rapidly at early dynamical times before flattening significantly once stars in orbits particularly susceptible to encounters have had at least one encounter. This depends on both the dynamical timescale of the cluster (\(t_{\rm cr}\)), and the relative impact parameter \((r_{\rm h}/r_{\rm e})^{2}\).
We find a consistent, and reasonably tight, relationship between the fraction of stars that have had an encounter and a modified crossing time \(t_{\rm cr}^{*}\propto t_{\rm cr}(r_{\rm h}/r_{\rm e})^{2}\).
The relationship we have found has a seemingly solid physical basis, but no detailed theoretical underpinning (we are working on this). However, it provides a simple way of extracting an estimate of the encounter fraction for a particular cluster of a particular age from a figure. While this is empirical, it almost certainly provides a _much_ better estimate of the true encounter fraction than any attempt to apply standard theory.
## Acknowledgments
SG was partly funded by STFC consolidated grant ST/V000853/1.
|
2308.12819 | DiCA: A Hardware-Software Co-Design for Differential Checkpointing in
Intermittently Powered Devices | Intermittently powered devices rely on opportunistic energy-harvesting to
function, leading to recurrent power interruptions. This paper introduces DiCA,
a proposal for a hardware/software co-design to create differential
check-points in intermittent devices. DiCA leverages an affordable hardware
module that simplifies the check-pointing process, reducing the check-point
generation time and energy consumption. This hardware module continuously
monitors volatile memory, efficiently tracking modifications and determining
optimal check-point times. To minimize energy waste, the module dynamically
estimates the energy required to create and store the check-point based on
tracked memory modifications, triggering the check-pointing routine optimally
via a nonmaskable interrupt. Experimental results show the cost-effectiveness
and energy efficiency of DiCA, enabling extended application activity cycles in
intermittently powered embedded devices. | Antonio Joia Neto, Adam Caulfield, Chistabelle Alvares, Ivan De Oliveira Nunes | 2023-08-24T14:23:10Z | http://arxiv.org/abs/2308.12819v2 | _D_iCA: A Hardware-Software Co-Design for Differential Check-Pointing in Intermittently Powered Devices
###### Abstract
Intermittently powered devices rely on opportunistic energy-harvesting to function, leading to recurrent power interruptions. Therefore, check-pointing techniques are crucial for reliable device operation. Current strategies involve storing snapshots of the device's state at specific intervals or upon events. Time-based check-pointing takes check-points at regular intervals, providing a basic level of fault tolerance. However, frequent check-point generation can lead to excessive/unnecessary energy consumption. Event-based check-pointing, on the other hand, captures the device's state only upon specific trigger events or conditions. While the latter reduces energy usage, accurately detecting trigger events and determining optimal triggers can be challenging. Finally, differential check-pointing selectively stores state changes made since the last check-point, reducing storage and energy requirements for the check-point generation. However, current differential check-pointing strategies rely on software instrumentation, introducing challenges related to the precise tracking of modifications in volatile memory as well as added energy consumption (due to instrumentation overhead).
This paper introduces _D_iCA, a proposal for a hardware/software co-design to create differential check-points in intermittent devices. _D_iCA leverages an affordable hardware module that simplifies the check-pointing process, reducing the check-point generation time and energy consumption. This hardware module continuously monitors volatile memory, efficiently tracking modifications and determining optimal check-point times. To minimize energy waste, the module dynamically estimates the energy required to create and store the check-point based on tracked memory modifications, triggering the check-pointing routine optimally via a non-maskable interrupt. Experimental results show the cost-effectiveness and energy efficiency of _D_iCA, enabling extended application activity cycles in intermittently powered embedded devices.
Intermittent Computing, Energy Harvesting, Check-pointing
## I Introduction
In contrast to traditional devices that depend on batteries or external power sources, energy harvesting devices capitalize on ambient energy from the surrounding environment to fuel their operations. By leveraging opportunistic energy sources such as solar power [19] and kinetic energy [8, 9], intermittent computing enables energy harvesting devices to operate under unpredictable power interruptions. By eliminating the need for batteries, these devices become more sustainable and environmentally friendly, as they reduce electronic waste [30]. Moreover, battery-less devices offer increased convenience and autonomy since they do not require frequent battery replacements or recharging. This allows for significantly reduced device size and weight, enabling sleeker and more compact designs [18]. By removing the need for large batteries, battery-less devices open up new possibilities for miniaturization and integration into various applications [9].
On the other hand, power disruptions on intermittent devices present challenges to reliable operation, including data loss, difficulties in state preservation, task resumption, and system instability [22, 28]. As a result, the execution of applications in intermittent devices follows cyclic patterns, where task execution occurs during periods of power availability, followed by power depletion. These cycles require strategies to maintain state across power depletion cycles, ensuring correct operation.
A _naive_ solution to this problem is the exclusive use of Non-Volatile Memory (NVM), such as FRAM, to store all data. While a completely FRAM-based solution ensures reliability, it increases energy consumption due to increased access latency when compared to volatile memory, such as SRAM. Conversely, an exclusively SRAM-based implementation offers high energy efficiency while lacking reliability across power depletion cycles [7, 17]. An alternative approach is to integrate check-pointing into the execution cycles of intermittent devices. Check-pointing involves regularly saving the volatile system state to NVM, creating snapshots that capture the current execution context. Therefore, after power depletion, the system can restore the latest check-point and resume execution properly.
One approach to implement the check-pointing is to modify the original software with additional logic to determine when a minimum energy level has been reached and save the program context to NVM [11, 29, 31]. Although functional, these techniques also extend the program's runtime and thus incur additional energy costs, reducing the original application's activity cycle. Alternative techniques propose one-time checkpointing [6, 7] that is triggered when the supplied voltage falls below a certain threshold. However, the latter does not track the modifications made to the volatile memory (VM), requiring the entire VM to be copied to NVM at each check-point. This process also incurs a significant energy cost.
Differential check-pointing is an approach aimed at minimizing the amount of data copied from the VM to the NVM. The fundamental concept is to track modified memory addresses and copy only the modified memory blocks to NVM for each check-point. This approach has led to a notable reduction in both run-time and energy costs associated with
check-pointing and has been systematically employed in prior work [2, 4, 10]. However, prior differential check-pointing schemes are implemented by instrumenting the application source code, still introducing energy and run-time overhead.
### _Contributions_
Given the popularity of intermittent computing applications, we argue that future devices could be manufactured with minimal hardware support to facilitate check-pointing and reduce associated energy and run-time costs. With that premise in mind, we propose \(\mathcal{D}\)iCA: a Differential Check-point Assistant based on a hardware/software co-design. \(\mathcal{D}\)iCA eliminates software instrumentation and application code modifications and dynamically determines optimal differential check-pointing times based on the amount of memory to be saved to NVM. More specifically, \(\mathcal{D}\)iCA comprises:
* An inexpensive hardware module - called the Memory Modification Tracker (MMT) - used to efficiently mark modified segments in VM. This module enables tracking of differential memory modifications without requiring any instrumentation or code modification, simplifying and optimizing the check-point generation.
* A new interrupt source that optimizes check-point timings and reduces energy consumption. This approach dynamically estimates the minimal supply voltage required to perform the check-point based on the number of segments modified in VM. When the dynamically defined threshold is reached, \(\mathcal{D}\)iCA hardware generates a non-maskable interrupt to initiate the check-pointing procedure.
* A software routine that interacts with the \(\mathcal{D}\)iCA hardware to copy appropriate memory segments from VM to NVM based on the optimal parameters configured by \(\mathcal{D}\)iCA hardware module.
The perceived impracticality of applying hardware modifications to real devices has been a significant obstacle for check-pointing techniques, leading to a reliance on software instrumentation. Our approach is rooted in the premise that minimal hardware modifications are both feasible and realistic considering the recent popularity of and demand for energy-harvesting devices. We believe that, given their distinct purposes, these devices can benefit from simple, practical, and inexpensive hardware modifications that enhance their performance without requiring massive architectural overhauls.
### _Scope_
Battery-less and intermittent computing devices are typically implemented with micro-controller units (MCUs) that run software at bare-metal, are low-cost, and are energy efficient. In this work, we focus on ultra-low-energy MCUs (e.g., Atmel AVR ATMega [5], TI MSP430 [16]), which feature single-core \(8\)- or \(16\)-bit CPUs running at low clock frequencies (usually \(1\) to \(16\) MHz). They use between \(4\) and \(16\) KBytes of SRAM as VM, while the rest of the address space is available for NVM. We implement the \(\mathcal{D}\)iCA prototype atop an open-source version of the TI MSP430 from openCores [15].
### _Organization_
This paper is structured as follows. Section II presents the high-level ideas in \(\mathcal{D}\)iCA design. Section III delves into the details of \(\mathcal{D}\)iCA architecture and specifies \(\mathcal{D}\)iCA hardware and software components. Section IV discusses the implementation of \(\mathcal{D}\)iCA prototype, the experimental set-up, and presents \(\mathcal{D}\)iCA empirical evaluation. Section V discusses related work and Section VI concludes the paper.
## II \(\mathcal{D}\)iCA High-Level Overview
\(\mathcal{D}\)iCA is a hardware/software co-design. It includes an inexpensive hardware module that tracks modified VM segments. Compared to methods that perform this tracking in software, it reduces energy consumption, enabling more instructions belonging to the original application to be executed per power cycle. \(\mathcal{D}\)iCA hardware also implements a new interrupt source that triggers the check-point generation, i.e., the process of copying modified VM segments to NVM, based on the estimated required time and available energy. Figure 1 illustrates \(\mathcal{D}\)iCA architecture.
### \(\mathcal{D}\)iCA _Hardware_
\(\mathcal{D}\)iCA hardware has two sub-modules: Memory Modification Tracker (MMT) and Voltage Threshold Tracker (VTT).
MMT divides the volatile memory into blocks. It detects whether these blocks have been written to by monitoring two internal CPU signals: the write-enable bit (denoted \(W_{en}\)), which indicates whether the MCU is writing to memory, and the \(D_{addr}\) signal, which gives the memory address being written when \(W_{en}=1\). Whenever \(W_{en}\) is active, \(\mathcal{D}\)iCA takes the value of \(D_{addr}\) and uses it as an index to set a bit in a register vector called the Dirty-bits Table (DTable), indicating that the block to which the address \(D_{addr}\) belongs has been modified. Figure 2 depicts \(\mathcal{D}\)iCA updating DTable as VM blocks are modified. To track the differential changes, \(\mathcal{D}\)iCA also has a controller that allows software to reset DTable after loading the prior check-point.

Fig. 1: Illustration of \(\mathcal{D}\)iCA’s high level architecture

Fig. 2: Illustration of DTable memory tracking
The Voltage Threshold Tracker (VTT) generates an interrupt signal based on how many memory segments in VM have been modified (as detected by MMT). It dynamically calculates a threshold supply-voltage value, denoted \(V_{ths}\). The value of \(V_{ths}\) is determined by counting the number of modified VM memory blocks, i.e., the number of active bits in DTable, so as to allow sufficient time for the check-pointing routine. When the supplied voltage falls below \(V_{ths}\), indicating an impending power depletion, a non-maskable interrupt is triggered.
### \(\mathcal{D}\)iCA _Software_
\(\mathcal{D}\)iCA's software component is implemented as an interrupt service routine (ISR) associated with the VTT-generated interrupt. Based on DTable, the ISR captures a snapshot of the modified segments in VM and the CPU registers, and copies them to NVM. After successful check-point generation, the system is powered off until the energy-harvesting component recharges the power supply. When the supplied voltage reaches a full-charge threshold (denoted \(V_{full}\)), the MCU restarts and \(\mathcal{D}\)iCA software restores the most recent check-point, allowing the system to resume operation from the suspended state.
## III \(\mathcal{D}\)iCA in Details
This section details \(\mathcal{D}\)iCA's design. For quick reference, Table I summarizes the notation used in the rest of the paper.
### _Memory Modification Tracker (MMT)_
MMT is a hardware sub-module designed, in its simplest form, to monitor the memory locations written by the CPU. Central to this component is the DTable, a peripheral that enables efficient differential check-point generation, obviating the need for instrumentation or application-specific code modifications.
DTable is a bit-vector where each bit maps, sequentially, to a memory block in VM. Memory blocks are of a pre-defined size denoted as \(\mathsf{VM}_{size}^{block}\). DTable's size is determined by the total size of the VM (\(\mathsf{VM}_{size}\)) divided by \(\mathsf{VM}_{size}^{block}\). \(\mathsf{VM}_{size}^{block}\) defines the granularity of memory tracking and is a design parameter chosen at MCU manufacturing time. To simplify the hardware implementation, we restrict \(\mathsf{VM}_{size}^{block}\) to powers of 2.
MMT monitors the CPU signals \(W_{en}\) and \(D_{addr}\), which are used by the CPU to write to memory. As part of the underlying CPU behavior, \(W_{en}\) is set to \(1\) whenever a write access to memory occurs, whereas \(D_{addr}\) contains the address of the memory location being written when \(W_{en}=1\). Therefore, whenever \(D_{addr}\) is within VM and \(W_{en}=1\), MMT determines a DTable index (\(Addr\)) by shifting the relative address within VM (\(D_{addr}-\textsf{VM}_{min}\)) right by BSS bits, where \(BSS=\log_{2}\textsf{VM}_{size}^{block}\). Then MMT sets the bit corresponding to \(Addr\) in DTable to \(1\). Figure 2 illustrates DTable's functionality. The specification of MMT's basic behavior is presented in Definition 1 (see Section III-B for the extended version of MMT that ignores de-allocated stack frames during check-point generation).
**Definition 1**: _Memory Modification Tracker Model_
\[i\in[1,\textsf{DTable}_{size}]\] \[Addr:=(D_{addr}-\textsf{VM}_{min})\gg BSS\] \[\textsf{DTable}[i]:=\begin{cases}0&\text{if}\quad reset\\ 1&\text{if}\quad(i=Addr)\quad\wedge\\ &(D_{addr}\in\textsf{VM})\wedge W_{en}\\ \textsf{DTable}[i]&\text{Otherwise}\end{cases}\]
**Rationale**. MMT detects differential memory changes between the last check-point and the current memory state. This feature reduces the number of memory blocks that must be copied to NVM in the next check-point, as unmodified data remains consistent in NVM and need not be copied. To support this functionality, the MMT module incorporates a control bit (\(reset\)) that clears DTable on each power cycle (i.e., at MCU boot). Figure 3 illustrates the differential check-pointing process across subsequent check-points based on DTable.
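To make the rule in Definition 1 concrete, the following Python sketch models the bit-setting behaviour in software. The address map (SRAM at \(\textsf{VM}_{min}=\)0x2000, 8 KiB of VM) and the 128-Byte block size are illustrative values matching the MSP430FR2476 prototype of Section IV; they are assumptions, not part of the definition, and the sketch is a behavioural model rather than the Verilog implementation.

```python
# A minimal software model of the MMT bit-setting rule in Definition 1.
# VM_MIN, VM_SIZE and BLOCK_SIZE are assumed parameters (see lead-in).
VM_MIN = 0x2000
VM_SIZE = 8 * 1024
BLOCK_SIZE = 128                       # VM_size^block, a power of 2
BSS = BLOCK_SIZE.bit_length() - 1      # BSS = log2(VM_size^block) = 7
DTABLE_SIZE = VM_SIZE // BLOCK_SIZE    # 64 entries

dtable = [0] * DTABLE_SIZE

def on_write(d_addr: int, w_en: bool) -> None:
    """Mirror the MMT rule: mark the block containing d_addr as modified."""
    if w_en and VM_MIN <= d_addr < VM_MIN + VM_SIZE:
        dtable[(d_addr - VM_MIN) >> BSS] = 1

def reset_dtable() -> None:
    """Clear DTable on each power cycle (the controller's 'reset' bit)."""
    for i in range(DTABLE_SIZE):
        dtable[i] = 0
```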
### _Extending MMT to Ignore De-Allocated Stack Frames_
MMT basic design only sets DTable bits to track memory modifications. However, it does not clear DTable if a memory block is no longer in use by the program. Since functions are called (and returned from) multiple times in most programs, MMT basic design would check-point function stack frames that are no longer in use. Considering nested function calls, this check-pointing approach would unnecessarily include a large number of memory blocks related to stack frames that are no longer in use. To address this issue, MMT is extended
to clear DTable bits associated with stack frames of functions upon their completion.
The stack frame cleaning is depicted in Figure 4. It uses a bit mask, called \(SF_{mask}\), that masks DTable entries, indicating whether the corresponding DTable bit is no longer in use. \(SF_{mask}\) values are determined based on two values. The first, \(SP_{LIM}\), defines the lowest memory address\({}^{1}\) that can be used for stack allocation in the MCU. The second, \(SP\), is the current stack pointer, i.e., a CPU signal that contains the address of the top of the currently allocated stack. Therefore, at any given time, the memory region between \(SP\) and \(SP_{LIM}\) is unallocated.
Footnote 1: The use of the “lowest memory address” as the limit assumes that the stack grows downwards in the underlying MCU architecture.
To produce \(SF_{mask}\), the relative memory positions of \(SP\) and \(SP_{LIM}\) with respect to the VM are computed by subtracting VM\({}_{min}\) from both input parameters. VM\({}_{min}\) indicates the lowest memory address of VM. Next, the indices that correspond to these relative memory addresses in DTable are obtained by shifting the resulting values by \(BSS\). This generates the indices \(ID_{SP}\) and \(ID_{SP_{LIM}}\). The mask is then generated by setting all indices between \(ID_{SP}\) and \(ID_{SP_{LIM}}\) to 0 and leaving all others as 1. MMT behavior, extended with the stack frame cleaner, is specified in Definition 2.
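The following is a behavioural sketch of the mask construction, reusing the constants and `dtable` of the previous snippet. The `sp` and `sp_lim` arguments stand in for the CPU signals; the stack is assumed to grow downwards, and the block containing \(SP\) itself is conservatively kept, since it may be partially in use.

```python
def sf_mask(sp: int, sp_lim: int) -> list:
    id_sp = (sp - VM_MIN) >> BSS          # ID_SP
    id_sp_lim = (sp_lim - VM_MIN) >> BSS  # ID_SP_LIM
    # Blocks between ID_SP_LIM and ID_SP map to unallocated stack space.
    return [0 if id_sp_lim <= i < id_sp else 1 for i in range(DTABLE_SIZE)]

def clean_stack_frames(sp: int, sp_lim: int) -> None:
    """Clear DTable bits of de-allocated stack frames (Definition 2)."""
    mask = sf_mask(sp, sp_lim)
    for i in range(DTABLE_SIZE):
        dtable[i] &= mask[i]
```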
### _Voltage Threshold Tracker_
To reduce power consumption, \(\mathcal{D}\)iCA establishes the ideal moment to start the check-pointing routine. To this end, it implements a non-maskable interrupt source that is triggered when the system's supplied voltage (\(V_{supply}\)) falls below a dynamically defined threshold (\(V_{ths}\)). \(V_{ths}\) serves as a proxy for the amount of energy required for the check-pointing routine to fully execute before \(V_{supply}\) drops to a level that is insufficient to sustain MCU operation. \(V_{ths}\) is calibrated based on the number (\(n_{d}\)) of active bits in DTable (see Section III-D for details). When \(V_{ths}\) is reached, the interrupt is triggered.
The default value of \(n_{d}\) is \(0\) after loading a check-point. It is incremented by one whenever a DTable value changes from 0 to 1 (as detected with the operation \([\neg\textsf{DTable}[Addr]\wedge W_{en}]\)). Conversely, when bits in DTable are cleared (due to stack frame cleaning), \(n_{d}\) is decremented. To determine the number of bits cleared due to stack frame cleaning, \(\mathcal{D}\)iCA takes the previous stack pointer index, denoted \(ID_{SP}^{t-1}\), and subtracts it from the current index \(ID_{SP}^{t}\). If the result is greater than zero, it is subtracted from \(n_{d}\). The value of \(n_{d}\) is then used to compute \(V_{ths}\), as detailed in Section III-D. The interrupt signal generation is specified in Definition 3.
**Definition 3**: _Voltage Threshold Tracker Model:_
* Stack Frame reduction counter: \[ID_{d}:=ID_{SP}^{t}-ID_{SP}^{t-1}\]
* Dtable bit counter: \[n_{d}:=\begin{cases}0&\text{if }reset\\ n_{d}+1&\text{elif }(\neg\textsf{DTable}[Addr]\wedge W_{en})\\ n_{d}-ID_{d}&\text{elif }(ID_{d}>0)\\ n_{d}&\text{Otherwise}\end{cases}\]
* Interrupt Signal: \[IT_{sig}\gets V_{supply}<V_{ths}(n_{d})\]
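The \(n_{d}\) bookkeeping of Definition 3 can be modelled in a few lines, reusing the snippets above; `id_sp_prev` models \(ID_{SP}^{t-1}\), and the handler names are our own.

```python
n_d = 0
id_sp_prev = DTABLE_SIZE   # stack initially empty (SP at its base)

def on_write_counted(d_addr: int, w_en: bool) -> None:
    """Increment n_d only when a DTable bit flips from 0 to 1."""
    global n_d
    if w_en and VM_MIN <= d_addr < VM_MIN + VM_SIZE:
        idx = (d_addr - VM_MIN) >> BSS
        if not dtable[idx]:
            n_d += 1
        dtable[idx] = 1

def on_stack_pointer_change(sp: int) -> None:
    """Decrement n_d by ID_d = ID_SP^t - ID_SP^{t-1} when frames are freed."""
    global n_d, id_sp_prev
    id_sp = (sp - VM_MIN) >> BSS
    id_d = id_sp - id_sp_prev
    if id_d > 0:
        n_d -= id_d
    id_sp_prev = id_sp
```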
### _Voltage Threshold (\(V_{ths}\)) Calibration_
\(V_{ths}\) is determined by two factors: \(n_{d}\) and a constant \(\lambda\). A device calibration phase is conducted at system deployment time to determine \(\lambda\). The calibration process involves fine-tuning \(\lambda\) to the specific MCU in order to obtain an optimal \(V_{ths}(n_{d})\) function for that particular device. Similar to prior work [6], we assume that the voltage supply decay between \(3.6\) V (fully charged supply) and \(2.0\) V (minimal operational threshold) is linear. Therefore, \(\lambda\) is determined by measuring the total number of blocks that can be copied in one full device power cycle (\(N\)) and dividing \(1.6\) V by \(N\).
**Implementation of \(V_{ths}(n_{d})\)**: In order to avoid hardware multiplications, the product of \(\lambda\) and \(n_{d}\) is computed using addition operations. Initially, the threshold voltage \(V_{ths}\) is set to \(V_{min}\). As \(n_{d}\) increases or decreases, an additional register \(n^{\prime}_{d}\) tracks the value of \(n_{d}\) with unitary increments or decrements. This additional register is necessary because \(n_{d}\) can decrease by more than one at a time, due to the stack frame cleaner. This is specified in Definition 4.

Fig. 3: Visualization of differential check-pointing using DTable
**Definition 4**: \(V_{ths}\) _Calibration Model:_
\[V_{ths}(0)=V_{min}\] \[n^{\prime}_{d}:=\begin{cases}n^{\prime}_{d}+1&\text{if}\quad n_{d }>n^{\prime}_{d}\\ n^{\prime}_{d}-1&\text{if}\quad n_{d}<n^{\prime}_{d}\\ n^{\prime}_{d}&\text{if}\quad n_{d}=n^{\prime}_{d}\end{cases}\] \[V_{ths}:=\begin{cases}V_{ths}+\lambda&\text{if}\quad n_{d}>n^{ \prime}_{d}\\ V_{ths}-\lambda&\text{if}\quad n_{d}<n^{\prime}_{d}\\ V_{ths}&\text{if}\quad n_{d}=n^{\prime}_{d}\end{cases}\]
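A software model of this incremental update follows, reusing \(n_{d}\) from the previous snippet. The 2.0 V and 3.6 V constants follow the figures quoted above, while `N_BLOCKS_PER_CYCLE` is a hypothetical calibration measurement, not a value reported in the paper.

```python
V_MIN, V_FULL = 2.0, 3.6
N_BLOCKS_PER_CYCLE = 400                       # measured at deployment time
LAM = (V_FULL - V_MIN) / N_BLOCKS_PER_CYCLE    # lambda = 1.6 V / N

n_d_prime = 0      # unit-step shadow register n_d'
v_ths = V_MIN      # V_ths(0) = V_min

def track_threshold() -> None:
    """Step n_d' toward n_d one unit at a time, moving V_ths by lambda."""
    global n_d_prime, v_ths
    while n_d_prime != n_d:
        if n_d > n_d_prime:
            n_d_prime += 1
            v_ths += LAM
        else:
            n_d_prime -= 1
            v_ths -= LAM

def interrupt_pending(v_supply: float) -> bool:
    """IT_sig: asserted when the supply falls below the dynamic threshold."""
    track_threshold()
    return v_supply < v_ths
```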
### _Check-Point Generation_
When the check-pointing ISR is triggered (\(IT_{sig}=1\)), \(\mathcal{D}\mathsf{iCA}\) software executes to copy VM memory blocks that are marked in DTable to a dedicated region in NVM. Enough space in NVM should be reserved for this purpose.
Before the check-pointing process, a flag stored in the NVM is set to \(True\), indicating the active state of the check-pointing process. Once check-pointing is completed, the flag is unset. If the check-pointing process is not successfully completed, the value of \(\lambda\) can be adjusted to a more conservative value for the subsequent power cycle.
In order to generate the VM snapshot using the DTable, the \(\mathcal{D}\mathsf{iCA}\) software iterates through each bit of DTable. If a bit is 0, the iteration proceeds to the next bit. If the bit is 1, the memory block associated with that bit (of size \(\mathsf{VM}^{block}_{size}\)) is copied to its corresponding position in NVM. This process is shown in Algorithm 1.
```
Data: DTable, DTable_size;
      NVM = pointer to the first position of the non-volatile memory
            region reserved for the check-point;
      VM  = pointer to the first position of the RAM memory;
      b   = VM_size^block
for i <- 0 to DTable_size - 1 do
    if DTable[i] is 1 then
        memcpy(NVM[b*i], VM[b*i], b);
    end if
end for
```
**Algorithm 1** Memory check-pointing using DTable
### _Execution Resumption_
When resuming execution in a new power cycle, \(V_{ths}\) can be re-calibrated by adjusting \(\lambda\). The system must also check the integrity of the current check-point. If no check-point is found, or if it is corrupted (or incomplete), the application starts anew. If a valid check-point exists, it is reloaded by copying the check-point data from NVM to VM and restoring the CPU registers. Additionally, after copying the check-point data back to VM, DTable and \(n_{d}\) are cleared. This step sets \(\mathcal{D}\mathsf{iCA}\) up for check-pointing in the next power cycle. Finally, the Program Counter (PC) is loaded with the address that the application was about to execute in the previous power cycle, immediately before \(\mathcal{D}\mathsf{iCA}\)'s interrupt was triggered.
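Putting Algorithm 1 and the resume path together, the sketch below models both directions of the copy, reusing the constants, `dtable` and `reset_dtable()` of the MMT snippet. The NVM layout (a validity flag plus a register snapshot) is our simplification, not the exact on-device format.

```python
vm = bytearray(VM_SIZE)
nvm = {"image": bytearray(VM_SIZE), "valid": False, "regs": None}

def checkpoint(cpu_regs: dict) -> None:
    nvm["valid"] = False                   # flag: check-pointing in progress
    for i in range(DTABLE_SIZE):
        if dtable[i]:                      # copy only the modified blocks
            off = i * BLOCK_SIZE
            nvm["image"][off:off + BLOCK_SIZE] = vm[off:off + BLOCK_SIZE]
    nvm["regs"] = dict(cpu_regs)
    nvm["valid"] = True                    # completed successfully

def resume():
    """Return the saved registers (PC restored last), or None to start anew."""
    global n_d
    if not nvm["valid"]:
        return None                        # absent/incomplete: start afresh
    vm[:] = nvm["image"]                   # reload the latest snapshot into VM
    reset_dtable()                         # clear DTable for this power cycle
    n_d = 0
    return dict(nvm["regs"])
```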
## IV Prototype & Experiments
We synthesize \(\mathcal{D}\mathsf{iCA}\) using a Xilinx Artix-7 FPGA [33] on a Basys-3 [13] prototyping board. The Xilinx Vivado tool-set [32] was used for synthesizing the design on top of the openMSP430 [15] MCU core. \(\mathcal{D}\mathsf{iCA}\) was written in the Verilog hardware description language. Each module implements the logic outlined in Section III. \(\mathcal{D}\mathsf{iCA}\) was implemented in 368 lines of Verilog code for the MMT and VTT hardware modules (including their integration with the underlying openMSP430 core) and 201 lines of C code for \(\mathcal{D}\mathsf{iCA}\)'s software ISR and check-point recovery.
In addition to the FPGA-based prototype, we perform complementary experiments using a low-energy device for realistic energy results. We use the MSP430FR2476 MCU for these experiments, featuring 8kB of SRAM (VM) and 64KB of FRAM (NVM) and running at a CPU clock frequency of 1MHz.
### _Profiling \(\mathsf{VM}^{block}_{size}\)_
An important manufacturing-time decision is to determine \(\mathsf{VM}^{block}_{size}\) in \(\mathcal{D}\mathsf{iCA}\). This decision has a direct impact on the number of bits in \(\mathcal{D}\mathsf{Table}\), thereby affecting the hardware size. However, it also plays a vital role in the granularity of memory tracking, which has the potential to reduce check-pointing time and energy consumption. To determine the optimal value of \(\mathsf{VM}^{block}_{size}\), we profile the check-pointing runtime against various values. The results for \(\mathsf{VM}^{block}_{size}\) varying from 8 to 512 Bytes are presented in Figure 5.
Fig. 4: Visualization of the DTable stack frame trash cleaning
The results show that smaller values of \(\mathsf{VM}_{size}^{block}\) incur longer run-times for copying the same amount of memory, even for small memory amounts. This can be attributed to the inherent overhead associated with each copy operation, encompassing memory address calculations, data transfers, and synchronization. Moreover, smaller \(\mathsf{VM}_{size}^{block}\) values result in larger \(\mathsf{DTable}\) sizes, necessitating a more extensive search to identify modified bits, which further contributes to the increased check-pointing run-time. Conversely, as \(\mathsf{VM}_{size}^{block}\) increases, the minimum amount of memory copied per modification grows proportionally with the coarser memory-tracking granularity, which bounds the achievable run-time from below.
Based on these experiments, we determined that the optimal \(\mathsf{VM}_{size}^{block}\) for the MSP430FR2476 is \(128\) Bytes, as it provides the most favorable balance between run-time and granularity. When selecting such parameters, it is crucial to take into account the characteristics of the device's memory and its clock frequency. These factors can vary across different devices and significantly influence the profile. Consequently, it is important to highlight that our prototype's profile may not be optimal for other devices or MCU models. That being said, we use \(128\)-Byte blocks as the reference value for the remainder of the experiments in \(\mathcal{D}\mathsf{iCA}\)'s evaluation.
### _Hardware Footprint Overhead_
We assess the hardware cost in terms of additional Look-up Tables (LUTs) and flip-flops/registers (FFs). The increase in LUTs reflects the additional chip cost/size attributed to combinatorial logic. The increase in FFs indicates the additional state required by sequential logic. Figure 6 shows the hardware cost of unmodified openMSP430 and the additional cost of \(\mathcal{D}\mathsf{iCA}\) hardware when configured to monitor memory blocks from 16 to 512 Bytes.
The cost of \(\mathcal{D}\mathsf{iCA}\) hardware is maximized when \(\mathsf{VM}_{size}^{block}\) is at the lowest value of 16-Bytes. With this configuration, \(\mathcal{D}\mathsf{iCA}\) hardware incurs the maximum overhead with additional 730 LUTs and 561 FFs. However, this additional cost decreases as \(\mathsf{VM}_{size}^{block}\) increases. For instance, configuring \(\mathcal{D}\mathsf{iCA}\) hardware to monitor 512-Byte blocks only requires 49 LUTs and 58 FFs. For a 128-Byte configuration, which we determined has an ideal check-pointing runtime (see Section IV-A), additional 114 LUTs and 106 FFs are required. Configuring \(\mathcal{D}\mathsf{iCA}\) hardware to monitor 128-Byte blocks causes an increase of \(\approx 8.7\)% relative to the unmodified openMSP430 core.
### _Hardware Energy Overhead_
While the added hardware modules eliminate the need for software-based memory tracking (and its associated energy consumption), they also drain energy at runtime. We use the Vivado synthesis tool to estimate \(\mathcal{D}\mathsf{iCA}\)'s power consumption on the Basys-3 FPGA board. In this analysis, we consider \(\mathcal{D}\mathsf{iCA}\) configured for 128-Byte memory blocks. The MCU, including openMSP430's default set of peripherals and \(\mathcal{D}\mathsf{iCA}\), consumes 89 mW of static power, whereas \(\mathcal{D}\mathsf{iCA}\) hardware alone is reported\({}^{2}\) to draw less than \(1\) mW. Therefore, \(\mathcal{D}\mathsf{iCA}\) is responsible for less than \(1.1\)% of the device's static power consumption.
Footnote 2: Vivado does not report energy consumption units under 1mW, treating such small values as negligible. Thus, \(1\)mW is the upper bound for \(\mathcal{D}\mathsf{iCA}\) consumption because it is not possible to obtain precise measures below this value. In reality, however, \(\mathcal{D}\mathsf{iCA}\) may consume significantly less than \(1\) mW.
The dynamic power drawn depends on the frequency of memory writes performed by the software. We consider an application that writes to all memory blocks in a loop to evaluate the worst case. In this scenario, the unmodified openMSP430 (along with peripherals) draws 234 mW of dynamic power. When equipped with \(\mathcal{D}\mathsf{iCA}\), the dynamic power increases to 241 mW, representing a \(\approx\) 2.9% increase.
We note that these estimates are based on the FPGA deployment and may vary once the MCU design is manufactured as an integrated circuit.
### _Power Cycle Efficiency_
To evaluate the efficacy of \(\mathcal{D}\mathsf{iCA}\), we compare its performance with prior related work with respect to the number of power cycles required to complete five distinct computations. Our benchmark considers five well-known algorithms: an AES block cipher, a matrix multiplication, the SHA256 cryptographic hash function, a bit-counting function, and a Depth-First Search (DFS) algorithm. To gauge how well our approach performs in comparison to other methods that trigger the check-point based on voltage thresholds, we implemented an optimized version of Hibernus++ [6] (see related work in Section V). Our implementation of Hibernus++ triggers the check-point within the time required to copy the VM (8 kB) before energy depletes. Furthermore, to compare our method with existing differential check-pointing approaches, we integrated a software-instrumentation version of a memory modification monitor into Hibernus++, drawing upon prior work [3] as a reference.

Fig. 5: Profiling of copy run-time vs \(\mathsf{VM}_{size}^{block}\)

Fig. 6: Hardware cost of unmodified openMSP430 compared to additional cost of \(\mathcal{D}\mathsf{iCA}\) with \(\mathsf{VM}_{size}^{block}\) of 16 to 512 Bytes
In our experiment, we set up four distinct experimental configurations, each considering an MCU equipped with a capacitor of a different capacitance, used to store harvested energy. By varying the capacitance values, our objective is to examine the influence of different power-supply decay rates on the performance of \(\mathcal{D}\mathrm{i}\mathsf{C}\mathsf{A}\). The results are presented in Figure 7.
Across the four experimental capacitance configurations, we observed that \(\mathcal{D}\mathrm{i}\mathsf{C}\mathsf{A}\) consistently outperforms Hibernus++. It also outperforms the instrumentation-based differential check-pointing in most cases, requiring fewer power cycles to run the bench-marked algorithms. The impact of this outcome becomes more pronounced in setups characterized by lower capacitance, where the power supply decays faster. This occurs because, with more power cycles, the check-pointing routine tends to occupy a more substantial portion of the MCU execution time. As a result, the difference between prior methods and \(\mathcal{D}\mathrm{i}\mathsf{C}\mathsf{A}\) is more pronounced.
It is important to note that \(\mathcal{D}\mathrm{i}\mathsf{C}\mathsf{A}\) and other techniques from the literature are not mutually exclusive. \(\mathcal{D}\mathrm{i}\mathsf{C}\mathsf{A}\) is compatible and interoperable with a diverse array of existing strategies, allowing for a potential combination of approaches to further enhance the efficiency and effectiveness of intermittent applications.
## V Related Work
### _Traditional Check-Pointing_
Differential (a.k.a. incremental) check-pointing [1, 21, 23, 26, 27] is widely studied for traditional computing systems. These systems are developed to operate on high-end devices and thus prioritize performance. Since they do not consider devices that operate on intermittent power supplies or have limited computational/storage resources, these techniques rely on tasks that low-end MCUs are not capable of performing, such as maintaining complex data structures [1, 20], or relatively expensive hardware support [14, 25, 26] to compute the differentials and store the check-point. Check-points in these systems also have different purposes, e.g., fault tolerance, load-balancing, and data concurrency across parallel servers.
### _Check-Pointing in Intermittently Powered Devices_
Energy-efficient check-pointing schemes are required for intermittently powered devices that harvest their own supply of energy. Compile-time techniques [11, 29] instrument the application source code with additional function calls that check the device's current power. They are placed at specific points in the program's control flow, such as each backward edge of a loop or function returns. To properly create a check-point, the program context must be written to NVM, including all registers and data currently in use. Therefore, frequent check-pointing can result in additional energy losses.

Fig. 7: Number of power cycles required to compute five application test-cases: \(\mathcal{D}\mathrm{i}\mathsf{C}\mathsf{A}\) vs. two related check-pointing methods
To reduce the overhead, runtime techniques [6, 7] determine the proper time for a check-point to be made while the device operates. The time to create a check-point is determined by employing a hardware-based interrupt to check the voltage supply periodically, and a check-pointing routine is executed once the voltage reaches a minimum threshold, reducing the number of check-points. In contrast with \(\mathcal{D}\)iCA, prior runtime techniques save all registers and the entire VM at each checkpoint.
An alternative approach to further reduce the size of check-points is to save just the changes, in a manner similar to differential check-points in traditional computing. However, computing the differentials in an energy-efficient and low-cost manner on intermittent computing devices is challenging. Some techniques use software to iterate over all addresses of VM [10] and compare them to the prior check-point. Others compare the hashes of memory blocks [4]. DICE [3] computes the differentials by maintaining an internal _modification record_, and the application software is instrumented with function calls that record differentials. DINO [22] extends C's programming model by providing compiler-aided analysis to place _task-boundaries_, where check-points and data versioning take place. As an alternative, new programming abstractions that operate on data in NVM at all times have been proposed. For instance, Chain [12] proposes a new program abstraction that guarantees that the state of self-contained tasks is preserved in NVM. Unlike \(\mathcal{D}\)iCA, these techniques depend on additional software, software modification via instrumentation, or new programming abstractions.
## VI Conclusion
We proposed \(\mathcal{D}\)iCA, a lightweight hardware/software co-design to support efficient differential check-pointing for intermittently powered devices. \(\mathcal{D}\)iCA eliminates the need for application code modifications or instrumentation, simplifying and optimizing check-point generation. In addition, it implements a non-maskable interrupt to dynamically estimate optimal check-pointing times, thereby increasing the active period of applications during a power cycle. \(\mathcal{D}\)iCA interrupt triggers a software routine that complements the hardware, efficiently copying modified memory segments from volatile to non-volatile memory. We implemented and evaluated \(\mathcal{D}\)iCA with an FPGA deployment. \(\mathcal{D}\)iCA open-source prototype is available at [24].
## Acknowledgements
We thank the ICCAD anonymous reviewers for their constructive comments and feedback. This work was supported by the National Science Foundation (Award #2245531) as well as a Meta Research Award (2022 Towards Trustworthy Products in AR, VR, and Smart Devices RFP).
|
2305.05341 | More on Projected Type Iteration Method and Linear Complementarity
Problem | In this article, we establish a class of new projected type iteration methods
based on matrix spitting for solving the linear complementarity problem. Also,
we provide a sufficient condition for the convergence analysis when the system
matrix is an $H_+$-matrix. We show the efficiency of the proposed method by
using two numerical examples for different parameters.
Keywords. Iterative method, Linear complementarity problem, $H_{+}$-matrix,
$P$-matrix, Matrix splitting, Convergence. | Bharat Kumar, Deepmala, A. K. Das | 2023-05-09T11:03:35Z | http://arxiv.org/abs/2305.05341v1 | # More on Projected Type Iteration Method and Linear Complementarity Problem
###### Abstract
In this article, we establish a class of new projected type iteration methods based on matrix spitting for solving the linear complementarity problem. Also, we provide a sufficient condition for the convergence analysis when the system matrix is an \(H_{+}\)-matrix. We show the efficiency of the proposed method by using two numerical examples for different parameters.
**Keywords.** Iterative method, Linear complementarity problem, \(H_{+}\)-matrix, \(P\)-matrix, Matrix splitting, Convergence.
**Mathematics Subject Classification.** 90C33, 65F10, 65F50.
## 1 Introduction
The LCP frequently appears in an extensive range of applications in scientific computing and engineering, such as the free boundary problem, the Nash equilibrium point of the bimatrix game, the American option pricing problem, mathematical economics, operations research, control theory, optimization theory, stochastic optimal control, economics, and elasticity theory. For details see [5], [27], [24], [16], [10], [26], [7], [15], [18] and [21].
Assume \({\cal A}\in{\cal R}^{n\times n}\) and a vector \(\sigma\in{\cal R}^{n}\). The linear complementarity problem, denoted as \({\rm LCP}(\sigma,{\cal A})\), is to find a solution \(\lambda\in{\cal R}^{n}\) to the following system:
\[\lambda\geq 0,\ \ \ \ {\cal A}\lambda+\sigma\geq 0,\ \ \ \ \lambda^{T}({\cal A} \lambda+\sigma)=0 \tag{1}\]
The methods for solving linear complementarity problems are divided into two categories: pivoting methods [6], [8], [14] and iterative methods [25], [13], [17], [20] and [19]. Lemke and Howson [22] introduced the complementary pivot method, but some classes of matrices cannot be processed by this method, nor by Lemke's method. The linear complementarity problem can be solved in a number of ways by an iterative process, namely the projected type methods [3], [13], [25], the modulus method [2], [9] and the modulus-based matrix splitting iteration methods [23] and [28].
Fang proposed a general fixed point method (GFP) [11] assuming the case where \(\Omega=\omega A_{\cal D}^{-1}\) with \(\omega{>}0\) and \(A_{\cal D}\) is the diagonal matrix of \({\cal A}\). The GFP approach takes less iterations than the modulus-based successive over-relaxation (MSOR) [2] iteration method. However, the GFP approach calculates the numerical solution component by component of vectors, which takes a long time.
In this article, we present a class of new projected type iteration methods by using the ideas of Fang [11] and Ali [1]. Also, we show that the fixed point equation and the linear complementarity problem are equivalent, discuss convergence conditions and provide a convergence domain for our proposed method.
The article is organized as follows: some required definitions, notations and well-known lemmas are given in Section 2, which will be used for the discussions
in the remaining sections of this work. New projected type iteration methods are constructed in Section 3 with the help of the new equivalent fixed point form of the \(\mathrm{LCP}(\sigma,\mathcal{A})\). In Section 4, we establish the convergence domain of our proposed method. A numerical comparison between the proposed methods and modulus-based matrix splitting iteration methods, introduced by Bai [2], is illustrated in Section 5. Section 6 contains the conclusion of the article.
## 2 Preliminaries
In this section, we provide an overview of various essential notations, definitions, and foundational results.
Suppose \(\mathcal{A}=(a_{ij})\in\mathcal{R}^{n\times n}\) and \(\mathcal{B}=(b_{ij})\in\mathcal{R}^{n\times n}\) are real square matrices. For \(\mathcal{A}=(a_{ij})\in\mathcal{R}^{n\times n}\) and \(\mathcal{B}=(b_{ij})\in\mathcal{R}^{n\times n}\), \(\mathcal{A}\geq(>)\)\(\mathcal{B}\) means \(a_{ij}\geq(>)\)\(b_{ij}\) for all \(i,j\).
**Definition 2.1**.: [11] Let \(\mathcal{A}=(a_{ij})\in\mathcal{R}^{n\times n}\). Then \(|\mathcal{A}|=(c_{ij})\) is defined by \(c_{ij}=|a_{ij}|\ \forall\ i,j\), and \(\mathcal{A}\geq 0\) means that \(a_{ij}\geq 0\ \forall\ i,j\).
**Definition 2.2**.: [11] Let \(\mathcal{A},\mathcal{B}\in\mathcal{R}^{n\times n}\). Then \(|\mathcal{A}+\mathcal{B}|\leq|\mathcal{A}|+|\mathcal{B}|\) and \(|\mathcal{AB}|\leq|\mathcal{A}||\mathcal{B}|\). Moreover \(x,y\in\mathcal{R}^{n}\) then \(|x+y|\leq|x|+|y|\) and \(||x|-|y||\leq|x-y|\).
**Definition 2.3**.: [8] Let \(\mathcal{A}\in\mathcal{R}^{n\times n}\). \(\mathcal{A}\) is said to be a \(P\)-matrix if all its principle minors are positive i.e. \(\det(\mathcal{A}_{\gamma\gamma})>0\) for all \(\gamma\subseteq\{1,2,\ldots,n\}\).
**Definition 2.4**.: [11] Suppose \(\mathcal{A}\in\mathcal{R}^{n\times n}\). Then its comparison matrix is defined as \(\langle a_{ij}\rangle=|a_{ij}|\) if \(i=j\) and \(\langle a_{ij}\rangle=-|a_{ij}|\) if \(i\neq j\).
**Definition 2.5**.: [12] Suppose \(\mathcal{A}\in\mathcal{R}^{n\times n}\). \(\mathcal{A}\) is said to be a \(Z\)-matrix if all of its non-diagonal elements are less than or equal to zero; \(\mathcal{A}\) is said to be an \(M\)-matrix if \(\mathcal{A}^{-1}\geq 0\) as well as \(Z\)-matrix; \(\mathcal{A}\) is said to be an \(H\)-matrix if \(\langle\mathcal{A}\rangle\) is an \(M\)-matrix; \(\mathcal{A}\) is an \(H_{+}\)-matrix if it is an \(H\)-matrix with \(a_{ii}>0\ \forall\ i\in\{1,2,\ldots,n\}\).
**Definition 2.6**.: [12] Suppose \(\mathcal{A}\in\mathcal{R}^{n\times n}\). The splitting \(\mathcal{A}=\mathcal{M}-\mathcal{N}\) is called an \(M\)-splitting if \(\mathcal{M}\) is a nonsingular \(M\)-matrix and \(\mathcal{N}\geq 0\); an \(H\)-splitting if \(\langle\mathcal{M}\rangle-|\mathcal{N}|\) is an \(M\)-matrix; an \(H\)-compatible splitting if \(\langle\mathcal{A}\rangle=\langle\mathcal{M}\rangle-|\mathcal{N}|\).
**Lemma 2.1**.: _[_1_]_ _Let \(x,y\in\mathcal{R}^{n}\). \(x\geq 0\), \(y\geq 0\), \(x^{T}y=0\) if and only if \(x+y=|x-y|\)._
**Lemma 2.2**.: _[_12_]_ _Suppose \(\mathcal{A},\mathcal{B}\in\mathcal{R}^{n\times n}\). If \(\mathcal{A}\) and \(\mathcal{B}\) are \(M\) and \(Z\)-matrices respectively with \(\mathcal{A}\leq\mathcal{B}\), then \(\mathcal{B}\) is an \(M\)-matrix. If \(\mathcal{A}\) is an \(H\)-matrix, then \(|\mathcal{A}^{-1}|\leq\langle\mathcal{A}\rangle^{-1}\). If \(0\leq\mathcal{A}\leq\mathcal{B}\), then \(\rho(\mathcal{A})\leq\rho(\mathcal{B})\)._
**Lemma 2.3**.: _[_11_]_ _Let \(\mathcal{A}\in\mathcal{R}^{n\times n}\) be an \(M\)-matrix and \(\mathcal{A}=\mathcal{M}-\mathcal{N}\) be an \(M\)-splitting. Let \(\rho\) be the spectral radius, then \(\ \rho(\mathcal{M}^{-1}\mathcal{N})<1\)._
**Lemma 2.4**.: _[_4_]_ _Suppose \(\mathcal{A}\in\mathcal{R}^{n\times n}\) with splitting \(\mathcal{A}=\mathcal{M}-\mathcal{N}\). If the splitting is an \(H\)-compatible splitting of an \(H\)-matrix, then it is an \(H\)-splitting, but the converse is not true._
**Lemma 2.5**.: _[_12_]_ _Suppose \(\mathcal{A}\geq 0\in\mathcal{R}^{n\times n}\). If there exist \(v>0\in\mathcal{R}^{n}\) and a scalar \(\alpha_{1}>0\) such that \(\mathcal{A}v\leq\alpha_{1}v\), then \(\rho(\mathcal{A})\leq\alpha_{1}\). Moreover, if \(\mathcal{A}v<v\), then \(\rho(\mathcal{A})<1\)._
## 3 Main results
For a given vector \(\zeta\in\mathcal{R}^{n}\), we denote the vector \(\zeta_{+}=\max\{0,\zeta\}\) and consider the splitting \(\mathcal{A}=(\mathcal{M}+I+D_{\mathcal{A}})-(\mathcal{N}+I+D_{\mathcal{A}})\), where \(D_{\mathcal{A}}\) is the diagonal matrix of \(\mathcal{A}\). In the following result, we convert the LCP\((\sigma,\mathcal{A})\) into a fixed point formulation.
**Theorem 3.1**.: _Let \(\mathcal{A}\in\mathcal{R}^{n\times n}\) with the splitting \(\mathcal{A}=(\mathcal{M}+I+D_{\mathcal{A}})-(\mathcal{N}+I+D_{\mathcal{A}})\), and let \(\lambda=\zeta_{+}\). Then an equivalent formulation of the LCP\((\sigma,\mathcal{A})\) in the form of a fixed point equation is_
\[\zeta_{+}=(\mathcal{M}+2I+D_{\mathcal{A}})^{-1}[(\mathcal{N}+I+D_{\mathcal{A} })\zeta_{+}+|(\mathcal{A}-I)\zeta_{+}+\sigma|-\sigma] \tag{2}\]
Proof.: We have \(\lambda=\zeta_{+}\geq 0\) and \(\mathcal{A}\lambda+\sigma\geq 0\), from Lemma 2.1,
\[(\mathcal{A}\zeta_{+}+\sigma+\zeta_{+}) =|\mathcal{A}\zeta_{+}+\sigma-\zeta_{+}|\] \[(I+\mathcal{A})\zeta_{+} =|(\mathcal{A}-I)\zeta_{+}+\sigma|-\sigma\] \[(\mathcal{M}+2I+D_{\mathcal{A}})\zeta_{+} =(\mathcal{N}+I+D_{\mathcal{A}})\zeta_{+}+|(\mathcal{A}-I)\zeta_{ +}+\sigma|-\sigma,\]
the above equation can be rewritten as,
\[\zeta_{+}=(\mathcal{M}+2I+D_{\mathcal{A}})^{-1}[(\mathcal{N}+I+D_{\mathcal{A}}) \zeta_{+}+|(\mathcal{A}-I)\zeta_{+}+\sigma|-\sigma] \tag{3}\]
In the following, based on Equation (2), we propose an iteration method, referred to as Method 3.1, for solving the \(\text{LCP}(\sigma,\mathcal{A})\).
**Method 3.1**.: _Let \(\mathcal{A}=(\mathcal{M}+I+D_{\mathcal{A}})-(\mathcal{N}+I+D_{\mathcal{A}})\) be a splitting of the matrix \(\mathcal{A}\in\mathcal{R}^{n\times n}\) such that the matrix \((\mathcal{M}+2I+D_{\mathcal{A}})\) is nonsingular. Then the iteration scheme of Method 3.1 is_
\[\zeta_{+}^{(\eta+1)}=(\mathcal{M}+2I+D_{\mathcal{A}})^{-1}[(\mathcal{N}+I+D_{ \mathcal{A}})\zeta_{+}^{(\eta)}+|(\mathcal{A}-I)\zeta_{+}^{(\eta)}+\sigma|-\sigma] \tag{4}\]
_Let Residual be the Euclidean norm of the error vector, which is defined as follows:_
\[Res(\lambda^{(\eta)})=|min(\lambda^{(\eta)},\mathcal{A}\lambda^{(\eta)}+ \sigma)|_{2}.\]
_Consider a nonnegative initial vector \(\lambda^{(0)}\in\mathcal{R}^{n}\). For \(\eta=0,1,2,\ldots\), the iterative process continues until the sequence \(\{\lambda^{(\eta)}\}_{\eta=0}^{+\infty}\subset\mathcal{R}^{n}\) converges; the iteration stops if \(Res(\lambda^{(\eta)})<\epsilon\). For computing \(\lambda^{(\eta+1)}\) we use the following steps._
_Step 1_: _Given an initial vector_ \(\zeta^{(0)}\in\mathcal{R}^{n}\)_,_ \(\epsilon>0\) _and set_ \(\eta=0\)_._
_Step 2_: _Using the following scheme, generate the sequence_ \(\lambda^{(\eta)}\)_:_
\[\zeta_{+}^{(\eta+1)}=(\mathcal{M}+2I+D_{\mathcal{A}})^{-1}[(\mathcal{N}+I+D_{ \mathcal{A}})\zeta_{+}^{(\eta)}+|(\mathcal{A}-I)\zeta_{+}^{(\eta)}+\sigma|- \sigma],\]
_and set_ \(\lambda^{(\eta+1)}=\zeta_{+}^{(\eta+1)}\)_, where_ \(\zeta_{+}^{(\eta+1)}\) _is the_ \((\eta+1)^{th}\) _approximate solution of Equation (_3_)._
_Step 3_: _If_ \(Res(\lambda^{(\eta)})<\epsilon\) _then stop; otherwise, set_ \(\eta=\eta+1\) _and return to step 2._
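To make the steps above concrete, the following is a minimal NumPy sketch of Method 3.1 for a user-supplied splitting; it takes the matrices \(\mathcal{M}+2I+D_{\mathcal{A}}\) and \(\mathcal{N}+I+D_{\mathcal{A}}\) directly. We read Step 2 as a linear solve followed by the projection \(\max(\cdot,0)\) that realises \(\zeta_{+}\); the function name, tolerance and iteration cap are our own choices, not part of the method's statement.

```python
import numpy as np

def method_3_1(A, sigma, M_hat, N_hat, tol=1e-6, max_iter=1000):
    """One possible realisation of Method 3.1.

    M_hat = M + 2I + D_A and N_hat = N + I + D_A for some splitting
    A = (M + I + D_A) - (N + I + D_A)."""
    n = A.shape[0]
    I = np.eye(n)
    lam = np.zeros(n)                      # zeta_+^{(0)} from zeta^{(0)} = 0
    for _ in range(max_iter):
        rhs = N_hat @ lam + np.abs((A - I) @ lam + sigma) - sigma
        lam = np.maximum(np.linalg.solve(M_hat, rhs), 0.0)  # projection
        if np.linalg.norm(np.minimum(lam, A @ lam + sigma)) < tol:  # Res
            break
    return lam
```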
Moreover, Method 3.1 provides a general structure for solving \(\text{LCP}(\sigma,\mathcal{A})\). We obtain a class of new projected type iteration relaxation methods using matrix
splitting. We express the system matrix \(\mathcal{A}=(\mathcal{M}+I+D_{\mathcal{A}})-(\mathcal{N}+I+D_{\mathcal{A}})\). Then
1. when \(\mathcal{M}=D_{\mathcal{A}}-L_{\mathcal{A}}\) and \(\mathcal{N}=U_{\mathcal{A}}\), Equation (4) gives the new projected type Gauss Seidel iteration (NPGS) method \[\zeta_{+}^{(\eta+1)} =(D_{\mathcal{A}}-L_{\mathcal{A}}+2I+D_{\mathcal{A}})^{-1}[(U_{ \mathcal{A}}+I+D_{\mathcal{A}})\zeta_{+}^{(\eta)}\] \[+|(\mathcal{A}-I)\zeta_{+}^{(\eta)}+\sigma|-\sigma].\]
2. when \(\mathcal{M}=(\frac{1}{\alpha_{1}}D_{\mathcal{A}}-L_{\mathcal{A}})\) and \(\mathcal{N}=(\frac{1}{\alpha_{1}}-1)D_{\mathcal{A}}+U_{\mathcal{A}}\), Equation (4) gives the new projected type successive overrelaxation iteration (NPSOR) method \[\zeta_{+}^{(\eta+1)} =(D_{\mathcal{A}}-\alpha_{1}L_{\mathcal{A}}+\alpha_{1}(2I+D_{\mathcal{A}}))^{-1}[((1-\alpha_{1})D_{\mathcal{A}}+\alpha_{1}U_{\mathcal{A}}+\alpha_{1}(I+D_{\mathcal{A}}))\zeta_{+}^{(\eta)}+\alpha_{1}|(\mathcal{A}-I)\zeta_{+}^{(\eta)}+\sigma|-\alpha_{1}\sigma].\]
3. when \(\mathcal{M}=(\frac{1}{\alpha_{1}})(D_{\mathcal{A}}-\beta_{1}L_{\mathcal{A}})\) and \(\mathcal{N}=(\frac{1}{\alpha_{1}})[(1-\alpha_{1})D_{\mathcal{A}}+(\alpha_{1}-\beta_{1})L_{\mathcal{A}}+\alpha_{1}U_{\mathcal{A}}]\), Equation (4) gives the new projected type accelerated overrelaxation iteration (NPAOR) method \[\zeta_{+}^{(\eta+1)} =(D_{\mathcal{A}}-\beta_{1}L_{\mathcal{A}}+\alpha_{1}(2I+D_{\mathcal{A}}))^{-1}[((1-\alpha_{1})D_{\mathcal{A}}+(\alpha_{1}-\beta_{1})L_{\mathcal{A}}+\alpha_{1}U_{\mathcal{A}}+\alpha_{1}(I+D_{\mathcal{A}}))\zeta_{+}^{(\eta)}+\alpha_{1}|(\mathcal{A}-I)\zeta_{+}^{(\eta)}+\sigma|-\alpha_{1}\sigma].\]
When \((\alpha_{1},\beta_{1})\) takes the values \((\alpha_{1},\alpha_{1})\), \((1,1)\) and \((1,0)\), the NPAOR method transforms into the new projected type successive overrelaxation (NPSOR), new projected type Gauss-Seidel (NPGS) and new projected type Jacobi (NPJ) methods respectively.
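For concreteness, the following Python sketch implements the iteration (4) with the NPAOR splitting, so that the choices of \((\alpha_{1},\beta_{1})\) listed above recover the NPSOR, NPGS and NPJ methods; the function name and the dense direct solver are our own illustrative choices, not part of the method's specification.

```python
import numpy as np

def npaor_lcp(A, sigma, alpha1=1.0, beta1=1.0, eps=1e-5, max_iter=1000):
    """Method 3.1 with the NPAOR splitting; (alpha1, beta1) = (1, 1)
    gives NPGS, (alpha1, alpha1) gives NPSOR, and (1, 0) gives NPJ."""
    n = A.shape[0]
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)                  # strictly lower part, A = D - L - U
    U = -np.triu(A, 1)                   # strictly upper part
    I = np.eye(n)
    M = (D - beta1 * L) / alpha1
    N = ((1 - alpha1) * D + (alpha1 - beta1) * L + alpha1 * U) / alpha1
    lhs = M + 2 * I + D                  # (M + 2I + D_A), assumed nonsingular
    lam = np.zeros(n)                    # nonnegative initial vector
    for k in range(max_iter):
        rhs = (N + I + D) @ lam + np.abs((A - I) @ lam + sigma) - sigma
        lam = np.linalg.solve(lhs, rhs)  # lambda^{(k+1)} = zeta_+^{(k+1)}
        if np.linalg.norm(np.minimum(lam, A @ lam + sigma)) < eps:
            break
    return lam, k + 1
```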
## 4 Convergence analysis
In the following, we present the convergence condition when the system matrix \(\mathcal{A}\) of \(\mathrm{LCP}(\sigma,\mathcal{A})\) is a \(P\)-matrix.
**Theorem 4.1**.: _Let \(\mathcal{A}\in\mathcal{R}^{n\times n}\) be a \(P\)-matrix and \(\zeta_{+}^{*}\) be the solution of Equation (2). Let \(\rho(|(\mathcal{M}+2I+D_{\mathcal{A}})^{-1}|(|\mathcal{N}+I+D_{\mathcal{A}}|+| \mathcal{A}-I|))<1\). Then the sequence \(\{\zeta_{+}^{(\eta)}\}_{\eta=1}^{+\infty}\) generated by Method \(3.1\) converges to the solution \(\zeta_{+}^{*}\) for any initial vector \(\zeta^{(0)}\in\mathcal{R}^{n}\)._
Proof.: Let \(\zeta_{+}^{*}\) be the solution of Equation (2); then the error satisfies
\[(\mathcal{M}+2I+D_{\mathcal{A}})(\zeta_{+}^{(\eta+1)}-\zeta_{+}^{*}) =(\mathcal{N}+I+D_{\mathcal{A}})(\zeta_{+}^{(\eta)}-\zeta_{+}^{*}) +|(\mathcal{A}-I)\zeta_{+}^{(\eta)}+\sigma|\] \[-|(\mathcal{A}-I)\zeta_{+}^{*}+\sigma|\] \[|(\mathcal{M}+2I+D_{\mathcal{A}})(\zeta_{+}^{(\eta+1)}-\zeta_{+}^ {*})| =|(\mathcal{N}+I+D_{\mathcal{A}})(\zeta_{+}^{(\eta)}-\zeta_{+}^{*}) +|(\mathcal{A}-I)\zeta_{+}^{(\eta)}+\sigma|\] \[-|(\mathcal{A}-I)\zeta_{+}^{*}+\sigma||\]
\[\leq|(\mathcal{N}+I+D_{\mathcal{A}})(\zeta_{+}^{(\eta)}-\zeta_{+}^{*})+|( \mathcal{A}-I)(\zeta_{+}^{(\eta)}-\zeta_{+}^{*})||\]
\[\leq|(\mathcal{N}+I+D_{\mathcal{A}})||(\zeta_{+}^{(\eta)}-\zeta_{+}^{*})|+|( \mathcal{A}-I)||(\zeta_{+}^{(\eta)}-\zeta_{+}^{*})|\]
\[|(\zeta_{+}^{(\eta+1)}-\zeta_{+}^{*})| \leq|(\mathcal{M}+2I+D_{\mathcal{A}})^{-1}|[|(\mathcal{N}+I+D_{ \mathcal{A}})|+|(\mathcal{A}-I)|]|(\zeta_{+}^{(\eta)}-\zeta_{+}^{*})|.\]
Since \(\rho(|(\mathcal{M}+2I+D_{\mathcal{A}})^{-1}|(|\mathcal{N}+I+D_{\mathcal{A}}|+|\mathcal{A}-I|))<1\) by assumption, the error \(|\zeta_{+}^{(\eta)}-\zeta_{+}^{*}|\) tends to zero as \(\eta\to+\infty\). Hence \(\zeta_{+}^{(\eta)}\) converges to the solution \(\zeta_{+}^{*}\).
Now we discuss the convergence conditions for Method 3.1 when the system matrix \(\mathcal{A}\) of LCP\((\sigma,\mathcal{A})\) is an \(H_{+}\)-matrix.
**Theorem 4.2**.: _Let \(\mathcal{A}\in\mathcal{R}^{n\times n}\) be an \(H_{+}\)-matrix and \(\mathcal{A}=\mathcal{M}-\mathcal{N}=(\mathcal{M}+I+D_{\mathcal{A}})-(\mathcal{ N}+I+D_{\mathcal{A}})\) be an \(H\)-compatible splitting of the matrix \(\mathcal{A}\), such that \(\langle\mathcal{A}\rangle=\langle\mathcal{M}\rangle-|\mathcal{N}|=\langle \mathcal{M}+I+D_{\mathcal{A}}\rangle-|\mathcal{N}+I+D_{\mathcal{A}}|\) and either one of the following conditions hold:_
_(1)_ \(D_{\mathcal{A}}\ \geq\ I\) _and_ \(\langle\mathcal{A}\rangle+2I-D_{\mathcal{A}}-|B|\) _is an_ \(M\)_-matrix, where_ \(B=L_{\mathcal{A}}+U_{\mathcal{A}}\)_;_
_(2)_ \(D_{\mathcal{A}}<I\)_._
_Then the sequence \(\{\zeta_{+}^{(\eta)}\}_{\eta=1}^{+\infty}\) generated by Method 3.1 converges to the solution \(\zeta_{+}^{*}\) for any initial vector \(\zeta^{(0)}\in\mathcal{R}^{n}\)._
Proof.: Let \(\mathcal{A}=\mathcal{M}-\mathcal{N}=(\mathcal{M}+I+D_{\mathcal{A}})-(\mathcal{ N}+I+D_{\mathcal{A}})\); then it holds that
\(\langle\mathcal{A}\rangle\leq\langle\mathcal{M}+I+D_{\mathcal{A}}\rangle\leq diag (\mathcal{M}+I+D_{\mathcal{A}})\). Hence \((\mathcal{M}+I+D_{\mathcal{A}})\) is an \(H_{+}\)-matrix and it holds that
\[|(\mathcal{M}+2I+D_{\mathcal{A}})^{-1}|\leq(\langle\mathcal{M}\rangle+2I+D_{ \mathcal{A}})^{-1}.\]
Let \(T=|(\mathcal{M}+2I+D_{\mathcal{A}})^{-1}|(|(\mathcal{N}+I+D_{\mathcal{A}})|+|( \mathcal{A}-I)|)\).
Then
\[T =|(2I+\mathcal{M}+D_{\mathcal{A}})^{-1}|[|\mathcal{N}+I+D_{\mathcal{A }}|+|(\mathcal{A}-I)|]\] \[\leq(2I+\langle\mathcal{M}\rangle+D_{\mathcal{A}})^{-1}[|\mathcal{ N}+I+D_{\mathcal{A}}|+|(\mathcal{A}-I)|]\] \[\leq(2I+\langle\mathcal{M}\rangle+D_{\mathcal{A}})^{-1}[|\mathcal{ N}+I+D_{\mathcal{A}}|+|(D_{\mathcal{A}}-I)-(L_{\mathcal{A}}+U_{\mathcal{A}})|]\] \[\leq(\langle\mathcal{M}\rangle+2I+D_{\mathcal{A}})^{-1}[(\langle \mathcal{M}\rangle+2I+D_{\mathcal{A}})-(\langle\mathcal{M}\rangle+2I+D_{ \mathcal{A}})\] \[+|\mathcal{N}+I+D_{\mathcal{A}}|+|D_{\mathcal{A}}-I|+|L_{ \mathcal{A}}+U_{\mathcal{A}}|].\]
Case 1. Suppose \(D_{\mathcal{A}}\ \geq\ I\) and \(\langle\mathcal{A}\rangle+2I-D_{\mathcal{A}}-|B|\) is an \(M\)-matrix then
\[T \leq I-(\langle\mathcal{M}\rangle+2I+D_{\mathcal{A}})^{-1}[( \langle\mathcal{M}\rangle+2I+D_{\mathcal{A}})-|\mathcal{N}+I+D_{\mathcal{A}}|\] \[-D_{\mathcal{A}}+I-|L_{\mathcal{A}}+U_{\mathcal{A}}|]\] \[\leq I-(\langle\mathcal{M}\rangle+2I+D_{\mathcal{A}})^{-1}[I+( \langle\mathcal{M}\rangle+I+D_{\mathcal{A}})-|\mathcal{N}+I+D_{\mathcal{A}}|\] \[-D_{\mathcal{A}}+I-|L_{\mathcal{A}}+U_{\mathcal{A}}|]\] \[\leq I-(\langle\mathcal{M}\rangle+2I+D_{\mathcal{A}})^{-1}[( \langle\mathcal{M}\rangle+I+D_{\mathcal{A}})-|\mathcal{N}+I+D_{\mathcal{A}}|+ 2I-D_{\mathcal{A}}\] \[-|L_{\mathcal{A}}+U_{\mathcal{A}}|]\] \[\leq I-(\langle\mathcal{M}\rangle+2I+D_{\mathcal{A}})^{-1}( \langle\mathcal{A}\rangle+2I-D_{\mathcal{A}}-|B|).\]
Since \(\langle\mathcal{A}\rangle+2I-D_{\mathcal{A}}-|B|\) is an \(M\)-matrix, there exists a positive vector \(v>0\) such that
\[(\langle\mathcal{A}\rangle+2I-D_{\mathcal{A}}-|B|)v>0.\]
Therefore,
\[Tv\leq(I-(\langle\mathcal{M}\rangle+2I+D_{\mathcal{A}})^{-1}(\langle\mathcal{A}\rangle+2I-D_{\mathcal{A}}-|B|))v<v\]
\[Tv<v.\]
Case 2. Suppose \(D_{\mathcal{A}}{<}I\) then
\[T \leq I-(\langle\mathcal{M}\rangle+2I+D_{\mathcal{A}})^{-1}[(\langle \mathcal{M}\rangle+2I+D_{\mathcal{A}})-|\mathcal{N}+I+D_{\mathcal{A}}|\] \[+D_{\mathcal{A}}-I-|L_{\mathcal{A}}+U_{\mathcal{A}}|]\] \[\leq I-(\langle\mathcal{M}\rangle+2I+D_{\mathcal{A}})^{-1}[( \langle\mathcal{M}\rangle+I+D_{\mathcal{A}})-|\mathcal{N}+I+D_{\mathcal{A}}|\] \[+D_{\mathcal{A}}-|L_{\mathcal{A}}+U_{\mathcal{A}}|]\] \[\leq I-(\langle\mathcal{M}\rangle+2I+D_{\mathcal{A}})^{-1}[( \langle\mathcal{M}\rangle+I+D_{\mathcal{A}})-|\mathcal{N}+I+D_{\mathcal{A}}|+ D_{\mathcal{A}}\] \[-|L_{\mathcal{A}}+U_{\mathcal{A}}|]\] \[\leq I-2(\langle\mathcal{M}\rangle+2I+D_{\mathcal{A}})^{-1} \langle\mathcal{A}\rangle.\]
Since \(\langle\mathcal{A}\rangle\) is an \(M\)-matrix, then there exists a positive vector \(v>0\) such that
\[(\langle\mathcal{A}\rangle)v>0.\]
Therefore,
\[Tv\leq(I-2(\langle\mathcal{M}\rangle+2I+D_{\mathcal{A}})^{-1}\langle\mathcal{A}\rangle)v<v\]
This implies that
\[Tv<v.\]
From Cases 1 and 2 and based on Lemma 2.5, we obtain that \(\rho(T)<1\). Therefore, based on Theorem 4.1, the iteration sequence \(\{\zeta_{+}^{(\eta)}\}_{\eta=1}^{+\infty}\) generated by Method 3.1 converges to \(\zeta_{+}^{*}\) for any initial vector \(\zeta^{(0)}\).
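The sufficient condition of Theorem 4.1 is straightforward to check numerically for a given matrix and splitting. The following sketch (a helper of our own, written for the NPGS splitting \(\mathcal{M}=D_{\mathcal{A}}-L_{\mathcal{A}}\), \(\mathcal{N}=U_{\mathcal{A}}\)) evaluates the spectral radius \(\rho(T)\); a value below one certifies convergence of Method 3.1.

```python
import numpy as np

def npgs_contraction_radius(A):
    """rho(|(M + 2I + D_A)^{-1}| (|N + I + D_A| + |A - I|)) for the
    NPGS splitting M = D_A - L_A, N = U_A (cf. Theorem 4.1)."""
    n = A.shape[0]
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    I = np.eye(n)
    T = np.abs(np.linalg.inv(D - L + 2 * I + D)) @ \
        (np.abs(U + I + D) + np.abs(A - I))
    return np.max(np.abs(np.linalg.eigvals(T)))
```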
## 5 Numerical examples
In this section, two numerical examples are given to demonstrate the effectiveness of our proposed method. We use the following notation: the number of iteration steps is denoted by IT and the CPU time in seconds by CPU. Let \(\zeta^{(0)}=(1,0,\ldots,1,0,\ldots)^{T}\in\mathcal{R}^{n}\) be the initial vector and set \(\epsilon=10^{-5}\). We consider the \(\mathrm{LCP}(\sigma,\mathcal{A})\), which always has a unique solution, and define \(\sigma=-\mathcal{A}\lambda^{*}\), where \(\lambda^{*}=(1,2,1,\cdots,1,2)^{T}\in\mathcal{R}^{n}.\) The proposed new projected Gauss-Seidel iteration method (NPGS) and the new projected successive overrelaxation
iteration method (NPSOR) are compared with the modulus-based Gauss-Seidel (MGS) method and the modulus-based successive overrelaxation (MSOR) method [2], respectively, which are effective in solving \(\text{LCP}(\sigma,\mathcal{A})\); we set \(\Omega=\frac{1}{2\alpha}D_{\mathcal{A}}\) for the MGS and MSOR methods. Matlab version 2021a on an Acer desktop (Intel(R) Core(TM) i7-8700 CPU @ 3.20 GHz, 16.00 GB RAM) is used for all calculations. Table 1 and Table 2 list the numerical results for the new projected type iteration matrix splitting Method 3.1 (NPGS, NPSOR) and the modulus-based matrix splitting methods (MGS, MSOR).
**Example 5.1**.: _The system matrix \(\mathcal{A}\) is generated by \(\mathcal{A}=P_{1}+\delta_{1}I\), where \(\delta_{1}\) is a nonnegative real parameter and_
\[P_{1}=\begin{bmatrix}L_{1}&-I_{1}&0&\dots&0\\ -I_{1}&L_{1}&-I_{1}&\dots&0\\ 0&-I_{1}&L_{1}&-I_{1}&0\\ 0&\dots&-I_{1}&\ddots&-I_{1}\\ 0&\dots&0&-I_{1}&L_{1}\end{bmatrix}\in\mathcal{R}^{n\times n}\text{, }L_{1}= \begin{bmatrix}4&-1&\dots&\dots&0\\ -1&4&-1&\dots&0\\ 0&-1&4&-1&0\\ 0&\dots&-1&\ddots&-1\\ 0&\dots&\dots&-1&4\end{bmatrix}\]
\(\in\mathcal{R}^{m\times m}\)_, where \(I_{1}\) is the identity matrix of order \(m\)._
**Example 5.2**.: _The system matrix \(\mathcal{A}\in\mathcal{R}^{n\times n}\) is generated by \(\mathcal{A}=P_{1}+\delta_{1}I\), where \(\delta_{1}\) is a nonnegative real parameter and_
\[P_{1}=\begin{bmatrix}L_{1}&-0.5I_{1}&0&\dots&0\\ -1.5I_{1}&L_{1}&-0.5I_{1}&\dots&0\\ 0&-1.5I_{1}&L_{1}&-0.5I_{1}&0\\ 0&\dots&-1.5I_{1}&\ddots&-0.5I_{1}\\ 0&\dots&0&-1.5I_{1}&L_{1}\end{bmatrix}\text{, }L_{1}=\begin{bmatrix}4&-1&\dots&\dots&0\\ -1&4&-1&\dots&0\\ 0&-1&4&-1&0\\ 0&\dots&-1&\ddots&-1\\ 0&\dots&\dots&-1&4\end{bmatrix}\]
\(\in\mathcal{R}^{m\times m}\)_, where \(I_{1}\) is the identity matrix of order \(m\)._
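For reproducibility, both test matrices can be assembled with Kronecker products. The following NumPy sketch (the helper name is ours) builds \(\mathcal{A}=P_{1}+\delta_{1}I\) together with the right-hand side \(\sigma=-\mathcal{A}\lambda^{*}\) used in the experiments.

```python
import numpy as np

def build_example(m, delta1=4.0, lower=-1.0, upper=-1.0):
    """A = P1 + delta1*I of order n = m*m.  Example 5.1 uses
    lower = upper = -1; Example 5.2 uses lower = -1.5, upper = -0.5
    (the scalars multiplying the off-diagonal identity blocks)."""
    ones = np.ones(m - 1)
    L1 = 4 * np.eye(m) - np.diag(ones, 1) - np.diag(ones, -1)
    P1 = (np.kron(np.eye(m), L1)
          + np.kron(np.diag(ones, -1), lower * np.eye(m))
          + np.kron(np.diag(ones, 1), upper * np.eye(m)))
    return P1 + delta1 * np.eye(m * m)

A = build_example(30)                            # n = 900, Example 5.1
lam_star = np.tile([1.0, 2.0], A.shape[0] // 2)  # lambda^* = (1,2,...,1,2)^T
sigma = -A @ lam_star                            # sigma = -A lambda^*
```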
From Table 1 and Table 2, we can observe that the number of iteration steps required by our proposed NPGS and NPSOR methods is smaller than that of the MGS and MSOR methods.
## 6 Conclusion
In this article, we introduce a class of new projected-type iteration methods based on matrix splitting for solving the linear complementarity problem \(\text{LCP}(\sigma,\mathcal{A})\). During the iteration process, the large and sparse structure of \(\mathcal{A}\) is maintained by these iterative forms. Moreover, sufficient conditions for convergence are presented for the cases where \(\mathcal{A}\) is an \(H_{+}\)-matrix or a \(P\)-matrix. Finally, two numerical examples are provided to demonstrate the effectiveness of the proposed methods.
**Conflict of interest** The authors declare that there are no conflicts of interest.
**Acknowledgment.** The first author is thankful to the University Grants Commission (UGC), Government of India, under the JRF fellowship programme no. 1068/(CSIR-UGC NET DEC. 2017).
| Method | Metric | n = 100 | n = 900 | n = 2500 | n = 3600 | n = 6400 | n = 10000 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MGS (\(\alpha=1\)) | IT | 36 | 40 | 41 | 41 | 42 | 42 |
| | CPU | 0.0030 | 0.0254 | 0.2550 | 0.6083 | 1.8468 | 2.7943 |
| | Res | 9.7e-06 | 8.0e-06 | 7.9e-06 | 8.9e-06 | 7.4e-06 | 8.4e-06 |
| NPGS (\(\alpha_{1}=1\)) | IT | 21 | 23 | 24 | 24 | 25 | 25 |
| | CPU | 0.0021 | 0.0035 | 0.0175 | 0.0636 | 0.1725 | 0.3785 |
| | Res | 5.2e-06 | 7.1e-06 | 6.5e-06 | 8.0e-06 | 5.5e-06 | 7.0e-06 |
| MSOR (\(\alpha=0.85\)) | IT | 15 | 17 | 18 | 18 | 18 | 19 |
| | CPU | 0.0024 | 0.0044 | 0.0134 | 0.0462 | 0.1118 | 0.2246 |
| | Res | 9.5e-06 | 7.6e-06 | 5.2e-06 | 6.5e-06 | 8.9e-06 | 4.3e-06 |
| NPSOR (\(\alpha_{1}=1.7\)) | IT | 15 | 16 | 17 | 17 | 17 | 17 |
| | CPU | 0.0019 | 0.0031 | 0.0138 | 0.0449 | 0.1108 | 0.2217 |
| | Res | 5.9e-06 | 6.8e-06 | 4.2e-06 | 4.9e-06 | 6.3e-06 | 7.7e-06 |

Table 1: Results for MGS and MSOR methods and NPGS and NPSOR methods, when \(\delta_{1}=4\). |
2310.01391 | A Restoration Network as an Implicit Prior | Image denoisers have been shown to be powerful priors for solving inverse
problems in imaging. In this work, we introduce a generalization of these
methods that allows any image restoration network to be used as an implicit
prior. The proposed method uses priors specified by deep neural networks
pre-trained as general restoration operators. The method provides a principled
approach for adapting state-of-the-art restoration models for other inverse
problems. Our theoretical result analyzes its convergence to a stationary point
of a global functional associated with the restoration operator. Numerical
results show that the method using a super-resolution prior achieves
state-of-the-art performance both quantitatively and qualitatively. Overall,
this work offers a step forward for solving inverse problems by enabling the
use of powerful pre-trained restoration models as priors. | Yuyang Hu, Mauricio Delbracio, Peyman Milanfar, Ulugbek S. Kamilov | 2023-10-02T17:48:42Z | http://arxiv.org/abs/2310.01391v1 | # A Restoration Network as an Implicit Prior
###### Abstract
Image denoisers have been shown to be powerful priors for solving inverse problems in imaging. In this work, we introduce a generalization of these methods that allows any image restoration network to be used as an implicit prior. The proposed method uses priors specified by deep neural networks pre-trained as general restoration operators. The method provides a principled approach for adapting state-of-the-art restoration models for other inverse problems. Our theoretical result analyzes its convergence to a stationary point of a global functional associated with the restoration operator. Numerical results show that the method using a super-resolution prior achieves state-of-the-art performance both quantitatively and qualitatively. Overall, this work offers a step forward for solving inverse problems by enabling the use of powerful pre-trained restoration models as priors.
## 1 Introduction
Many problems in computational imaging, biomedical imaging, and computer vision can be formulated as _inverse problems_, where the goal is to recover a high-quality image from its low-quality observations. Imaging inverse problems are generally ill-posed, thus necessitating the use of prior models on the unknown images for accurate inference. While the literature on prior modeling of images is vast, current methods are primarily based on _deep learning (DL)_, where a deep model is trained to map observations to images (Lucas et al., 2018; McCann et al., 2017; Ongie et al., 2020).
Image denoisers have become popular for specifying image priors for solving inverse problems (Venkatakrishnan et al., 2013; Romano et al., 2017; Kadkhodaie and Simoncelli, 2021; Kamilov et al., 2023). Pre-trained denoisers provide a convenient proxy for image priors that does not require the description of the full density of natural images. The combination of state-of-the-art (SOTA) deep denoisers with measurement models has been shown to be effective in a number of inverse problems, including image super-resolution, deblurring, inpainting, microscopy, and medical imaging (Metzler et al., 2018; Zhang et al., 2017; Meinhardt et al., 2017; Dong et al., 2019; Zhang et al., 2019; Wei et al., 2020; Zhang et al., 2022) (see also the recent reviews Ahmad et al. (2020); Kamilov et al. (2023)). This success has led to active research on novel methods based on denoiser priors, their theoretical analyses, statistical interpretations, as well as connections to related approaches such as score matching and diffusion models (Chan et al., 2017; Romano et al., 2017; Buzzard et al., 2018; Reehorst and Schniter, 2019; Sun et al., 2019; Sun et al., 2019; Ryu et al., 2019; Xu et al., 2020; Liu et al., 2021; Cohen et al., 2021; Hurault et al., 2022a,b; Laumont et al., 2022; Gan et al., 2023).
Despite the rich literature on the topic, the prior work has narrowly focused on leveraging the statistical properties of denoisers. There is little work on extending the formalism and theory to priors specified using other types of image restoration operators, such as, for example, deep image super-resolution models. Such extensions would enable new algorithms that can leverage SOTA pre-trained restoration networks for solving other inverse problems. In this paper, we address this gap by developing the _Deep **R**estoration **P**riors (**DRP**) methodology that provides a principled approach for using restoration operators as priors. We show that when the restoration operator is a _minimum mean-squared error (MMSE)_ estimator, DRP can be interpreted as minimizing a composite objective function that includes log of the density of the degraded
image as the regularizer. Our interpretation extends the recent formalism based on using MMSE denoisers as priors (Bigdeli et al., 2017; Xu et al., 2020; Kadkhodaie and Simoncelli, 2021; Laumont et al., 2022; Gan et al., 2023). We present a theoretical convergence analysis of DRP to a stationary point of the objective function under a set of clearly specified assumptions. We show the practical relevance of DRP by solving several inverse problems by using a super-resolution network as a prior. Our numerical results show the potential of DRP to adapt the super-resolution model to act as an effective prior that can outperform image denoisers. This work thus addresses a gap in the current literature by providing a new principled framework for using pre-trained restoration models as priors for inverse problems.
All proofs and some details that have been omitted for space appear in the appendix.
## 2 Background
**Inverse Problems.** Many imaging problems can be formulated as inverse problems that seek to recover an unknown image \(\mathbf{x}\in\mathbb{R}^{n}\) from its corrupted observation
\[\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{e}, \tag{1}\]
where \(\mathbf{A}\in\mathbb{R}^{m\times n}\) is a measurement operator and \(\mathbf{e}\in\mathbb{R}^{m}\) is the noise. A common strategy for addressing inverse problems involves formulating them as an optimization problem
\[\widehat{\mathbf{x}}\in\operatorname*{arg\,min}_{\mathbf{x}\in\mathbb{R}^{n}}f(\mathbf{x })\quad\text{with}\quad f(\mathbf{x})=g(\mathbf{x})+h(\mathbf{x})\, \tag{2}\]
where \(g\) is the data-fidelity term that measures the fidelity to the observation \(\mathbf{y}\) and \(h\) is the regularizer that incorporates prior knowledge on \(\mathbf{x}\). For example, common functionals in imaging inverse problems are the least-squares data-fidelity term \(g(\mathbf{x})=\frac{1}{2}\left\|\mathbf{A}\mathbf{x}-\mathbf{y}\right\|_{2}^{2}\) and the total variation (TV) regularizer \(h(\mathbf{x})=\tau\left\|\mathbf{D}\mathbf{x}\right\|_{1}\), where \(\mathbf{D}\) is the image gradient, and \(\tau>0\) a regularization parameter.
**Deep Learning.** DL is extensively used for solving imaging inverse problems (McCann et al., 2017; Lucas et al., 2018; Ongie et al., 2020). Instead of explicitly defining a regularizer, DL methods often train convolutional neural networks (CNNs) to map the observations to the desired images (Wang et al., 2016; Jin et al., 2017; Kang et al., 2017; Chen et al., 2017; Delbracio et al., 2021; Delbracio and Milanfar, 2023). Model-based DL (MBDL) is a widely-used sub-family of DL algorithms that integrate physical measurement models with priors specified using CNNs (see reviews by Ongie et al. (2020); Monga et al. (2021)). The literature of MBDL is vast, but some well-known examples include plug-and-play priors (PnP), regularization by denoising (RED), deep unfolding (DU), compressed sensing using generative models (CSGM), and deep equilibrium models (DEQ) (Bora et al., 2017; Romano et al., 2017; Zhang and Ghanem, 2018; Hauptmann et al., 2018; Gilton et al., 2021; Liu et al., 2022). These approaches come with different trade-offs in terms of imaging performance, computational and memory complexity, flexibility, need for supervision, and theoretical understanding.
**Denoisers as Priors.** PnP (Venkatakrishnan et al., 2013; Sreehari et al., 2016) is one of the most popular MBDL approaches for inverse problems based on using deep denoisers as imaging priors (see recent reviews by Ahmad et al. (2020); Kamilov et al. (2023)). For example, the proximal-gradient method variant of PnP can be written as (Hurault et al., 2022)
\[\mathbf{x}^{k}\leftarrow\mathsf{prox}_{\gamma g}(\mathbf{z}^{k})\quad\text{with}\quad \mathbf{z}^{k}\leftarrow\mathbf{x}^{k-1}-\gamma\tau(\mathbf{x}^{k-1}-\mathsf{D}_{\sigma}( \mathbf{x}^{k-1})), \tag{3}\]
where \(\mathsf{D}_{\sigma}\) is a denoiser with a parameter \(\sigma>0\) for controlling its strength, \(\tau>0\) is a regularization parameter, and \(\gamma>0\) is a step-size. The theoretical convergence of PnP methods has been established for convex functions \(g\) using monotone operator theory (Sreehari et al., 2016; Sun et al., 2019; Ryu et al., 2019), as well as for nonconvex functions based on interpreting the denoiser as a MMSE estimator (Xu et al., 2020) or ensuring that the term \((\mathsf{l}-\mathsf{D}_{\sigma})\) in (3) corresponds to a gradient \(\nabla h\) of a function \(h\) parameterized by a deep neural network (Hurault et al., 2022, 2022; Cohen et al., 2021). Many variants of PnP have been developed over the past few years (Romano et al., 2017; Metzler et al., 2018; Zhang et al., 2017; Meinhardt et al.,
2017; Dong et al., 2019; Zhang et al., 2019; Wei et al., 2020), which has motivated an extensive research on its theoretical properties (Chan et al., 2017; Buzzard et al., 2018; Ryu et al., 2019; Sun et al., 2019; Tirer and Giryes, 2019; Teodoro et al., 2019; Xu et al., 2020; Sun et al., 2021; Cohen et al., 2021b; Hurault et al., 2022a; Laumont et al., 2022; Hurault et al., 2022b; Gan et al., 2023).
This work is most related to two recent PnP-inspired methods using restoration operators instead of denoisers (Zhang et al., 2019; Liu et al., 2020). Deep plug-and-play super-resolution (DPSR) (Zhang et al., 2019) was proposed to perform image super-resolution under arbitrary blur kernels by using a bicubic super-resolver as a prior. Regularization by artifact removal (RARE) (Liu et al., 2020) was proposed to use CNNs pre-trained directly on subsampled and noisy Fourier data as priors for magnetic resonance imaging (MRI). These prior methods did not leverage statistical interpretations of the restoration operators to provide a theoretical analysis for the corresponding PnP variants.
It is also worth highlighting the work of Gribonval and colleagues on theoretically exploring the relationship between MMSE restoration operators and proximal operators (Gribonval, 2011; Gribonval and Machart, 2013; Gribonval and Nikolova, 2021). Some of the observations and intuition in that prior line of work is useful for the theoretical analysis of the proposed DRP methodology.
**Our contribution**. (1) Our first contribution is the new method DRP for solving inverse problems using the prior implicit in a pre-trained deep restoration network. Our method is a major extension of recent methods (Bigdeli et al., 2017; Xu et al., 2020; Kadkhodaie and Simoncelli, 2021; Gan et al., 2023) from denoisers to more general restoration operators. (2) Our second contribution is a new theory that characterizes the solution and convergence of DRP under priors associated with the MMSE restoration operators. Our theory is general in the sense that it allows for nonsmooth data-fidelity terms and expansive restoration models. (3) Our third contribution is the implementation of DRP using the popular SwinIR (Liang et al., 2021) super-resolution model as a prior for two distinct inverse problems, namely deblurring and super-resolution. Our implementation shows the potential of using restoration models to achieve SOTA performance.
## 3 Deep Restoration Prior
Image denoisers are currently extensively used as priors for solving inverse problems. We extend this approach by proposing the following method that uses a more general restoration operator.
```
1:input: Initial value \(\mathbf{x}^{0}\in\mathbb{R}^{n}\) and parameters \(\gamma,\tau>0\)
2:for\(k=1,2,3,\dots\)do
3:\(\mathbf{z}^{k}\leftarrow\mathbf{x}^{k-1}-\gamma\tau\mathsf{G}(\mathbf{x}^{k-1})\) where \(\mathsf{G}(\mathbf{x})\,\coloneqq\,\mathbf{x}-\mathsf{R}(\mathbf{H}\mathbf{x})\)
4:\(\mathbf{x}^{k}\leftarrow\mathsf{sprox}_{\gamma g}(\mathbf{z}^{k})\)
5:endfor
```
**Algorithm 1** Deep Restoration Priors (DRP)
The prior in Algorithm 1 is implemented in Line 3 using a deep model \(\mathsf{R}:\mathbb{R}^{p}\to\mathbb{R}^{n}\) pre-trained to solve the following restoration problem
\[\mathbf{s}=\mathbf{H}\mathbf{x}+\mathbf{n}\quad\text{with}\quad\mathbf{x}\sim p_{\mathbf{x}},\quad \mathbf{n}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I}), \tag{4}\]
where \(\mathbf{H}\in\mathbb{R}^{p\times n}\) is a degradation operator, such as blur or downscaling, and \(\mathbf{n}\in\mathbb{R}^{p}\) is the _additive white Gaussian noise (AWGN)_ of variance \(\sigma^{2}\). The density \(p_{\mathbf{x}}\) is the prior distribution of the desired class of images. Note that the restoration problem (4) is only used for training \(\mathsf{R}\) and doesn't have to correspond to the inverse problem in (1) we are seeking to solve. When \(\mathbf{H}=\mathbf{I}\), the restoration operator \(\mathsf{R}\) reduces to an AWGN denoiser used in the traditional PnP methods (Romano et al., 2017; Kadkhodaie and Simoncelli, 2021; Hurault et al., 2022a). The goal of DRP is to leverage a pre-trained restoration network \(\mathsf{R}\) to gain access to the prior.
The measurement consistency is implemented in Line 4 using the _scaled_ proximal operator
\[\mathsf{sprox}_{\gamma g}(\mathbf{z})\,\coloneqq\,\mathsf{ prox}_{\gamma g}^{\mathbf{H}^{\mathsf{T}}\mathbf{H}}(\mathbf{z})=\underset{\mathbf{x}\in \mathbb{R}^{n}}{\arg\min}\left\{\frac{1}{2}\|\mathbf{x}-\mathbf{z}\|_{\mathbf{H}^{ \mathsf{T}}\mathbf{H}}^{2}+\gamma g(\mathbf{x})\right\}, \tag{5}\]
where \(\|\mathbf{v}\|_{\mathbf{H}^{\mathsf{T}}\mathbf{H}}\coloneqq\sqrt{\mathbf{v}^{\mathsf{T}}\mathbf{H}^{\mathsf{T}}\mathbf{H}\mathbf{v}}\) denotes the weighted Euclidean seminorm of a vector \(\mathbf{v}\). When \(\mathbf{H}^{\mathsf{T}}\mathbf{H}\) is positive definite and \(g\) is convex, the functional being minimized in (5) is strictly convex, which directly implies that the solution is unique. On the other hand, when \(g\) is not convex or \(\mathbf{H}^{\mathsf{T}}\mathbf{H}\) is merely positive semidefinite, there might be multiple solutions and the scaled proximal operator simply returns one of the solutions. It is also worth noting that (5) has an efficient solution when \(g\) is the least-squares data-fidelity term (see for example the discussion in Kamilov et al. (2023) on efficient implementations of proximal operators of least-squares).
The fixed points of Algorithm 1 can be characterized for subdifferentiable \(g\) (see Chapter 3 in Beck (2017) for a discussion on subdifferentiability). When DRP converges, it converges to vectors \(\mathbf{x}^{*}\in\mathbb{R}^{n}\) that satisfy (see formal analysis in Appendix A.1)
\[\mathbf{0}\in\partial g(\mathbf{x}^{*})+\tau\mathbf{H}^{\mathsf{T}}\mathbf{H}\mathbf{G }(\mathbf{x}^{*}) \tag{6}\]
where \(\partial g\) is the subdifferential of \(g\) and \(\mathsf{G}\) is defined in Line 3 of Algorithm 1. As discussed in the next section, under additional assumptions, one can associate the fixed points of DRP with the stationary points of a composite objective function \(f=g+h\) for some regularizer \(h\).
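The overall loop of Algorithm 1 is compact. A minimal Python sketch is given below, where `restore`, `H`, and `sprox` are assumed to be user-supplied callables: the pre-trained restoration network \(\mathsf{R}\), the degradation operator \(\mathbf{H}\) it was trained with, and a solver for the scaled proximal operator (5).

```python
def drp(x0, sprox, restore, H, gamma, tau, num_iters=100):
    """Deep Restoration Priors (Algorithm 1): alternate a gradient-like
    step on the implicit restoration prior with a scaled proximal step."""
    x = x0
    for _ in range(num_iters):
        G = x - restore(H(x))   # Line 3: G(x) = x - R(Hx)
        z = x - gamma * tau * G
        x = sprox(z)            # Line 4: sprox_{gamma g} in the H^T H metric
    return x
```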
## 4 Convergence Analysis of DRP
In this section, we present a theoretical analysis of DRP. We first provide a more insightful interpretation of its solutions for restoration models that compute MMSE estimators of (4). We then discuss the convergence of the iterates generated by DRP. Our analysis will require several assumptions that act as sufficient conditions for our theoretical results.
We will consider restoration models that perform MMSE estimation of \(\mathbf{x}\in\mathbb{R}^{n}\) for the problem (4)
\[\mathsf{R}(\mathbf{s})=\mathbb{E}\left[\mathbf{x}|\mathbf{s}\right]=\int\mathbf{x}p_{\mathbf{x}| \mathbf{s}}(\mathbf{x};\mathbf{s})\,\mathrm{d}\mathbf{x}=\int\mathbf{x}\frac{p_{\mathbf{s}|\mathbf{x}}(\bm {s};\mathbf{x})p_{\mathbf{x}}(\mathbf{x})}{p_{\mathbf{s}}(\mathbf{s})}\,\mathrm{d}\mathbf{x}. \tag{7}\]
where we used the probability density of the observation \(\mathbf{s}\in\mathbb{R}^{p}\)
\[p_{\mathbf{s}}(\mathbf{s})=\int p_{\mathbf{s}|\mathbf{x}}(\mathbf{s};\mathbf{x})p_{\mathbf{x}}(\mathbf{x})\, \mathrm{d}\mathbf{x}=\int G_{\sigma}(\mathbf{s}-\mathbf{H}\mathbf{x})p_{\mathbf{x}}(\mathbf{x})\, \mathrm{d}\mathbf{x}. \tag{8}\]
The function \(G_{\sigma}\) in (8) denotes the Gaussian density function with the standard deviation \(\sigma>0\).
**Assumption 1**.: _The prior density \(p_{\mathbf{x}}\) is non-degenerate over \(\mathbb{R}^{n}\)._
As a reminder, a probability density \(p_{\mathbf{x}}\) is degenerate over \(\mathbb{R}^{n}\), if it is supported on a space of lower dimensions than \(n\). Our goal is to establish an explicit link between the MMSE restoration operator (7) and the following regularizer
\[h(\mathbf{x})=-\tau\sigma^{2}\log p_{\mathbf{s}}(\mathbf{H}\mathbf{x}),\quad\mathbf{x}\in \mathbb{R}^{n}, \tag{9}\]
where \(\tau\) is the parameter in Algorithm 1, \(p_{\mathbf{s}}\) is the density of the observation (8), and \(\sigma^{2}\) is the AWGN level used for training the restoration network. We adopt Assumption 1 to have a more intuitive mathematical exposition, but one can in principle generalize the link between MMSE operators and regularization beyond non-degenerate priors (Gribonval and Machart, 2013). It is also worth observing that the function \(h\) is infinitely continuously differentiable, since it is obtained by integrating \(p_{\mathbf{x}}\) with a Gaussian density \(G_{\sigma}\)(Gribonval, 2011; Gribonval and Machart, 2013).
**Assumption 2**.: _The scaled proximal operator \(\mathsf{sprox}_{\gamma g}\) is well-defined in the sense that there exists a solution to the problem (5) for any \(\mathbf{z}\in\mathbb{R}^{n}\). The function \(g\) is subdifferentiable over \(\mathbb{R}^{n}\)._
This mild assumption is necessary for us to be able to run our method. There are multiple ways to ensure that the scaled proximal operator is well defined. For example, \(\mathsf{sprox}_{\gamma g}\) is always well-defined for any \(g\) that is proper, closed, and convex (Parikh and Boyd, 2014). This directly makes DRP applicable with the popular least-squares data-fidelity term \(g(\mathbf{x})=\frac{1}{2}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_{2}^{2}\). One can relax the assumption of convexity by
considering \(g\) that is proper, closed, and coercive, in which case \(\mathsf{sprox}_{\gamma g}\) will have a solution (see for example Chapter 6 of Beck (2017)). Note that we do not require the solution to (5) to be unique; it is sufficient for \(\mathsf{sprox}_{\gamma g}\) to return one of the solutions.
We are now ready to theoretically characterize the solutions of DRP.
**Theorem 1**.: _Let \(\mathsf{R}\) be the MMSE restoration operator (7) corresponding to the restoration problem (4) under Assumptions 1-3. Then, any fixed-point \(\mathbf{x}^{*}\in\mathbb{R}^{n}\) of DRP satisfies_
\[\mathbf{0}\in\partial g(\mathbf{x}^{*})+\nabla h(\mathbf{x}^{*}),\]
_where \(h\) is given in (9)._
The proof of the theorem is provided in the appendix and generalizes the well-known _Tweedie's formula_(Robbins, 1956; Miyasawa, 1961; Gribonval, 2011) to restoration operators. The theorem implies that the solutions of DRP satisfy the first-order conditions for the objective function \(f=g+h\). If \(g\) is a negative log-likelihood \(p_{\mathbf{y}|\mathbf{x}}\), then the fixed-points of DRP can be interpreted as _maximum-a-posteriori probability (MAP)_ solutions corresponding to the prior density \(p_{\mathbf{s}}\). The density \(p_{\mathbf{s}}\) is related to the true prior \(p_{\mathbf{x}}\) through eq. (8), which implies that DRP has access to the prior \(p_{\mathbf{x}}\) through the restoration operator \(\mathsf{R}\) via density \(p_{\mathbf{s}}\). As \(\mathbf{H}\to\mathbf{I}\) and \(\sigma\to 0\), the density \(p_{\mathbf{s}}\) approaches the prior distribution \(p_{\mathbf{x}}\).
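The identity underlying Theorem 1 can be sanity-checked in the scalar case, where everything is available in closed form. The snippet below is an illustrative toy with a Gaussian prior (our simplifying assumption, not the setting of the theorem): for \(s=hx+n\) it verifies the generalized Tweedie relation \(\sigma^{2}\,\mathrm{d}\log p_{\mathbf{s}}(s)/\mathrm{d}s=h\,\mathsf{R}(s)-s\), the scalar analogue of the relation between \(\nabla h\) and the residual \(\mathsf{G}\) used in Line 3 of Algorithm 1.

```python
import numpy as np

# Scalar toy: x ~ N(mu0, rho^2) (an assumed Gaussian prior), s = h*x + n
# with n ~ N(0, sigma^2), so p_s = N(h*mu0, h^2*rho^2 + sigma^2).
h, sigma, mu0, rho = 0.5, 0.3, 1.0, 2.0
s = np.linspace(-3.0, 3.0, 7)

v = h**2 * rho**2 + sigma**2                  # variance of p_s
score = -(s - h * mu0) / v                    # d/ds log p_s(s)
R = mu0 + (h * rho**2 / v) * (s - h * mu0)    # MMSE estimate E[x | s]

# sigma^2 * score(s) = h * R(s) - s, the scalar Tweedie identity
assert np.allclose(sigma**2 * score, h * R - s)
```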
The convergence analysis of DRP will require additional assumptions.
**Assumption 3**.: _The data-fidelity term \(g\) and the implicit regularizer \(h\) are bounded from below._
This assumption implies that there exists \(f^{*}>-\infty\) such that \(f(\mathbf{x})\geq f^{*}\) for all \(\mathbf{x}\in\mathbb{R}^{n}\).
**Assumption 4**.: _The function \(h\) has a Lipschitz continuous gradient with constant \(L>0\). The degradation operator associated with the restoration network is such that \(\lambda\succeq\mathbf{H}^{\mathsf{T}}\mathbf{H}\succeq\mu>0\)._
This assumption is related to the implicit prior associated with a restoration model and is necessary to ensure the monotonic reduction of the objective \(f\) by the DRP iterates. As stated under eq. (9), the function \(h\) is infinitely continuously differentiable. We additionally adopt the standard optimization assumption that \(\nabla h\) is Lipschitz continuous (Nesterov, 2004). It is also worth noting that the positive definiteness of \(\mathbf{H}^{\mathsf{T}}\mathbf{H}\) in Assumption 4 is a relaxation of the traditional PnP assumption that the prior is a denoiser, which makes our theoretical analysis a significant extension of the prior work (Bigdeli et al., 2017; Xu et al., 2020; Kadkhodaie & Simoncelli, 2021; Gan et al., 2023).
We are now ready to state the following results.
**Theorem 2**.: _Run DRP for \(t\geq 1\) iterations under Assumptions 1-4 using a step-size \(\gamma=\mu/(\alpha L)\) with \(\alpha>1\). Then, for each iteration \(1\leq k\leq t\), there exists \(\mathbf{w}(\mathbf{x}^{k})\in\partial f(\mathbf{x}^{k})\) such that_
\[\min_{1\leq k\leq t}\|\mathbf{w}(\mathbf{x}^{k})\|_{2}^{2}\leq\frac{1}{t}\sum_{k=1}^{ t}\|\mathbf{w}(\mathbf{x}^{k})\|_{2}^{2}\leq\frac{C(f(\mathbf{x}^{0})-f^{*})}{t},\]
_where \(C>0\) is an iteration independent constant._
The exact expression for the constant \(C\) is given in the proof. Theorem 2 shows that the iterates generated by DRP satisfy \(\mathbf{w}(\mathbf{x}^{k})\to\mathbf{0}\) as \(t\to\infty\). Theorems 1 and 2 do not explicitly require convexity or smoothness of \(g\), and non-expansiveness of \(\mathsf{R}\). They can thus be viewed as a major generalization of the existing theory from denoisers to more general restoration operators.
## 5 Numerical Results
We now numerically validate DRP on several distinct inverse problems. Due to space limitations in the main paper, we have included several additional numerical results in the appendix.
We consider two inverse problems of the form \(\mathbf{y}=\mathbf{Ax}+\mathbf{e}\): (a) _Image Deblurring_ and (b) _Single Image Super Resolution (SISR)_. For both problems, we assume that \(\mathbf{e}\) is additive white Gaussian noise (AWGN). We adopt the traditional \(\ell_{2}\)-norm loss as the data-fidelity term in (2) for both problems. We use the Peak Signal-to-Noise Ratio (PSNR) for quantitative performance evaluation.
In the main manuscript, we compare DRP with several variants of denoiser-based methods, including SD-RED (Romano et al., 2017), PnP-ADMM (Chan et al., 2017), IRCNN (Zhang et al., 2017), and DPIR (Zhang et al., 2022). SD-RED and PnP-ADMM refer to the steepest-descent variant of RED and the ADMM variant of PnP, both of which incorporate AWGN denoisers based on DnCNN (Zhang et al., 2017). IRCNN and DPIR are based on half-quadratic splitting (HQS) iterations that use the IRCNN and the DRUNet denoisers, respectively.
In the appendix, we present several additional comparisons, namely: (a) evaluation of the performance of DRP on the task of image denoising; (b) additional comparison of DRP with the recent provably convergent variant of PnP called gradient-step plug-and-play (GS-PnP) (Hurault et al., 2022); (c) comparison of DRP with the diffusion posterior sampling (DPS) (Chung et al., 2023) method that uses a denoising diffusion model as a prior; and (d) illustration of the improvement of DRP using SwinIR as a prior over the direct application of SwinIR on SR using the Gaussian kernel.
### Swin Transformer based Super Resolution Prior
**Super Resolution Network Architecture.** We pre-trained a \(q\times\) super resolution model \(\mathsf{R}_{q}\) using the SwinIR (Liang et al., 2021) architecture based on the Swin Transformer. Our training dataset comprised both the DIV2K (Agustsson and Timofte, 2017) and Flickr2K (Lim et al., 2017) datasets, containing 3450 color images in total. During training, we applied \(q\times\) bicubic downsampling to the input images with AWGN characterized by a standard deviation \(\sigma\) randomly chosen in [0, 10/255]. We used three SwinIR SR models, each trained for a different down-sampling factor: \(2\times\), \(3\times\) and \(4\times\).
**Prior Refinement Strategy for the Super Resolution prior.** Theorem 1 suggests that as \(\mathbf{H}\rightarrow\mathbf{I}\), the prior in DRP converges to \(p_{\mathbf{x}}\). This process can be approximated for SwinIR by controlling the down-sampling factor \(q\) of the SR restoration prior \(\mathsf{R}_{q}(\cdot)\). We observed through our numerical experiments that gradual reduction of \(q\) leads to less reconstruction artifacts and enhanced fine details. We will denote the approach of gradually reducing \(q\) as _prior refinement strategy_. We initially set \(q\) to a larger down-sampling factor, which acts as a more aggressive prior; we then reduce \(q\) to a smaller value leading to preservation of finer details. This strategy is conceptually analogous to the gradual reduction of \(\sigma\) in the denoiser in the SOTA PnP methods such as DPIR (Zhang et al., 2022).
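In code, the strategy amounts to a simple schedule on \(q\). The sketch below is one illustrative choice of ours (switching from the \(3\times\) to the \(2\times\) prior after a fixed fraction of the iterations), not the exact schedule used in the experiments.

```python
def refine_q(k, num_iters, schedule=((0.0, 3), (0.6, 2))):
    """Return the SR factor q for iteration k: use the 3x prior first,
    then switch to the 2x prior after 60% of the iterations (an
    illustrative, hand-picked schedule)."""
    frac = k / max(1, num_iters - 1)
    q = schedule[0][1]
    for start, val in schedule:
        if frac >= start:
            q = val
    return q
```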
Figure 1: Illustration of the convergence behaviour of DRP for image deblurring and single image super resolution on the Set3c dataset. _(a)-(b)_: Deblurring with Gaussian blur kernels of standard deviations 1.6 and 2.0. _(c)-(d)_: \(2\times\) and \(3\times\) super resolution with the Gaussian blur kernel of standard deviation 2.0. Average distance \(\|\mathbf{x}^{k}-\mathbf{x}^{k-1}\|_{2}^{2}\) and PSNR relative to the groundtruth are plotted, with shaded areas indicating the standard deviation of these metrics across all test images.
### Image Deblurring
Image deblurring is based on the degradation operator of the form \(\mathbf{A}=\mathbf{K}\), where \(\mathbf{K}\) is a convolution with the blur kernel \(\mathbf{k}\). We consider image deblurring using two \(25\times 25\) Gaussian kernels (with the standard deviations 1.6 and 2) used in Zhang et al. (2019), and the AWGN vector \(\mathbf{e}\) corresponding to noise level of 2.55/255. The restoration model used as a prior in DRP is SwinIR introduced in Section 5.1, so that the operation \(\mathbf{H}\) corresponds to the standard bicubic downsampling. The scaled proximal operator \(\mathsf{sprox}_{\gamma g}\) in (5) with data-fidelity term \(g(\mathbf{x})=\frac{1}{2}\left\|\mathbf{y}-\mathbf{K}\mathbf{x}\right\|_{2}^{2}\) can be written as
\[\mathsf{sprox}_{\gamma g}(\mathbf{z})=(\mathbf{K}^{\mathsf{T}}\mathbf{K}+\gamma\mathbf{H}^ {\mathsf{T}}\mathbf{H})^{-1}[\mathbf{K}^{\mathsf{T}}\mathbf{y}+\gamma\mathbf{H}^{ \mathsf{T}}\mathbf{H}\mathbf{z}]. \tag{10}\]
We adopt a standard approach of using a few iterations of the conjugate gradient (CG) method (see for example Aggarwal et al. (2019)) to implement the scaled proximal operator (10) while avoiding the direct inversion of \((\mathbf{K}^{\mathsf{T}}\mathbf{K}+\gamma\mathbf{H}^{\mathsf{T}}\mathbf{H})\). In each DRP iteration, we run three steps of a CG solver, starting from a warm initialization from the previous DRP iteration. We fine-tuned the hyper-parameters \(\gamma\) and \(\tau\) and the SR restoration prior factor \(q\) to achieve the highest PSNR value on the Set5 dataset and then applied the same configuration to the other three datasets.
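A matrix-free CG solver for (10) takes only a few lines. The sketch below is our own helper built on SciPy, with `K`, `Kt`, `H`, `Ht` assumed to be callables implementing the blur and bicubic-downsampling operators and their adjoints on flattened images; it mirrors the warm-started, three-step CG described above.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def sprox_deblur(z, y, K, Kt, H, Ht, gamma, n, x0=None, cg_iters=3):
    """Scaled proximal operator of g(x) = 0.5 ||y - Kx||^2, Eq. (10):
    a few CG steps on (K^T K + gamma H^T H) x = K^T y + gamma H^T H z."""
    Aop = LinearOperator((n, n),
                         matvec=lambda x: Kt(K(x)) + gamma * Ht(H(x)))
    rhs = Kt(y) + gamma * Ht(H(z))
    x, _ = cg(Aop, rhs, x0=x0, maxiter=cg_iters)  # x0: warm start
    return x
```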
Figure 1 (a)-(b) illustrates the convergence behaviour of DRP on the Set3c dataset for two blur kernels. Table 1 presents the quantitative evaluation of the reconstruction performance on two different blur kernels, showing that DRP outperforms the baseline methods across four widely-used datasets. Figure 2 visually illustrates the reconstructed results on the same two blur kernels. Note how DRP can reconstruct the fine details of the tiger and starfish, as highlighted within the zoom-in boxes, while all the other baseline methods yield either oversmoothed reconstructions or noticeable artifacts. These results show that DRP can leverage SwinIR as an implicit prior, which not only ensures stable convergence, but also leads to competitive performance when compared to denoisers priors.
Figure 3 illustrates the impact of the _prior-refinement strategy_ described in Section 5.1. We compare three settings: (i) use of only \(3\times\) prior, (ii) use of only \(2\times\) prior, and (iii) use of the prior-refinement strategy to leverage both \(3\times\) and \(2\times\) priors. The subfigure on the left shows the convergence of DRP for each configuration, while the ones on the right show the final imaging quality. Note how the reduction of \(q\) leads to better performance, which is analogous to what was observed with the reduction of \(\sigma\) in the SOTA PnP methods (Zhang et al., 2022).
| Kernel | Dataset | SD-RED | PnP-ADMM | IRCNN+ | DPIR | DRP |
| --- | --- | --- | --- | --- | --- | --- |
| Gaussian, std 1.6 | Set3c | 27.14 | 29.11 | 28.14 | 29.53 | **30.69** |
| | Set5 | 29.78 | 32.31 | 29.46 | 32.38 | **32.79** |
| | CBSD68 | 25.78 | 28.90 | 26.86 | 28.86 | **29.10** |
| | McMaster | 29.69 | 32.20 | 29.15 | 32.42 | **32.79** |
| Gaussian, std 2.0 | Set3c | 25.83 | 27.05 | 26.58 | 27.52 | **27.89** |
| | Set5 | 28.13 | 30.77 | 28.75 | 30.94 | **31.04** |
| | CBSD68 | 24.43 | 27.45 | 25.97 | **27.52** | 27.46 |
| | McMaster | 28.71 | 30.50 | 28.27 | 30.78 | **30.79** |

Table 1: PSNR (dB) of DRP and several SOTA methods for solving inverse problems using denoisers on image deblurring with the Gaussian blur kernels of standard deviation 1.6 and 2.0 on the Set3c, Set5, CBSD68 and McMaster datasets. The best results are highlighted in bold. Note how DRP can outperform SOTA PnP methods that use denoisers as priors.

### Single Image Super Resolution

We apply DRP using the bicubic SwinIR prior to the Single Image Super Resolution (SISR) task. The measurement operator in SISR can be written as \(\mathbf{A}=\mathbf{S}\mathbf{K}\), where \(\mathbf{K}\) is convolution with the blur kernel \(\mathbf{k}\) and \(\mathbf{S}\) performs standard \(d\)-fold down-sampling with \(d^{2}=n/m\). The scaled proximal operator \(\mathsf{sprox}_{\gamma g}\) in (5) with data-fidelity term \(g(\mathbf{x})=\frac{1}{2}\left\|\mathbf{y}-\mathbf{S}\mathbf{K}\mathbf{x}\right\|_{2}^{2}\) can be written as:
\[\mathsf{sprox}_{\gamma g}(\mathbf{z})=(\mathbf{K}^{\mathsf{T}}\mathbf{S}^{\mathsf{T}}\mathbf{S} \mathbf{K}+\gamma\mathbf{H}^{\mathsf{T}}\mathbf{H})^{-1}[\mathbf{K}^{\mathsf{T}}\mathbf{S} ^{\mathsf{T}}\mathbf{y}+\gamma\mathbf{H}^{\mathsf{T}}\mathbf{H}\mathbf{z}], \tag{11}\]
where \(\mathbf{H}\) is the bicubic downsampling operator. Similarly to deblurring in Section 5.2, we use CG to efficiently compute (11). We adjust the hyper-parameters \(\gamma\) and \(\tau\) and the SR restoration prior factor \(q\) for the best PSNR performance on Set5, and then use these parameters on the remaining datasets.
We evaluate super-resolution performance across two \(25\times 25\) Gaussian blur kernels, each with distinct standard deviations (1.6 and 2.0), and for two distinct downsampling factors (\(2\times\) and \(3\times\)), incorporating an AWGN vector \(\mathbf{e}\) corresponding to noise level of 2.55/255.
Figure 1 (c)-(d) illustrates the convergence behaviour of DRP on the Set3c dataset for \(2\times\) and \(3\times\) SISR. Figure 4 shows the visual reconstruction results for the same downsampling factors. Table 2 summarizes the PSNR values achieved by DRP relative to other baseline methods when applied to different blur kernels and downsampling factors on four commonly used datasets.
It is worth highlighting that the SwinIR model used in DRP was pre-trained for the bicubic super-resolution task. Consequently, the direct application of the pre-trained SwinIR to the setting considered in this section leads to suboptimal performance due to the mismatch between the kernels used. See Appendix B.4 for how DRP improves over the direct application of SwinIR.
## 6 Conclusion
The work presented in this paper proposes a new DRP method for solving imaging inverse problems by using pre-trained restoration operators as priors, presents its theoretical analysis in terms of convergence, and applies the method to two well-known inverse problems. The proposed method and its theoretical analysis extend the recent work using denoisers as priors by considering more general restoration operators. The numerical validation of DRP shows the improvements due to the use of learned SOTA super-resolution models. One conclusion of this work is the potential effectiveness of going beyond priors specified by traditional denoisers.

Figure 2: Visual comparison of DRP with several well-known methods on Gaussian deblurring of color images. The top row shows results for a blur kernel with a standard deviation (std) of 1.6, while the bottom row shows results for another blur kernel with std = 2. The squares at the bottom-left corner of blurry images show the blur kernels. Each image is labeled by its PSNR in dB with respect to the original image. The visual differences are highlighted in the bottom-right corner. Note how DRP using restoration prior improves over SOTA methods based on denoiser priors.
## Limitations
The work presented in this paper comes with several limitations. The proposed DRP method uses pre-trained restoration models as priors, which means that its performance is inherently limited by the quality of the pre-trained model. As shown in this paper, pre-trained restoration models provide a convenient, principled, and flexible mechanism to specify priors; yet, they are inherently self-supervised and their empirical performance can thus be suboptimal compared to priors trained in a supervised fashion for a specific inverse problem. Our theory is based on the assumption that the restoration prior used for inference performs MMSE estimation. While this assumption is reasonable for deep networks trained using the MSE loss, it is not directly applicable to denoisers trained using other common loss functions, such as the \(\ell_{1}\)-norm or SSIM. Finally, as is common with most theoretical work, our theoretical conclusions only hold when our assumptions are satisfied, which might limit their applicability in certain settings. Our future work will continue investigating ways to extend our theory by exploring alternative strategies for relaxing our assumptions.

Figure 4: Visual comparison of DRP and several well-known methods on single image super resolution. The top row displays performances for \(2\times\) SR, while the bottom row showcases results for \(3\times\) SR. The lower-left corner of each low-resolution image shows the blur kernels. Each image is labeled by its PSNR in dB with respect to the original image. The visual differences are highlighted by the boxes in the bottom-right corner. Note the excellent performance of the proposed DRP method using the SwinIR prior both visually and in terms of PSNR.

Figure 3: Illustration of the impact of different SR factors in the prior used within DRP for image deblurring. We show three scenarios: (i) using only \(3\times\) prior, (ii) using only \(2\times\) prior, and (iii) the use of the _prior refinement strategy_, which combines both the \(2\times\) and \(3\times\) priors. _Left_: Convergence of PSNR against the iteration number for all three configurations. _Right_: Visual illustration of the final image for each setting. The visual difference is highlighted by the red arrow in the zoom-in box. Note how the reduction of \(q\) can lead to about \(0.3\) dB improvement in the final performance.
|
2309.02923 | Patched Line Segment Learning for Vector Road Mapping | This paper presents a novel approach to computing vector road maps from
satellite remotely sensed images, building upon a well-defined Patched Line
Segment (PaLiS) representation for road graphs that holds geometric
significance. Unlike prevailing methods that derive road vector representations
from satellite images using binary masks or keypoints, our method employs line
segments. These segments not only convey road locations but also capture their
orientations, making them a robust choice for representation. More precisely,
given an input image, we divide it into non-overlapping patches and predict a
suitable line segment within each patch. This strategy enables us to capture
spatial and structural cues from these patch-based line segments, simplifying
the process of constructing the road network graph without the necessity of
additional neural networks for connectivity. In our experiments, we demonstrate
how an effective representation of a road graph significantly enhances the
performance of vector road mapping on established benchmarks, without requiring
extensive modifications to the neural network architecture. Furthermore, our
method achieves state-of-the-art performance with just 6 GPU hours of training,
leading to a substantial 32-fold reduction in training costs in terms of GPU
hours. | Jiakun Xu, Bowen Xu, Gui-Song Xia, Liang Dong, Nan Xue | 2023-09-06T11:33:25Z | http://arxiv.org/abs/2309.02923v1 | # Patched Line Segment Learning for Vector Road Mapping
###### Abstract
This paper presents a novel approach to computing vector road maps from satellite remotely sensed images, building upon a well-defined Patched Line Segment (PaLiS) representation for road graphs that holds geometric significance. Unlike prevailing methods that derive road vector representations from satellite images using binary masks or keypoints, our method employs line segments. These segments not only convey road locations but also capture their orientations, making them a robust choice for representation. More precisely, given an input image, we divide it into non-overlapping patches and predict a suitable line segment within each patch. This strategy enables us to capture spatial and structural cues from these patch-based line segments, simplifying the process of constructing the road network graph without the necessity of additional neural networks for connectivity. In our experiments, we demonstrate how an effective representation of a road graph significantly enhances the performance of vector road mapping on established benchmarks, without requiring extensive modifications to the neural network architecture. Furthermore, our method achieves state-of-the-art performance with just 6 GPU hours of training, leading to a substantial 32-fold reduction in training costs in terms of GPU hours.
## 1 Introduction
By "vector road mapping", it refers to a process of converting the road features presented in satellite-borne remote sensing images into vector-based and symbolic graph representations, which is also known as _road graph extraction_ or _road network extraction_ within the community of remote sensing and plays a fundamental role in numerous downstream tasks including navigation [12, 13], urban planning [14, 15], and autonomous driving [16, 17, 18, 19].
The state-of-the-art methods for vector road mapping primarily rely on the strong representation capabilities of deep neural networks. These approaches formulate the problem as a supervised learning task, utilizing paired satellite images and annotated road graphs that use vertices and edges to depict the line and curve structures of roads. As the input images are in pixel form, it becomes crucial to establish an appropriate representation for facilitating the learning from the pixels of satellite images to the vector representation of roads. In the state-of-the-art methods [1, 13, 14], the "appropriate representation" of vector road annotations initially came down to the mask-based representation (_i.e._, road masks) and was then upgraded to the keypoint-based graph representation as the main representation in the pursuit of end-to-end learning.
While keypoint-based graph representations have demonstrated remarkable performance, many of these methods encounter a significant drawback: the substantial training cost involved. For instance, the RNGDet++ model [13] requires approximately \(192\) GPU hours to train on a dataset of moderate size with thousands of images. This high training cost can be attributed to the prevalent oversampling strategy used to define the "keypoints" in the original annotations (depicted in Fig. 1(b)). This strategy involves densely sampling numerous points along each road, as shown in Fig. 1(c); it lacks invariance to commonly employed image transformations used for data augmentation, such as random cropping and image translation, and eventually results in ambiguity during the learning process. As a consequence, methods employing keypoint-based graph representations must grapple with inherent representation ambiguity, requiring a greater number of training iterations. Such a prolonged training process often entails cluttered patterns in the keypoint detection outcomes, as illustrated by the enclosed regions in Fig. 1(d). Furthermore, the keypoint-based representations have to leverage additional modules to learn the connectivities of the learned keypoints on the fly to accomplish the task of vector road mapping.

Figure 1: Illustration of graphs constructed by different representations. The predicted representations (keypoints and line segments) are denoted in yellow marks and the connectivities are denoted in orange marks.
In this paper, we devote ourselves to finding a better representation of vector road annotations that eliminates the ambiguity of the existing keypoint-based graph representations, for the sake of efficient learning during training and top-performing mapping results in the testing phase. Our study is motivated by the recently-proposed PaRK-Detect [23] that defines _patched keypoints_, in which each small patch (_e.g._, \(16\times 16\)) has at most one keypoint for learning. Because the local patches are uniformly distributed over the image grids, such a definition largely eliminates the ambiguity for learning. However, since the keypoints are unary primitives that do not explicitly encode spatial relationships, PaRK-Detect [23] only obtained comparable performance in testing. Motivated by this, we are interested in a patched representation that keeps the ambiguity-free merits while retaining the spatial context for facilitating the final vector road mapping.
Our work is inspired by an observation that _the spatial and geometric information of roads in local patches can be well represented by line segments instead of keypoints_. Based on this, we present a novel PaLiS (Patched Line Segment) representation to depict the annotated road graphs in a geometrically-meaningful way while enjoying the ambiguity-free merits of patch-based representation. By dividing the grid of input images into a set of local (_e.g._, \(8\times 8\)) patches, most of the local patches that contain a fragment of a road path uniquely define a single local line segment. To preserve the rich structural information of the local line segments, we use the closed-form \(xy-xy\) representation for the two endpoints of a line segment, which facilitates the computation of patch adjacency in a geometrically-meaningful way. As shown in Fig. 1(e), our proposed PaLiS representation can handle a variety of road graph patterns in a unified manner. Furthermore, we find that the PaLiS representation can be reliably learned using rasterized road masks as supervision via differentiable rasterization, largely alleviating the need for vectorized road graph annotations.
In the experiments, we demonstrate that our proposed PaLiS representation clearly sets new state-of-the-art performance on two public benchmarks, _i.e._, the City-Scale [14] and SpaceNet [26], without any extra effort on the network design. Besides the competitive performance on these two benchmarks, our method only requires 6 GPU hours for training, reducing the training cost of the prior art, RNGDet++ [23], by a factor of 32. As shown in Fig. 2 for the performance evaluation by training iterations on the City-Scale dataset, our proposed method wins after the first \(1\)K iterations by significant margins and converges to the S.O.T.A. performance after 20K iterations of training.
In summary, our paper makes the following contributions:
* We propose a novel representation of road graphs, the patched line segment representation, which facilitates the learning of road graphs with the best efficacy in both the training and testing phases.
* Based on our patched line segment representation, we present a graph construction strategy for the task of vector road mapping, which takes advantage of the geometric nature of our representation to produce vector graphs without using any additional neural networks for the learning of connectivities between keypoints.
* Our patched line segment representation is learnable and compatible with mask-based representations via a differentiable soft rasterizer, which enables efficient learning of patched line segments without introducing additional vector labels.
## 2 Related Works
Road Graph Representations.There have been plenty of studies on vector road mapping, relying mainly on either rasterized road maps or keypoint/vertex-based graph representations, which give rise to two categories: segmentation-based [16, 17, 18, 19, 20, 21] and keypoint-based approaches [14, 15, 16, 22, 23]. Owing to the popularity of end-to-end learning for better performance, state-of-the-art approaches [14, 22, 23] mainly learn keypoints (_i.e._, graph vertices) and the connectivity between vertices, while using rasterized road masks/maps as additional supervision signals to enhance the feature representation ability
Figure 2: Convergence curves on City-Scale dataset.
of ConvNets. Besides the representation ambiguity issue discussed in Sec. 1, which prolongs the learning schedule, these representations focus mainly on point primitives rather than the line structure of road graphs, and thus usually require additional designs to learn or infer the connectivity between points/pixels. In light of these issues, we present a novel line-segment-based representation that defines road graphs over local image patches while characterizing the structural information of roads with line segments. We show that this well-defined, geometrically meaningful representation greatly facilitates the learning process of vector road mapping.
Line Segment Learning and Differentiable Rasterization.There is a vast body of literature studying line segments in both the computer vision (CV) and graphics (CG) communities. On the one hand, many works study line segment detection [20, 21, 22], which is similar to vector road mapping but focuses mainly on the line segments themselves rather than road graphs. On the other hand, CG researchers have studied differentiable vector graphics rasterization/rendering [14, 20], aiming to represent rasterized digital images with graphic primitives such as points, lines, and curves. Differentiable rasterization techniques have also been applied to polygonal shape representations with end-to-end learning in instance segmentation [15] and polygonal building extraction [11]. Our study is inspired by all of these works, but we pay particular attention to the well-posedness of the primitive definition for complicated road graphs/networks. By reasoning over local patches, we derive our novel PaLiS representation and set new state-of-the-art performance for the task of vector road mapping.
## 3 PaLiS Representation of Road Graphs
In this section, we elaborate on the proposed PaLiS representation of road graphs. Denote by \(\mathbf{I}\in\mathbb{R}^{3\times H\times W}\) the input satellite image and by \(\mathcal{R}=\{\Gamma_{i}(t)\in\mathbb{R}^{2}|t\in[0,1]\}\) the corresponding road graph annotation, where \(\Gamma_{i}(t)\) is a parameterized 2D curve/line, and \(\Gamma_{i}(0)\) and \(\Gamma_{i}(1)\) are its two endpoints. We use local \(p\times p\) patches to patch-wisely define the "key" line segments that form the new PaLiS representation of road graphs. Without loss of generality, we assume that \(H\) and \(W\) are divisible by the patch size \(p\).
### The Main Representation
By generating a set of \(N\) non-overlapping \(p\times p\) patches \(\{\mathcal{P}_{i}\}\) where \(N=\frac{H}{p}\times\frac{W}{p}\), we define the patched line segment for each local patch \(\mathcal{P}_{i}\). As shown in Fig. 3(b), there are three cases for each patch \(\mathcal{P}_{i}\), depending on the number of roads passing through the patch, denoted by \(\mathcal{N}(\mathcal{P}_{i})\in\mathbb{N}\). If \(\mathcal{N}(\mathcal{P}_{i})=0\), we term it a background patch (_i.e._, the gray patches in Fig. 3). If \(\mathcal{N}(\mathcal{P}_{i})=1\), we uniquely define its patched line segment, denoted by
\[\mathrm{PaLiS}(\mathcal{P}_{i})=(x_{i}^{u},y_{i}^{u},x_{i}^{v},y_{i}^{v})\in\mathbb{R}^{4}\quad\mathrm{if}\;\mathcal{N}(\mathcal{P}_{i})=1. \tag{1}\]
For patches with \(\mathcal{N}(\mathcal{P}_{i})>1\), we cannot uniquely define a line segment, but we found that such patches play a key role in constructing the expected road graphs. As shown in Fig. 4, we further study the properties of patches with \(\mathcal{N}(\mathcal{P}_{i})\geq 1\). In Fig. 4(a), the foreground patches clearly define a (local) straight road without ambiguity. Patches with \(\mathcal{N}(\mathcal{P}_{i})>1\) come in two types, shown in Fig. 4(b) and 4(c), depending on whether an annotated "keypoint" connects the multiple road paths at a single point. If such a keypoint annotation exists, we call the patch an \(X\)-type patch. Otherwise, the multiple road paths passing through the patch \(\mathcal{P}_{i}\) have different elevations, as with overpasses, and we call them \(T\)-type patches.
In summary, the proposed PaLiS representation first samples \(N\) non-overlapping local patches, identifies the foreground patches as one of three types, the \(I\)-type, \(X\)-type and \(T\)-type, and defines a local line segment for each \(I\)-type patch in the form \((x_{i}^{u},y_{i}^{u},x_{i}^{v},y_{i}^{v})\) to retain the geometric information of road paths. Sec. 4 shows how to learn the proposed PaLiS representation; the next subsection first describes how road graphs are reconstructed from it.
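To make the definition concrete, the PaLiS tuple of an \(I\)-type patch can be obtained by clipping the annotated road segment to the patch boundary. The following is a minimal sketch, not the paper's implementation: it assumes axis-aligned \(p\times p\) patches and a single straight road segment, and uses standard Liang-Barsky clipping as one possible clipping routine.

```python
import numpy as np

def clip_segment_to_patch(p0, p1, x0, y0, p):
    """Clip the road segment p0 -> p1 (image coordinates) to the p x p
    patch with top-left corner (x0, y0), via Liang-Barsky clipping.
    Returns the clipped endpoints (the PaLiS tuple of Eq. (1)), or None
    if the segment misses the patch."""
    (x1, y1), (x2, y2) = p0, p1
    dx, dy = x2 - x1, y2 - y1
    t0, t1 = 0.0, 1.0
    # each (q, r) pair encodes one patch boundary: left, right, top, bottom
    for q, r in [(-dx, x1 - x0), (dx, x0 + p - x1),
                 (-dy, y1 - y0), (dy, y0 + p - y1)]:
        if q == 0:
            if r < 0:
                return None          # parallel to and outside this boundary
        else:
            t = r / q
            if q < 0:
                t0 = max(t0, t)      # entering the patch
            else:
                t1 = min(t1, t)      # leaving the patch
    if t0 > t1:
        return None                  # segment misses the patch entirely
    return (np.array([x1 + t0 * dx, y1 + t0 * dy]),
            np.array([x1 + t1 * dx, y1 + t1 * dy]))
```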
### Road Graph Reconstruction from PaLiS
Thanks to our geometric PaLiS representation, the road graphs can be reasonably reconstructed without leveraging another subnetwork for the learning of graph connectivity.
Figure 4: Illustration of different types of foreground patches. Patched line segments are denoted in cyan markers and connectivities are denoted in dashed markers.
Figure 3: An illustrative figure for the proposed Patched Line Segment (PaLiS) representation. Larger patch size is applied for better illustration.
Here, we assume that the PaLiS representation can be reliably learned and defer the learning details to Sec. 4. We developed a geometrically meaningful scheme to reconstruct road graphs from our PaLiS representation (_see our supp. material for the pseudo code; an illustrative sketch of the first rule is given after the list below_) by considering the properties of \(I\)-type, \(X\)-type and \(T\)-type foreground patches in the following three cases:
* As shown in Fig. 4(a), we first consider the most common case, that of \(I\)-type patches. For two adjacent \(I\)-type patches \(\mathcal{P}_{i}\) and \(\mathcal{P}_{j}\), their line segments \(\mathbf{l}_{i}=\mathrm{PaLiS}(\mathcal{P}_{i})\) and \(\mathbf{l}_{j}=\mathrm{PaLiS}(\mathcal{P}_{j})\) are connected based on the observation that line segments of adjacent \(I\)-type patches share a common endpoint. We formulate the rule using the shape distance \(\mathrm{d}_{\mathrm{s}}(A,B)\), the shortest perpendicular distance between shapes \(A\) and \(B\). Two line segments are connected if the average of \(\mathrm{d}_{\mathrm{s}}(\mathbf{l}_{j},\mathbf{e}_{i})\) and \(\mathrm{d}_{\mathrm{s}}(\mathbf{l}_{i},\mathbf{e}_{j})\) is less than a distance threshold \(\tau_{d}\), where \(\mathbf{e}_{i}\) is the endpoint of \(\mathbf{l}_{i}\) closest to \(\mathbf{l}_{j}\) and \(\mathbf{e}_{j}\) is the endpoint of \(\mathbf{l}_{j}\) closest to \(\mathbf{l}_{i}\).
* When encountering an \(X\)-type patch \(\mathcal{P}_{X}\) (_e.g._, cross roads), the line segments surrounding \(\mathcal{P}_{X}\) are extended to an intersection, as shown in Fig. 4(b). To achieve this, candidate intersections are calculated by pairing up the line segments around \(\mathcal{P}_{X}\). The intersection \(\mathcal{I}_{i,j}\in\mathbb{R}^{2}\) of a line segment pair \((\mathbf{l}_{i},\mathbf{l}_{j})\) is valid if the two line segments intersect within the patch \(\mathcal{P}_{X}\). The final intersection \(\mathcal{I}_{final}\) is obtained by averaging the positions of all valid candidate intersections, and is connected to the surrounding line segments.
* For a \(T\)-type patch \(\mathcal{P}_{T}\) (_e.g._, overpasses), the layouts at different heights are handled by the directional and spatial extension of roads, as shown in Fig. 4(c). We pair up the line segments around \(\mathcal{P}_{T}\), and the connection of a line segment pair \((\mathbf{l}_{i},\mathbf{l}_{j})\) is valid if the shape distance \(\mathrm{d}_{\mathrm{s}}(\mathbf{l}_{i},\mathbf{l}_{j})\) and the angle difference \(\mathrm{d}_{angle}(\mathbf{l}_{i},\mathbf{l}_{j})\) are less than the distance threshold \(\tau_{d}\) and the angle threshold \(\tau_{a}\), respectively.
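The full pseudo code is given in our supplementary material. Purely for illustration, a minimal sketch of the first rule (the \(I\)-type adjacency test) could look as follows; the helper function and the threshold value are placeholders for exposition, not the settings of our implementation.

```python
import numpy as np

def dist_point_to_segment(pt, seg):
    """Shortest distance from point pt to the line segment seg = (a, b),
    a stand-in for the shape distance d_s between an endpoint and a segment."""
    a, b = map(np.asarray, seg)
    ab, ap = b - a, np.asarray(pt) - a
    t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-9), 0.0, 1.0)
    return float(np.linalg.norm(ap - t * ab))

def connect_I_patches(l_i, l_j, tau_d=3.0):
    """Adjacency rule for two neighboring I-type patches: connect them if
    the averaged endpoint-to-segment distance is below tau_d (placeholder)."""
    e_i = min(l_i, key=lambda e: dist_point_to_segment(e, l_j))  # endpoint of l_i closest to l_j
    e_j = min(l_j, key=lambda e: dist_point_to_segment(e, l_i))  # endpoint of l_j closest to l_i
    avg = 0.5 * (dist_point_to_segment(e_i, l_j) +
                 dist_point_to_segment(e_j, l_i))
    return avg < tau_d
```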
## 4 Learning PaLiS Representations
In this section, we show how to reliably learn the proposed PaLiS representation for vector road mapping with an off-the-shelf ConvNet. We use an encoder-decoder network, DLinkNet [16], with the lightweight ResNet-34 [10] as the backbone encoder to extract feature maps for the learning of PaLiS. Fig. 5 shows the overall pipeline of our approach. For the learning of the PaLiS representation, two head networks are leveraged: one classifies the patches into their PaLiS types, and the other regresses the two endpoints of each \(I\)-type patch. Apart from these main branches, an auxiliary segmentation head learns the rasterized masks from the final feature maps of the decoder network.
### Identifying Patch Classes/Types
Our PaLiS representation categorizes the foreground patches into three different types (\(I\)-type, \(X\)-type, and \(T\)-type) for a better understanding of intricate road graph structures. To achieve this, we use a patch classification head, consisting of four convolution layers, all with \(3\times 3\) kernels, and an MLP layer, to predict the class of each patch. The patch classification head takes the patch-level feature maps \(\mathbf{F}_{\mathbf{P}}\) as input and produces the patch map \(\mathbf{M}\in\mathbb{R}^{C_{P}\times\frac{H}{p}\times\frac{W}{p}}\), where \(C_{P}\) is the number of patch classes (_i.e._, \(C_{P}=4\) including the background patches). During training, we compute
Figure 5: Overall pipeline of the proposed method. Given an input image, (1) an Encoder-Decoder network first extracts pixel-level feature maps \(\mathbf{F}_{\mathbf{I}}\) and patch-level feature maps \(\mathbf{F}_{\mathbf{P}}\). Then, (2) Patch-level branch predicts the line segment and class for each patch with the patch-level features \(\mathbf{F}_{\mathbf{P}}\). \((x_{i}^{u},y_{i}^{u},x_{i}^{v},y_{i}^{v})\) and \(c\) denote the coordinates of the line segment and type of the patch respectively. (3) Pixel-level branch outputs the binary mask of road centerlines with the pixel-level features \(\mathbf{F}_{\mathbf{I}}\). Finally, (4) the road graph is reconstructed from the predicted PaLiS representation. Larger-scale patches are used for better illustration.
the classification loss by comparing the predicted patch map \(\mathbf{M}\) with the corresponding ground truth \(\mathbf{M}^{*}\), which can be easily obtained from the original annotations of the dataset. The cross-entropy loss is employed for \(\mathbf{M}\):
\[\mathcal{L}_{\mathcal{M}}=\mathrm{CE}(\mathbf{M},\mathbf{M}^{*}). \tag{2}\]
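For concreteness, a PyTorch sketch of such a classification head is shown below. The channel widths are our own assumptions, and the per-location MLP is realized as a \(1\times 1\) convolution.

```python
import torch.nn as nn

class PatchClassHead(nn.Module):
    """Patch classification head: four 3x3 conv layers followed by a
    per-location MLP (1x1 conv) predicting C_P = 4 patch classes."""
    def __init__(self, in_ch=256, mid_ch=256, n_classes=4):
        super().__init__()
        layers = []
        for i in range(4):
            layers += [nn.Conv2d(in_ch if i == 0 else mid_ch, mid_ch,
                                 kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
        self.convs = nn.Sequential(*layers)
        self.mlp = nn.Conv2d(mid_ch, n_classes, kernel_size=1)

    def forward(self, f_patch):                 # f_patch: (B, C, H/p, W/p)
        return self.mlp(self.convs(f_patch))   # patch map M: (B, 4, H/p, W/p)

# Eq. (2): cross-entropy between the predicted patch map and its ground truth
criterion = nn.CrossEntropyLoss()
```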
### Line Segment Learning for \(I\)-type Patches
With the patch classification head in place, we focus on the \(I\)-type patches to learn the patched line segments. Although the line segment \(\mathbf{l}_{i}\) for patch \(\mathcal{P}_{i}\) is in closed form via its two endpoints, directly regressing the endpoint coordinates is suboptimal: the data augmentation techniques (such as cropping) used during training would require inefficient re-cropping of the vector road annotations. To avoid this issue, we propose to use differentiable rasterization to learn the line segment \(\mathbf{l}_{i}\) of the patch \(\mathcal{P}_{i}\) from mask supervision, similar to [10, 11]. Interestingly, although we use rasterized road masks as supervision instead of vector annotations, this design outperforms supervision with vector annotations; please refer to our ablation studies in Sec. 5 for a detailed comparison.
Taking the feature map \(\mathbf{F}_{\mathbf{P}}\) as input, we set up a regression head with four \(3\times 3\) convolution layers and an MLP layer to predict line segments \(\mathbf{L}\in\mathbb{R}^{4\times\frac{H}{p}\times\frac{W}{p}}\), where 4 is the number of coordinates of a line segment. The patched line segments \(\mathbf{L}\) are then converted into a soft mask \(\mathbf{S_{soft}}\in\mathbb{R}^{H\times W}\) with the proposed rasterizer. As shown in Fig. 6, the proposed rasterizer produces a \(p\times p\) patch \(\mathbf{C}_{i}\in\mathbb{R}^{p\times p}\), where the scalar value at the pixel \(\mathbf{a}=(x,y)\) in the local coordinates of the patch is computed by
\[\mathbf{C}_{i}(\mathbf{a})=e^{\frac{-\mathrm{d}^{2}(\mathbf{l}_{i},\mathbf{a} )\times t}{\tau_{\mathrm{inv}}}}, \tag{3}\]
where \(\mathrm{d}(\mathbf{l}_{i},\mathbf{a})\) is the projection distance from the pixel \(\mathbf{a}\) to the line segment \(\mathbf{l}_{i}\), and \(t\) and \(\tau_{\mathrm{inv}}\) are the projection factor and sharpness factor, respectively. We empirically set \(t=10\) if the pixel \(\mathbf{a}\) is projected outside of the line segment, and \(t=1\) otherwise. The values of the projection factor \(t\) and the sharpness factor \(\tau_{\mathrm{inv}}\) are chosen to accurately reflect the position of the line segment in the patch.
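A minimal dense PyTorch sketch of Eq. (3) for a single patch is given below; the actual rasterizer and its backward pass are implemented in CUDA, as noted later in this section, and the \(\tau_{\mathrm{inv}}\) value here is an assumed placeholder.

```python
import torch

def rasterize_patch(seg, p=8, tau_inv=1.0):
    """Differentiable soft rasterization of one line segment, Eq. (3).
    seg: tensor (4,) = (x_u, y_u, x_v, y_v) in local patch coordinates.
    Returns a (p, p) soft mask C_i; tau_inv is an assumed sharpness value."""
    ys, xs = torch.meshgrid(torch.arange(p, dtype=seg.dtype),
                            torch.arange(p, dtype=seg.dtype), indexing="ij")
    pix = torch.stack([xs, ys], dim=-1) + 0.5        # pixel centers, (p, p, 2)
    a, b = seg[:2], seg[2:]
    ab = b - a
    # projection parameter of every pixel onto the line through a and b
    s = ((pix - a) * ab).sum(-1) / ab.dot(ab).clamp(min=1e-9)
    foot = a + s.unsqueeze(-1) * ab                  # projection foot points
    d2 = ((pix - foot) ** 2).sum(-1)                 # squared projection distance
    # projection factor t: 10 if projected outside the segment, 1 otherwise
    t = torch.where((s < 0) | (s > 1),
                    torch.full_like(s, 10.0), torch.ones_like(s))
    return torch.exp(-d2 * t / tau_inv)
```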
The rasterized soft mask \(\mathbf{S_{soft}}\in\mathbb{R}^{H\times W}\) is obtained by assembling the contributions of all patches. During training, we efficiently compute the loss by comparing the soft mask \(\mathbf{S_{soft}}\) with the existing ground-truth mask \(\mathbf{S}^{*}\) of road centerlines. Similar to BoundaryFormer [10], we employ the DICE loss [11] to measure the difference:
\[\mathcal{L}_{\mathcal{L}}=\mathrm{DICE}(\mathbf{S_{soft}},\mathbf{S}^{*}). \tag{4}\]
The rasterizer and its backward pass are fully implemented in CUDA, ensuring efficiency in the training process.
### Auxiliary Pixel-level Learning
In addition to the PaLiS representation, we incorporate the learning of an auxiliary binary mask for road centerlines to extract road information. We use a segmentation head, which consists of one \(3\times 3\) convolution layer and one \(1\times 1\) convolution layer followed by a sigmoid function, to predict the binary mask \(\mathbf{S}\in\mathbb{R}^{H\times W}\) of road centerlines from the pixel-level feature maps \(\mathbf{F_{I}}\). We compute the loss of the predicted binary mask \(\mathbf{S}\) with the ground truth mask \(\mathbf{S}^{*}\) of road centerlines by cross-entropy loss:
\[\mathcal{L}_{\mathcal{S}}=\mathrm{CE}(\mathbf{S},\mathbf{S}^{*}) \tag{5}\]
The total loss of the PaLiS learning can be summarized as
\[\mathcal{L}_{total}=\mathcal{L}_{\mathcal{S}}+\mathcal{L}_{\mathcal{M}}+ \mathcal{L}_{\mathcal{L}}. \tag{6}\]
## 5 Experiments
In this section, we run experiments for our proposed PaLiS-based approach on public benchmarks and provide a comprehensive analysis of our design choices. The implementation details are in our supplementary material.
| Model | Backbone | Type | City-Scale P ↑ | City-Scale R ↑ | City-Scale F1 ↑ | City-Scale APLS ↑ | SpaceNet P ↑ | SpaceNet R ↑ | SpaceNet F1 ↑ | SpaceNet APLS ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DLinkNet [11] | ResNet-34 | Mask | 78.63 | 48.07 | 57.42 | 54.08 | 88.42 | 60.06 | 68.80 | 56.93 |
| Orientation [1] | ResNet-34 | Mask | 75.83 | 68.90 | 72.20 | 55.34 | 81.56 | 71.38 | 76.13 | 58.82 |
| Seg-DLA [1] | DLA | Mask | 75.59 | 72.26 | 73.89 | 57.22 | 78.99 | 69.80 | 74.11 | 56.36 |
| RoadTracer [1] | CNN | Keypoint | 78.00 | 57.44 | 66.16 | 57.29 | 78.61 | 62.45 | 69.90 | 56.03 |
| Sat2Graph [1] | DLA | Keypoint | 80.70 | 72.28 | 76.26 | 63.14 | 85.93 | 76.55 | 80.97 | 64.43 |
| TD-Road [1] | ResNet-34 | Keypoint | 81.94 | 71.63 | 76.27 | 65.74 | 84.81 | 77.80 | 81.15 | 65.15 |
| PaRK-Detect [11] | ResNet-34 | Keypoint | 82.17 | 68.23 | 74.29 | 67.66 | **91.34** | 68.07 | 78.01 | 62.97 |
| RNGDet [11] | ResNet-50 | Keypoint | 85.97 | 69.78 | 76.87 | 65.75 | 90.91 | 73.25 | 81.13 | 65.61 |
| RNGDet++ [11] | ResNet-50 | Keypoint | 85.65 | 72.58 | 78.44 | 67.76 | **91.34** | 75.24 | 82.51 | 67.73 |
| Ours | ResNet-34 | Line segment | **86.36** | **73.16** | **79.08** | **68.12** | 90.05 | **78.19** | **83.70** | **69.68** |

Table 1: Quantitative results on the City-Scale and SpaceNet datasets (P/R/F1 are the TOPO precision, recall and F1-score). Best results are highlighted in bold.
Figure 6: Illustration of the rasterization. Darker pixels contribute more to the line segment.
### Datasets and Evaluation Metrics
Datasets.We conduct experiments on two widely used datasets: the City-Scale dataset [14] and the SpaceNet dataset [21]. The City-Scale dataset [14] covers a \(720\)\(km^{2}\) area spanning 20 cities in the United States. It consists of 180 tiles, which we divide into 144, 9, and 27 tiles for training, validation, and testing, respectively, following previous methods [14, 15, 16]. Each tile has a resolution of \(2048\times 2048\) pixels, with each pixel representing 1 meter in the real world. The SpaceNet dataset [20] comprises 2549 satellite images, each with a resolution of \(400\times 400\) pixels. We use 2040, 127, and 382 images for training, validation, and testing, respectively, following the partition used in Sat2Graph [14].
Evaluation metrics.Two quantitative metrics are utilized in the experiments: APLS [20] and TOPO [1]. APLS assesses the overall graph quality by comparing the similarity of shortest paths between two locations on the predicted and ground truth graphs. On the other hand, the TOPO metric (precision, recall, and F1-score) provides a stricter evaluation of detailed topology correctness by measuring the similarity of sub-graphs sampled from a seed location on the ground truth and predicted graphs. Higher scores indicate better performance for both APLS and TOPO metrics.
### Main Comparisons
Quantitative and Qualitative Evaluation.We compare our approach to state-of-the-art segmentation- and keypoint-based methods on the City-Scale and SpaceNet datasets. Table 1 presents the quantitative results. Segmentation-based methods exhibit substantially inferior performance on both the TOPO and APLS metrics because of their heuristic post-processing schemes. In contrast, graph-based methods output and refine the graph of road networks directly, attaining better performance on the two metrics. Our method achieves the highest TOPO and APLS scores on the City-Scale dataset, demonstrating superior performance in capturing road network structures with our unified PaLiS representation. Additionally, our approach outperforms all other methods in terms of recall, F1-score, and APLS on the SpaceNet dataset, further validating its effectiveness. These consistently superior evaluation results across metrics indicate that our approach generates more precise and complete road graphs both locally and globally. The same conclusions can be drawn from the qualitative comparisons in Fig. 7.
Stability.The stability of the learned representations during training and testing invites further analysis. Fig. 8 first visualizes the predicted keypoint heatmap and line segments at an early training epoch. The learned keypoint heatmap is clearly ambiguous in the early stage of training, whereas the line segments are already accurately predicted. Subsequently, we study the model's sensitivity to the prediction thresholds for keypoints (or line segments) by varying the thresholds in steps of 0.1, as shown in Fig. 9. Notably, our model demonstrates greater stability than keypoint-based methods, indicating the robustness of our PaLiS representation during testing.
Training efficiency.Training efficiency is compared in Fig. 2. The approach relying on our unified PaLiS representation achieves superior performance with considerably fewer training iterations, while methods relying on keypoints [14, 15, 16] require many more iterations to converge.
### Ablation Studies
Mask-supervised line segment learning.To evaluate the efficacy of the proposed soft rasterizer, we conducted additional experiments using three different types of supervision for line segment learning: unsorted vector labels, sorted vector labels, and mask labels. The unsorted and sorted vector labels are both denoted by \((\hat{x}^{u}_{i},\hat{y}^{u}_{i},\hat{x}^{v}_{i},\hat{y}^{v}_{i})\in\mathbb{R}^{4}\); the only difference is the direction. The directions of unsorted vector labels are random, inherited from the original annotations, while sorted vector labels have consistent directions (\((\hat{x}^{u}_{i},\hat{y}^{u}_{i})\) is always the left endpoint). We use an L1 loss to compute the difference between the predictions and the ground-truth vector labels. The results in Table 2 indicate that line segments are learned more precisely with the proposed rasterizer, leading to enhanced connectivity in the graph construction. Furthermore, our approach leverages the existing mask labels to guide the training of patched line segments, without requiring the generation of vector labels.
Graph construction strategy.Road graphs can be reconstructed from the PaLiS representation (geometric connectivity) without the learned patch relationships (relationship connectivity) used in PaRK-Detect [16]. To compare the two construction strategies, we additionally learned patch relationships following PaRK-Detect [16]. The results in Table 3 show that our approach outperforms relationship connectivity on both metrics and provides more accurate and reasonable connectivity, as shown in Fig. 10.
Patch size.We conduct experiments with different patch sizes to assess the impact of patch size; the results are shown in Table 4. We observe that both smaller and larger patch sizes cause inferior performance: with a small patch size, the PaLiS representation yields results close to a mask representation and suffers from disconnection issues, whereas with a large patch size it struggles to capture the precise shape of road graphs. Considering both accuracy and efficiency, we set the patch size to 8.
## 6 Conclusions
This paper introduces a learning-based approach for vector road mapping based on the novel PaLiS (Patched Line Segment) representation. By leveraging local patches, our approach effectively represents road graphs. With standard convolutional neural networks, we achieve state-of-the-art performance on public datasets with efficient training in just 6 GPU hours. Moreover, the ability of the PaLiS representation to learn line segment endpoint coordinates from rasterized road maps suggests a promising direction for large-scale vector road mapping without costly manual annotations in the near future.
|
2307.14435 | Dark matter in compact stars | White dwarfs and neutron stars are far-reaching and multi-faceted
laboratories in the hunt for dark matter. We review detection prospects of
wave-like, particulate, macroscopic and black hole dark matter that make use of
several exceptional properties of compact stars, such as ultra-high densities,
deep fermion degeneracies, low temperatures, nucleon superfluidity, strong
magnetic fields, high rotational regularity, and significant gravitational wave
emissivity. Foundational topics first made explicit in this document include
the effect of the ``propellor phase" on neutron star baryonic accretion, and
the contribution of Auger and Cooper pair breaking effects to neutron star
heating by dark matter capture. | Joseph Bramante, Nirmal Raj | 2023-07-26T18:11:27Z | http://arxiv.org/abs/2307.14435v2 | # Dark matter in compact stars
###### Abstract
White dwarfs and neutron stars are far-reaching and multi-faceted laboratories in the hunt for dark matter. We review detection prospects of wave-like, particulate, macroscopic and black hole dark matter that make use of several exceptional properties of compact stars, such as ultra-high densities, deep fermion degeneracies, low temperatures, nucleon superfluidity, strong magnetic fields, high rotational regularity, and significant gravitational wave emissivity. Foundational topics first made explicit in this document include the effect of the "propellor phase" on neutron star baryonic accretion, and the contribution of Auger and Cooper pair breaking effects to neutron star heating by dark matter capture.
###### Contents
* 1 Introduction
* 2 The physics of compact objects
* 2.1 Fermi gas model and maximum masses
* 2.2 Structure equations and equation of state
* 2.3 Spin periods
* 2.4 Neutron star substructure
* 2.5 Thermonuclear explosions
* 2.6 Cooling
* 2.6.1 White dwarf cooling.
* 2.6.2 Neutron star cooling.
* 2.6.3 Comparison of white dwarf and neutron star late-stage cooling
* 2.7 Nucleon superfluidity
* 2.8 Neutron star magnetic field and spin-down
* 3 The white dwarf as a dark matter laboratory * 3.1 Dark matter annihilation inside and heating white dwarfs * 3.2 Non-annihilating dark matter converting white dwarfs into black holes * 3.3 White dwarf explosions via dark matter * 3.4 Dark matter's influence on white dwarf equations of state
* 4 The neutron star as a dark matter laboratory
* 4.1 Dark matter kinetic and annihilation heating of neutron stars
* 4.1.1 Capture and kinetic heating
* 4.1.2 Dark matter self-annihilations, nucleon co-annihilations, and induced nucleon decay
* 4.1.3 Improvements and uncertainties
* 4.1.4 Dark matter models that heat neutron stars through scattering and annihilation
* 4.1.5 Neutron star reheating mechanisms not involving dark matter
* 4.2 Neutron stars and halo substructure
* 4.3 Dark matter inducing superbursts in neutron stars
* 4.4 Dark matter that implodes neutron stars into black holes
* 4.4.1 Dark matter thermalization in neutron stars
* 4.4.2 Collapse of dark matter and formation of small black hole
* 4.4.3 Growth or evaporation of dark matter-formed black hole in the neutron star
* 4.4.4 Signatures of dark matter that implodes neutron stars
* 4.5 Primordial black hole dark matter and neutron stars
* 4.6 Neutron stars admixed with dark sectors
* 4.6.1 Impact on nuclear equation of state
* 4.6.2 More admixed neutron stars
* 4.7 Exotic compact stars
* 4.8 Dark sectors leading to internal heating of neutron stars
* 4.9 Dark matter signals in gravitational waves from neutron star mergers
* 4.10 Dark matter signals in pulsar timing
* 4.10.1 Pulsar timing arrays
* 4.10.2 Binary pulsar timing
* 4.10.3 Pulsar spin-down
* 4.11 Axion-like and very light dark matter, and neutron stars
* 5 Conclusions and perspective
## 1 Introduction
Dark matter is one of the foremost scientific mysteries of our times. Given how little is known about its microphysical properties, its possible identities seem limitless. This is famously encapsulated in the 90+ orders of magnitude that dark matter (DM) masses could span, from \(10^{-24}\) eV, set by the maximum possible Compton wavelength containable within a dwarf galaxy, to \(10^{8}M_{\odot}\simeq 10^{74}\) eV, the mass of DM in a small galaxy. Over this range of masses DM may be described as a wave/field, a particle, a macroscopic object, or galactic substructure - including black holes and topological defects. A promising strategy to
confront such remarkable diversity in possibility is to exploit physical systems with remarkable diversity in characteristics.
Compact stars - white dwarfs and neutron stars typically formed as relics of nuclear-powered stars - afford such an environment. Since their quantum properties were first described in the 1920s\(-\)30s by Fowler [1], Anderson [2], Stoner [3], Chandrasekhar [4], Zwicky and Baade [5] (and possibly Landau [6]), our understanding of compact stars has been enriched at the intersection of several branches of physics: astrophysics, general relativity, particle physics, nuclear physics, statistical physics, thermodynamics, and plasma physics. It is little wonder that they feature in numerous tests of fundamental physics [7; 8], and it should come as no surprise that they are also ideal laboratories to search for dark matter. Indeed, DM hunters would do well to take advantage of their striking properties: they have very high densities, with accompanying steep gravitational potentials, sometimes deeply degenerate constituent fermions, often very low temperatures, the presence of nucleon superfluidity, ultra-strong magnetic fields, extreme regularity in rotation rivalling the precision of atomic clocks, and powerful gravitational radiation emitted during binary mergers, to name a few.
The use of stars to look for evidence of DM dates to proposals that weakly interacting particle DM might alter nuclear reaction rates in the Sun [9; 10]. Shortly after, it was realized that neutron stars were useful for seeking out certain models of DM that could form black holes in their interior [11]. One immediate difference between a search for DM in compact stars and a terrestrial detector is that, since DM is accelerated in the deep gravitational well of a compact star, its interactions with stellar constituent particles occur at semi-relativistic velocities: \(\mathcal{O}(10^{-2}-10^{-1})\ c\) for a white dwarf and \(\mathcal{O}(0.5)\ c\) for a neutron star. This high DM speed provides enhanced sensitivity to theoretical models with velocity-suppressed rates for scattering on Standard Model (SM) states, since in the Milky Way's Galactic halo (and by extension in terrestrial detectors) the velocity of DM particles is only \(\mathcal{O}(10^{-3})c\). In particular, the environs of a NS are greatly suited to testing the origin of DM, since the kinetic energy of DM at speeds \(\sim 0.7c\) is similar to that during cosmological production, particularly for "freeze-out" processes [12].
This review is organized as follows. In Section 2 we provide an overview of the properties of neutron stars and white dwarfs, emphasizing aspects that will be important for dark matter searches. In Section 3, we describe white dwarf searches for dark matter, treating dark matter annihilation and heating of white dwarfs, conversion of white dwarfs to black holes, ignition of Type Ia supernovae, and effects of dark matter on white dwarf equations of state. In Section 4, we describe neutron star searches for dark matter, including dark matter heating neutron stars kinetically and via annihilations, models of dark matter that convert neutron stars to black holes, exotic compact stars that constitute dark matter, neutron stars admixed with dark matter, models of dark matter that lead to internal heating of neutron stars, signals of dark matter in neutron star-related gravitational waves and pulsar timing, and the utility of neutron stars in discovering axion-like and primordial black hole dark matter. In Section 5, we briefly discuss future research directions for dark matter in compact stars.
## 2 The physics of compact objects
A detailed account of the physical characteristics of white dwarfs and neutron stars is beyond the scope of this review, and for these we refer the reader to Refs. [13] and [14]. Here we outline key properties of these stars that make them useful dark matter detectors.
**White dwarfs** are stellar remnants formed from main sequence stars that undergo a red giant phase not hot enough to fuse carbon. Depending on its mass, a white dwarf will be composed of some proportion of helium, carbon, oxygen, neon and magnesium, which make up the bulk of the mass. A sea of electrons
co-habiting with the nuclei provides, as we will see, the Fermi degeneracy pressure that supports the white dwarf against gravitational collapse.
Super-giant progenitors of mass around 10-25 \(M_{\odot}\) that undergo core-collapse leave behind short-lived "proto-neutron stars" through which neutrinos diffuse out carrying away 99% of the star's binding energy, following which **neutron stars** are born. They are composed mainly of Fermi degenerate neutrons formed by electron-proton capture, \(e^{-}+p\to n+\nu_{e}\), at extreme densities and temperatures. Due to beta chemical equilibrium, neutron stars are also thought to contain populations of protons, electrons, and muons; it is in fact the filled Fermi seas of these fermionic fields that keep neutrons from decaying to electrons and muons inside NSs. The supernova collapse is generically expected to be hydrodynamically asymmetric, resulting in a natal "kick" to the neutron star at 450-1000 km/s speeds in a random direction [15; 16; 17; 18; 19]; a 1% fractional anisotropy in the momenta of escaping neutrinos could be another source of the asymmetric kick [20; 21; 22].
### Fermi gas model and maximum masses
Compact stars, especially white dwarfs (WDs), are prevented from collapsing under their own gravity by Fermi degeneracy pressure. In a low-temperature Fermi gas of Fermi momentum \(p_{F}\), the number of fermions (of spin degeneracy \(g_{s}=2\)) filling up a real volume \(V\) and Fermi sphere volume \(V_{F}=4\pi p_{F}^{3}/3\), is \(N_{f}=g_{s}VV_{F}/(2\pi)^{3}\), from which we obtain:
\[p_{F}=(3\pi^{2}n)^{1/3}\, \tag{1}\]
where \(n\) is the fermion number density. The total energy of the Fermi gas given the energy of a state \(e(p)\) is
\[E=\frac{4\pi g_{s}V}{(2\pi)^{3}}\int_{0}^{p_{F}}dp\,p^{2}\,e(p)\, \tag{2}\]
and for an energy density \(\varepsilon=E/V\) the pressure is obtained as
\[P=-\bigg{(}\frac{\partial E}{\partial V}\bigg{)}_{N_{f}}=n^{2}\frac{\mathrm{d }}{\mathrm{d}n}\bigg{(}\frac{\varepsilon}{n}\bigg{)}. \tag{3}\]
Setting \(e(p)=m_{f}+p^{2}/(2m_{f})\) in the non-relativistic limit and \(e(p)=p\) in the relativistic limit, and using Eqs. (1),(2) and (3), we get the Fermi degeneracy pressure of a species as
\[P=\begin{cases}[(3\pi^{2})^{2/3}/5m_{f}]\ n^{5/3}\,&p_{F}\ll m_{f}\,\\ [(3\pi^{2})^{1/3}/4]\ n^{4/3}\ \ \ \,&p_{F}\gg m_{f}\.\end{cases} \tag{4}\]
The net pressure of the compact star is the sum of the contributions of the constituent species. In WDs, the electrons are unbound - the electron-nucleus Coulomb energy \(\simeq Ze^{2}(n_{e}/Z)^{1/3}\) is \(\mathcal{O}(10^{-2})\) times the typical kinetic energy, \((3\pi^{2}n_{e})^{1/3}\) - and thus form their own Fermi gas system. It may be seen from Eq. (2) that, due to their lightness and abundance, it is the _electrons_ that contribute the most to the pressure of WDs. In contrast, in neutron stars (NSs) the constituent neutrons contribute the most to both the stellar mass and pressure.
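As a numerical illustration of Eqs. (1) and (4), the sketch below evaluates the electron degeneracy pressure of WD material in CGS units, assuming full ionization and an electron fraction \(Y_{e}=Z/A\) (0.5 for carbon-oxygen material), with the branch chosen by comparing \(p_{F}\) to \(m_{e}c\):

```python
import numpy as np

HBAR = 1.0546e-27   # erg s
C    = 2.9979e10    # cm/s
M_E  = 9.1094e-28   # g
AMU  = 1.6605e-24   # g

def electron_pressure(rho, Ye=0.5):
    """Electron degeneracy pressure (erg/cm^3), Eq. (4), for mass density
    rho (g/cm^3) and electron fraction Ye = Z/A."""
    n_e = rho * Ye / AMU                        # electron number density
    pF = HBAR * (3 * np.pi**2 * n_e) ** (1/3)   # Fermi momentum, Eq. (1)
    if pF < M_E * C:                            # non-relativistic branch
        return (3 * np.pi**2) ** (2/3) * HBAR**2 * n_e ** (5/3) / (5 * M_E)
    return (3 * np.pi**2) ** (1/3) * HBAR * C * n_e ** (4/3) / 4

print(f"{electron_pressure(1e6):.2e} erg/cm^3")  # a typical WD central density
```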
The total energy of the star in the non-relativistic limit is given by
\[E_{\mathrm{tot}}^{\mathrm{non-rel}} \simeq \text{(total kinetic)}-\text{(gravitational binding)}\] \[= \frac{3}{5}N_{f}\frac{p_{F}^{2}}{2m_{f}}-\frac{3}{5}\frac{GM_{\star}^{2}}{R_{\star}}\] \[= \left(\frac{27\sqrt{3}\pi}{40\sqrt{10}}\right)^{2/3}\frac{1}{m_{f}R_{\star}^{2}}\left(\frac{ZM_{\star}}{Am_{N}}\right)^{5/3}-\frac{3}{5}\frac{GM_{\star}^{2}}{R_{\star}}\, \tag{5}\]
and in the relativistic limit,
\[E_{\rm tot}^{\rm rel} = \frac{3}{4}N_{f}p_{F}-\frac{3}{5}\frac{GM_{\star}^{2}}{R_{\star}}\] \[= \left(\frac{243\pi}{256}\right)^{1/3}\frac{1}{R_{\star}}\left(\frac{ZM_{\star}}{Am_{N}}\right)^{4/3}-\frac{3}{5}\frac{GM_{\star}^{2}}{R_{\star}}\. \tag{6}\]
Eq. (5) shows that, in the ground state of the compact star where the virial theorem (potential energy = -2 \(\times\) kinetic energy) applies, we have \(R_{\star}\propto M_{\star}^{-1/3}\) as a mass-radius relation1. This implies that WDs, modeled accurately as a Fermi gas system, become smaller with increasing mass. Hence the heaviest WDs are the densest, and thus one expects electrons in them to be ultra-relativistic and Eq. (6) to apply. In Eq. (6) both the potential and kinetic terms fall as \(R_{\star}^{-1}\), however the former grows faster with \(M_{\star}\) than the latter, implying a maximum WD mass above which the star will collapse. This "Chandrasekhar limit" [23] (see also Sec. 2.5) is given by
Footnote 1: This shows that more compact stars are generally denser, which we will see is relevant to determining the speed of DM passing through and the density of DM collected in compact stars.
\[M_{\rm max-WD-rel}=\sqrt{\frac{5\pi}{G^{3}}}\frac{15}{16}\!\left(\frac{Z}{Am_{ N}}\right)^{2}\simeq 1.7\left(\frac{2Z}{A}\right)^{2}\,M_{\odot}. \tag{7}\]
A similar limit may be obtained for NS masses by setting \(A\to 1\), \(Z\to 1\):
\[M_{\rm max-NS-rel}=6.8\ M_{\odot}. \tag{8}\]
These estimates are not physically motivated: they assume relativistic fermions constituting the entire volume of the star (true for neither WDs nor NSs) and a non-interacting Fermi gas (not true for NSs). Nevertheless they ballpark the true limit to within \({\cal O}(1)\) factors. A more precise treatment must account for the stellar structure, which we will discuss below, but first let us make two more estimates of the maximum mass of NSs.
(i) If we assume non-relativistic neutrons, the virial theorem applied to Eq. (5) gives a mass-radius relationship:
\[R_{\rm NS}\simeq 12\ {\rm km}\left(\frac{M_{\odot}}{M_{\rm NS}}\right)^{1/3}. \tag{9}\]
In this picture the NS radius is a decreasing function of its mass; however it cannot become smaller than the Schwarzschild radius corresponding to a maximum mass, \(R_{\rm Schw}=3\ {\rm km}\ (M/M_{\odot})\). This condition gives
\[M_{\rm max-NS-nonrel}=2.8\ M_{\odot}. \tag{10}\]
(ii) Due to super-nuclear densities in the NS cores, strong interactions cannot be neglected in considerations of NS structure. A maximum mass can be obtained in the (unphysical) limit where such interactions solely support the star against gravitational collapse [24]. Strong interactions become repulsive at inter-nucleon distances roughly shorter than the reduced Compton wavelength of the mediating pion, \(m_{\pi}^{-1}\). This gives a maximum neutron density \(m_{N}m_{\pi}^{3}\), corresponding to a mass-radius relation of \(M_{\rm NS}=4\pi m_{N}m_{\pi}^{3}R_{\rm NS}^{3}/3\).
For a surface escape speed \(v_{\rm esc}\), we have \(R_{\rm NS}=R_{\rm Schw}v_{\rm esc}^{-1}=3\) km (\(M/M_{\odot}\)) \(v_{\rm esc}^{-1}\). Putting these together yields the maximum NS mass as
\[M_{\rm max-NS-strong}=\sqrt{\frac{3}{32\pi}}v_{\rm esc}^{3/2}\bigg{(}\frac{M_{ \rm Pl}^{6}}{m_{N}m_{\pi}^{3}}\bigg{)}^{1/2}\simeq 2\ M_{\odot}\left(\frac{v_{\rm esc }}{0.5c}\right)^{3/2}\,. \tag{11}\]
As we will see below, this turns out to be an excellent estimate.
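A quick numerical cross-check of Eq. (11) in natural units (a sketch, with rounded constants):

```python
import math

M_PL = 1.221e19           # Planck mass [GeV]
M_N, M_PI = 0.939, 0.138  # neutron and pion masses [GeV]
GEV_PER_MSUN = 1.116e57   # one solar mass in GeV

def m_max_strong(v_esc=0.5):
    """Eq. (11): maximum NS mass if repulsive strong interactions alone
    support the star, for surface escape speed v_esc in units of c."""
    m = math.sqrt(3.0 / (32.0 * math.pi)) * v_esc**1.5 \
        * math.sqrt(M_PL**6 / (M_N * M_PI**3))
    return m / GEV_PER_MSUN

print(f"{m_max_strong():.1f} Msun")   # ~2.0 Msun for v_esc = 0.5c
```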
As argued in, e.g., Ref. [25], a more accurate reason for the existence of a maximum mass for NSs is that the sound speed \(c_{s}\) in NS material cannot be arbitrarily large. In particular, \(c_{s}^{2}/c^{2}=(\partial P/\partial\varepsilon)_{\bar{s}}\leq 1\) must be satisfied everywhere in the NS, where \(\bar{s}\) is the specific entropy. Physically, increments in the self-gravitating energy density result in increments in equilibrium-restoring pressure, however this trend cannot extend forever due to the sound speed limitation, putting a cap on NS masses. This is also an important criterion in modelling the equation of state (EoS) of high-density NS matter.
Briefly, we review the argument for a _minimum_ NS mass. For a given EoS of the NS core fluid, as the central density (hence mass) of the NS is decreased, the gravitational binding energy will decrease, and at some minimum density, the NS will be unstable to small radial perturbations. This EoS-dependent minimum NS mass is typically \(\sim 0.1M_{\odot}\)[26; 27]. Such an NS would be primarily composed of an \(\mathcal{O}(100)\) km crust zone, with a percent-level fraction of mass in the central degenerate neutron fluid [27]. Be that as it may, a realistic minimum mass of NSs is about \(1M_{\odot}\), after neutrino pressure and other thermal effects during the formation of a NS in a core collapse supernova are considered2[31].
Footnote 2: A compact object of mass \(0.79\pm 0.18M_{\odot}\) has been observed in the supernova remnant HESS J1731\(-\)347 [28], exciting speculations as to its nature and the EoS of nuclear matter [29; 30].
### Structure equations and equation of state
Detailed reviews of neutron star structure and the role of EoSs may be found in Refs. [32; 33; 34], while we present here the essentials. Accurate estimates of compact star macroscopic properties are best obtained by solving the spherically symmetric stellar structure equations:
\[\frac{dP}{dr} = -\frac{Gm\varepsilon}{c^{2}r^{2}}\bigg{(}1+\frac{P}{\varepsilon} \bigg{)}\bigg{(}1+\frac{4\pi r^{3}P}{mc^{2}}\bigg{)}\bigg{(}1-\frac{2Gm}{c^{2} r}\bigg{)}^{-1}\,,\] \[\frac{dm}{dr} = 4\pi\frac{\varepsilon}{c^{2}}r^{2}\, \tag{12}\]
Here \(m\) is the mass enclosed within a radial distance \(r\), and all other quantities are as defined above. The first equation, the **Tolman-Oppenheimer-Volkoff (TOV) equation**, describes hydrostatic equilibrium, and the second describes the conservation of mass in the star. Given an equation of state \(P(\varepsilon)\) and the boundary conditions \(m(0)=0\), \(\varepsilon(0)=\varepsilon_{c}\) (a "central density"), the structure equations can be solved to obtain useful quantities: a mass-radius relation (which would capture the maximum allowed mass), radial profiles of pressure, energy or number density, chemical potential, and so on.
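As an illustration of the procedure, the structure equations can be integrated numerically in a few lines. The sketch below uses simple Euler stepping in CGS units with a user-supplied EoS and an illustrative polytropic example; it is meant to convey the method, not to replace production TOV solvers.

```python
import numpy as np

G, C = 6.674e-8, 2.998e10   # CGS units

def solve_structure(eps_c, P_of_eps, eps_of_P, dr=1.0e3):
    """Integrate the TOV and mass-conservation equations, Eq. (12), outward
    from central energy density eps_c (erg/cm^3) until the pressure drops
    to zero. Returns (mass in Msun, radius in km)."""
    r, m, P = dr, 0.0, P_of_eps(eps_c)
    while P > 0.0:
        eps = eps_of_P(P)
        # Eq. (12), rearranged so that no division by m occurs as r -> 0
        dPdr = (-G * (eps / C**2) * (1.0 + P / eps)
                * (m + 4.0 * np.pi * r**3 * P / C**2)
                / (r**2 * (1.0 - 2.0 * G * m / (C**2 * r))))
        m += 4.0 * np.pi * (eps / C**2) * r**2 * dr
        P += dPdr * dr
        r += dr
    return m / 1.989e33, r / 1.0e5

# example: an illustrative gamma = 5/3 polytrope P = K * eps^(5/3)
K = 1.0e-25                        # illustrative constant, not a fitted EoS
M, R = solve_structure(9.0e35,     # eps_c for rho_c ~ 1e15 g/cm^3
                       P_of_eps=lambda e: K * e ** (5.0 / 3.0),
                       eps_of_P=lambda P: (P / K) ** 0.6)
```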
A reliable estimate of WD properties may be gained by assuming a polytropic equation of state: \(P(\varepsilon)=K\varepsilon^{\gamma}\). For WDs, one can set the \(c\)-dependent correction factors to unity on the right-hand side of the first equation in Eq. (12), as these depict general relativistic corrections that are only important for NSs. It is then straightforward to solve Eq. (12) for polytropes [25; 46]. In particular, the cases of \(\gamma=5/3\) and \(\gamma=4/3\), applicable respectively to the limit of non-relativistic and relativistic electrons, result in the \(M\)-
\(R\) scaling relations we derived from the virial theorem in Eqs. (5) and (6), with more refined numerical factors. Notably, for the relativistic case we obtain the Chandrasekhar mass as:
\[M_{\rm Ch-WD}\simeq 1.4M_{\odot}. \tag{13}\]
Realistic EoSs are non-polytropes accounting for Coulomb corrections arising from electron-ion interactions, e.g., the Feynman-Metropolis-Teller EoS [35]. Figure 1 shows representative \(M\)-\(R\) relations for white dwarfs of various nuclear compositions, taken from Ref. [35]. A simple analytical fit to translate between \(\rho\) and WD masses \(M_{\rm WD}\in[0.1,1.35]M_{\odot}\) is [47]
\[\left(\frac{\rho_{\rm WD}}{1.95\times 10^{6}\ {\rm g/cm^{3}}}\right)^{2/3}+1 \approx\bigg{[}\sum_{i=0}^{6}c_{i}\bigg{(}\frac{M_{\rm WD}}{M_{\odot}}\bigg{)} ^{i}\bigg{]}^{-2}\, \tag{14}\]
with \(\{c_{i}\}=\{1.003,-0.309,-1.165,2.021,-2.060,1.169,-0.281\}\).
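This fit is straightforward to code up (a sketch, valid only in the quoted mass range):

```python
def rho_wd(M_wd):
    """Invert Eq. (14): WD density (g/cm^3) for a mass M_wd in units of
    Msun, valid for 0.1 <= M_wd <= 1.35."""
    c = [1.003, -0.309, -1.165, 2.021, -2.060, 1.169, -0.281]
    s = sum(ci * M_wd**i for i, ci in enumerate(c))
    return 1.95e6 * (s**-2 - 1.0) ** 1.5

print(f"{rho_wd(1.0):.2e} g/cm^3")   # ~3e7 g/cm^3 for a 1 Msun WD
```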
In NSs the EoS of nuclear matter is non-trivial due to their high densities, where perturbative QCD breaks down. EoSs must account for nucleon-nucleon interactions, far more uncertain than Coulomb interactions, and must fit data on the per-nucleon binding energy in symmetric nuclear matter, the so-called
Figure 1: _Left._ White dwarf mass-radius relations derived from three different equations of state for a \({}^{12}\)C WD; quantitatively similar curves are obtained for \({}^{4}\)He and \({}^{16}\)O WDs.
symmetry energy that accounts for the energy above the \(N=Z\) ground state, the nuclear compressibility, and much else. For these reasons a wide range of EoSs has been proposed, resulting in multiple predictions for NS configurations. Figure 2 displays \(M\)-\(R\) curves obtained from a few popular EoSs. The top left is a region where \(c_{s}>c\) and hence causality is violated; the NS mass for various EoSs is seen to reach a maximum close to this region.
### Spin periods
Celestial bodies have a maximum angular speed: the gravitational force on a mass element on the equator must exceed the centrifugal force on it, giving a minimum spin period
\[P_{\rm min}\simeq\sqrt{\frac{3\pi}{G\rho}}=10^{4}\;{\rm s}\sqrt{\frac{{\rm g/ cm^{3}}}{\rho}}\;. \tag{15}\]
Figure 2: Neutron star mass-radius relations for various equations of state for nuclear matter at high densities. The blue shaded region is preferred by pulsar observations [33], the yellow region is a fit to the observation of binary NS mergers using a hadronic EoS [36], the green region is the 90% C.L. preferred region from an EoS-insensitive fit to GW170817 [37], and the red regions are Bayesian fits at 90% C.L. from a combination of gravitational wave and low-energy nuclear and astrophysical data [38]. The horizontal thick-dashed lines depict the measured mass of the heaviest observed pulsar MSP J0740+6620 [39]. The line-shaded bottom right region is excluded by centrifugal mass loss, with the limit coming from observations of the fastest-spinning (716 Hz) pulsar [40]. The line-shaded top left region is excluded by the condition of causality: for any EoS the sound speed \(c_{s}\leq c\). The rectangular regions are simultaneous fits at 68% C.L. of NS mass and radius by NICER (light brown by Refs. [41; 42] and dashed-green-enclosed by Refs. [43; 44]). This plot is taken from Ref. [45].
Thus for WDs with \(\rho=\mathcal{O}(10^{6})\) g/cm\({}^{3}\), \(P_{\min}\simeq 10\) s, and for NSs with \(\rho=\mathcal{O}(10^{14})\) g/cm\({}^{3}\), \(P_{\min}\simeq 10^{-3}\) s. And indeed, the first pulsars historically espied were identified as such by their gradually lengthening sub-second spin periods. Moreover, no pulsars with spin periods smaller than \(\mathcal{O}(\mathrm{ms})\) have been observed; those observed near this limit are called millisecond pulsars. The bottom right region of Fig. 2 is excluded by the fastest-spinning pulsar observed, with rotation frequency 716 Hz [40]; the corresponding limit on the maximum spin frequency is \(\nu_{\max}\simeq 1280\ \mathrm{Hz}\ (M/1.5\,M_{\odot})^{1/2}(10\ \mathrm{km}/R)^{3/2}\) [32].
### Neutron star substructure
In the top panel of Figure 3 we show a schematic of the interior structure of a NS. The physics of substructure is obtained by solving Eq. (12) with the appropriate equation of state for each stellar region. For illustration here we will make use of the Brussels-Montreal "unified" equation of state ("BSk") accounting for all regions/densities in the NS, expressed in terms of analytic fits [49]. What follows is an overview of NS substructure; interested readers may gain further details from Ref. [50] and the references listed in Ref. [48].
The _crust_, about 1 km thick, spans over 10 decades in density and consists of several distinct layers corresponding to different phases of nuclear matter. The bottom left panel of Figure 3 shows the density of
Figure 3: _Top._ Schematic of the internal structure of a neutron star, taken from Ref. [48]. The layers of the crust are shown in the zoom. _Bottom left._ Density profile of (various layers of) a neutron star crust. _Bottom right._ Nucleon number as a function of neutron star crust density. See Sec. 2.4 for further details.
material as a function of the proper depth for the various crustal layers, and the bottom right panel shows the nucleon numbers of nuclei as a function of density spanning the entire crust; both plots were made using the equation of state BSk21 [48]. These plots do not show the _atmosphere_ (density \(<10^{4}\) g/cm\({}^{3}\), thickness \(\mathcal{O}(\mu\)m), composed of hydrogen and lighter elements) and the _ocean_ (density \(<10^{10}\) g/cm\({}^{3}\), thickness \(\mathcal{O}(10)\) m, composed of carbon and heavy metals); these layers affect the star's thermal spectrum, and are influenced by the star's magnetic field.
The _outer crust_ (density \(10^{4}-10^{11}\) g/cm\({}^{3}\)) is composed of nuclei forming a body-centered-cubic Coulomb crystal, interspersed with a degenerate and nearly-free relativistic gas of electrons. _À la_ white dwarfs, electron degeneracy contributes dominantly to the pressure, while nuclei contribute dominantly to the mass. Deep in the crust, where the electron chemical potential is higher, nuclei become increasingly neutron-rich due to inverse beta decay. The outer crust terminates when the density and pressure become so high that free neutron states begin to appear.
The transition to the _inner crust_ is marked by the neutron drip line, at density \(\rho_{\rm drip}\simeq 4.2\times 10^{11}\) g/cm\({}^{3}\)[51], beyond which a fraction of neutrons becomes unbound from nuclei. Up to densities of about 0.1 times the nuclear saturation density \(\rho_{0}\simeq 2\times 10^{14}\) g/cm\({}^{3}\), the inner crust comprises heavy, neutron-rich nuclei (also known as proton clusters) forming a lattice, along with both an electron gas and a dripped-neutron gas. Such a system is inaccessible to terrestrial experiments, hence the composition of the inner crust is far more uncertain than that of the outer crust, and studies of this region are limited to theoretical calculations, _e.g._, the Compressible Liquid Drop Model, the Thomas-Fermi approximation, and many-body quantum calculations. As the NS cools down, the dripped neutrons are expected to form a superfluid phase.
Further down, the inner crust density approaches the nuclear saturation point, and homogeneous nuclear matter appears [52; 53]. This has led to the prediction of the so-called nuclear "_pasta_" phase at the bottom of the inner crust [54; 55; 56; 57; 58; 59]. An intricate competition between nuclear attraction and Coulomb repulsion forms these extended non-spherical phases of nuclear matter; as the density increases, gnocchi, then spaghetti, and then lasagna pasta phases become more prevalent. In the deepest layer of the inner crust there are "inverted pasta phases" where nuclear-density material predominates over sparser, sub-nuclear-density voids; this includes bucatini (anti-spaghetti) and swiss cheese (anti-gnocchi) phases. Nuclear pasta is confined to a thin layer, yet it constitutes a significant fraction of the crustal mass, spanning densities of \(0.1-1\)\(\rho_{0}\). It may also impact several properties of the NS, such as its thermal and electrical conductivity, and the elasticity and neutrino opacity of the crust.
The inner crust terminates when the density reaches \(\rho_{0}\), beyond which nuclei "melt" into the uniform nuclear matter that forms the core of the neutron star. The core is further sub-divided into the _outer core_ (densities 0.5\(-\)2 \(\rho_{0}\)), where the nuclear matter is expected to be composed of neutrons, protons, and electrons, and the _inner core_ (densities 2\(-\)10 \(\rho_{0}\)), where exotic states of matter may be present.
### Thermonuclear explosions
Astrophysical situations may arise in which a white dwarf exceeds its Chandrasekhar mass (Eq. (7)). For carbon-oxygen WDs, this would lead to ignition of runaway carbon fusion that unbinds the star. This is how Type Ia supernovae, conventionally used as "standard candles" in cosmological distance measurements, have been theorized to originate - via accreting material from a binary companion and going super-Chandrasekhar. This picture, however, is disputed by the lack of a specific "trigger" of the thermonuclear process, along with a number of other observational inconsistencies [60]. As will be discussed later, other possible Type Ia progenitors include WD mergers and pycnonuclear reactions in sub-Chandrasekhar mass WDs.
Yet another setting in which thermonuclear chain reactions create an explosion is in the ocean layer of neutron star crusts, and in particular the carbon component, which could be ignited by mass accretion from a binary companion. For accretion rates \(>10\%\) of the Eddington limit, the result is "superbursts", x-ray bursts that spew \(\mathcal{O}(10^{35})\) J of energy, lasting for hours, and in some cases recurring about every year [61, 62, 63, 64]. This must be distinguished from regular Type-I bursts in NSs, typically ignited by surface accretion, emitting \(10^{3}\) times less energy and lasting \(10^{3}\) times shorter.
Ref. [65] provides an extended discussion of the physics of thermonuclear runaway fusion, while we provide here a brief summary. Two generic conditions must be satisfied: (1) a minimum energy \(Q_{\rm dep}\) must be deposited to raise the temperature of a critical mass \(M_{\rm crit}\) of density \(\rho\) to a critical temperature \(T_{\rm crit}\) which can sustain fusion:
Condition 1 \[Q_{\rm dep}\geq M_{\rm crit}(\rho,T_{\rm crit})\bar{c}_{p}(\rho,T_ {\rm crit})T_{\rm crit}\.\] (16)
The temperature prior to heating is here assumed \(\ll T_{\rm crit}\), and \(\bar{c}_{p}\simeq c_{p}^{e}/2+c_{p}^{\gamma}/4+c_{p}^{\rm ion}\) is the average isobaric specific heat capacity, with
\[c_{p}^{\ell}(\rho,T_{\rm crit})=\frac{a_{\ell}b_{\ell}}{u}\bigg{(}\frac{T_{\rm crit }}{E_{\rm F}}\bigg{)}^{\alpha_{\ell}}\bigg{[}1-\bigg{(}\frac{m_{e}}{E_{\rm F}} \bigg{)}^{2}\bigg{]}^{\beta_{\ell}}. \tag{17}\]
Here \(u\) is the atomic mass unit, \(m_{e}\) the electron mass, and for the {electronic, radiative, ionic} contributions, \(a_{\ell}=\{\pi^{2},4\pi^{4}/5,5/2\}\), \(b_{\ell}=\{\sum X_{i}Z_{i}/A_{i},\sum X_{i}Z_{i}/A_{i},\sum X_{i}/A_{i}\}\) (with \(X_{i}\), \(Z_{i}\), \(A_{i}\) the mass fraction, charge and atomic number of the ion species \(i\), respectively), \(\alpha_{\ell}=\{1,3,0\}\), and \(\beta_{\ell}=\{-1,-3/2,0\}\). The Fermi energy \(E_{\rm F}=[m_{e}^{2}+(3\pi^{2}n_{e})^{2/3}]^{1/2}\) with \(n_{e}=\rho b_{e}/u\) (Eq. (1)). The trigger energy in Eq. (16) ranges \(\mathcal{O}(10^{17})\) GeV \(\to\mathcal{O}(10^{24})\) GeV for WD central densities corresponding to WD masses ranging \(1.4\ M_{\odot}\to 0.8\ M_{\odot}\).
Eq. (16) is necessary but not sufficient for runaway fusion. There is a second condition, through which the critical mass \(M_{\rm crit}=4\pi\rho\lambda_{\rm trig}^{3}/3\) is also defined. To wit, the rate of energy gain via nuclear fusion must exceed the rate of energy loss via diffusion over the volume set by the "trigger length" \(\lambda_{\rm trig}\):
Condition 2 \[\dot{Q}_{\rm nuc}>\dot{Q}_{\rm diff}\.\] (18)
Here we have \(\dot{Q}_{\rm nuc}=M_{\rm crit}\dot{S}_{\rm nuc}\) and \(\dot{Q}_{\rm diff}\simeq 4\pi k\lambda_{\rm trig}T_{\rm crit}\) for a nuclear energy deposition rate per mass \(\dot{S}_{\rm nuc}\) and thermal conductivity \(k\). Conductive diffusion from relativistic electrons provides the dominant source of diffusion in WDs at the temperatures and densities relevant for igniting thermonuclear fusion; see Refs. [66; 67] for analytic expressions for \(\dot{Q}_{\rm diff}\).
The estimation of \(\dot{S}_{\rm nuc}\) involves numerical simulations of flame propagation with a nuclear reaction network [65]. From this,
\[\lambda_{\rm trig} = \sqrt{\frac{3kT_{\rm crit}}{\rho\dot{S}_{\rm nuc}(\rho,T_{\rm crit} )}} \tag{19}\] \[= \begin{cases}\lambda_{1}\ (\frac{\rho}{\rho_{1}})^{-2}&,\rho\leq \rho_{1}\\ \lambda_{1}\ (\frac{\rho}{\rho_{1}})^{\ln(\lambda_{2}/\lambda_{1})/\ln(\rho_{2}/ \rho_{1})}&,\rho_{1}<\rho\leq\rho_{2}\end{cases}\]
where for WDs \(\{\lambda_{1}^{\rm WD},\lambda_{2}^{\rm WD}\}=\{1.3\times 10^{-4}\) cm, \(2.5\times 10^{-5}\) cm\(\}\) and \(\{\rho_{1},\rho_{2}\}=\{2\times 10^{8}\) g/cm\({}^{3}\), \(10^{10}\) g/cm\({}^{3}\}\). This analytic form was obtained in Ref. [47] by fitting to Figure 6 of Ref. [65] - that is restricted to \(\rho_{1}\leq\rho\leq\rho_{2}\) - and extrapolating to lower densities assuming plausible density-scalings of \(k\) and \(\dot{S}_{\rm nuc}\). The fit is for \(T_{\rm crit}\) = 0.5 MeV and assumes equal carbon and oxygen masses in WDs. In the NS ocean, the mass fraction of carbon is 10% [61], implying \(\rho\to 0.1\rho\) in Eq. (19) if Eq. (19) holds for pure carbon burning3.

Footnote 3: It probably does, for the scalings of Eq. (19) are seen to be similar to those in Table 3 of Ref. [65], for conductive burning.

One could also fit a relation among the WD central density, critical temperature and trigger mass [66]:
\[T_{\rm crit}\gtrsim 10^{9.7}\ {\rm K}\bigg{(}\frac{\rho}{10^{8}\ {\rm g/cm^{3}}} \bigg{)}^{3/140}\bigg{(}\frac{M_{\rm crit}}{\rm g}\bigg{)}^{3/70}. \tag{20}\]
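These conditions are straightforward to evaluate numerically. Below is a minimal Python sketch of Eqs. (19)-(20); the sample density and function names are illustrative choices of ours:

```python
import numpy as np

# Trigger length (Eq. 19), trigger mass, and critical temperature (Eq. 20).
LAM1, LAM2 = 1.3e-4, 2.5e-5   # cm, trigger lengths at rho_1 and rho_2
RHO1, RHO2 = 2e8, 1e10        # g/cm^3

def lambda_trig(rho):
    """Piecewise power-law fit of Eq. (19); rho in g/cm^3, result in cm."""
    if rho <= RHO1:
        return LAM1 * (rho / RHO1) ** -2
    return LAM1 * (rho / RHO1) ** (np.log(LAM2 / LAM1) / np.log(RHO2 / RHO1))

def M_crit(rho):
    """Trigger mass M_crit = 4*pi*rho*lambda_trig^3/3, in grams."""
    return 4 * np.pi * rho * lambda_trig(rho) ** 3 / 3

def T_crit(rho):
    """Lower bound on the critical temperature, Eq. (20), in K."""
    return 10 ** 9.7 * (rho / 1e8) ** (3 / 140) * M_crit(rho) ** (3 / 70)

rho = 2e8  # g/cm^3, a representative WD central density
print(lambda_trig(rho), M_crit(rho), T_crit(rho))  # ~1.3e-4 cm, ~2e-3 g, ~4e9 K
```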
### Cooling
As no nuclear fuel is burnt in compact stars, they cool down continually from the moment of their birth unless energy is deposited into them by some means, as discussed in Sections 3 and 4. Observations of compact star cooling are an important handle on the physics governing their internal dynamics.
#### 2.6.1 White dwarf cooling.
White dwarfs initially cool by shedding the thermal energy of constituent ions. Given the specific heat per ion \(c_{v}=3/2\) (in units of \(k_{B}\)), the total WD energy in thermal ions is
\[U=\frac{3T}{2}\bigg{(}\frac{M_{\rm WD}}{Am_{N}}\bigg{)}\,. \tag{21}\]
The WD luminosity \(L=-dU/dt\), and the cooling curve can be obtained from an independent expression for the luminosity in terms of the WD internal temperature \(T_{\rm int}\):
\[L=0.2\ {\rm J/s}\,\bigg{(}\frac{M_{\rm WD}}{M_{\odot}}\bigg{)}\bigg{(}\frac{T_{ \rm int}}{\rm K}\bigg{)}^{7/2}\, \tag{22}\]
derived from photon diffusion in the WD surface layers assuming Kramers' opacity, combined with the EoS; see Ref. [26] for a detailed treatment. The cooling timescale is then obtained as
\[t_{\rm cool}\simeq{\rm Gyr}\,\bigg{(}\frac{M/M_{\odot}}{L/(10^{-3}L_{\odot})} \bigg{)}^{5/7}. \tag{23}\]
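For orientation, Eqs. (22)-(23) can be chained in a few lines; the solar luminosity value is an assumed input:

```python
Lsun = 3.828e26              # J/s, assumed solar luminosity
M, T_int = 0.6, 1e7          # WD mass in Msun, internal temperature in K
L = 0.2 * M * T_int ** 3.5   # J/s, Eq. (22)
t_cool = (M / (L / (1e-3 * Lsun))) ** (5 / 7)  # Gyr, Eq. (23)
print(f"L ~ {L / Lsun:.1e} Lsun, t_cool ~ {t_cool:.1f} Gyr")  # ~1e-3 Lsun, ~0.7 Gyr
```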
Figure 4: Cooling curves. _Left._ Luminosity versus time of white dwarfs of various masses, taken from Ref. [68]. The onset of crystallization at about \(10^{8}\) yr takes cooling from the regime of thermal ions to the Debye regime. _Right._ Surface temperature versus time of a benchmark white dwarf and neutron star. Early cooling dominated by emission of neutrinos is distinctly faster than that of photons. See Sec. 2.6 for further details.
Thus the cooling times are long enough to keep WDs from becoming invisibly faint today, yet short enough to make them fainter than main-sequence stars. The above relation only holds for WDs with \(T_{\rm int}>T_{\rm Debye}\simeq 10^{7}\) K, the typical Debye temperature below which the ions crystallize. For smaller temperatures, corresponding to \(L\lesssim 10^{-4}L_{\odot}\), the specific heat comes from the vibration of the crystal lattice as opposed to thermal motion of the ions. Obtaining WD cooling times accounting for this effect involves a non-trivial treatment [26] that is beyond our scope. In Fig. 4 left panel we show a luminosity-vs-time cooling curve indicating the point at which crystallization effects become important. In the right panel we show a temperature-vs-time curve for a benchmark WD of mass 0.67 \(M_{\odot}\) corresponding to a 7000 km radius.
#### 2.6.2 Neutron star cooling.
Neutron stars cool by emitting neutrinos (generated in weak processes) and photons; the neutrino cooling rate is initially larger and hence dominates up to a point, before photon cooling takes over. In describing the cooling of NSs, where GR effects are significant, it is necessary to distinguish between the temperature in the frame of the NS, \(T\), and in the frame of a distant observer, \(\widetilde{T}\), related by
\[\widetilde{T} \equiv T/(1+z)\;,\] \[1+z = \frac{1}{\sqrt{1-2GM_{\rm NS}/R_{\rm NS}}}\;. \tag{24}\]
The temperature evolution during passive cooling is given by
\[c_{V}(\widetilde{T})\frac{d\widetilde{T}}{dt}=-L_{\nu}^{\infty}(\widetilde{T})-L_{\gamma}^{\infty}(\widetilde{T}_{s})\;, \tag{25}\]
where the neutrino luminosity of our benchmark NS, as measured by a distant observer, is given by [69]

\[L_{\nu}^{\infty}(\widetilde{T})=1.33\times 10^{39}\;{\rm J/yr}\left(\frac{\widetilde{T}}{10^{9}\;{\rm K}}\right)^{8}\;, \tag{26}\]
applicable for slow/modified Urca ("Murca") processes such as \(N+n\to N+pe^{-}\bar{\nu}_{e}\) and \(N+pe^{-}\to N+n\nu_{e}\) (with \(N=n,p\)), the neutrino cooling mechanism as prescribed by the "minimal cooling" paradigm [70]. In principle there could also be fast/direct Urca ("Durca") processes such as \(n\to pe^{-}\bar{\nu}_{e}\) and \(pe^{-}\to n\nu_{e}\) [71]. Neutrino processes dominate the NS cooling down to \(\widetilde{T}=10^{8}\) K. The luminosity of photon blackbody emission from the NS surface is:
\[L_{\gamma}^{\infty}(\widetilde{T}_{s})=4\pi\sigma_{\rm SB}(1+z)^{2}R_{\rm NS}^{2}\widetilde{T}_{s}^{4}\;, \tag{27}\]

with \(\sigma_{\rm SB}\) the Stefan-Boltzmann constant.
The NS heat capacity \(c_{V}\) is given by [72]
\[c_{V}(\widetilde{T}) = 4.8\times 10^{26}\;{\rm J/K}\left(\frac{\widetilde{T}}{10^{4}\; {\rm K}}\right) \tag{28}\] \[= 2.7\times 10^{-21}\;M_{\odot}/{\rm K}\left(\frac{\widetilde{T}}{1 0^{4}\;{\rm K}}\right).\]
Solving Eq. (25) requires a relation between the surface (\(T_{s}\)) and internal (\(T\)) temperatures. Such a relation is highly sensitive to the composition of the NS' outermost envelope, which acts as an insulating layer for temperatures \(\gtrsim\mathcal{O}(10^{3})\) K, and becomes too thin for insulation at smaller temperatures [71; 73]. For an iron envelope at high temperatures [74; 75],
\[T_{s}=10^{6}\;{\rm K}\bigg{[}\bigg{(}\frac{M_{\rm NS}}{1.5\;M_{\odot}}\bigg{)} \cdot\bigg{(}\frac{10\;{\rm km}}{R_{\rm NS}}\bigg{)}^{2}\bigg{]}^{1/4}\bigg{[} \frac{T}{9.43\times 10^{7}\;{\rm K}}\bigg{]}^{0.55}\;. \tag{29}\]
One can then identify the thin-envelope regime by solving for \(T_{s}=T\) in the above equation, which gives \(T_{\rm env}=3908\) K, below which one can simply set \(T_{s}=T\).
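With these ingredients, Eq. (25) can also be integrated directly. A minimal sketch (explicit Euler on a logarithmic time grid; the constants restate the benchmark values above, and the integration scheme is our own illustrative choice):

```python
import numpy as np

M, R = 1.5, 12.85                          # Msun and km, benchmark NS
zp1 = 1.0 / np.sqrt(1.0 - 2.953 * M / R)   # 1+z, using 2*G*Msun/c^2 = 2.953 km
SIGMA_SB = 5.670e-8 * 3.156e7              # J / (yr m^2 K^4)

def L_nu(Tt):
    """Murca neutrino luminosity, Eq. (26), in J/yr."""
    return 1.33e39 * (Tt / 1e9) ** 8

def T_surface(T):
    """Iron-envelope relation, Eq. (29); NS-frame temperatures in K."""
    if T < 3908.0:   # thin-envelope regime discussed above: T_s = T
        return T
    return 1e6 * ((M / 1.5) * (10.0 / R) ** 2) ** 0.25 * (T / 9.43e7) ** 0.55

def L_gamma(Tt):
    """Blackbody photon luminosity, Eq. (27), in J/yr."""
    Tts = T_surface(zp1 * Tt) / zp1        # red-shifted surface temperature
    return 4 * np.pi * zp1 ** 2 * (R * 1e3) ** 2 * SIGMA_SB * Tts ** 4

def c_V(Tt):
    """NS heat capacity, Eq. (28), in J/K."""
    return 4.8e26 * (Tt / 1e4)

Tt, t_prev = 1e10, 0.0                     # start hot: T~ = 1e10 K at t = 0
for t in np.logspace(-6, 10, 4000):        # time grid in years
    Tt -= (L_nu(Tt) + L_gamma(Tt)) / c_V(Tt) * (t - t_prev)
    t_prev = t
print(f"T~ after 10 Gyr of passive cooling: {Tt:.3g} K")
```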
The solution of Eq. (25) can now be written down as the time for the NS to cool to a temperature \(\widetilde{T}_{\rm cool}\) (\(\ll\) the initial temperature) [76]:
\[t_{\rm cool}(\widetilde{T}_{9})/{\rm yr}=\begin{cases}t_{\rm env}=s_{1}^{-k}q^{-\gamma}\big{[}\big{(}1+(s_{1}/q)^{k}\widetilde{T}_{9}^{2-n}\big{)}^{-\gamma/k}-1\big{]}\;,&\widetilde{T}_{\rm cool}>\widetilde{T}_{\rm env}\;,\\ t_{\rm env}+(3s_{2})^{-1}\big{(}\widetilde{T}_{9}^{-2}-\widetilde{T}_{\rm env}^{-2}\big{)}\;,&\widetilde{T}_{\rm cool}\leq\widetilde{T}_{\rm env}\;,\end{cases} \tag{30}\]

where \(\widetilde{T}_{9}\equiv\widetilde{T}_{\rm cool}/10^{9}\) K and the constants \(s_{1}\), \(s_{2}\), \(q\), \(k\), \(n\) and \(\gamma\) are set by the Murca luminosity, the photon luminosity and the heat capacity above; see Ref. [76] for their values.

### Superfluidity

At temperatures below a critical value \(T_{c}\), nucleons in the NS interior are expected to form Cooper pairs and become superfluid. This pairing
does not influence the equation of state of NS matter (therefore bearing no consequence on NS mass-radius relations), but does play a major role in setting the NS' heat capacity and neutrino emissivity. This is because these quantities are sensitive to particle-hole excitations close to the Fermi surface: the energy gap exponentially suppresses the nucleon contribution to the heat capacity for NS temperatures \(\ll T_{c}\).
Neutrons in the NS inner crust are expected to form Cooper pairs in the singlet \({}^{1}S_{0}\) state, and in the core in the triplet \({}^{3}P_{2}\) state, since at higher densities singlet pairing becomes repulsive [80]. The less dense protons are expected to pair in the singlet state in the NS core. A quark core in the NS could give rise to "color superconductivity", with \(ud\), \(ds\), \(su\) Cooper pairs carrying color charges [81]. Nucleon pairing models play a central role in the possibility of rotochemical heating, as discussed in Sec. 4.1.5. The presence of superfluidity in NSs also gives rise to macroscopic vortices and flux tubes, the former of which may play a role in late-stage reheating of NSs (Sec. 4.1.5).
### Neutron star magnetic field and spin-down
When a progenitor star turns into an NS, its surface area shrinks by a factor of about \(10^{10}\). As a result, thanks to conservation of magnetic flux (Gauss' law for magnetism, \(B\times R_{\rm NS}^{2}=\) constant) the stellar magnetic fields increase by this factor, and thanks to conservation of angular momentum the rotational speed also rises by this factor. Flux conservation also implies that the total magnetic field energy in the NS scales inversely with the NS radius:
\[E_{B}^{\rm NS}=\frac{B^{2}}{8\pi}\cdot\frac{4\pi R_{\rm NS}^{3}}{3}=\frac{{ \rm const.}}{R_{\rm NS}}\, \tag{32}\]
Figure 5: _Left._ The “light cylinder” around a neutron star within which the co-rotating magnetosphere is confined [82]. Acceleration of charges in this region are thought to produce electromagnetic beams that are detected terrestrially as regular pulses, making young NSs “pulsars”. _Right._\(P\)-\(\dot{P}\) diagram taken from Ref. [83], illustrating the evolution of pulsars. For a description of the types of pulsars displayed here, see Ref. [82]. See Sec. 2.8 for further details.
whence the presence of \(B\) fields tends to enlarge the NS. However \(E_{B}^{\rm NS}\) is bounded by the gravitational binding energy of the NS, giving the condition
\[B\leq\sqrt{\frac{18}{5}\frac{GM_{\rm NS}^{2}}{R_{\rm NS}^{4}}}\simeq 10^{18} \ {\rm Gauss}\ \bigg{(}\frac{M_{\rm NS}}{M_{\odot}}\bigg{)}\bigg{(}\frac{10\ {\rm km}}{R_{\rm NS}}\bigg{)}^{2}. \tag{33}\]
A stricter upper limit can be obtained from considerations of hydromagnetic stability [84]. Measurements from pulsar spin-down (discussed below) find that millisecond pulsars typically have \(B\) field strengths of about \(10^{8}\) Gauss, classical pulsars about \(10^{12}\) Gauss, and magnetars about \(10^{15}\) Gauss. NSs have a "magnetosphere", a region of plasma surrounding the NS and co-rotating with it due to their coupling through the \(B\) field. One can see that this region is finite by simply locating the equatorial radius at which the co-rotation speed equals \(c\) for a spin period \(P\):
\[R_{\perp}^{\rm LC}=\frac{cP}{2\pi}=48\ {\rm km}\bigg{(}\frac{P}{\rm ms} \bigg{)}. \tag{34}\]
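Both numerical prefactors are quick to verify (a minimal check in cgs units):

```python
import numpy as np

G, Msun, km = 6.674e-8, 1.989e33, 1e5  # cgs units
B_max = np.sqrt(18 / 5 * G * Msun ** 2 / (10 * km) ** 4)  # Eq. (33), Gauss
R_LC = 3e10 * 1e-3 / (2 * np.pi) / km                     # Eq. (34), km at P = 1 ms
print(f"B_max ~ {B_max:.1e} G, R_LC ~ {R_LC:.0f} km")     # ~1e18 G, ~48 km
```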
This region defines the "light cylinder" shown in Figure 5, left panel. The presence of strong moving magnetic fields in the light cylinder generates electric fields that accelerate charged particles at the stellar surface, leading to emission of electromagnetic beams from near the magnetic poles of the NS. This beam, as we will soon see, is powered by the rotational energy of the NS. The lighthouse-like sweep of the beam, detected as regular pulses on Earth, serves to reveal neutron stars as pulsars4. This is how NSs were historically discovered by Bell and Hewish, and continues to be the primary method for finding NSs in the sky [87].
Footnote 4: At the time of writing, two “white dwarf pulsars” have been discovered [85; 86], but these refer to regular pulsation corresponding to the beat frequency of orbital rotation with a binary companion and the spin of the WD.
The NS spin varies over the lifetime of the NS due to a number of factors, chief among which is magnetic dipole radiation extracting rotational kinetic energy, an effect known as pulsar spin-down. The radiation power of a rotating magnetic dipole of moment \(m\), with a component \(m_{\perp}\) perpendicular to the NS spin axis, and angular speed \(\omega=2\pi/P\), is given by [82]
\[\dot{E}_{\rm rad,B}=\frac{2}{3c^{3}}m_{\perp}^{2}\omega^{4}=\frac{2}{3c^{3}}(B_{\perp}R_{\rm NS}^{3})^{2}\bigg{(}\frac{2\pi}{P}\bigg{)}^{4}\, \tag{35}\]
where in the second equation we have used the expression for a sphere uniformly magnetized with field strength \(B\). The rotational power of an NS of moment of inertia \(I=2M_{\rm NS}R_{\rm NS}^{2}/5\) is given by
\[\dot{E}_{\rm rot}=I\omega\dot{\omega}=-4\pi^{2}\frac{I\dot{P}}{P^{3}}. \tag{36}\]
For sub-kHz frequencies this radiation cannot penetrate the ISM nebula surrounding the NS, and is hence deposited in it; the observed \(P\), \(\dot{P}\) and luminosities of supernova remnants such as the Crab Nebula (\(P\) = 0.033 sec, \(\dot{P}\) = 1 sec/80,000 yr, luminosity = \(10^{5}L_{\odot}\), much higher than that of the Crab Pulsar within) bear out the supposition that \(-\dot{E}_{\rm rot}\simeq\dot{E}_{\rm rad,B}\)[82].
NS spin-down provides a remarkably valuable handle on the age of an NS through measurement of just its \(P\) and \(\dot{P}\), i.e. without requiring knowledge of its radius, mass and \(B\) field. Assuming the \(B\) field remains constant, by equating Eqs. (35) and (36) we see that \(P\dot{P}\) is constant over time. For an initial spin period \(P_{0}\),
\[\int_{0}^{\tau}dt(P^{\prime}\dot{P}^{\prime}) = \int_{P_{0}}^{P}dP^{\prime}P^{\prime}\]
\[\Rightarrow P\dot{P}\tau = \frac{P^{2}-P_{0}^{2}}{2}\] \[\Rightarrow \tau = \frac{P}{2\dot{P}}\, \tag{37}\]
where the last equality assumed that the initial period \(P_{0}\ll P\). This _characteristic age_ \(\tau\) due to spin-down is often an excellent order-of-magnitude estimate of an observed NS's true age. It slightly overestimates the latter for young NSs, as an NS' spin may initially decelerate via gravitational radiation due to an oblate shape. For instance, for the Crab Pulsar, whose supernova was observed in 1054 A.D., one finds \(\tau=1300\) years. In the case of older pulsars, Eq. (37) must again be used with special care, specifically when being applied to NSs that are thought to have spun up at some point in their life. These could be, _e.g._, millisecond pulsars that are modelled as accreting mass and angular momentum from a binary companion; these have been observed with a characteristic age older than their actual age [88]. In particular, there are millisecond pulsars with \(\tau>\)13.8 Gyr [87], the measured age of the universe [89].
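As a quick numerical check of Eq. (37), the Crab numbers quoted above indeed give a characteristic age of about 1300 yr:

```python
yr = 3.156e7                          # s
P, Pdot = 0.033, 1.0 / (80_000 * yr)  # s and s/s (1 s per 80,000 yr)
tau = P / (2 * Pdot) / yr             # Eq. (37)
print(f"tau ~ {tau:.0f} yr")          # ~1300 yr, vs a true age of ~970 yr
```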
We note in passing that for NSs for which precise data on distances and proper motions are available, their _kinematic age_ may also be estimated by tracing back their trajectories and locating a plausible birth site [90]. This technique is possible thanks to the kick velocity imparted to the NS by the asymmetric explosion of the progenitor, as mentioned in the beginning of Sec. 2.
The pulsar braking index \(n\) is defined via \(\dot{\omega}\propto\omega^{n}\). With a little elementary calculus, it may be seen that
\[n\equiv\frac{\omega\ddot{\omega}}{\dot{\omega}^{2}}=2-\frac{P\ddot{P}}{\dot{P} ^{2}}. \tag{38}\]
For spin-down induced by magnetic dipole radiation, one finds by equating Eqs. (35) and (36) that \(n=3\), although pulsars with braking indices of 1.4\(-\)3 have been observed, suggesting other spin-down mechanisms [82].
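The identity in Eq. (38) follows from elementary calculus and is easy to verify symbolically; a short sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')
P = sp.Function('P', positive=True)(t)
omega = 2 * sp.pi / P
n = omega * sp.diff(omega, t, 2) / sp.diff(omega, t) ** 2
assert sp.simplify(n - (2 - P * sp.diff(P, t, 2) / sp.diff(P, t) ** 2)) == 0
```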
It is useful to place observed pulsars on a \(P\)-\(\dot{P}\) diagram such as the one shown in Fig. 5 right panel. Pulsars typically begin life at the north-west region of the diagram, and move south-east along contours of constant \(B\) strengths while crossing contours of constant spin-down age. Eventually as they age to about 10 Myr the rotational energy is insufficient to generate the pulsar beam, and they cross the "pulsar death line", sometimes referred to as the "death valley". However, the death line is not well-understood, for the exact mechanism by which pulsar beams are created is still unknown and is an active area of research. This is evident in the \(P\)-\(\dot{P}\) diagram: quite a few pulsars lie beyond various models of the death line [91; 92; 93; 94], with PSR J2144-3933 lying well beyond all the canonical death lines. We will re-encounter this oddball pulsar, which also happens to be the coldest NS observed, in Sec. 4.1.
## 3 The white dwarf as a dark matter laboratory
White dwarfs have been used as DM detectors via a number of mechanisms. There are four main effects, which we will detail in the rest of the section: (1) DM can collect and annihilate inside WDs, heating them to above the temperature that would be expected from a standard WD cooling curve such as in Sec. 2.6. (2) So much non-annihilating DM accumulates in a WD that the DM collapses and forms a black hole deep in the WD interior. This small black hole can grow to accrete the entire WD, thereby converting its host into a solar mass black hole. (3) DM encounters with and collection in the WD can cause it to explode. (4) WDs' internal structure could be altered if a substantial fraction of its mass were comprised of DM.
In addition, resonant conversion of axion-like particle DM to photons in the corona of a magnetic WD may be observed [95]; we relegate discussion of this phenomenon in the context of NSs to Sec. 4.11.
### Dark matter annihilation inside and heating white dwarfs
The possibility that dark matter can accumulate inside and change the internal thermal properties of stars has long been appreciated [9; 10]. A number of works have proposed that old WDs could have their late-time temperature altered through accumulation and annihilation of DM in the interior [98; 97; 99; 100]. To a good approximation the amount of collisionless DM (for local DM density \(\rho_{\chi}\) and average DM-WD relative speed \(v_{\rm rel}\)) flowing through a WD with mass \(M_{\rm WD}=1.2\) M\({}_{\odot}\), radius \(R_{\rm WD}=4000\) km, and surface escape velocity \(v_{\rm esc}=\sqrt{2GM_{\rm WD}/R_{\rm WD}}\) is
\[\dot{M}=\rho_{\chi}v_{\rm rel}\times\pi\left(\frac{R_{\rm WD}v_{\rm esc}}{v_{ \rm rel}}\right)^{2}=10^{-7}\ \frac{\rm M_{\odot}}{\rm Gyr}\ \left(\frac{R_{\rm WD}}{4000\ \rm km}\right)^{2}\left(\frac{M_{\rm WD}}{1.2\ \rm M_{\odot}}\right)\left(\frac{\rho_{\chi}}{0.4\ \rm GeV/cm^{3}} \right), \tag{39}\]
where we have normalized to the mass accumulated over a gigayear to emphasize that the DM mass accumulated inside the WD over the lifetime of the universe is only a tiny fraction of the stellar mass. This expression assumes that all DM incident on the WD is captured; for the DM-nucleon or DM-electron cross section dependence of the capture rate, see Refs. [101; 67].
The late-time temperature of a benchmark WD described above, assuming it is determined by the capture and annihilation of all DM transiting the WD, is given by [96]
\[T_{\rm WD}\approx 4000\ \rm K\left(\frac{350\ \rm km/s}{v_{\rm rel}}\right)^{1/ 4}\left(\frac{\rho_{\chi}}{10^{3}\ \rm GeV/cm^{3}}\right)^{1/4}, \tag{40}\]
where we have normalized this expression to a typical \(v_{\rm rel}\), but have chosen \(\rho_{\chi}\) more than three orders of magnitude greater than the inferred DM density near most WDs whose temperatures have been determined. This is the DM density required for heating WDs above their expected late-time temperature shown in Figure 4. In practice, this means that in order to find or exclude DM this way, one would need to find an ancient WD in a region that conclusively has a high DM density.
Reference [97] studied the heating effect that certain inelastic DM models would have on the late-stage temperature of WDs, and found that for a background DM density of \(\rho_{\chi}\simeq 3\times 10^{4}\ \rm GeV/cm^{3}\), they
Figure 6: _Left._ Upper bounds from Ref. [96] on DM density distributions in the globular cluster M4, compared with an estimate of the DM densities (labelled “1101.2737”) from Ref. [97] using a spherical collapse model. Also shown are the range of DM densities required to match the observed luminosities of white dwarfs in M4 via DM annihilations within the WD as well as kinetic heating by infalling DM; the horizontal range of the rectangles spans the uncertainty in the positions of the WDs. _Right._ Bounds on dark matter using an old white dwarf in the Milky Way taken from [67]. See Secs. 3.1 and 3.2 for further details.
would be sensitive to inelastic inter-state mass splittings of about \(10-10^{3}\) keV and per-nucleon scattering cross sections \(\sigma_{n\chi}\gtrsim 10^{-41}\) cm\({}^{2}\). These authors proceeded to investigate whether WDs observed in a very dense self-bound stellar system, the globular cluster NGC 6121, a.k.a. Messier 4 (M4), might reside in a background density of DM large enough to observe heating from DM. Assuming that M4 was formed from a subhalo that was then tidally stripped by the Milky Way parent halo, using a spherical collapse model first derived in Ref. [98], adopting an NFW density profile, and accounting for the slight adiabatic contraction of densities from the baryon potential, they estimated that the DM density was approximately 800 GeV/cm\({}^{3}\) at a cluster-centric distance \(r=2.3\) pc, where the farthest WDs were observed with the Hubble Space Telescope. Following this, a number of authors investigated the implications of DM in globular clusters being captured in celestial bodies, under the assumption of a large (\(10^{3}\)-\(10^{4}\) GeV/cm\({}^{3}\)) DM density [102; 103; 104; 105; 106; 107; 108; 109].
A recent study [96] set empirical limits on the DM densities in M4 using measurements of stellar line-of-sight velocities and performing a spherical Jeans analysis; Figure 6 shows these limits on various DM density profiles corresponding to upper bounds on NFW scale parameters. The density estimate of Ref. [97], denoted by an asterisk, is safe from these limits. Nevertheless, it was argued that the use of globular clusters as copious sources of DM resulting in far-reaching conclusions about its microscopic properties is problematic for several reasons.
1. The origin story of globular clusters is unclear. While Ref. [97] echoed a popular theory, corroborated in \(N\)-body simulations, that globular clusters originate in DM subhalos that are then tidally stripped [110; 111; 112], alternative simulations suggest they may form with no aid from dark matter via the collapse of giant molecular clouds [113; 114; 115; 116].
2. The V-band mass-to-light ratios of globular clusters in solar units are 1-5, which is equivocal about the presence of DM in them, unlike, say, dwarf galaxies (10-100), the Coma Cluster of galaxies (660), or the Milky Way (25), which are known to harbor significant amounts of DM. In fact, a stellar "cluster" is _defined_ as a system whose dynamics need not be explained by DM, unlike a "galaxy" [117]. Accordingly, studies of more than 20 globular clusters looking for DM in them have either failed to detect DM or come to ambiguous conclusions [96].
3. There is no guarantee that any invisible mass favored in globular cluster data is in fact DM, as it may also be from faint stellar remnants [118].
4. The interpretation of the presence or absence of DM in ambiguous datasets is sensitive to priors and parametrizations. Ref. [119] found no evidence for DM when analyzing NGC 2419 by fitting a Michie model for the stellar and a generalized NFW profile for the DM distributions, but found strong evidence for DM when fitting these quantities with _no_ analytic form, instead floating 389 free parameters.
One could conclude that, due to these significant uncertainties and the related infeasibility of determining DM density distributions in globulars with current and imminent measurement sensitivities, globular clusters are systems that are far from robust for making statements about DM interactions. On that note, there are proposals for finding white dwarfs in dwarf galaxies like Segue I and II [120].
### Non-annihilating dark matter converting white dwarfs into black holes
If enough non-annihilating DM accumulates in WDs, the DM can collapse, and subsequently form a small black hole that accretes surrounding WD material, eventually consuming the entire WD [102; 67; 121]. The DM is typically assumed to be "asymmetric", since in such models DM generally does not self-annihilate [122]. If in the process of accumulation and collapse DM self-annihilates efficiently, too much of it may be lost to form a black hole in the WD core.
The routine by which DM could form a small black hole in the interior of a WD is very similar to the more studied case of DM forming black holes in NSs5, which is detailed in length in Section 4.4. To avoid repetition, here we will emphasize aspects that are distinct from the case of the NS. The WD-to-BH conversion process is as follows. First, DM accumulates in the WD over time, through scattering on nuclei or electrons in its interior. Then, the captured DM thermalizes with the WD interior, i.e., after repeated scattering it is expected to localize within a small volume determined by the WD's internal temperature and gravitational potential.
Footnote 5: See also Ref. [123; 124; 125], which study black hole formation in other astrophysical bodies like the Earth, Sun, and Population III stars.
One chief difference here between WDs and NSs is that during thermalization, DM will scatter with a Coulomb lattice of ions in the core of the WD, which is stabilized by relativistic electron degeneracy pressure. This effect considerably suppresses DM-nucleus scattering rates at low momentum transfers, the regime that determines the thermalization timescale \(t_{\rm th}^{\rm WD}\). For a carbon WD, this is given by [67]
\[t_{\rm th}^{\rm WD}\simeq 20~{}{\rm yr}\,\left(\frac{10^{-40}~{}{\rm cm}^{2}} {\sigma_{n_{\chi}}}\right)\left(\frac{m_{\chi}}{10^{6}~{}{\rm GeV}}\right)^{2 }\left(\frac{10^{7}~{}{\rm K}}{T_{\rm WD}}\right)^{5/2}. \tag{41}\]
Thus for \(m_{\chi}>10^{10}\) GeV, it can take \(>\) Gyr for DM to thermalize with the WD interior. Another difference between DM collapsing to form black holes in WDs and NSs is that, during the collapse inside a WD, DM may trigger a star-destroying thermonuclear explosion. We now turn to this topic.
### White dwarf explosions via dark matter
Dark matter accumulated inside WDs might trigger a Type Ia-like supernova explosion through the deposition of enough energy to prompt runaway fusion reactions in the carbon/oxygen/neon interior of the WD [126; 66]; see also Ref. [127] for an early discussion of DM cores affecting Type Ia supernovae.
Figure 7: Illustration of mechanisms by which white dwarfs may be prompted to explode by dark matter. (a) DM accumulates to the point of collapse in the center of the WD, then while collapsing (or after collapsing and forming a black hole) heats the WD to a temperature inducing a thermonuclear chain reaction. (b) The internal potential or mass energy of spatially extended DM is deposited as WD nuclei enter its state, prompting local heating that initiates the thermonuclear runaway. (c) Macroscopic DM transiting the WD transfers explosive kinetic energy via scattering on WD constituents.
More generally, DM triggering WDs into supernovae can proceed in a number of ways:
* Attendant to DM converting WDs to black holes, DM can collect into a core region of the WD, collapse, and as a result of the collapse, deposit enough energy to ignite the WD. Ignition can occur either directly through nuclear scattering during the collapse of the DM core [66, 126, 47] or through the evaporation of a small black hole that forms out of the collapsed DM core [128, 121, 47].
* DM can have internal properties that result in energy being liberated as WD particles enter the DM state. A simple example of this is captured (and possibly thermalized) DM annihilating and depositing energy in the WD medium with which it is admixed [129]. Other interesting possibilities are composite DM with an internal potential for baryons [130], solitonic Q-ball DM that absorbs baryonic charge and dissociates nuclei in the process [129], monopoles that possibly induce nucleon decay in similar fashion (Sec. 4.1.2), and accretion of WD carbon ions onto a black hole formed from collapse of electrically charged DM states [47].
* During an initial transit through the WD, DM can deposit kinetic energy gained by falling into the WD's gravitational potential. The DM could be in the form of primordial black holes (PBHs), in which case energy is transferred via dynamical friction [126, 131], or particles inducing nuclear scatter recoils [129, 132, 133]. Tightly bound asteroid-like DM triggering WD explosions via stellar shocks has also been suggested [134].
However the WD is heated, a number of requirements must be met for the sparked thermonuclear reactions to sustain themselves and cause the WD to explode. These requirements were described in Sec. 2.5. We now discuss some subtle aspects of this phenomenon as explored in the literature.
A detailed simulation of PBH energy deposition in a WD, including the effect of turbulent flows in the wake of the passing PBH, found that heavier PBHs were required to ignite WDs [131] compared to initial estimates [126]. This study employed a 1D+1D hydrodynamic simulation of the shock front created by a transiting PBH, and found that the development of hydrodynamic instabilities dissipating heat deposited through dynamical friction appeared to occur more rapidly than ignition processes, which were modeled using the same carbon fusion reaction rates used in Ref. [65]. Another study investigated ignition during DM core collapse using a system of differential equations that track the evolution of per-particle energies [135]. Contrary to the results of Refs. [66, 67, 121], this study found that the WD medium did not reach the temperature quoted in Ref. [65] for ignition. However, Ref. [135] used a differential equation that did not model the full nuclear reaction network used in Ref. [65], and used a carbon fusion rate with a normalization lower than and scaling different from the modern rate used in Ref. [65]. Future work looking to improve on WD ignition estimates should also consider convective flows of heated WD material moving carbon through the collapse region, and whether WD ignition can occur via thermal energy transported out of the collapse region. This is especially important, since studies on carbon fusion occurring inside DM bound states found that fusion can be induced in the region surrounding the collapsing region that is the source of heat, either through the evaporation of black holes of size much smaller than the ignition region, or through effluence of thermal energy outside of the transiting DM composite [67, 121, 130, 47]. Finally, the ignition of WD supernovae via oxygen burning typically requires a temperature somewhat higher than that of carbon6[65], and future detailed treatments of WD ignition by collapsing DM should account for this possibility.
Footnote 6: We thank Melissa Diamond for correspondence on this point.
Ref. [129] set limits on a wide range of DM-nucleus scattering cross sections and DM masses assuming point-like elastic scattering of DM particles on carbon in WDs. These constraints were placed using the
condition for the minimum stopping power,
\[n_{\rm T}\sigma_{\rm T\chi}m_{\rm T}v_{\rm esc}^{2}\gtrsim\rho\bar{c}_{p}T_{\rm crit }\lambda_{\rm trig}^{2} \tag{42}\]
for a heating region of size \(\leq\lambda_{\rm trig}\). However, this condition does not account for the finite number of nuclei that the DM particle would encounter during its transit through the heating region. Suitably modified, the above condition should be
\[N_{\rm hit}\frac{m_{\rm T}v_{\rm esc}^{2}}{\lambda_{\rm trig}} \gtrsim \rho\bar{c}_{p}T_{\rm crit}\lambda_{\rm trig}^{2}\,\] \[N_{\rm hit} = \max[n_{\rm T}\sigma_{\rm T\chi}\lambda_{\rm trig},n_{\rm T}^{1/3 }\lambda_{\rm trig}] \tag{43}\]
where \(N_{\rm hit}\) is the number of point-like scatters on nuclei as the DM particle traverses the length \(\lambda_{\rm trig}\). One can see that Eq. (43) reduces to Eq. (42) for \(\sigma_{\rm T\chi}<n_{\rm T}^{-2/3}\). Ref. [129] considers a 1.25 \(M_{\odot}\) WD to set limits, for which \(n_{\rm T}\simeq 10^{31}\) cm\({}^{-3}\), implying that \(\sigma_{\rm T\chi}\lesssim 2\times 10^{-21}\) cm\({}^{2}\) for Eq. (42) to be valid. However, \(\sigma_{\rm T\chi}>10^{-12}\) cm\({}^{2}\) is shown to be excluded in Ref. [129]. One could also see the error in this result by estimating the maximum energy transferred by DM elastic recoils during a linear transit across a length \(\lambda_{\rm trig}\). This is \((n_{\rm T}^{1/3}\lambda_{\rm trig})(m_{\rm T}v_{\rm esc}^{2})\simeq 10^{4.5}\) GeV, which may be compared with the trigger energies ranging across WD masses, \(10^{17-24}\) GeV (Sec. 2.5). One could contrast this analysis against Refs. [132; 133], which considered WD explosions triggered by the transit of macroscopic composite DM. In these studies, the requisite number of WD nuclei within a trigger volume may indeed be excited to ignite the region into runaway fusion. In Fig. 11 bottom left panel we show the masses and radii of DM mini-clusters constrained by the observed existence of WDs in our Galaxy, taken from Ref. [133]. Overlaid here are contours of the minimum DM-nucleus elastic scattering cross sections required to transfer sufficient kinetic energy to the WD trigger volume to induce stellar explosion.
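The two numerical statements above are easy to reproduce. In the sketch below, the carbon target mass, \(v_{\rm esc}^{2}\simeq 10^{-3}\), and \(\lambda_{\rm trig}\simeq\lambda_{1}\) are representative inputs of ours rather than values quoted in Ref. [129]:

```python
n_T, lam = 1e31, 1.3e-4      # cm^-3 and cm, representative WD values
m_T, vesc2 = 11.2, 1e-3      # GeV (carbon-12) and assumed (v_esc/c)^2
sigma_sat = n_T ** (-2 / 3)  # cm^2; above this, Eq. (42) over-counts scatters
E_max = n_T ** (1 / 3) * lam * m_T * vesc2  # GeV deposited over one trigger length
print(f"sigma_sat ~ {sigma_sat:.1e} cm^2, E_max ~ {E_max:.1e} GeV")  # ~2e-21, ~3e4
```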
A number of phenomena have been linked to the DM-induced ignition of thermonuclear explosions in WDs. It has been posited that DM core collapse in WDs might account for a large fraction of observed Type Ia supernovae [66], as a solution to the Type Ia progenitor problem [60] and consistent with the apparent observation of sub-Chandrasekhar WDs as the origin of most Type Ia supernovae [136]. Reference [66] also found that a trend in existing Type Ia data [137], showing that more massive progenitors explode sooner, is consistent with certain DM models that induce WD explosions through DM core collapse, where this would occur sooner for heavier WDs. The accumulation in certain sub-Chandrasekhar WDs of charged massive particles (CHAMPs) making up DM, which might occur preferentially outside galaxies with magnetic fields that serve to deflect CHAMPs, could be an explanation of the distribution of calcium-rich gap transient WD supernovae [47] that do explode preferentially on the outskirts of galaxies [138]. Finally, a separate study has investigated whether WD explosions from DM could explain the aforementioned Ca-rich gap transient distribution, through the ignition of WDs in dwarf spheroidal galaxies expected to be located at some distance from larger galactic DM halos [139].
### Dark matter's influence on white dwarf equations of state
White dwarf mass-radius relationships can also be observably impacted by DM. If a substantially massive core of DM accumulated in the interior of a WD, its stable configurations would be altered through revised TOV equations [140; 127; 141]. For a typical circumambient DM density, the amount of collisionless DM required to induce these effects, \(10^{-4}-10^{-1}\,M_{\odot}\), well exceeds what could be collected in the WD over the lifetime of the universe; see Eq. (39). However, future studies could investigate whether such a large quantity of DM might be collected through collisional accretion, analogous to the NS treatment in
Ref. [142] (discussed in Sec. 4.2). Another effect comes through the axion: its existence implies that its non-derivative coupling to nucleons would displace it from its usual minimum in a finite-density medium. This results in a reduction of the nucleon mass and alters the TOV equations [143].
## 4 The neutron star as a dark matter laboratory
### Dark matter kinetic and annihilation heating of neutron stars
#### 4.1.1 Capture and kinetic heating
Neutron stars are excellent captors of particle dark matter by virtue of their extreme densities and steep gravitational potentials, and also quite serviceable as thermal detectors thanks to their typically small temperatures. While the capture of DM in NSs and its subsequent thermal relaxation was first treated in Ref. [11], it was only recently realized that this could be a minimal probe of dark matter scattering on Standard Model (SM) states: the transfer of DM kinetic energy to the NS's constituent particles during the infall of DM at semi-relativistic speeds overheats the NS [152]. It was also proposed that upcoming infrared telescopes, _e.g._, the Thirty Meter Telescope (TMT) [153] and the Extremely Large Telescope (ELT) [154] are sensitive to this "dark kinetic heating" mechanism [152] for neutron stars out to about 100 pc from Earth; a study has also been dedicated to the sensitivity at the recently launched James Webb Space Telescope (JWST) [155; 45], which has shown that finding an NS much closer than 100 pc would likely be required. Thermal observations of nearer pulsars could be made following the discovery of old, isolated NSs in radio telescopes such as FAST [156], CHIME [157] and SKA [158]. Though their \(B\) fields and rotational velocities are expected to be low, implying they populate regions near the "pulsar death line" in \(P\)-\(\dot{P}\) space beyond which NSs are supposed to stop pulsing, NSs have been observed beyond the death
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline effect & change in capture rate & applicability & reference \\ \hline \hline \multirow{2}{*}{EoS of star effects} & \(\mathcal{O}(1)\): BSk20 \(\to\) 21 & all \(m_{\chi}\) & [145] \\ & none: QMC-2 \(\to\) BSk24 & all \(m_{\chi}\) & [146] \\ \hline mass-radius configuration & \(\mathcal{O}(100)\) as \(1\to 2.2M_{\odot}\) & all \(m_{\chi}\) & [147] \\ \hline nuclear self-energies & \multirow{2}{*}{30–100} & \(m_{\chi}>100\) MeV, any EoS & [148] \\ nucleon structure & & \(\mathcal{O}(10^{3})\) for 2 \(M_{\odot}\) NSs & [146] \\ \hline non-elastic scattering & subdominant & – & [146] \\ \hline “collective” effects & \(\mathcal{O}(1-10^{3})\) & 2 \(M_{\odot}\) NS, & [149] \\ & & \(m_{\chi}<100\) MeV, & \\ & & \(A^{\prime}\) mediator & \\ \hline superfluidity: energy gap & maybe \(\mathcal{O}(1)\) & \(m_{\chi}\lesssim 35\) MeV, & [48] \\ & & single phonon excitation & [142] \\ \hline NS opacity/ extinction factor & \(\mathcal{O}(1)\) & \(m_{\chi}>\) GeV & [147] \\ \hline relativistic kinematics & \(\sim 4\) & \(m_{\chi}>\) GeV & [147] \\ & \(\sim 10\) & \(m_{\chi}<\) GeV & [147] \\ \hline gravitational focusing & \(<2\) & all \(m_{\chi}\) & [147] \\ \hline light mediator kinematics & \(\mathcal{O}(1)\) & \(m_{\phi}/\mu_{\rm red}<10^{-1}\) & \\ & voided & \(m_{\phi}/m_{\chi}<10^{-4}\) & [150] \\ \hline DM halo velocity distribution & \(<2\) & all \(m_{\chi}\) & [151] \\ \hline \end{tabular}
\end{table}
Table 1: Known effects that modify the rates of dark matter capture in neutron stars. See Sec. 4.1.3 for further description.
Figure 8: _Top left._ Cartoon showing the dark kinetic heating effect in neutron stars. Scattering interactions of the infalling dark matter flux contribute to the luminosity of a typical NS at the level of a 1500 K blackbody temperature. _Top middle._ The nucleon Auger effect that contributes to kinetic (and possibly annihilation) heating by dark matter in neutron stars. The total energy deposited after scattering turns out to be the dark matter energy transfer, although physically it comes as the sum of two contributions: the energy spilled during the rapid filling of the hole left behind by the struck target, and the energy carried by the target in excess of the Fermi energy. _Top right._ The breaking and re-pairing of Cooper pairs that contributes to kinetic (and possibly annihilation) heating by dark matter in neutron stars. This phenomenon takes place for dark matter with mass above about 35 MeV; for smaller masses, dark matter capture proceeds through collective excitations in the nucleon superfluid medium. _Bottom left._ Cartoon showing possible additional heating of neutron stars via self-annihilations of dark matter possibly collected in a thermalized volume. This highly model-dependent process could heat the NS to blackbody temperatures around 2000 K. _Bottom right._ As a function of NS mass, NS effective temperatures imparted by dark kinetic+annihilation heating that can be measured at the James Webb Space Telescope at various signal-to-noise ratios, taken from Ref. [45]. The band denotes variations over NS radii predicted by numerous equations of state as well as NS-DM relative velocities from estimates by various NS population models. See Sec. 4.1.1 for further details.
Figure 9: _Top._ Capture cross section sensitivities for light dark matter scattering in a neutron star crust (_left_) (via excitation of superfluid phonons in the inner core) and in the NS core (via Pauli-blocked contact scattering on neutrons, although see Sec. 4.1.1 for a discussion on scattering in the superfluid core), and for heavier dark matter scattering in various layers of the crust and the core (_right_). These two plots are taken from Ref. [48]. See Sec. 4.1.1 for further details. _Bottom._ Sensitivities to the cutoff of effective CP-even scalar interactions of dark matter with relativistic, degenerate electrons in a neutron star, for DM that is spin-1/2 (_left_) and spin-0 (_right_). Also shown are the sensitivities for interactions with muons, protons and neutrons. The electron scattering limits are seen to widely complement terrestrial searches. These two plots are taken from Ref. [144]. See Sec. 4.1.1 for further details.
line [93; 91; 94; 92], calling into question models of NS pulsation (as also discussed in Sec. 2.8). It is estimated that about \(10^{5}\) NSs in the Galaxy lie beyond the death line [93].
To illustrate the idea of dark kinetic heating let us consider the following representative NS configuration:
\[M_{\rm NS} = 1.5\ M_{\odot},\ \ R_{\rm NS}=12.85\ {\rm km}\] \[\Rightarrow v_{\rm esc} = \sqrt{\frac{2GM_{\rm NS}}{R_{\rm NS}}}\simeq 0.59. \tag{44}\]
where \(v_{\rm esc}\) is the escape speed at the surface. This configuration is obtained for a Quark Meson Coupling (QMC) EoS of matter [146].
For local DM density \(\rho_{\chi}\) and average DM-NS relative speed \(v_{\rm rel}\) (which in the solar vicinity are 0.4 GeV/cm\({}^{3}\) and 350 km/s [159]), the DM mass capture rate is given by [11]
\[\dot{M}=m_{\chi}C_{n\chi} = \rho_{\chi}v_{\rm rel}\times\pi b_{\rm max}^{2}\times p_{\rm v} \times p_{\sigma}\, \tag{45}\] \[= p_{\rm v}p_{\sigma}\ \times\ 1.76\times 10^{25}\ {\rm GeV/s}\,\]
where \(b_{\rm max}=R_{\rm NS}(1+z)(v_{\rm esc}/v_{\rm rel})\) is the maximum impact parameter of DM intersecting the NS, with \(1+z=(1-v_{\rm esc}^{2})^{-1/2}\) a blueshift factor magnifying the NS radius to a distant observer, and \(p_{\rm v}\) is the probability that a scattered DM particle loses sufficient energy to be captured. For instance, this probability \(\simeq 1\) for scalar- or vector-mediated scatters, but may be suppressed for pseudoscalar-mediated interactions that favor soft forward scatters [150]. Eq. (45) is, of course, the DM capture rate for an isolated NS; an NS in a binary system could capture DM at a rate greater by up to a factor of a few thanks to gravitational assist [160].
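A back-of-the-envelope evaluation of Eq. (45) takes a few lines; the \(\mathcal{O}(1)\) residual relative to the quoted \(1.76\times 10^{25}\) GeV/s plausibly reflects the halo velocity-distribution refinements of Ref. [162] noted at the end of this discussion:

```python
import numpy as np

c = 3e5                                     # km/s
R, vesc = 12.85, 0.59 * c                   # km and km/s, from Eq. (44)
zp1 = 1.0 / np.sqrt(1.0 - (vesc / c) ** 2)  # blueshift factor 1+z
rho_x, v_rel = 0.4, 350.0                   # GeV/cm^3 and km/s, solar vicinity
b_max = R * zp1 * vesc / v_rel              # km, maximum impact parameter
Mdot = rho_x * (v_rel * 1e5) * np.pi * (b_max * 1e5) ** 2  # GeV/s
print(f"Mdot ~ {Mdot:.1e} GeV/s")           # ~3e25 GeV/s
```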
The probability that incident DM is scattered is given by \(p_{\sigma}=1-e^{-\tau}\simeq\tau=\sigma_{n\chi}/\sigma_{\rm cap}\), where \(\tau\) is the optical depth and the approximate equality holds in the optically thin limit. The "capture cross section" above which \(\tau>1\) in the NS core is:
\[\sigma_{\rm cap}=\begin{cases}\sigma_{0}(\bar{m}_{n}/m_{\chi})&,\ \ m_{\rm evap}<m_{\chi}<\bar{m}_{n}\,\\ \sigma_{0}&,\ \bar{m}_{n}\leq m_{\chi}\leq{\rm PeV}\,\\ \sigma_{0}(m_{\chi}/{\rm PeV})&,\ \ m_{\chi}>{\rm PeV}\,\end{cases} \tag{46}\]
where the NS geometric cross section \(\sigma_{0}=\pi(\bar{m}_{n}/M_{\rm NS})R_{\rm NS}^{2}\simeq 2.2\times 10^{-45} \,{\rm cm}^{2}\). One understands the dependence on \(m_{\chi}\) in Eq. (46) by considering the typical neutron recoil energy in the neutron rest frame:
\[\Delta E_{\rm DM}\simeq\frac{\bar{m}_{n}m_{\chi}^{2}(1+z)^{2}v_{\rm esc}^{2}} {(\bar{m}_{n}^{2}+m_{\chi}^{2}+2(1+z)\bar{m}_{n}m_{\chi})}\, \tag{47}\]
The above expression is a good approximation to describe DM-neutron scattering in the stellar rest frame as well, since the neutrons are typically non-relativistic: their Fermi momenta, varying over a few 100 MeV across the NS, are smaller than their \(\sim\)GeV mass. For \(m_{\chi}<\bar{m}_{n}\), only a fraction \(\simeq 3\Delta p/p_{F}\) of degenerate neutrons close enough to their Fermi surface receive the typical momentum transfer \(\Delta p=\sqrt{2\bar{m}_{n}\Delta E_{\rm DM}}\) to scatter to a state above the Fermi momentum \(p_{F}\simeq 0.4\ {\rm GeV}\). This "Pauli-blocking" effect gives \(\sigma_{\rm cap}\propto\Delta E_{\rm DM}^{-1/2}\propto m_{\chi}^{-1}\). The so-called evaporation mass,
\[m_{\rm evap}\simeq 20\ {\rm eV}\left(\frac{T_{\rm NS}}{10^{3}\ {\rm K}} \right), \tag{48}\]
is the DM mass below which the thermal energy of the NS would kinetically eject the captured DM from the stellar potential well [145, 161]. For \(\bar{m}_{n}\!\leq\!m_{\chi}\!\leq\!10^{6}\,\mathrm{GeV}\), a single scatter suffices for capture: \(\Delta E_{\mathrm{DM}}\simeq\bar{m}_{n}v_{\mathrm{esc}}^{2}\gamma^{2}>\mathrm{ KE}_{\mathrm{halo}}\), the DM halo kinetic energy. For \(m_{\chi}>\mathrm{PeV}\), multiple scatters are required for capture, so that approximately \(\sigma_{\mathrm{cap}}\propto\mathrm{KE}_{\mathrm{halo}}/\Delta E_{\mathrm{DM }}\propto m_{\chi}\). The expression in Eq. (45) can be refined to account for the velocity distribution of DM far from the NS [162].
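The recoil energy of Eq. (47), and with it the scalings of Eq. (46), can be explored in a few lines (benchmark numbers from Eq. (44)):

```python
import numpy as np

mn, vesc2 = 0.939, 0.59 ** 2      # neutron mass in GeV and (v_esc/c)^2
zp1 = 1.0 / np.sqrt(1.0 - vesc2)  # blueshift factor 1+z

def dE(mx):
    """Typical neutron recoil energy, Eq. (47), in GeV."""
    return mn * mx ** 2 * zp1 ** 2 * vesc2 / (mn ** 2 + mx ** 2 + 2 * zp1 * mn * mx)

for mx in (1e-3, 1.0, 1e3, 1e6):  # DM masses in GeV
    print(f"m_chi = {mx:.0e} GeV -> Delta E ~ {dE(mx):.2e} GeV")
```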
The heating of the NS comes not only from the recoil of incident DM but from two other secondary effects. As depicted in Fig. 8, a target neutron (or a lepton) that is upscattered by DM leaves behind a hole in the Fermi sea. The hole is filled immediately by a nearby neutron from a higher energy level, which in turn leaves a hole, and so on. This process spills over energy in the form of radiation and kinetic energy, and is reminiscent of the Auger effect observed in electron levels in superconductors; we will encounter this effect as a means of NS internal heating in Sec. 4.8. The net energy deposited in the NS by this effect, \(E_{\rm Auger}\), is simply the difference in energy between the Fermi surface and the position of the original hole. The energy carried by the struck nucleon/lepton in excess of the Fermi energy, \(E_{\rm kin}\), is dissipated as kinetic energy above the Fermi surface. Thus the total energy deposit \(E_{\rm Auger}+E_{\rm kin}\) comes out to be simply the DM recoil energy \(\Delta E_{\rm DM}\). Yet another effect comes from the superfluidity of nucleons (see Sec. 2.7). For \(m_{\chi}\gtrsim 35\) MeV, DM participates in elastic scattering by first breaking a nucleon Cooper pair, which is bound with an energy given by the superfluidity energy gap \(\sim\) MeV. The absorbed \(\sim\) MeV energy is redeposited into the NS when the free nucleon finds another and pairs up, liberating the gap energy. For \(m_{\chi}\lesssim 35\) MeV nucleons in the NS might not scatter elastically as there isn't enough energy transfer to break nucleon Cooper pairs, leaving DM to be captured via collective excitations instead [48, 142]. Light DM capture in certain models through collective effects in NSs has been studied [149]. The presence of DM self-interactions can enhance the capture rate by orders of magnitude, as initially captured DM particles can serve as captors of ambient DM [163].
Once captured in the potential well, a DM particle repeatedly scatters on and thermalizes with the NS until its orbit shrinks to within the radius of the star, by which time most of its kinetic energy is transferred. Under equilibrium, the kinetic power of the infalling dark matter, constituting the NS heating rate, equals the rate at which photons are emitted from the NS surface, constituting the NS cooling rate. The latter is dominated by such photon emission for NSs older than \(\sim\)Myr, as we saw in Sec. 2.6. The NS luminosity corresponding to a temperature \(T\) (in the NS frame) is then \(L=z\dot{M}=4\pi\sigma_{\rm SB}R_{\rm NS}^{2}T^{4}\), which attains a maximum value \(L_{\rm max}\) for unit capture probabilities \(p_{\sigma}\) and \(p_{\rm v}\). For our representative NS configuration (Eq. (44)), \(L_{\rm max}=7.6\times 10^{24}\) GeV/s, corresponding to an NS temperature seen by a distant observer \(\widetilde{T}=T/(1+z)\) of \(\widetilde{T}=1400\) K. Temperatures in this range are measurable within reasonable integration times at current and imminent infrared telescope missions [152, 164], in particular at the recently launched JWST [45], and the forthcoming ELT and TMT. For instance, the NIRCam instrument at JWST could constrain the surface NS temperature at 1750 K with a signal-to-noise ratio (SNR) of 2 in 27.8 hr(\(d/10\) pc)\({}^{4}\), where \(d\) is the distance to the NS [152]; the IRIS instrument at TMT could do the same in 19.4 hr(\(d/10\) pc)\({}^{4}\). In the bottom right panel of Fig. 8 are displayed the NS effective temperatures constrainable at JWST at various SNRs for integration times of 5.5 hr and 24.3 hr, using the F150W2 filter on NIRCam. In this plot taken from Ref. [45], the band spans the range of the NS radii (which determines the range of DM capture rates) predicted by various EoSs, and integrates over the NS-DM relative velocities predicted by various NS population models in Refs. [165, 166]. These sensitivities are for the case of NSs being heated not only by the kinetic energy of infalling DM but also by DM annihilations, which we will discuss in Sec. 4.1.2. Searches for DM using NS thermal emissions are best carried out with NSs whose "standard" temperatures are expected to be below approx. 1000 K. Thus one would need NSs older than 10 Myr (Fig. 4), making the determination of their age via spin-down or kinematic considerations (Sec. 2.8) crucial. One would also need them sufficiently
isolated to ensure no accretion of material from a binary companion.
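As a consistency check, equating \(L_{\rm max}\) to blackbody emission from the stellar surface recovers the quoted \(\widetilde{T}\approx 1400\) K (the unit conversions are standard values we assume):

```python
import numpy as np

L = 7.6e24 * 1.602e-10                # W, converting L_max from GeV/s
R = 12.85e3                           # m, from Eq. (44)
zp1 = 1.0 / np.sqrt(1.0 - 0.59 ** 2)  # 1+z
sigma_SB = 5.670e-8                   # W m^-2 K^-4
T = (L / (4 * np.pi * R ** 2 * sigma_SB)) ** 0.25
print(f"T ~ {T:.0f} K, T~ ~ {T / zp1:.0f} K to a distant observer")  # ~1400 K
```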
The DM-nucleon scattering cross section may be so large that DM scatters dominantly with the \(\sim\)km-thick low-density crust of the NS before reaching the \(\sim\)20 km-diameter denser core. Moreover, the core may consist of exotic phases of high-density matter such as meson condensates and deconfined \(ud\) or \(uds\) quark matter, the latter of which may exist in a color-flavor-locked phase; in such cases, the dynamics governing DM scattering cannot be unambiguously computed, whereas the better understood crust can be treated robustly as a DM captor. DM scattering with the NS crust leads to surface emission of photons under thermal equilibrium analogous to capture in the NS core discussed above, hence the observational signatures of NS heating are unchanged. In Figure 9 we show the DM capture cross section \(\sigma_{\rm cap}\) for every layer of the NS described in Sec. 2.4, derived in Ref. [48] for a 1.8 \(M_{\odot}\) mass, 12.5 km radius NS. For DM masses below about 10 MeV (left panel), DM capture can occur by scattering on superfluid neutrons in the inner crust, and exciting phonons. The single-phonon emission mode is expected to dominate, which proceeds via a static structure function \(=\Delta p/(2m_{n}c_{s})\) that relates the per-nucleon cross section to the phonon-excitation cross section. Here \(c_{s}\) is the phonon speed. Due to the proportionality to the transfer momentum, \(\sigma_{\rm cap}\propto m_{\chi}^{-1}\), similar to the Pauli-blocking regime of the NS core discussed above. The latter sensitivity (applicable when the core is populated mainly by neutrons) is also shown for comparison in the plot. For DM masses above about 100 MeV (right panel), DM capture can occur by scattering on individual nucleons locked up in nuclei in the outer crust by transferring energies greater than their \(\sim\)MeV binding energy. Scattering on nuclei is generally suppressed: large \(\Delta p\) leads to loss of nuclear coherence over multiple nucleons, and small \(\Delta p\) leads to loss of coherence over multiple nuclei, described by a lattice structure function. Deeper down in the inner crust, heavier-than-100-MeV DM capture proceeds by scattering on loosely bound nucleons, and even further down, by scattering on the pasta phase. Pasta scattering may either be on individual nucleons at high DM masses or on multiple nucleons at low DM masses as described by response functions accounting for inter-nucleon correlations. A resonant peak in the response function is seen to enhance the capture sensitivity near \(m_{\chi}\simeq 100\) MeV. For comparison is also shown the DM capture cross section for scattering in an NS core dominated by neutrons.
Even in the absence of exotic phases, NS cores are expected to contain \(\sim\)10% level populations of protons, electrons, and muons thanks to beta chemical equilibrium. DM may possibly be leptophilic, such that scattering at tree level is solely on \(e^{-}\) and/or \(\mu^{-}\), or isospin-violating, such that scattering is dominantly on protons. NS capture and heating applies to these scenarios, too [152]. While the Fermi momenta of protons and muons are smaller than their mass, making them non-relativistic and amenable to the above treatment, those of electrons are 1-2 orders of magnitude greater than \(m_{e}\), warranting relativistic kinematics to treat their DM capture in the stellar rest frame [167, 145, 168, 169, 170, 171, 172, 144]. This also makes the treatment of Pauli-blocking non-trivial [170, 144]. In particular, the capture probability accounting for Pauli-blocking, relativistic scattering and summing over multiple scatters is [144]
\[df=\sum_{N_{\rm hit}}\,d\sigma_{\rm CM}\,v_{\rm Mol}\,dn_{\rm T}\,\frac{ \Delta t}{N_{\rm hit}}\,\Theta\left(\Delta E-\frac{E_{\rm halo}}{N_{\rm hit} }\right)\Theta\left(\frac{E_{\rm halo}}{N_{\rm hit}-1}-\Delta E\right) \Theta\left(\Delta E+E_{p}-E_{\rm F}\right)\,, \tag{49}\]
where \(v_{\rm Mol}\) is the Møller velocity that relates the cross section in any frame to that in the center-of-momentum frame (\(d\sigma_{\rm CM}\)), \(dn_{\rm T}\) is the differential volume of the target momentum space normalized to the Fermi volume, \(E_{\rm halo}\) is the DM halo kinetic energy, and \(\Delta E\) is the energy transfer. We refer the reader to Ref. [144] for a detailed formalism. In Figure 9's bottom panels we show the NS capture sensitivity to contact interaction cutoffs versus \(m_{\chi}\) for scalar-type operators involving spin-1/2 and spin-0 DM. For electron scattering the NS capture reach is seen to be orders of magnitude greater than that of terrestrial direct searches for \(m_{\chi}>\) MeV, and indeed completely complements the latter for sub-MeV DM masses.
NS capture-and-heating can also provide orders-of-magnitude improvement over Earth-bound searches for DM with scattering that is
1. spin-dependent, since scattering directly on fermions instead of nuclei does not lead to the loss of nuclear coherence that limits spin-dependent searches at direct detection [164; 173; 174],
2. and/or velocity-dependent [164; 173; 175], since semi-relativistic DM speeds at the point of capture overcome velocity-suppressed scattering rates,
3. inelastic [174; 175; 152], since again the high DM speeds ensure that \(\mathcal{O}(100)\) MeV mass splittings between the DM and its excited state can be probed, as opposed to \(\mathcal{O}(100)\) keV at direct detection, and
4. below the so-called neutrino floor at direct searches, coming from irreducible neutrino backgrounds that are irrelevant for NS capture; see Fig. 9 top right panel,
5. with heavier-than-PeV DM, where DM capture proceeds mainly through multiple scattering in transit [176; 177; 152].
#### 4.1.2 Dark matter self-annihilations, nucleon co-annihilations, and induced nucleon decay
While the discussion above focused on NS heating from the transfer of captured DM kinetic energy, applicable to any particulate dark matter model (in particular to non-annihilating DM such as asymmetric DM), certain scenarios may lead to DM annihilation inside the NS that further brightens it [178; 162], thereby facilitating observations [152; 48; 45; 164] and in some cases reducing telescope integration times by a factor of 10. For instance, JWST/NIRCam could constrain a 2480 K NS, heated by local DM kinetic energy plus annihilations, with SNR 2 in 2.5 hr \((d/10\,{\rm pc})^{4}\), and TMT/IRIS could do so in 0.56 hr \((d/10\,{\rm pc})^{4}\) [152]; compare these with the kinetic-heating-only exposure times in Sec. 4.1.1. Fig. 8 shows JWST sensitivities in more detail, as discussed in Sec. 4.1.1.
Self-annihilations of DM into most SM states would result in NS heating, the exception being neutrinos with sub-100 MeV energies, whose optical depth in the NS material is too small for them to be trapped [179]. In any case, this phenomenon relies intricately on whether or not the DM thermalizes with the NS within its lifetime, since DM may annihilate much more efficiently if it is collected within a small volume in the NS core; this is a highly model-dependent question [167; 172; 48], as discussed in Sec. 4.4.1. To understand this, consider the evolution of the number of DM particles \(N_{\chi}\) within a volume \(V\) of the NS self-annihilating with a thermally averaged cross section \(\langle\sigma_{\rm ann}v\rangle\), and its solution:
\[\frac{dN_{\chi}}{dt} = C_{\chi}-\frac{\langle\sigma_{\rm ann}v\rangle N_{\chi}^{2}}{V}\,\] \[N_{\chi}(t) = \sqrt{\frac{C_{\chi}V}{\langle\sigma_{\rm ann}v\rangle}}\tanh \left(\frac{t}{\tau_{\rm eq}}\right), \tag{50}\] \[\tau_{\rm eq} = \sqrt{\frac{V}{C_{\chi}\langle\sigma_{\rm ann}v\rangle}}\,\]
where \(C_{\chi}=C_{n\chi}+C_{\chi\chi}\) is the total DM capture rate via scattering on nucleons (Eq. (45)) and, through self-interactions, on DM already accumulated in the NS, and \(\tau_{\rm eq}\) is the characteristic timescale for equilibrium between capture and annihilation to be established, after which \(N_{\chi}(t)\) achieves a steady state (\(dN_{\chi}/dt\to 0\)). Thus for \(t>\tau_{\rm eq}\), the total annihilation rate equals the capture rate. When \(V\) is the thermal volume (Eq. (52)), one
can then compute the minimum annihilation cross section required for capture-annihilation equilibrium to occur well within the age of an observed NS, \(\tau_{\rm NS}\). Using a partial-wave expansion \(\langle\sigma_{\rm ann}v\rangle=a+bv^{2}\), the condition may be written for \(s\)-wave and \(p\)-wave domination as [172]
\[a >7.4\times 10^{-54}~{}{\rm cm}^{3}/{\rm s}\,\bigg{(}\frac{{\rm Gyr} }{\tau_{\rm NS}}\bigg{)}^{2}\bigg{(}\frac{C_{\rm max}}{C_{\chi}}\bigg{)}\bigg{(} \frac{{\rm GeV}}{m_{\chi}}\,\frac{T_{\rm NS}}{10^{3}~{}{\rm K}}\bigg{)}^{3/2}\,\] \[b >2.9\times 10^{-44}~{}{\rm cm}^{3}/{\rm s}\,\bigg{(}\frac{{\rm Gyr} }{\tau_{\rm NS}}\bigg{)}^{2}\bigg{(}\frac{C_{\rm max}}{C_{\chi}}\bigg{)}\bigg{(} \frac{{\rm GeV}}{m_{\chi}}\,\frac{T_{\rm NS}}{10^{3}~{}{\rm K}}\bigg{)}^{1/2}\, \tag{51}\]
where \(C_{\rm max}\) is the maximum capture rate achieved at the saturation cross section.
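As a minimal numerical sketch of Eq. (50) (the values of \(C_{\chi}\), \(\langle\sigma_{\rm ann}v\rangle\), and the thermal volume below are illustrative assumptions, cf. Eq. (52), rather than the output of any specific model):

```python
import numpy as np

# Illustrative values (assumptions, not tied to a specific model):
C_chi = 1.0e25                       # total capture rate [1/s]
sigv  = 1.0e-50                      # <sigma_ann v> [cm^3/s]
V_th  = (4/3) * np.pi * 20.0**3      # thermal volume for r_th ~ 20 cm [cm^3]

tau_eq = np.sqrt(V_th / (C_chi * sigv))   # equilibration timescale, Eq. (50)
N_eq   = np.sqrt(C_chi * V_th / sigv)     # steady-state DM population

def N_chi(t_s):
    """Accumulated DM population at time t [s], Eq. (50)."""
    return N_eq * np.tanh(t_s / tau_eq)

GYR = 3.156e16  # seconds per Gyr
print(f"tau_eq ~ {tau_eq/GYR:.3f} Gyr; N(10 Gyr)/N_eq = {N_chi(10*GYR)/N_eq:.3f}")
```

For these fiducial numbers \(\tau_{\rm eq}\sim 0.02\) Gyr, so capture-annihilation equilibrium is reached well within the age of an old NS.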
Interestingly, a thermal Higgsino of 1.1 TeV mass, a largely unconstrained true electroweak WIMP [120], would thermalize with just the NS crust rapidly enough to heat a reasonably old NS through annihilations in equilibrium with the rate of capture [48]. We also remark that, because the NS luminosity from kinetic heating and from annihilation heating scales differently with the NS mass and radius, it should in principle be possible to distinguish between the two heating mechanisms using an ensemble of NSs [152; 45].
An interesting way to probe DM self-annihilations in NSs is possible if the primary annihilation products are feebly interacting DM-SM mediators that live long enough to exit the star before decaying to SM states. One could search for a flux of these states sourced by DM "focused" in celestial bodies via capture. For gamma-ray final states, limits have been imposed with Fermi and H.E.S.S. data on DM-nucleon scattering and DM self-annihilation cross sections using brown dwarfs for sub-GeV DM and NSs for TeV mass DM [106]. For neutrino final states, limits in the TeV-PeV range come from IceCube, KM3NeT and ANTARES [180; 181].
Dark matter species that carry negative baryon number, arising for instance from "hylogenesis" models, could annihilate with baryons in an NS post-capture, leading to possibly observable heating signals [182; 183; 184]. Such co-annihilations with nucleons are also possible in models of "dark baryons" that undergo quantum mixing with the neutron [185]. Yet another co-annihilation-like scenario resulting in NS heating arises when a component of the DM comes in the form of magnetically charged black holes (MBHs) [186]. Subspecies that come with electroweak-symmetric coronas are expected to be near-extremal in mass; however, upon encountering NSs they may become non-extremal: first they may be captured in the NS after being stopped by the Fermi-degenerate gas, then they could absorb nucleons that are emitted back as (baryon number-violating) Hawking radiation, overheating NSs. A smaller deposit of heat could come from mergers of captured MBHs that enhance Hawking radiation, mimicking a self-annihilation process. Energy depositions from DM annihilations may also nucleate quark matter bubbles in the NS interior, resulting in emission of radiation, cosmic rays, and gravitational waves [187].
The production of x-rays and other high-energy effluents emitted from NSs, resulting from monopoles passing through and catalyzing nucleon decay, has been studied [188; 189]. This places a strong bound on the abundance of monopole species that induce nucleon decay, a well-motivated class of monopoles arising from symmetry breaking in Grand Unified Theories.
#### 4.1.3 Improvements and uncertainties
The above treatment has been improved by accounting for a number of physical effects in the NS, which in some cases leads to observational uncertainties; these effects are collected in Table 1. The largest uncertainty in the capture rate, spanning two orders of magnitude, comes from the unknown mass of the NS candidate that will be observed [147], unless some precise mass-radius measurement is performed. Other effects that may modify the DM capture rate, applicable to different DM and NS mass ranges, are variations
in the EoS of NS matter, self-energies from the nuclear potential, nucleon structure that suppresses coherent scattering, nucleon superfluidity, extinction in the NS in the optically thick regime, scattering on relativistic nucleons, gravitational focusing in the NS interior layers, suppression of scattering via the propagator of a mediator of mass smaller than the transfer momentum, and the Galactic velocity distribution of DM. Table 1 lists the appropriate references that treat these effects.
#### 4.1.4 Dark matter models that heat neutron stars through scattering and annihilation
Making use of the general effects discussed above, specific UV-complete and self-consistent DM models have been explored in the context of NS capture and heating. These include the supersymmetric partner of the Higgs field, the Higgsino, that captures through inelastic scattering to electrically neutral and charged excited states [152; 48], a generalization of this to electroweak multiplets [191], a model of DM with a vector force-carrier of a gauged \(L_{\mu}-L_{\tau}\) interaction [169], DM in the form of a GeV-scale "dark baryon" that mixes with the neutron [185], simplified models of DM (specifying a single state each for DM and the mediator) with various mediator species [192; 193; 194], DM that arises as a pseudo-Goldstone boson [195], models of dark sectors that can explain the muon \(g-2\) anomaly [196], and consistent models of DM interacting with nucleons through a pseudoscalar mediator: axion-like particles and a CP-odd state that arises in a Two-Higgs doublet model [173]. The sensitivities to parameters of some of these scenarios are shown in Fig. 10.
#### 4.1.5 Neutron star reheating mechanisms not involving dark matter
A search for DM reheating NSs in the late stages of their cooling must encompass understanding other astrophysical mechanisms that could possibly do the same. We discuss below those that feature prominently in the literature.
1. DM capture in NSs would not encounter a "neutrino floor" due to the very dilute ambient neutrino densities, which produce suppressed recoils/absorption on NS constituents owing to low cross sections and Pauli-blocking. However, it is natural to ask if there is an "ISM floor" from accretion of interstellar material. It turns out that old, isolated NSs with spin periods \(<1000\) seconds do not accrete ISM as they are in an _ejector phase_ [197]: a "pulsar wind" of outflow powered by the NS' magnetic field, being much denser than the inflowing material attempting accretion, pre-empts accretion via kinetic pressure. Even if the pulsar wind happens to be weak enough for the ISM to overcome it, there is a second barrier to accretion: the magnetosphere co-rotating with the NS will impart centrifugal acceleration to the ISM, spraying away the gas (the _propeller phase_). For NSs with unusually large spin periods of \(>1000\) seconds, these arguments do not apply; instead, infalling ISM would be deflected along the magnetic field lines of the NS and accretion would be confined to a small polar region, which can be distinguished from all-surface thermal emission. In any case, the ISM density in the local 100 pc is \(10^{-3}\) GeV/cm\({}^{3}\) [198], so that any ISM accretion will be outdone by present-day DM capture near geometric cross sections.
2. _Rotochemical heating_ could result from an imbalance in chemical potentials as the NS material is driven out of beta chemical equilibrium by the deceleration of the NS' rotation. Reactions that seek to restore chemical equilibrium deposit heat in the NS. This mechanism could occur for NSs born with small (sub-7 ms) pulsar spin periods for certain nucleon pairing models [199], a requirement in tension with studies that find that natal spin periods are likely \(\mathcal{O}(10-100)\) ms (see the references listed in Ref. [200]).
Figure 10: Sensitivities of self-consistent dark matter models to neutron star kinetic heating; see Sec. 4.1.4. _Top left_. [152] Electroweakino singlet and doublet mass parameters for various values of \(\tan\beta\equiv\) ratio of Higgs VEVs, which may be cornered through inelastic scattering of thermal Higgsino DM in the NS via excitation to charged and neutral states (regions marked by "\(\delta<\) GeV"). _Top right_. [173] As a function of DM mass, the gluonic coupling to an axion-like particle that mediates velocity-dependent scattering interactions. The gray region depicts limits from beam dumps, rare meson decays, and astrophysics. NS capture can also proceed through mediation by a CP-even scalar in the theory, which gives rise to limits from direct detection. _Bottom left_. [190] The orange region can be probed for spin-0 DM scattering on muons in the NS by exchanging a \(U(1)_{L_{\mu}-L_{e}}\) gauge boson. Also shown are constraints from DM self-interactions. _Bottom right_. [185] NS temperatures achieved by capture and heating of the anti-particle of DM carrying baryon number = 1, in a scenario where DM self-interacts repulsively and annihilates to the mediator \(\phi\) that then decays to SM states that deposit heat.
3. Other astrophysical late-time NS reheating mechanisms include [201; 202] _magnetic field decay_, which dissipates energy into the NS material; _crust cracking_, which releases accumulated strain energy when the NS crust breaks as the star relaxes from an oblate to a spherical shape; and _vortex creep_, in which superfluid vortex lines travel outward as the NS spins down and pin to the nuclear lattice in the inner crust, introducing a velocity difference between the crust and the superfluid core that dissipates energy in the star.
We note that these mechanisms are speculative, and none has been unequivocally observed. An exclusion set by non-observation of DM-induced heating at imminent telescopes would also rule out these mechanisms. Another notable point is that while the rotational power of NSs goes into dipole radiation, which in turn illuminates the nebula surrounding the pulsar as we saw in Sec. 2.8, it very likely does not contribute to the NS thermal luminosity. This is already apparent in the Crab Nebula example discussed in Sec. 2.8, but can also be inferred from x-ray emission bounds on observed pulsars, which have thermal luminosities 5-6 orders of magnitude smaller than their rotational power: see Table 5 of Ref. [203]. Further, the diffusion of the \(B\) field in the NS is also unlikely to heat the NS; as argued in Ref. [204], NSs older than about a Myr are cool enough for magnetic diffusion timescales to exceed the NS age, effectively shutting off \(B\)-field dissipation regardless of the initial strength of the field.
### Neutron stars and halo substructure
Numerous cosmologies predict enhanced small-scale power, for instance via an early matter-dominated era or DM self-interactions assisting primordial density perturbations, resulting in a substantial fraction of DM surviving in substructure variously termed clumps, subhalos, minihalos, and miniclusters [206; 207; 208; 209; 210; 211; 212; 213]. If DM has scattering interactions with the SM, and if the interacting component resides in clumps, direct searches may have observed no conclusive signal simply because the Earth has yet to encounter a subhalo since their inception. In this scenario, subhalo DM may be observed by its heating of old, nearby NSs: the latter may travel through DM clumps and capture constituent DM particles, giving rise to kinetic and/or annihilation heating.
In the top left panel of Fig. 11, taken from Ref. [142], the cooling time of NSs is shown in green as a function of the NS surface temperature; in the same plot is shown the energy deposited by clumps in NSs during encounters, \(E_{\rm meet}^{T}\), as a function of the time between NS-clump encounters for various clump sizes. The values of \(E_{\rm meet}^{T}\) on the top x-axis correspond to the NS temperatures on the bottom x-axis imparted immediately following the encounter. For encounter times shorter than cooling times, the NS will glow at a steady-state luminosity, whereas for those longer than cooling times, NSs would be expected to glow brightly for short durations following encounters before dimming. In the latter case, sky surveys of large populations of NSs may be able to pick out the fraction that is still above some temperature to which the telescope is sensitive. In the top right panel, also taken from Ref. [142], are shown clump mass vs. radius regions that may be discovered by observing more than 100 NSs above \(10^{4}\) K in the local kiloparsec, _e.g._, by Roman/WFIRST and Rubin/LSST, and excluded by observing a single NS with temperature \(<1000\) K, _e.g._, by JWST, ELT and TMT. Also shown is a region that is already excluded by the observation of the coldest (\(<\)30,000 K) known NS PSR J2144\(-\)3933 by the Hubble Space Telescope (HST) [214] for clumps made of dissipative or strongly self-interacting DM, which would accrete onto NSs through the Bondi-Hoyle-Lyttleton mechanism [215; 216; 217].
In addition, in the presence of a long-range fifth force, NS heating by clumps may be enhanced by greater focusing effects, greater DM kinetic energies upon arrival at the NS surface, and seismic oscillations induced by an effective tidal force. In the bottom right panel of Fig. 16, taken from Ref. [218], is shown the
Figure 11: _Top._ Neutron star cooling timescale versus surface temperature (obtained from Eq. (30)), superimposed on a plot of time between DM clump-NS encounters versus the energy deposited by kinetic heating during the passage of a clump for various NS radii (_left_). The ticks on either x-axis and either y-axis correspond one-to-one to each other. This plot shows the region in which NSs are expected to glow at a steady temperature, so that observing a single NS is enough to set constraints, and the region where overheated NSs cool down rapidly between clump encounters, so that astronomical surveys are required to observe the fraction of overheated NSs in an ensemble. On the _right_ are future sensitivities of astronomical observations of NSs to DM clump radii and masses, exploiting dark kinetic heating, seen to be complementary to limits from other experiments. These limits are valid for DM-nucleon cross sections greater than the values for which the effects in these searches are relevant. These two plots are taken from Ref. [142]; see Sec. 4.2 for further details. _Bottom left._ Dark clump masses and radii constrained by compact stellar thermonuclear explosions, occurring for the minimum DM-nucleus cross sections per DM mass overlaid; see Sec. 4.2. _Bottom right._ For two different internal density profiles, the mean flux density of transient radio signals at various telescopes from encounters of axion miniclusters as a function of the transit time (= signal burst duration), taken from Ref. [205]. See Sec. 4.11 for further details.
limit from overheating PSR J2144\(-\)3933 on the effective NS-clump coupling versus clump mass, for four values of the range of the fifth force arising from a Yukawa potential [219]. (We do note that the DM need not be in the form of a clump for these limits to apply, but could also be a tightly bound composite.) The curve labelled "NS kinetic heating" corresponds to having an additional short-range interaction enhance DM capture. These limits are complementary to those coming from the Bullet Cluster on DM self-interactions mediated by the light mediator, from weak equivalence principle tests using Galacto-centric motions of celestial bodies on the inter-baryonic force mediated by the same, and from the 15 year dataset of the NANOGrav pulsar timing array (see also Sec. 4.10.1).
Yet another signature of clumps with nucleon scattering interactions is thermonuclear explosions induced in compact stars, as discussed in Sec. 3.3. These could be Type Ia-like supernovae in carbon-oxygen WDs or x-ray superbursts in the carbon ocean layer in NS crusts (Sec. 2.5). Constraints from the observed frequency of NS superbursts (Sec. 4.3) and from the existence of WDs (Sec. 3.3) are shown in the bottom left panel of Fig. 11 in the plane of clump size and mass; the contours overlaid are the minimum reduced nuclear cross sections required to ignite a trigger mass of the stellar material. This method of constraining clumps could be extended to those with the baryonic long-range forces discussed above. In that case, limits on the effective coupling apply to far smaller values (all the way to unity) than shown in the bottom right panel of Fig. 16, and to much higher clump masses. See Ref. [133].
Clumps encountering NSs can also be made of axions, leading to interesting signatures depicted in the bottom right panel of Fig. 11, which we discuss in Sec. 4.11. We also note that the phenomenology of black hole formation inside NSs (Sec. 4.4) would be applicable here if NS-clump encounters are frequent enough.
### Dark matter inducing superbursts in neutron stars
Superbursts in NS carbon oceans, described in Sec. 2.5, can be induced by transiting DM if it is sufficiently heavy to deposit the requisite trigger energy. Ref. [132] set limits on the cross sections and (super-Planckian) masses of macroscopic DM opaque to nuclei by satisfying the runaway criteria (Eqs. (16) and (18)) and requiring that the time between DM-NS encounters be smaller than the inferred maximum recurrence time of the superburst in 4U 1820\(-\)30. Ref. [133] set limits on the masses, radii, and interaction strengths of dark clumps (shown in Fig. 11 bottom left panel) and nuggets with long-range baryonic forces, using inferred recurrence times of the six superbursts (out of 16 detected in total) that have been observed to repeat [63; 64].
### Dark matter that implodes neutron stars into black holes
Dark matter that is captured by an NS, after repeated re-scattering with the NS medium, will settle into a small thermalized region at the center of the NS. As more DM is collected, this spherical agglomeration can grow to a large enough mass that it collapses and forms a small black hole, which may (depending on its mass) subsequently accrete the entirety of the NS, transforming it into a solar mass black hole [11]. The processes of DM capture in NSs, thermalization, accumulation to the point of collapse, collapse, formation of a black hole, and its possible evaporation via Hawking radiation or growth consuming the NS, have been investigated in Refs. [220; 162; 98; 178; 221; 222; 223; 224; 225; 226; 227; 66; 228; 229; 230; 231; 232; 128; 121; 233; 234; 235; 236; 123; 130; 237; 238]. In addition, possible astrophysical signatures of DM converting NSs to black holes have been identified in, _e.g._, Refs. [227; 239; 229; 230].
The kind of DM that is by and large studied in this context is "asymmetric dark matter", DM primarily made of its _particles_ as opposed to a symmetric population of _particles and anti-particles_. This emulates the
visible universe, which is primarily matter (electrons, nucleons) and not anti-matter; indeed, the asymmetry in DM may be linked to that of the visible sector [122; 240], but this is not necessary for the discussion that follows. The primary feature that permits asymmetric DM to convert NSs into black holes is that it is typically7 non-annihilating, and so as it collects inside the NS, it is not expected to annihilate to Standard Model states. This may be compared with symmetric, annihilating DM discussed in Sec. 4.1.2. Investigation into what fraction of the DM may self-annihilate or co-annihilate with nucleons, while still forming a black hole inside the NS, was undertaken in Refs. [225; 226; 242].
Footnote 7: For the exception, see Ref. [241].
Another kind of DM which could convert NSs into black holes is primordial black holes [243; 244]. A PBH captured in an NS can settle inside, accrete NS material, and convert the NS into another black hole [245; 246; 247; 248; 249; 229; 250; 251; 252; 253; 254; 255; 256]; this is detailed in Section 4.5. For the remainder of this sub-section we will focus on particle DM.
We now turn to details of the processes leading asymmetric dark matter to convert NSs into black holes. They proceed as follows: (1) DM is captured in the NS and thermalizes with the NS interior, forming a small ball of DM at the center, (2) the DM ball reaches a critical mass at which it collapses, and through some cooling process continues to collapse until (3) a small black hole forms which, provided accretion of NS material outstrips Hawking radiation, will result in the conversion of the NS to a black hole. Figure 12 left panel shows a simple schematic of this process.
#### 4.4.1 Dark matter thermalization in neutron stars
In step (1) above, the size of the thermalized region is determined by the temperature of the NS, which sets the final temperature of the DM particles, and by the central density of the NS, which sets the gravitational potential binding energy. A simple application of the virial theorem yields an estimated DM thermal
Figure 12: _Left._ Schematic of asymmetric dark matter converting a neutron star into a black hole. _Right._ Dark matter per-nucleon scattering cross section versus mass bounds on heavy fermionic asymmetric dark matter from the observation of old Milky Way pulsars that have not been converted to black holes [229], compared with terrestrial direct search limits and their neutrino floor. Also shown are prospects for observing NS mergers with accompanying kilonovae, localized to 1 kpc precision inside Milky Way-like spiral galaxies. A detailed discussion of Milky Way pulsar ages, and in particular PSR J1738+0333, which has a characteristic age confirmed by the age of its white dwarf companion, can be found in Ref. [257].
radius of [225]
\[r_{\rm th}\approx 20\ {\rm cm}\left(\frac{\rm GeV}{m_{\chi}}\right)^{1/2}\left( \frac{T_{NS}}{10^{3}\ {\rm K}}\right)^{1/2}\left(\frac{10^{15}\ {\rm g/cm^{3}}}{\rho_{\rm NSc}}\right)^{1/2}, \tag{52}\]
where \(\rho_{\rm NSc}\) is the NS central density. The time it takes for DM to sink to this region depends on a few timescales (see _e.g._, Ref. [123], Section 3 for a review), but usually the longest is the time it takes for DM to scatter with its lowest velocities/temperatures on nucleons, after having mostly settled inside the NS. A detailed calculation of this timescale requires modeling the NS core, and so the result will depend on the density, degeneracy, and possibly even new QCD phases in the NS interior. For neutrons treated as a degenerate fluid, we have [167]
\[t_{\rm th}\approx 3000\ {\rm yr}\ \frac{\frac{m_{\chi}}{m_{n}}}{\left(1+\frac{m_{\chi}}{m_{n}}\right)^{2}}\left(\frac{2\times 10^{-45}\ {\rm cm}^{2}}{\sigma_{n\chi}}\right)\left(\frac{T_{NS}}{10^{5}\ {\rm K}}\right)^{2}, \tag{53}\]
where this expression assumes a momentum-independent cross section for spin-1/2 DM scattering on nucleons via a heavy mediator. Extensions to spin-0 DM, Lorentz structures of DM-nucleon interactions leading to momentum-dependent cross sections, and light mediators were investigated in Ref. [172]. In the above expression, the thermalization timescale counter-intuitively _decreases_ with increasing DM mass above \(m_{n}\): one would naively expect that heavier DM takes _longer_ to thermalize. The effect comes about because \(t_{\rm th}\) is set by the inverse of the energy loss rate (in turn depending on the DM-nucleon scattering rate) in the NS degenerate medium with phase space restrictions, and this rate goes as positive powers of the (continually degrading) DM momentum \(k_{\rm cold}\). For DM energies close to the NS temperature, \(k_{\rm cold}\simeq\sqrt{3m_{\chi}T_{\rm NS}}\), implying energy is lost faster in the last few scatters for heavier DM, i.e., quicker thermalization. In Figure 13 we show the per-nucleon cross section or effective field theory coupling necessary for DM to thermalize inside an NS on 10 Gyr timescales for certain models.
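To make these scalings concrete, a short sketch evaluating Eqs. (52) and (53) (reading the prefactor mass in Eq. (53) as the nucleon mass; all other numbers are the fiducial values appearing in the equations):

```python
def r_th_cm(m_chi_GeV, T_NS_K=1e3, rho_c_g_cm3=1e15):
    """DM thermal radius, Eq. (52) [cm]."""
    return 20.0 * (1.0 / m_chi_GeV)**0.5 * (T_NS_K / 1e3)**0.5 \
               * (1e15 / rho_c_g_cm3)**0.5

def t_th_yr(m_chi_GeV, sigma_cm2=2e-45, T_NS_K=1e5, m_n_GeV=0.94):
    """Thermalization time, Eq. (53) [yr]; the prefactor x/(1+x)^2 with
    x = m_chi/m_n makes t_th *fall* for m_chi >> m_n."""
    x = m_chi_GeV / m_n_GeV
    return 3000.0 * x / (1 + x)**2 * (2e-45 / sigma_cm2) * (T_NS_K / 1e5)**2

for m in (1.0, 10.0, 1000.0):
    print(f"m_chi = {m:6.0f} GeV: r_th = {r_th_cm(m):5.1f} cm, "
          f"t_th = {t_th_yr(m):7.1f} yr")
```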
As discussed in Sec. 4.1.2, depending on the DM annihilation cross section, thermalized DM collected within \(r_{\rm th}\) can annihilate efficiently enough to yield interesting signals.
#### 4.4.2 Collapse of dark matter and formation of small black hole
In step (2), after enough DM has collected in the thermalized region in the NS, it will reach a critical mass at which it collapses. The exact density of DM required to initiate collapse will depend on its self-
Figure 13: _Left._[167] Cross section for dark matter to thermalize in a neutron star in 10 billion years, assuming a momentum-independent cross section with neutrons. _Right._[172] The case of scattering on neutrons through the scalar current operator indicated in the figure with mediator mass \(m_{\phi}\). Thermalization in 10 Myr and 10 Gyr for different NS temperatures, and a curve indicating parameters that lead to DM capture in the NS through geometric cross sections, are shown.
interactions, and by extension its equation of state and sound speed while contained in the NS. Assuming negligible self-interactions, the critical mass required for collapse is
\[M_{\rm crit}\approx 7\times 10^{46}\ {\rm GeV}\left(\frac{10^{7}\ {\rm GeV}}{m_{ \chi}}\right)^{3/2}\left(\frac{T_{\rm NS}}{10^{3}\ {\rm K}}\right)^{3/2}\left(\frac{10^{15}\ {\rm g/cm^{3}}}{\rho_{\rm NSc}}\right)^{1/2}\,. \tag{54}\]
For a detailed review of the conditions for collapse, see _e.g._ Section 4 of [123]. It is generally the case that if DM thermalizes rapidly through scattering with neutrons in the NS interior, then when it reaches the point of collapse it will also rapidly shed the gravitational energy required to form a black hole. This is because the temperature is higher during collapse, and hence the time to shed gravitational energy is typically shorter. Because this timescale is typically short, this part of the collapse dynamics is not always treated explicitly, but Refs. [66; 123; 128; 11] provide more detailed treatments, both in compact stars and in other astrophysical bodies. The time for the DM sphere to collapse below its Schwarzschild radius will depend on whether it cools via scattering with neutrons or through other radiative processes, _e.g._, emission of a light particle in the dark sector [66].
An additional consideration is whether enough DM will have collected to exceed the dark sector Chandrasekhar mass (analogous to Eq. (7)), parametrically of order
\[M_{\rm Chand,f}\approx\frac{M_{\rm Pl}^{3}}{m_{\chi}^{2}}\approx M_{\odot} \left(\frac{{\rm GeV}^{2}}{m_{\chi}^{2}}\right)\,, \tag{55}\]
while for bosons this is
\[M_{\rm Chand,b} \approx \frac{2M_{\rm Pl}^{2}}{m_{\chi}}\left(1+\frac{\lambda}{32\pi}\frac{M_{\rm Pl}^{2}}{m_{\chi}^{2}}\right)^{1/2} \rightarrow \begin{cases}2\frac{M_{\rm Pl}^{2}}{m_{\chi}}\,,&\lambda\ll 1\,,\\ \frac{\sqrt{\lambda}}{2\sqrt{2\pi}}\frac{M_{\rm Pl}^{3}}{m_{\chi}^{2}}\,,&\lambda>100\,m_{\chi}^{2}/M_{\rm Pl}^{2}\,,\end{cases} \tag{56}\]
where \(\lambda\) is the boson \(\phi\)'s repulsive self-interaction coupling arising in the Lagrangian \(\mathcal{L}\supset-(\lambda/4!)\phi^{4}\).
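A quick numerical comparison of Eqs. (54)-(56) (a sketch with fiducial NS parameters; whether collapse actually proceeds to a black hole depends on the model details discussed below):

```python
import math

M_PL  = 1.22e19   # Planck mass [GeV]
M_SUN = 1.12e57   # solar mass [GeV]

def M_crit(m_chi, T_NS=1e3, rho_c=1e15):
    """Critical self-gravitating mass, Eq. (54) [GeV]."""
    return 7e46 * (1e7 / m_chi)**1.5 * (T_NS / 1e3)**1.5 * (1e15 / rho_c)**0.5

def M_chand_fermion(m_chi):
    """Fermionic Chandrasekhar mass, Eq. (55) [GeV]."""
    return M_PL**3 / m_chi**2

def M_chand_boson(m_chi, lam=0.0):
    """Bosonic maximum mass, Eq. (56) [GeV], with quartic coupling lam."""
    return 2 * M_PL**2 / m_chi * math.sqrt(1 + lam * M_PL**2 / (32 * math.pi * m_chi**2))

m = 1e7  # GeV
print(f"M_crit                  = {M_crit(m)/M_SUN:.1e} M_sun")
print(f"fermionic Chandrasekhar = {M_chand_fermion(m)/M_SUN:.1e} M_sun")
print(f"bosonic (lam = 0.1)     = {M_chand_boson(m, 0.1)/M_SUN:.1e} M_sun")
```

For \(m_{\chi}=10^{7}\) GeV the collapsing mass of Eq. (54) comfortably exceeds the fermionic Chandrasekhar mass, so the collapse is not halted by degeneracy pressure.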
Attractive DM self-interactions could alter the amount of asymmetric fermionic DM necessary for collapse to a black hole [224; 225; 257]. The collapse of light fermionic DM is in principle permitted by the attractive self-interaction mediated by a light scalar; however, a detailed study of the final-stage collapse to a black hole has pointed out an important caveat [258]: for a simple scalar field potential consisting only of a mass term and a coupling to the fermions, the effective mass of the scalar could grow during DM fermion collapse, preventing collapse to a black hole. Whether bosonic self-interactions let DM form black holes in NSs is a non-trivial question. In particular, a large value of \(\lambda\) can shift bosonic asymmetric DM bounds from old NSs to higher DM masses [223; 225; 226]. Bosonic asymmetric DM forming black holes inside an NS does so by forming a Bose-Einstein condensate (BEC) [223; 222; 259; 225; 226; 230], from which collapse will proceed for GeV-mass DM. The dynamics of the BEC prior to and following collapse would affect whether a black hole is produced, and remains an area of investigation [259; 225; 226; 230; 260].
#### 4.4.3 Growth or evaporation of dark matter-formed black hole in the neutron star
After a black hole is formed inside the NS, step (3) is to determine whether it is so small that it will rapidly evaporate away via Hawking radiation, or whether it is so large that through accumulating surrounding baryonic material it will grow to consume the NS in a relatively short timeframe. Initial studies of this
process estimated whether Bondi accretion by the black hole would proceed faster than Hawking radiation, which is entirely determined by the initial mass of the black hole [223; 222]. Later studies incorporated the accumulation of DM particles additionally collected into the NS and onto the black hole, finding that this can substantially influence whether the black hole would grow to consume the NS [225; 226; 242].
Altogether, the requirement that the black hole grows in the NS is given by
\[\dot{M}^{\rm(NS\ accretion)}+\dot{M}^{\rm(DM\ accretion)}-\dot{M}^{\rm(Hawking )}>0, \tag{57}\]
where the first term is the NS accretion rate onto the black hole, the second is the DM accretion rate onto the black hole, and the third is the Hawking radiation rate. Each of these terms has been individually studied in the context of asymmetric DM that causes neutron stars to implode (a rough numerical comparison of the first and third terms is sketched after the list below):
1. _NS accretion_: The simplest treatment of NS accretion onto the black hole assumes Bondi accretion. In practice, the angular momentum of the NS fluid around the black hole, for a rapidly spinning NS, can diminish accretion relative to naive Bondi accretion, but the high viscosity of the NS fluid results in infall rates consistent with spherical Bondi accretion despite angular momentum effects [261]. Sufficiently small black holes will have a quantum penalty to the accumulation of neutrons due to the neutron de Broglie wavelength; this effect can be pronounced for black hole masses near the edge of growth vs. evaporation [238]. The accretion of the NS fluid onto the black hole inside an NS has been studied in a detailed simulation that accounts for hydrodynamic and general relativistic effects [233], finding that in the final stages of accretion, the mass of NS fluid ejected from the accretion zone is likely less than about \(10^{-4}\ M_{\odot}\).
2. _DM accretion_: The accretion of DM onto the small black hole inside the NS can substantially affect whether it grows or shrinks, especially when the DM accretion rate onto the NS is maximized [225; 226; 242; 230].
3. _Hawking radiation_: There is a correction to the evaporation rate of black holes inside NSs, coming from the Pauli blocking of Hawking radiation, since the region around the black hole will be inhabited by a degenerate sea of SM fermions [262].
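As a rough order-of-magnitude sketch of the competition in Eq. (57) between NS (Bondi) accretion and Hawking evaporation (dropping the DM-accretion term and the quantum and Pauli-blocking corrections listed above; the assumed density and sound speed are illustrative):

```python
import math

G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34   # SI units
M_SUN = 1.989e30                              # kg

rho_c = 1e18                 # NS central density [kg/m^3] (assumed)
c_s   = c / math.sqrt(3)     # sound speed of the core fluid (assumed)

def mdot_bondi(M):
    """Spherical Bondi accretion rate [kg/s], order-unity factors dropped."""
    return 4 * math.pi * G**2 * M**2 * rho_c / c_s**3

def mdot_hawking(M):
    """Hawking mass-loss rate [kg/s] for a Schwarzschild black hole."""
    return hbar * c**4 / (15360 * math.pi * G**2 * M**2)

# Initial BH mass at which accretion balances evaporation:
M_crit = (hbar * c**4 * c_s**3 / (61440 * math.pi**2 * G**4 * rho_c))**0.25
print(f"M_crit ~ {M_crit:.1e} kg ~ {M_crit/M_SUN:.1e} M_sun")
```

Black holes born above this mass (of order \(10^{-20}\ M_{\odot}\) for these inputs) grow and consume the star; lighter ones evaporate.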
#### 4.4.4 Signatures of dark matter that implodes neutron stars
A number of striking astrophysical signatures arise from DM that converts NSs to black holes. Firstly, the oldest known pulsars can be used to set limits on asymmetric DM, since for a given background DM density, the existence of these pulsars limits the accumulation of DM [220; 221; 222; 223; 224; 225; 226; 178; 162]. However, the "characteristic age" of pulsars comes with caveats: it is not always a good indicator of the actual age of the pulsar, as discussed in Sec. 2.8. One particular pulsar, PSR J1738+0333, is in a binary system with an old WD and thus has, in addition to its characteristic age, an age marker in its WD companion; both point to a \(\gtrsim 5\) Gyr-old NS. Hence this pulsar has been used to set bounds on asymmetric DM [227; 229]. A recent work [263] integrates over the density of DM that NSs traverse during their orbits around the Milky Way, refining bounds that use characteristic ages of old, nearby pulsars.
A number of prompt and delayed signatures may be sought if DM is converting Gyr-old NSs to black holes in regions where DM is denser than in the outer regions of the Milky Way. In particular, the absence of millisecond pulsars in the Galactic Center [264; 265] has been linked to models of DM that would convert old NSs to black holes [227; 178]. These studies predict a maximum pulsar age that increases with Galactocentric distance, corresponding to a decrease in the amount of DM accumulating in pulsars. It has been
Figure 14: _Left._ From a simulation of a neutron star accreting onto a black hole at its center [233]. Neutron star fluid density and velocity vectors are shown in the left half of these figures, with angular momentum density shown on the right half, for different NS spin parameters \(\Omega\). _Right._ Number of neutron stars converted to black holes in a Milky Way-like galaxy, along with the current NS implosion rate, for dark matter models that would cause NSs near Earth to implode in 10 Gyr ("ADM1") and 50 Gyr ("ADM2") [229].
shown that collapsing NSs can shed their magnetospheres and emit a radio pulse about a millisecond in duration, and thus the implosion of pulsars via DM is a possible origin of non-repeating fast radio bursts (FRBs) [239]. The Galactic localization of FRBs sourced from DM-induced NS implosions could be used to confirm or limit this hypothesis [229]. Based on these estimates, approximately ten FRBs localized in Milky Way-equivalent galaxies would be required to differentiate between an FRB population sourced by DM-induced NS implosions and one that simply matches the NS distribution in these galaxies.
The implosion of NSs into black holes has also been explored as a source of \(r\)-process elements. NS fluid ejected during the implosion could undergo \(r\)-process enrichment, sourcing early \(r\)-process elements observed in ultra-faint dwarf spheroidal galaxies [228; 229]. \(r\)-process enrichment has also been studied for rapidly spinning NSs and NSs that capture primordial black holes [266], along with associated high-energy events [250; 251]. However, the dynamics of ejecta from NS implosions later investigated in simulations [233; 254; 267] disfavor large amounts of NS material being ejected in any single NS implosion event. In particular, Ref. [233] found in their simulations that even maximally spinning NSs would eject less than \(10^{-4}\ M_{\odot}\) of their material during implosion. Recently, neutrinos produced during DM-induced NS implosions, and the prospects for detecting them as a diffuse background, were studied [268].
Figure 14, from Ref. [233], shows a detailed simulation of the final stages of NS accretion onto a black hole in its interior, and the expected rate of these NS implosions in Milky Way-like galaxies for a generalized NS-implosion-DM framework laid out in Ref. [229]. Since the accumulated mass of DM onto NSs will be proportional to DM density and inversely proportional to the DM-NS relative velocity, one can parameterize how quickly particle DM will convert NSs into black holes:
\[\left(\frac{t_{\rm convert}}{\rm Gyr}\right)\!\!\left(\frac{\rho_{\chi}}{\rm GeV /cm^{3}}\right)\!\!\left(\frac{200\ {\rm km/s}}{v_{\chi}}\right)\lesssim 2\, \tag{58}\]
where we have quoted the bound from Ref. [229] using the \(\gtrsim 5\) Gyr-old PSR J1738+0333.
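A model that saturates this bound can thus be recast into a conversion time in any other environment; a minimal sketch (the environmental densities and velocities below are assumed for illustration):

```python
def t_convert_Gyr(rho_GeV_cm3, v_km_s):
    """Conversion time implied by saturating Eq. (58)."""
    return 2.0 * (v_km_s / 200.0) / rho_GeV_cm3

# Solar vicinity vs. a DM-dense inner-galaxy environment (assumed values):
for rho, v in [(0.4, 200.0), (1e3, 100.0)]:
    print(f"rho = {rho:7.1f} GeV/cm^3, v = {v:5.0f} km/s: "
          f"t_convert ~ {t_convert_Gyr(rho, v):.1e} Gyr")
```

The same model that spares a \(\sim\)5 Gyr-old solar-vicinity pulsar would implode NSs in DM-dense inner-galaxy regions on \(\sim\)Myr timescales, connecting to the missing-pulsar discussion above.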
Many additional pathways for discovering DM that implodes NSs arise from the population of solar mass black holes that would be created, which would reside preferentially in the centers of galaxies [229; 231]. The number of NS merger events with accompanying kilonovae required to increase sensitivity to these DM models was estimated to be 10-100 in Ref. [229]. As stated in that study, kilonovae would not occur following an apparent "neutron star merger" if both solar mass compact objects were black holes converted from NSs. The prospects of gravitational wave observatories like LIGO/VIRGO and their successors for finding a population of NSs converted to black holes have been detailed in Refs. [229; 231; 234; 235; 236; 269]. The capability of future gravitational wave observatories to differentiate solar mass black hole mergers from solar mass NS mergers, depending on minute variations in the waveform just prior to the merger, was examined in Ref. [270], which noted that a large population of mergers may be needed to find evidence for a solar mass black hole population.
### Primordial black hole dark matter and neutron stars
Black holes could form in the infant universe through some mechanism producing sub-horizon overdensities, and for masses above \(5\times 10^{14}\) g these "primordial black holes" do not evaporate away within the age of the universe [244; 243], thus constituting non-baryonic dark matter. PBHs are constrained by a number of phenomena they give rise to: evaporation, gravitational (micro)lensing, disruption of dwarf galaxies and wide binaries, gravitational waves, and accretion [243]. Wide-ranging as they are, these limits nevertheless leave open a mass window in which PBHs could make up all the DM: roughly \(10^{17}-10^{22}\) g (or \(10^{-16}-10^{-11}\ M_{\odot}\)).
PBHs encountering NSs may capture through the energy loss mechanism of dynamical friction: the NS constituent particles absorb the momentum of the transiting PBH [271]. In this picture there are no collective excitations of the NS medium, as the PBH travels at supersonic speeds after being accelerated by the NS' gravity. An alternative treatment of the energy loss is by considering oscillations of the NS medium excited by the passing PBH8, which seemingly extracts more energy from the PBH [272]. However, these approaches were shown to be equivalent by modelling the NS as a semi-infinite incompressible fluid [273]. All said and done, the captured PBH proceeds to grow via Bondi accretion of the NS material [274; 275; 271] (see Section 4.4 for additional considerations regarding black hole growth in NSs), eventually destroying the NS in a catastrophic event that emits telltale electromagnetic signals and gravitational waves [229; 252]. Collisions between PBHs and NSs may also explain fast radio bursts [239; 229; 276]. In the case that black holes are charged, their capture and consumption of the neutron star is treated differently [125]; this work employed the capture rate for monopoles and an accretion rate appropriate for extremal black holes. Finally, as mentioned in Sec. 4.1, magnetically charged black holes that may constitute DM could be captured in NSs and heat them via Hawking radiation of absorbed nucleons.
Footnote 8: This analysis was reused in Ref. [219] to treat the tidal heating of NSs by composite particulate DM.
### Neutron stars admixed with dark sectors
Neutron stars that contain an appreciable fraction of exotic particle species could give rise to interesting observational signatures such as modified NS mass-radius relations and distinct gravitational wave signals. But to obtain such "admixed" neutron stars one cannot rely on their capture of ambient DM. One can see this immediately from Eq. (45): for an NS of age \(10\;\mathrm{Gyr}\simeq 3\times 10^{17}\;\mathrm{s}\) in the solar vicinity, the total DM mass accumulated over its lifetime is only about \(5\times 10^{-15}\;M_{\odot}\). Thus other mechanisms must be in play.
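The arithmetic behind this estimate (a sketch; the geometric-saturation mass capture rate below is an illustrative value chosen to reproduce the quoted mass, not taken directly from Eq. (45)):

```python
GYR_S     = 3.156e16   # seconds per Gyr
M_SUN_GeV = 1.12e57    # solar mass in GeV

# Assumed maximal (geometric) mass capture rate for a solar-vicinity NS:
mdot_cap_GeV_s = 2e25

M_acc = mdot_cap_GeV_s * 10 * GYR_S / M_SUN_GeV
print(f"DM mass captured over 10 Gyr: {M_acc:.1e} M_sun")   # ~6e-15 M_sun
```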
#### 4.6.1 Impact on nuclear equation of state
Hidden GeV-mass states charged with baryon number \(B\) could explain the long-standing neutron lifetime puzzle [282] and play a crucial part in baryogenesis (see, _e.g._, Refs. [283; 284]). One consequential species is the "**dark neutron**" with \(B=1\) that can mix with the standard neutron, and could arise either as an elementary particle [282] or as a composite in, _e.g._, mirror matter models [285]. The dark neutron could be cosmologically long-lived if its interactions with the visible sector are small enough, in which case it could constitute the dark matter of the universe. Dark neutrons \(\chi\) may be produced in NSs in neutron-nucleon scattering processes \(nN\to\chi N\) and neutron decay \(n\to\chi\) + anything. If the production of \(\chi\) occurs on timescales shorter than NS lifetimes, the \(\chi\) fluid that is in chemical equilibrium with the nucleonic fluid would generally soften the EoS of NS matter. Consequently, the maximum mass of these admixed NSs is reduced compared to standard NSs, giving rise to constraints on dark neutrons from observations of high-mass NSs [286; 287; 278; 288; 289]; see Figure 15 for representative limits. The above arguments also apply to a variation of the dark neutron \(\chi\) with \(B=1/3\) such that it is produced in the NS via the decay \(n\to\chi\chi\chi\) [290]. These constraints can be evaded in models that introduce repulsive self-interactions between the dark neutrons, which would pre-empt the softening of the EoS [291; 292]. Admixed NSs also exhibit mass-radius relations that could span a 2-dimensional area rather than follow a 1-dimensional sequence [293]. Another interesting diagnostic of admixed NSs is the tidal Love number, impacted by the formation of extended atmospheres [294; 295], as well as the second Love number [296]. These are measurable as a phase shift in a binary-merger gravitational wave signal at forthcoming observatories such as Advanced LIGO, the Einstein Telescope, and LISA. Yet another diagnostic, stemming from the modification of NS mass-radius relations, is the NS pulse profile as measured by precision probes such as NICER [297]. In the
case where the production of \(\chi\) takes longer than NS lifetimes, other effects come into play, chief among which is the overheating of NSs due to formation of holes in the nucleon Fermi sea; we review this in Section 4.8.
A **sexaquark/hexaquark** electrically neutral state \(uuddss\) that is elusive to accelerator searches has been proposed as a dark matter candidate [298]; to ensure nuclear stability and cosmological lifetimes for the hexaquark, its mass must lie between 1860 and 1890 MeV [299]. It was shown that, due to rapid thermalization in the early universe, the hexaquark freezes out at an abundance of \(10^{-11}\) of the total baryon number and can thus only be a minuscule relic [300]. Moreover, hexaquarks in this mass range would be produced within seconds of the birth of a proto-neutron star during a core-collapse supernova, and the energy released in this process would unbind the proto-neutron star, strongly disfavoring the existence of this state [279]. As the latter authors argue, even if the proto-neutron star somehow survives, since all baryons are converted rapidly to hexaquarks, the EoS of the resulting star would be softer than that of a standard NS, which would run afoul of observations of high NS masses, as in the case of dark neutrons. Nevertheless, the latter limit may be satisfied if the NS undergoes early quark deconfinement such that hexaquarks are not present, or a later deconfinement that leaves a quark core inside a neutron-hexaquark shell [301].
#### 4.6.2 More admixed neutron stars
Further mechanisms to obtain neutron stars admixed with dark sectors have been proposed. In analogy with dark neutrons, spin-0 states \(\phi_{\chi}\) carrying baryon number may interact with the neutron via the vertex \(\mathcal{L}\supset y_{m}\,\bar{\nu}n\,\phi_{\chi}\), giving rise to \(n\to\nu\phi_{\chi}\) decays producing \(\phi_{\chi}\) that could constitute 1-10% of the NS mass [288]. In a different model with \(\mathcal{L}\supset y_{m}\,\bar{n}n\,\phi_{\chi}\), nucleon bremsstrahlung \(nn\to nn\phi_{\chi}\) could populate NSs with \(\phi_{\chi}\) for NS internal temperatures \(T_{\rm NS}\) satisfying \(m_{\phi_{\chi}}/3<T_{\rm NS}\lesssim m_{\phi_{\chi}}/2\) [277; 288]. This condition ensures that neutron kinetic energies are sufficient to produce \(\phi_{\chi}\) while keeping the products from escaping the
Figure 15: _Left._ Mass-radius relations of neutron stars admixed with a large population of dark baryons, taken from Ref. [277]. The presence of dark baryons without self-interactions softens the equation of state, resulting in a maximum NS mass smaller than those observed above 2 \(M_{\odot}\) (dashed horizontal lines). [Since the appearance of Ref. [277], an even heavier \(2.35\pm 0.17\)\(M_{\odot}\) NS has been observed [278].] Analogous constraints could apply to dibaryons/hexaquarks, a QCD bound state that could make up dark matter [279]. _Right._ Limits on the neutron mixing amplitude of dark/hidden/mirror neutrons from the heating of neutron stars via the “nucleon Auger effect”, taken from Ref. [280]. These are shown for neutron stars of various surface temperatures, with the ceiling for each set by the timescale of neutron-to-dark-neutron conversions being smaller than the NS age. While these limits are valid for neutron-dark neutron mass splittings of up to the NS nuclear self-energy \(\simeq\) 10–100 MeV, terrestrial limits from ultracold neutron facilities are only valid for mass splittings up to the Zeeman splitting from the Earth’s magnetic field, which is 19 decades smaller. Also shown is a weaker bound from Ref. [281] from NS mass loss due to neutron-to-dark-neutron conversions, measured with binary pulsar timing (Sec. 4.10.2). See Secs. 4.6.1 and 4.8 for further details.
NS' gravity. For \(m_{\phi_{\chi}}=100\) MeV one obtains percent-level NS mass fractions of \(\phi_{\chi}\). Dark compact stars formed from a possibly dissipative dark sector [302; 303] could accrete surrounding baryonic matter to form admixed stars [288], though the exact mechanism required to obtain comparable amounts of exotica and nucleons in these structures is far from clear. One probe of admixed NSs orthogonal to EoS effects is the NS cooling curve [304]; in particular, sub-GeV DM that annihilates to neutrino final states and participates in the NS' thermal conduction could have an observable effect on NS cooling.
### Exotic compact stars
Dark baryons such as mirror neutrons may give rise not only to neutron stars admixed with them, but also to compact stars composed entirely of them. Such exotic "**mirror neutron stars**" may constitute an appreciable fraction of DM, providing smoking gun signatures in tidal deformability measurements from gravitational waves and binary pulsar observations [305; 293]. A mirror-like hidden sector may also give rise to **exotic white dwarfs** that can be detected using their mergers via gravitational waves [306]. In the presence of kinetic mixing between the photon and its hidden counterpart, mirror NSs may capture interstellar material and emit radiation observable at Gaia [307]. Mirror stars made of mirror anti-neutrons may grow a population of SM anti-matter in their cores which can accrete ISM and produce radiation observable at Fermi-LAT [308].
Analogously, asymmetric DM with large self-interactions could form **dark compact stars** [309] that could capture ISM protons and electrons, which consequently sink to the core and form a hot radiative gas, observable in telescopes as x-ray or gamma-ray point sources [310]. Other exotic compact objects with near-Schwarzschild compactness may exist and constitute DM, _e.g._, some classes of **boson stars**, **Q-balls**, **non-topological solitons**, and **ultra-compact minihalos**. For reviews see Refs. [311; 312; 313; 314].
In the vein of a first-order QCD phase transition giving rise to expanding bubbles of the low-temperature phase compressing regions of the high-temperature phase into dense quark nuggets [315], an excess of "dark quarks" charged under a confining gauge group and residing in a false vacuum may be compressed by the true vacuum of a first-order transition [316]. In the case of fermionic dark quarks heavier than the confinement scale, this process can lead to compact "**dark dwarf**" stars supported by Fermi degeneracy pressure _a la_ white dwarfs [317]. The compression may even go on to form primordial black holes, which is the only end state for bosonic dark quarks.
### Dark sectors leading to internal heating of neutron stars
In Sec. 4.6 we mentioned that neutron scattering and decay processes producing dark baryons \(n^{\prime}\) occurring within the lifetime of the NS would give rise to dark neutron-admixed NSs. The two-state Hamiltonian for the \(|n\rangle\)-\(|n^{\prime}\rangle\) system with \(m_{n}\simeq m_{n^{\prime}}\) and mixing amplitude \(\epsilon_{nn^{\prime}}\) is
\[H=\begin{pmatrix}m_{n}+\Delta E&\epsilon_{nn^{\prime}}\\ \epsilon_{nn^{\prime}}&m_{n^{\prime}}\end{pmatrix}, \tag{59}\]
where \(\Delta E\) is the medium-dependent energy splitting. The dominant channel of \(n^{\prime}\) production is neutron-nucleon scattering, \(nN\to n^{\prime}N\), with cross section [280]
\[\sigma_{n^{\prime}N}\simeq g_{N}\Big{(}\frac{\epsilon_{nn^{\prime}}}{\Delta E }\Big{)}^{2}\sigma_{nN\to nN}\;, \tag{60}\]
where \(\sigma_{nN\to nN}\) is the neutron-nucleon cross section determined experimentally, \(g_{n}(g_{p})=2(1)\) is a multiplicity factor, and \(\epsilon_{nn^{\prime}}/\Delta E\) an effective in-medium \(n\)-\(n^{\prime}\) mixing angle. The typical rate of \(n^{\prime}\) production in
an NS may then be computed from the above as
\[\Gamma_{n^{\prime}}=\frac{1}{10^{7}\ \text{yr}}\left(\frac{\epsilon_{nn^{\prime}}}{10^{ -15}\ \text{eV}}\right)^{2}\left(\frac{n_{\text{nuc}}}{0.3\ \text{fm}^{-3}}\right)\,, \tag{61}\]
for a total nucleon density \(n_{\text{nuc}}\).
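The smallness of the in-medium mixing angle and the resulting production timescale can be sketched directly from Eqs. (59)-(61) (the \(\sim\)10 MeV splitting below is an assumed nuclear self-energy scale; see the discussion of Fig. 15):

```python
import math

def mixing_angle(eps, dE):
    """In-medium n-n' mixing from diagonalizing Eq. (59):
    tan(2*theta) = 2*eps/Delta E, so theta ~ eps/Delta E for eps << Delta E."""
    return 0.5 * math.atan2(2.0 * eps, dE)

def gamma_nprime(eps_eV, n_nuc_fm3=0.3):
    """n' production rate per neutron, Eq. (61) [1/yr]."""
    return 1e-7 * (eps_eV / 1e-15)**2 * (n_nuc_fm3 / 0.3)

eps_eV = 1e-15   # mixing amplitude
dE_eV  = 1e7     # ~10 MeV in-medium splitting (assumed)
print(f"theta ~ {mixing_angle(eps_eV, dE_eV):.1e}")                  # ~1e-22
print(f"production timescale ~ {1.0/gamma_nprime(eps_eV):.1e} yr")   # ~1e7 yr
```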
If the timescale of \(n^{\prime}\) production \(\Gamma_{n^{\prime}}^{-1}\) exceeds NS lifetimes, which typically occurs in parametric regions where \(n^{\prime}\) also constitutes (the cosmologically long-lived) DM, it would give rise to NS overheating via the "nucleon Auger effect" discussed in the context of DM capture in Sec. 4.1.1. When nucleons leave behind holes in their Fermi seas, through either conversion to \(n^{\prime}\) or upscattering, higher-energy ambient nucleons rapidly fill them in, in the process liberating heat in the form of electromagnetic and kinetic energy. The total power liberated by \(n\to n^{\prime}\) conversions in the NS is [280]
\[L_{n\to n^{\prime}}=\int d^{3}r\,n_{n}(\mathbf{r})\dot{E}_{n^{\prime}}( \mathbf{r})\,\ \ \text{with}\ \ \dot{E}_{n^{\prime}}=\sum_{N=n,p}f_{N}n_{N}\left\langle\left(\widetilde{\mu}_ {n}-\frac{p_{n^{\prime}}^{2}}{2m_{n^{\prime}}}\right)\sigma_{n^{\prime}N}v \right\rangle_{p_{N}>p_{F_{N}}}\ \, \tag{62}\]
where the subscript in the second equation denotes the inclusion only of scattering events that result in spectator nucleons kicked above their Fermi sea.
This effect arrests the passive cooling of NSs, and very stringent constraints on dark neutrons [318; 280] may be placed using HST observations of the coldest (\(<\) 30,000 K) observed pulsar PSR J2144\(-\)3933 [214]. The reach on the mixing between neutrons and dark neutrons could be further extended with present and forthcoming ultraviolet, optical, and infrared campaigns suited to observing colder NSs: LUVOIR [319], Rubin [320; 321], DES [322], Roman [323], JWST [155], TMT [153], and ELT [154]. This is depicted in the right panel of Fig. 15 in the plane of the off-diagonal mass or transition amplitude versus the surface temperature of various NSs. The ceiling for these limits arises from the fact that the nucleon Auger effect is effectively non-existent if the timescale of dark neutron production is smaller than the NS age, in which case the Fermi sea of the dark neutron is filled up and thus neutron conversion is Pauli-blocked. It must also be noted that these limits apply to dark neutron masses exceeding the neutron mass (939.6 MeV) by 10-100 MeV, the values of neutron self-energies from the nuclear potential of the NS medium that effectively raise \(m_{n}\). Above this mass, conversions of neutrons to dark neutrons are kinematically suppressed.
One scenario in which the above limits may be weakened is when the dark neutron arises as a mirror equivalent of the neutron in mirror world theories, with exact mass degeneracies [324]. In this case, mirror electrons produced via mirror beta decay could take away heat by scattering with electrons via a millicharge, and emit mirror bremsstrahlung via mirror photons. Internal heating of NSs can also occur by baryon number-violating neutron decays, depositing nearly the mass energy of the neutrons, and for some models with long-range potentials this provides the strongest limits from observations of PSR J2144\(-\)3933 [325].
The heating of NSs from neutron losses via baryon number-conserving and violating processes, and from NS capture of DM in the various ways described in Sec. 4.1, provide compelling fundamental physics motivations for upcoming astronomical missions to perform systematic measurements of NS luminosities, masses and radii.
### Dark matter signals in gravitational waves from neutron star mergers
Admixed neutron stars containing large quantities of DM could interact with each other via a long-range force (either attractive or repulsive) acting on the dark component. This would leave a distinct signature in the waveforms of the gravitational waves (GWs) picked up during their mergers at detectors such as LIGO and VIRGO [327; 322; 328]. This comes about via two effects: one, a modification to the measured chirp
Figure 16: _Top left_. Limits on the mass fraction of dark matter in neutron stars in a binary as a function of the coupling of an attractive long-range dark force relative to gravity, for DM mass = 1 GeV. The solid red contour indicates the minimum value of the effective dark force coupling \(\alpha^{\prime}\) that will have a detectable impact at LIGO; the dashed red contour depicts the value of the coupling when the dark charge resides in only one NS. In the purple region, the NSs turn into black holes. In the gray region, DM is expelled from the NSs when screening due to the dark force lifts. See Ref. [232] for further details. _Top right_. Future limits from gravitational waves on an axion-like species sourced by inspiraling binary NSs: the region bounded by \(m_{a}\lesssim 10^{-12}\) eV and \(F_{a}\gtrsim 10^{15}\) GeV can be probed by Advanced LIGO. For particulars on the other limits shown here, see Ref. [326]. _Bottom left_. Limits on the coupling of ultralight dark matter to baryons as a function of its mass from the 15-year dataset of the NANOGrav pulsar timing array [218] for correlated (red solid) and uncorrelated (red dashed) signals. Also shown are erstwhile limits from the Parkes Pulsar Timing Array (PPTA) and MICROSCOPE's equivalence principle test, and the region where the signal amplitude is smaller than a gravitational one. _Bottom right_. Limits on the effective baryonic long-range Yukawa coupling of dark matter subhalos/nuggets versus DM mass using NANOGrav [218] (red region). The [solid, dashed, dot-dashed, dotted] curves correspond to the range of the force \(10^{[0,-1,-2,-3]}\) pc. Stronger limits from white dwarf explosions and neutron star superbursts, not shown here, could come into play for certain additional DM properties [133]. Also shown are limits from kinetic heating of the coldest observed pulsar PSR J2144\(-\)3933 (see Sec. 4.2), weak equivalence principle tests, and the Bullet Cluster.
mass \(\mathcal{M}\equiv\mu^{3/5}(M_{1}+M_{2})^{2/5}\) where \(M_{1,2}\) are the NS masses and \(\mu\) their reduced mass, since the effect of the new force on the evolution of the binary period is degenerate with a shift in NS masses; two, energy loss through dipole radiation of the force-carrier, which would again show up in the binary period evolution. In Fig. 16 top left panel we show limits from Ref. [232] on the NS mass fraction of DM as a function of the dark force strength relative to gravity. For sizeable abelian forces, the charge carried by the DM must not be too large lest it unbind the star, limiting the hidden force to be weaker than gravity if gravitational tests are to be made [329]. A long-range dark sector force such as felt by muons in the NS could also induce the effects above [330].
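For concreteness, the chirp mass itself is straightforward to evaluate; the following minimal Python sketch computes it for a canonical double NS system (the 1.4 \(M_{\odot}\) input masses are illustrative assumptions, not values taken from Ref. [232]):

```python
# Minimal sketch: the chirp mass M = mu^(3/5) (M1 + M2)^(2/5), the
# quantity whose inferred value a long-range dark force can bias.
# The 1.4 solar-mass inputs are illustrative assumptions.
def chirp_mass(m1: float, m2: float) -> float:
    mu = m1 * m2 / (m1 + m2)             # reduced mass
    return mu ** 0.6 * (m1 + m2) ** 0.4

print(f"{chirp_mass(1.4, 1.4):.3f} M_sun")   # ~1.219 M_sun
```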
Axions may be sourced by NSs due to in-medium corrections to the axion potential, giving rise to inter-NS forces that can be detected during the inspiral [326]. In Fig. 16 top right panel we show ensuing limits from Ref. [326] on the decay constant as a function of the mass of an axion-like species. Ultra-light scalar DM with baryonic interactions can induce time-varying mass shifts in the NS from their coherent background, which could show up in broadband measurements [331]. Ultra-light DM could also modify the dispersion relation of neutrinos escaping a proto-neutron star, giving rise to asymmetric emission and hence the natal kick of the NS (Sec. 2) as well as a non-oscillating permanent strain in the local metric, a "gravitational memory" signal, that can be picked up at GW detectors [332]. Dark photons produced in NS mergers could convert to detectably luminous \(\gamma\)-rays [125].
### Dark matter signals in pulsar timing
Pulsating neutron stars are incredibly precise celestial metronomes. As discussed in Sec. 2.8, their spin periods slow down at rates smaller than 1 sec per \(10^{12}\) sec, affording a pulse regularity second only to atomic clocks. This precision is exploited to probe fundamental physics in a number of ways, and in this section we collect the implications for dark matter searches.
#### 4.10.1 Pulsar timing arrays
Pulsar timing arrays (PTAs), by constraining correlations in the arrival times of pulses emitted by \(\mathcal{O}(10)\) millisecond pulsars9, are primarily detectors of nanohertz to millihertz gravitational waves [333; 334]. These measurements can also be used to constrain DM substructure (including primordial black holes): their transits could induce a shift in the signal phase,
Footnote 9: Although about 3000 pulsars have been discovered [87], only about this many provide the level of stability and noise-free emission required to achieve interesting precision at PTAs.
\[\varphi(t)=\varphi_{0}+\nu t+\frac{1}{2}\dot{\nu}t^{2}+\frac{1}{6}\ddot{\nu}t^{3}+\mathcal{O}(t^{4})\, \tag{63}\]
from a shift in the frequency via two effects:
\[\begin{split}\left(\frac{\delta\nu}{\nu}\right)_{\rm Dopp}&=\mathbf{\hat{d}}\cdot\int\nabla\Phi\,dt\,,\\ \left(\frac{\delta\nu}{\nu}\right)_{\rm Shap}&=2\int\mathbf{v}_{\chi}\cdot\nabla\Phi\,dz\,,\end{split} \tag{64}\]
where \(\mathbf{\hat{d}}\) is the Earth-to-pulsar direction, \(\mathbf{v}_{\chi}\) the velocity of the DM structure, \(\Phi\) the gravitational potential due to it, and \(z\) traces the pulsar-to-Earth path of photons. The first line of Eq. (64) describes a _Doppler time delay_ in the observed period of the pulsar brought about by the acceleration of the NS or Earth due
to transiting DM [335]. The second line describes a _Shapiro time delay_ coming from a change in the arriving photon's geodesic due to the DM structure's gravitational potential [336]. Signals of DM substructure could either be _dynamic_[336, 335, 337, 338, 339], as when the transit times are much smaller than the total observation time, giving rise to blips in the data, or could be _static_[340, 341, 342], as when they are much larger, showing up as a sizeable contribution to the \(\ddot{\nu}\) term in Eq. (63). Detailed analyses of dynamic and static signals, treating Doppler and Shapiro time delays, can be found in Refs. [343, 344, 345], with implications for DM substructure origins explored in Ref. [346]. Limits from North American Nanohertz Observatory for Gravitational Waves (NANOGrav) data [347, 348] are derived in Refs. [219, 218] on dark nuggets (that could range from point-like to \(>10^{9}\) km in size) interacting with baryons through a long-range Yukawa force; see Fig. 16 bottom right panel.
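To get a sense of scale for the Doppler term, one can evaluate it in the impulse approximation, where a single point-mass transit imparts a velocity kick \(\Delta v\approx 2GM/(bv)\) and hence \(\delta\nu/\nu\approx\Delta v/c\). The short Python sketch below does this; all parameter values (subhalo mass, impact parameter, speed) are illustrative assumptions:

```python
# Order-of-magnitude sketch of the Doppler term of Eq. (64) in the
# impulse approximation: a point mass M transiting at impact parameter b
# with speed v imparts a velocity kick dv ~ 2GM/(b v) on the pulsar (or
# Earth), giving a fractional frequency shift dv/c. All parameter values
# below are illustrative assumptions.
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8             # speed of light, m/s
M_SUN = 1.989e30        # solar mass, kg

M = 1e-6 * M_SUN        # assumed subhalo mass
b = 100 * 1.496e11      # assumed impact parameter: 100 au in meters
v = 2e5                 # assumed transit speed: 200 km/s

dv = 2 * G * M / (b * v)
print(f"delta_nu / nu ~ {dv / C:.1e}")   # ~3e-13 for these values
```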
Another scenario amenable to searches at PTAs is that of ultralight (\(10^{-24}\)-\(10^{-22}\) eV) DM, which could induce oscillations of the gravitational potential of the Galactic halo at nanohertz frequencies [349, 350]. Various PTAs have constrained these models [351, 352, 218]; see Fig. 16 bottom left panel. In addition, a network of topological defects that alter fundamental constants could be uncovered through correlated variations of the periods of well-timed pulsars [354].
PTAs currently operational are the European Pulsar Timing Array (EPTA) [355] that uses the Westerbork Synthesis, Effelsberg, Lovell, Nancay and Sardinia radio telescopes, Parkes Pulsar Timing Array (PPTA) [356], NANOGrav [357] that uses the Green Bank Telescope, Arecibo Observatory and Very Large Array, Indian Pulsar Timing Array (InPTA) [358] that uses the Upgraded Giant Meterwave Radio Telescope (uGMRT) (these four make up the International Pulsar Timing Array (IPTA) [359]), MeerTime at MeerKAT [360], and the Chinese Pulsar Timing Array (CPTA) [361] that uses the Five-hundred-meter Aperture Spherical Telescope (FAST). In the future, these will be joined by the Square Kilometer Array (SKA) [362, 363]. It would be of significant interest to look for various DM substructure as well as ultra-light DM in the datasets of all these PTAs, an exercise we urge of expert authors.
A recent data release of multiple PTAs announced detection of signals consistent with a stochastic gravitational wave background (SGWB) sourced by supermassive black hole (SMBH) mergers [218]. This has prompted many authors to investigate PTA signatures of dark matter, including re-interpretation of the data in terms of primordial black holes [364, 365], a modification of the spectral index of the SGWB due to cosmic DM-induced dynamical friction [366], a modulation in the amplitude due to a DM spike surrounding the SMBHs [367], an inflationary blue-tilted tensor power spectrum in a setup thermally overproducing WIMPs [368], a soliton of ultralight DM enclosed by the SMBHs [369], an electroweak phase transition induced by a potential involving DM [370], an ultralight radion arising from a fifth spacetime dimension [371], and dark photon DM produced by the decay of cosmic string loops that may have sourced the SGWB [372].
#### 4.10.2 Binary pulsar timing
Pulsar timing without the use of a pulsar array can also be a valuable tool to detect DM. The orbits of inspiraling binary pulsars may undergo seasonal modulation due to dynamical friction caused by DM, which may be used to set bounds on the density of DM in the environment [373]. Measurements of the orbital period and period decay of binary pulsars also help to set limits on ultralight scalars that could constitute DM via their radiation from NSs [374] and their coherent oscillations [375, 376, 377], and the rate of mass loss in NSs due to, _e.g._, conversions of neutrons to dark baryons [281, 378, 289, 379], although these latter limits are superseded by considerations of NS heating via the Auger effect (Sec. 4.8) as detailed in Ref. [280].
#### 4.10.3 Pulsar spin-down
Milli-charged DM accreting onto NSs provides surplus charge that must be expelled from the polar caps to maintain charge neutrality. Thus an additional electric current (following open magnetic field lines) is induced, contributing to the NS \(B\) field, and therefore to the slowing down of its spin as per the discussion in Sec. 2.8. In this way milli-charged DM could explain the observation of pulsar braking indices \(n<3\)[380; 381].
### Axion-like and very light dark matter, and neutron stars
The above sub-sections considered particulate DM confronting NS detectors. Now we will see that very light, wave-like dark matter can also be discovered with NSs. The poster child for such DM is the QCD axion and its extension, axion-like particles (ALPs). In this section we will focus on the utility of compact stars in discovering ALP DM, referring the reader to the review in Ref. [384] for a thorough treatment of generic ALPs interacting with compact objects.
ALPs of masses \(m_{a}\) in the range \(10^{-7}\)-\(10^{-4}\) eV, when falling into NSs, convert to photons in their magnetospheres via the Primakoff effect [385], _i.e._ via the interaction
\[\mathcal{L}\supset\frac{g_{a\gamma}}{4}aF_{\mu\nu}\widetilde{F}^{\mu\nu}\rightarrow-g_{a\gamma}\;a\;\mathbf{E}\cdot\mathbf{B}\;. \tag{65}\]
The conversion probability is enhanced by the large magnetic field, and resonantly so for ALP masses degenerate with the photon plasma mass [386; 387; 388; 389; 390; 391]. The result is radio emission observable as a monochromatic line signal at a frequency set by the ALP mass: the narrow bandwidth is set by the small dispersion in the speed of the ALP DM. Limits on the ALP-photon coupling versus ALP mass exploiting this signature have been set using the Green Bank Telescope, the Very Large Array, and the Effelsberg 100-m Radio Telescope [382; 392; 393; 391; 383]. In the near future, observations can be made at the Parkes Observatory, the Sardinia Radio Telescope, MeerKAT, the Murchison Widefield Array, and the Hydrogen Epoch of Reionization Array [394]. Analogous limits have been placed on \(\mathcal{O}(10^{-5})\) eV mass spin-1 DM converting to photons via kinetic mixing in the plasma of NSs and accreting WDs in the Galactic Center [395]. Time-domain data (as opposed to frequency-domain) of PSR J2144\(-\)3933 at MeerKAT have
Figure 17: _Left._ 95% C.L. bounds on the ALP-photon coupling versus ALP mass from Ref. [382] using the Green Bank and Effelsberg Radio Telescopes’ observations of neutron stars nearby and in the Galactic Center. The green bands span theoretical uncertainties in the Effelsberg analysis. Also shown are limits from CAST (helioscope), and UF and RBF (haloscopes), and the region where the ALP could be the QCD axion. _Right._ Same as the left panel, from Ref. [383], using VLA data on the pulsar J1745\(-\)2900 and ray-tracing models. The bands depict uncertainties in viewing and magnetic angles. Limits from CAST, ADMX and HAYSTAC are also shown. See Sec. 4.11 for further details.
been used to set limits on ALP emission [396], taking advantage of time variation of the periodic signal due to pulsar rotation and magnetospheric plasma effects. Resonant conversion of ALP DM can also occur in the corona of a magnetic white dwarf, which could prove more sensitive than NSs to sub-microeV ALPs [95]. In Fig. 17 we show recent NS-based constraints on the ALP-photon coupling versus ALP mass.
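Since the line frequency is fixed by the ALP mass through \(\nu=m_{a}c^{2}/h\) (up to the small DM velocity dispersion), mapping the mass window quoted above onto the corresponding radio band is a one-line computation; a minimal Python sketch:

```python
# Sketch: radio line frequency implied by an ALP mass, nu = m_a c^2 / h.
H_EV_S = 4.135667696e-15    # Planck constant in eV s

def alp_line_freq_ghz(m_a_ev: float) -> float:
    return m_a_ev / H_EV_S / 1e9

for m_a in (1e-7, 1e-5, 1e-4):
    print(f"m_a = {m_a:.0e} eV  ->  {alp_line_freq_ghz(m_a):.3f} GHz")
# ~0.024 GHz, ~2.4 GHz, and ~24 GHz, i.e., frequencies accessible to
# the radio telescopes listed above
```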
On the theoretical side, accounting for the semi-relativistic speeds of ALP DM infall and anisotropy of the magnetic field can potentially modify the ALP-photon conversion rate by orders of magnitude [397]. One can also consider enhanced radio emission from NSs in binary systems with intermediate mass (\(10^{3}\)-\(10^{5}M_{\odot}\)) black holes, taking advantage of a possible spike in the DM density [398]. Ray-tracing simulations accounting for signal photon propagation in the NS gravitational potential are an important direction toward accurate determinations of the outgoing radio flux [399; 400; 401]. Some of the uncertainties addressed by these efforts relate to the absorption of radio waves in the plasma and their refraction and reflection in the magnetosphere, signal broadening due to photon-plasma interactions, and time-dependence of the signal.
As with particulate DM discussed in Sec. 2.4, ALP DM could also form substructure such as axion mini-clusters and axion stars. If large fractions of the DM mass are present in such substructure, the sensitivities of laboratory searches for ALP DM may be diluted, and NSs as ALP probes grow in importance. Encounters of NSs with ALP substructure can produce distinct transient radio signals, including fast radio bursts [402; 403; 404; 405; 406; 205]. The bottom right panel of Fig. 11, taken from Ref. [205], shows the mean flux densities of the radio signal for two different axion mini-cluster profiles, together with the sensitivities of telescope arrays. These signal rates may be smaller for more conservative estimates of the NS magnetic fields that account for possible mechanisms of their decay [406].
ALPs that are not necessarily the cosmic DM may also be studied in NS systems. Produced with \(\mathcal{O}\)(100 keV) energies in the core via nucleon bremsstrahlung, ALPs could convert in the magnetosphere to photons that may be observed as x-ray and gamma-ray emissions. As these ALPs carry away energy they also provide an additional cooling mechanism for the NS. For further details and observation prospects of ALP emissions from within NSs, see Ref. [394] and the references therein. ALPs could also overheat NSs by contributing to the NS magnetic energy via a dynamo mechanism generated by axio-electrodynamics [407].
As a reminder, other signatures of ALPs have already been discussed in this review: their modification of white dwarf equations of state (Sec. 3.4) and their considerable effects on pulsar timing measurements (Sec. 4.10).
## 5 Conclusions and perspective
In this review we have described how various authors have exploited the unique and extreme properties of compact stars to enable far-reaching searches for a vast assortment of dark matter scenarios10. We have, of course, chosen to focus on the commonly accepted and well-observed classes of compact stars: white dwarfs and neutron stars. Several adjacent stellar entities may play a role in the discovery of dark matter, _e.g._, black holes, proto-neutron stars, supernovae and their remnants, and Thorne-Zytkow objects; for accounts of some of these, we refer the reader to Ref. [394] and other literature.
Footnote 10: Although there are some for which compact stars get in the way of discovering dark matter, _e.g._ searches via microlensing [408] and globular clusters [118]!
Given the richness of physics involving both dark matter and compact stars, it is impossible to exhaust the progress that can be made hereupon. We mention a few possibilities.
* While research has gone into detecting wave-like, particulate, macroscopic and black hole dark matter with compact stars, what happens to DM in the form of topological defects (macroscopic monopoles,
cosmic strings, and domain walls) [409; 410] in a compact star environment is as yet relatively unexplored, although for some initial work in this direction see Ref. [354]. This points to a property of white dwarfs and neutron stars that could be further exploited - as sirens of dark matter well-distributed through the galaxy, they have special sensitivity to variations in dark matter properties across the halo and substructure [142].
* The physics of BEC and BCS states formed by dark matter in neutron stars, essential to understanding collapse to black holes, is yet to be worked out satisfactorily. Many of the computations have used coarse estimates.
* Detecting dark matter via thermonuclear explosions in white dwarf cores and neutron star oceans requires knowledge of trigger lengths, which, while numerically estimated in Ref. [65] with a large nuclear reaction network, are only available for a narrow range of densities and compositions. We believe that the significance of these computations for a scientific question as fundamental as dark matter warrants further exploration of the 31 year-old results of Ref. [65] by the nuclear astrophysics community.
* The impact of white dwarf explosions on the evolution of galactic structure and star formation is yet to be scrutinized.
In closing, dark matter may first become manifest through some effect on compact stars detailed in this document. On the other hand, since compact stars have the distinction of being the densest objects composed of known particles, it may be that some variety of dark matter, hitherto unexpected, will first become evident through a surprising and as-yet-unforeseen interaction with them. In either case, we can look forward to the interplay between our burgeoning understanding of compact stars and the ebullient search for dark matter in the coming decades.
## Acknowledgments
For helpful interactions we thank Melissa Diamond, Michael Fedderke, Raghuveer Garani, Bradley Kavanagh, Ranjan Laha, and Camellia Sinensis. The work of J. B. is supported by the Natural Sciences and Engineering Research Council of Canada. N. R. acknowledges support from TRIUMF Inc. and the Arthur B. McDonald Canadian Astroparticle Physics Research Institute at Queen's University during the course of this work. This research was undertaken thanks in part to funding from the Canada First Research Excellence Fund through the Arthur B. McDonald Canadian Astroparticle Research Institute. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities.
|
2302.11393 | How Ready Is DNS for an IPv6-Only World? | DNS is one of the core building blocks of the Internet. In this paper, we
investigate DNS resolution in a strict IPv6-only scenario and find that a
substantial fraction of zones cannot be resolved. We point out, that the
presence of an AAAA resource record for a zone's nameserver does not
necessarily imply that it is resolvable in an IPv6-only environment since the
full DNS delegation chain must resolve via IPv6 as well. Hence, in an IPv6-only
setting zones may experience an effect similar to what is commonly referred to
as lame delegation. Our longitudinal study shows that the continuing
centralization of the Internet has a large impact on IPv6 readiness, i.e., a
small number of large DNS providers has, and still can, influence IPv6
readiness for a large number of zones. A single operator that enabled IPv6 DNS
resolution -- by adding IPv6 glue records -- was responsible for around 20.3%
of all zones in our dataset not resolving over IPv6 until January 2017. Even
today, 10% of DNS operators are responsible for more than 97.5% of all zones
that do not resolve using IPv6. | Florian Streibelt, Patrick Sattler, Franziska Lichtblau, Carlos H. Gañán, Anja Feldmann, Oliver Gasser, Tobias Fiebig | 2023-02-22T14:19:18Z | http://arxiv.org/abs/2302.11393v1 | # How Ready Is DNS for an IPv6-Only World?
###### Abstract
DNS is one of the core building blocks of the Internet. In this paper, we investigate DNS resolution in a strict IPv6-only scenario and find that a substantial fraction of zones cannot be resolved. We point out that the presence of an AAAA resource record for a zone's nameserver does not necessarily imply that it is resolvable in an IPv6-only environment since the full DNS delegation chain must resolve via IPv6 as well. Hence, in an IPv6-only setting zones may experience an effect similar to what is commonly referred to as lame delegation.
Our longitudinal study shows that the continuing centralization of the Internet has a large impact on IPv6 readiness, i.e., a small number of large DNS providers has, and still can, influence IPv6 readiness for a large number of zones. A single operator that enabled IPv6 DNS resolution-by adding IPv6 glue records-was responsible for around 20.3% of all zones in our dataset not resolving over IPv6 until January 2017. Even today, 10% of DNS operators are responsible for more than 97.5% of all zones that do not resolve using IPv6.
## 1 Introduction
With the recent exhaustion of the IPv4 address space, the question of IPv6 adoption is gaining importance. More end-users are getting IPv6 prefixes from their ISPs, more websites are reachable via IPv6, hosting companies start billing for IPv4 connectivity or give discounts for IPv6-only hosting and IoT devices further push IPv6 deployment. Yet, one of the main entry-points for Internet services--the DNS--is suffering from a lack of pervasive IPv6 readiness. While protocols such as Happy Eyeballs [41, 45] help to hide IPv6 problems, they complicate detection and debugging of IPv6 issues. Indeed, the threat of DNS name space fragmentation due to insufficient IPv6 support was already predicted in RFC3901, over 18 years ago [18]. Hence, in this paper, we measure the current state of IPv6 resolvability in an IPv6-only scenario.
In Figure 1 we show two common misconfigurations, which prevent DNS resolution over IPv6 and lead to an effect similar to what is commonly called lame delegation. Note that RFC8499 [26] defines lame delegation as incorrect NS entries or nameservers _not responding properly_. While the observed behaviour might look the same, the underlying misconfiguration, e.g., missing AAAA or
GLUE for IPv6, often is different. Hence, in this paper we use the term broken IPv6-delegation to avoid unnecessary ambiguity, and to distinguish between zones that are not IPv6 ready, i.e., that show no intent to support IPv6 by not having any AAAA records, and zones that appear to intend to support IPv6 but fail to do so (Section 2).
In the first example, the external nameservers ("out-of-bailiwick") of example.org do not have AAAA records and, thus, the resolution via IPv6 is impossible. In the second example, the zone example.org misses the AAAA glue records. These glue records make the A/AAAA records available for resolution if they have to be resolved from the zone being delegated, i.e., the names of the NS {ns3,ns4}.sub.example.org are in-bailiwick.
These examples highlight (a) that it needs cooperation between multiple parties for proper configuration, i.e., sub.example.net cannot be resolved via IPv6 even though it is correctly configured; (b) that dual-stack hides issues, i.e., both examples work for dual-stack enabled hosts where the AAAA records for ns3 and ns4 are resolvable. This demonstrates how working IPv4 resolution hides broken IPv6-delegation for dual-stack DNS recursors.
To be IPv6 _ready_, DNS resolution must work in IPv6-only scenarios. In this paper, we leverage passive DNS data--the Farsight SIE dataset [17]--to identify scenarios in which the DNS delegation chain breaks when only IPv6 is available. Our main contributions can be summarized as follows:
* We identify common broken IPv6-delegation scenarios and point out the importance of checking the full delegation chain.
* We show that big players have a major impact on the number of zones affected by broken IPv6-delegation. Today, 10 DNS providers are responsible for about 24.8% of IPv6-only-unresolvable domains we observe. Just by adding correct glue records, in Jan. 2017 one single provider fixed the IPv6-only name resolution of more than 45.6 M domains (20.3% of the domains in the dataset).
* Resilience mechanisms often hide misconfigurations. For example, broken IPv6-delegation is hidden by the combined efforts of DNS resilience and Happy
Figure 1: Broken IPv6-delegation for example.org (missing AAAA resource records in example.net for NS) and sub.example.org (missing IPv6-GLUE in parent).
Eyeballs. Correctly configuring one's own DNS zone is not sufficient and dependencies are often non-obvious.
* Additionally, we conduct a thorough validation of our methodology. We assess the coverage of the Farsight SIE data in comparison to available ground-truth zonefile data, finding it to provide sufficient coverage for our analysis. Furthermore, we cross-validate our passive measurement results using active measurements, again finding our results to be robust.
* We implemented a DNS measurement tool instead of using, e.g., ZDNS [29], as we need IPv6 support which ZDNS does not (yet) support. The dataset from our active measurements and an implementation of our scanning methodology, including a single-domain version operators can use to evaluate IPv6 support for their own domains, are publicly available at: [https://github.com/mutax/dns-v6-readyness](https://github.com/mutax/dns-v6-readyness)
## 2 Broken IPv6 Zone Delegation
In this section, we briefly recap DNS zone delegation, and sketch common DNS resolution failure scenarios.
### Background: DNS Zone Delegation
The DNS is organized in a hierarchical structure where each node represents a zone that can be operated separately from its parent or child zones. For a zone to be resolvable, NS resource records have to be set in two places. First, the parent of the zone has to explicitly delegate the zone to authoritative nameservers via NS resource records. If an authoritative server has a domain name within the delegated zone itself or a child zone, i.e., if it is "in-bailiwick" [26], the parent zone must also contain A and AAAA resource records for this name, called GLUE, that are returned in the ADDITIONAL section of the DNS responses whenever the NS record is returned. This process breaks the circular dependency in the resolution chain. Furthermore, the zone itself must contain appropriate NS records as well as A and AAAA records if they are in-bailiwick. If the name in an NS record is not within the zone itself or a child zone, i.e., it is out-of-bailiwick, then the zone of the NS' name must also resolve for the initial zone to be resolvable.
### Reasons for Broken IPv6 Delegation
In this paper, we focus on a subset of DNS misconfigurations. In an IPv6-only scenario these misconfigurations can lead to effects similar to what is commonly referred to as lame delegation. To avoid ambiguity, we use the term broken IPv6-delegation referring to any set of misconfigurations specific to IPv6 that breaks the DNS delegation chain of a zone and prevents any of its records from resolving in an IPv6-only scenario. Other issues where a zone does not resolve due to, e.g., DNSSEC problems or unresponsive nameservers, i.e., the strict definition of "lame delegation" (see RFC8499 [26]) are out-of-scope. The issues we discuss
can also occur in IPv4 DNS resolution, but are usually quickly discovered given the currently still large number of sites with IPv4-only connectivity to the Internet, which will not be able to resolve the affected zones.
For a zone to be IPv6-resolvable --i.e., resolvable using IPv6-only-- the zones of the authoritative nameservers have to be resolvable via IPv6 and at least one nameserver must be accessible via IPv6. This has to be the case _recursively_, i.e., not only for all parents of the zone itself but also for all parents of the authoritative nameservers, in such a way that for at least one4 of the authoritative nameservers of a zone a delegation chain from the root zone exists that is fully resolvable using IPv6. We identify the following misconfigurations which can cause broken IPv6-delegation in an IPv6-only setting:
Footnote 4: RFC2182 [21] suggests to avoid such single points of failure
* **No AAAA records for NS names:** If none of the NS records for a zone in their parent zone have associated AAAA records, resolution via IPv6 is not possible.
* **Missing GLUE:** If the name from an NS record for a zone is in-bailiwick, i.e., the name is within the zone or below [26], a parent zone must contain an IPv6 GLUE record, i.e., a parent must serve the corresponding AAAA record(s) as ADDITIONAL data when returning the NS record in the ANSWER section.
* **No AAAA record for in-bailiwick NS:** If an NS record of a zone points to a name that is in-bailiwick but the name lacks AAAA record(s) in its zone, IPv6-only resolution will fail even if the parent provides GLUE, when the recursive server validates the delegation path. One such example is Unbound [35] with the setting harden-glue: yes (the default).
* **Zone of out-of-bailiwick NSes not resolving:** If an NS record of a zone is out-of-bailiwick, the corresponding zone must be IPv6-resolvable as well. It is insufficient if the name pointed to by the NS record has an associated AAAA record.
* **Parent zone not IPv6-resolvable:** For a zone to be resolvable via IPv6 the parent zones up to the root zone must be IPv6-resolvable. Any non-IPv6-resolvable zone breaks the delegation chain for all its children.
The above misconfigurations are not mutually exclusive. For example, if the NS sets between parent and child differ, a common misconfiguration [42], the NS in the parent may not resolve due to missing GLUE (as they are in-bailiwick) _but also_ the NS in the child may not resolve due to having no AAAA for their names, if they are out-of-bailiwick. In this paper we investigate the prevalence of these misconfigurations to evaluate the IPv6 readiness of the DNS ecosystem.
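As a rough illustration, the first two failure modes above can be checked for a single zone by querying a parent-zone nameserver and inspecting the delegation. The sketch below uses the third-party dnspython library; the parent nameserver address in the usage example is a placeholder, and--unlike the full methodology in Section 3--it does not walk the complete delegation chain, so a present AAAA record does not yet guarantee that the NS' own zone is IPv6-resolvable.

```python
# Sketch (dnspython): query a parent-zone nameserver for the delegation
# of `zone` and flag missing AAAA glue / missing AAAA records for its NS.
import dns.flags
import dns.message
import dns.query
import dns.rdatatype
import dns.resolver

def check_v6_delegation(zone: str, parent_ns_ip: str) -> None:
    q = dns.message.make_query(zone, dns.rdatatype.NS)
    q.flags &= ~dns.flags.RD                      # non-recursive query
    resp = dns.query.udp(q, parent_ns_ip, timeout=5)
    ns_names = [str(rr.target) for rrset in resp.answer + resp.authority
                if rrset.rdtype == dns.rdatatype.NS for rr in rrset]
    glue6 = {rrset.name.to_text() for rrset in resp.additional
             if rrset.rdtype == dns.rdatatype.AAAA}
    for ns in ns_names:
        if ns.endswith("." + zone):               # in-bailiwick: needs glue
            print(ns, "AAAA glue present" if ns in glue6 else "AAAA glue missing")
        else:                                     # out-of-bailiwick: needs AAAA
            try:                                  # note: AAAA presence alone does
                dns.resolver.resolve(ns, "AAAA")  # not imply IPv6-resolvability
                print(ns, "AAAA record present")
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                print(ns, "no AAAA record")

# e.g., check_v6_delegation("example.org.", "192.0.2.53")  # placeholder IP
```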
## 3 Datasets and Methodology
In this section, we present our choice of datasets as well as our active and passive measurement methodology for identifying DNS misconfigurations that break IPv6-only resolution.
### DNS Dataset: Farsight SIE
For our evaluation we are looking for a dataset that enables us to (a) perform a longitudinal study, (b) detect IPv6 DNS misconfigurations, (c) analyze not just top level domains (TLDs) but _also_ zones deeper in the tree, and (d) focus on zones that are used in-the-wild. As such we select the Farsight SIE dataset for our study.
The _Farsight Security Information Exchange_ (SIE) dataset [17] is collected by Farsight Inc. via globally distributed sensors, co-located with recursive DNS resolvers. Each sensor collects and aggregates all DNS cache misses that the recursive DNS resolver encounters, i.e., the outgoing query and the received answer. By only recording cache-misses and providing aggregates, Farsight reduces the risk of exposing Personally Identifiable Information (PII). Cache-misses occur when a recursive DNS resolver does not have a DNS record for a specific domain name in its cache (or the record's TTL has expired). The recursive resolver then has to ask the authoritative nameserver for the requested name, which is then recorded by Farsight SIE. Farsight does not share the exact number and location of its sensors for business confidentiality reasons. Farsight's SIE dataset has been used in previous research [22, 27, 32] and its efficacy, coverage, and applicability for research has been demonstrated in the past [23]. We discuss ethical considerations of using this dataset in Section 3.5.
We use monthly aggregates from January 2015 to August 2022, containing unique tuples of: requested name, requested RRtype, bailiwick of the response, and returned data record, also for the additional sections, see Table 1. Thus, the Farsight dataset contains essential information for us, as it also records additional data as entries with the bailiwick of the parent. In addition, the Farsight dataset reaches deeper into the DNS hierarchy than, e.g., OpenINTEL [36], as it monitors DNS requests in the wild instead of resolving a set of names below zones sourced from TLD zone files.
_Farsight Global Zone Coverage._ A common question when using a passive dataset like the one provided by Farsight is how well it actually covers zones on the Internet. In order to determine the coverage of the Farsight dataset, we evaluated the overlap of the second-level domains (SLDs) observed in the dataset with
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Field** & **Description** & **Example** \\ \hline
count & \# of times the tuple \textless{}rrname, rrtype, bailiwick, rdata\textgreater{} has been seen. & 12 \\
time\_first & Unix timestamp of the first occurrence of the unique tuple during the data slice. & 1422251650 \\
time\_last & Unix timestamp of the last occurrence of the unique tuple during the data slice. & 1422251650 \\
rrname & Requested name in the DNS. & example.com \\
rrtype & Requested RRtype of the query. & NS \\
bailiwick & Zone authoritative for the reply. & com \\
rdata & List of all responses received in a single query. & [”ns1.example.com”, ”ns2.example.com”] \\ \hline \hline
\end{tabular}
\end{table}
Table 1: List of data fields in the Farsight SIE dataset.
ground-truth data, i.e., the names extracted from available zone files. Specifically, we are comparing to .com, .net, and other gTLD (generic TLD) zone files starting from mid 2016. Additionally, from April 2017 onward, we also obtained CZDS (ICANN Centralized Zone Data Service) zone file data for all available TLDs. Moreover, we use publicly available zone file data from .se, .nu, and .ch for the coverage analysis. In total, this allows us to compare Farsight's data to more than 1.1k zones as of August 2022.
Looking at coverage over time, we find a significant overlap between the Farsight dataset and the number of actually delegated zones based on zone files, see Figure 2. Coverage averages above 95 % from 2019 onwards, with especially since May 2021, our coverage reaches over 99 %. Furthermore, we find a reduced average coverage in the beginning of 2017. A closer investigation revealed that these relate to the introduction of various vanity gTLDs with an overall small size, i.e., below 100 delegated zones in the TLD. This implies that missing coverage for just a few zones would lead to a significant reduction in aggregate coverage. Nevertheless, our analysis shows that a significant share of zones is covered in the Farsight dataset. Hence, we the Farsight dataset-especially due to the historic perspective it provides-is ideal to investigate our research questions.
Despite this high coverage, we still face the drawback of the Farsight dataset relying on real-world usage. As such, a missing record in the passive dataset does not necessarily indicate non-existence. Hence, we independently corroborate all major findings with data from TLD zone files for a specific period to check for missing glue records in the zone file, see Section 5.4.
### Domain Classification
There are many ways to cluster DNS domains into subgroups. For example, one may look only at the _Top Level Domains_ as specified by ICANN [28], or use the _Public Suffix List (PSL)_ provided by the Mozilla Foundation [34] to identify second level domains. The PSL is used by browser vendors to decide if a domain
Figure 2: Zone coverage of Farsight data and number of zones used for the evaluation. We used available zone files to determine the share of covered second level domains by Farsight’s dataset. Please note the dip in the graph from February to August 2019, where our zone file collection was limited, i.e., we only collected few zones with high coverage (February - April and July, including.com), or no data at all (May and June).
is under private or public control, e.g., to prevent websites from setting a _supercookie_ for a _domain_ such as .co.uk. Based on matching monthly samples of the ICANN TLDs and the PSLs we identify _TLDs_ as well as _2\({}^{nd}\) Level Domains_, and _Zones Below 2\({}^{nd}\) Level_, i.e., all zones _below_ 2\({}^{nd}\) Level Domains.
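A minimal sketch of this PSL-based grouping, using the third-party tldextract library as one possible PSL implementation (our choice for illustration, not necessarily the tooling used for the study; note that tldextract fetches the PSL on first use):

```python
# Sketch: PSL-based grouping of a DNS name, using tldextract (one of
# several available PSL implementations).
import tldextract

def psl_group(name: str) -> str:
    ext = tldextract.extract(name)
    if not ext.domain:        # the name is itself a public suffix
        return "TLD / public suffix"
    if not ext.subdomain:
        return "2nd level domain"
    return "below 2nd level"

print(psl_group("example.co.uk"))      # 2nd level domain
print(psl_group("www.example.co.uk"))  # below 2nd level
```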
Another way of grouping DNS domains is to use the Alexa Top-1M list [3]. Using, again, matching monthly samples, we distinguish between the Top 1K, Top 1K-10K, Top 10K-100K, and Top 100K-1M domains. We note that there are limitations in the Alexa Top List [39; 40], but compared to other toplists such as Tranco [31], the Alexa list is available throughout the measurement period.
### Misconfiguration Identification
Here, we describe how we identify whether zones can be resolved only via IPv4, only via IPv6, via IPv4 and IPv6, or not at all from the dataset.
**1. Per zone NS set identification:** We first identify all zone delegations by extracting all entries with rrtype = NS. Next, for all names used in these delegations, we find all associated IPs by extracting all A and AAAA records. We do not consider CNAMEs since they are invalid for NS entries, see RFC2181 [20].
We then iterate over all zones, i.e., names that have NS records, to create a unique zone list. In this process, we record the NS records for each bailiwick sending responses for this zone observed in the dataset, and for each NS name all AAAA and A type responses, again grouped by bailiwick from which they were seen. This also captures cases where parent and child return _different_ NS sets.
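A condensed sketch of this grouping step, assuming the monthly aggregates have already been parsed into dicts carrying the Table 1 fields (the `records` iterable is an assumed input):

```python
# Sketch of step 1: group NS records by (zone, bailiwick) and A/AAAA
# records by (name, bailiwick), preserving differing parent/child views.
from collections import defaultdict

def build_ns_view(records):
    ns_sets = defaultdict(lambda: defaultdict(set))   # zone -> bailiwick -> NS
    ns_addrs = defaultdict(lambda: defaultdict(set))  # name -> bailiwick -> IPs
    for rec in records:
        if rec["rrtype"] == "NS":
            ns_sets[rec["rrname"]][rec["bailiwick"]].update(rec["rdata"])
        elif rec["rrtype"] in ("A", "AAAA"):
            ns_addrs[rec["rrname"]][rec["bailiwick"]].update(rec["rdata"])
    return ns_sets, ns_addrs
```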
**2. Per zone DNS resolution:** We consider a zone to be resolvable via IPv4 or IPv6 if _at least one_ of the NS listed for the zone can be resolved via IPv4 or IPv6 respectively. Hence, to check which zones can be resolved using which IP protocol version we simulate the DNS resolution, starting at the root, i.e., we assume the root zone to be resolvable by IPv4 and IPv6. We then iterate over the zone set with attached NS and A/AAAA data. For each zone, except the root zone, we initialize an empty state marking the zone as not resolving.
We then attempt to resolve each zone. For that, we first check if the zone's parent has been seen.
If so, we check for each NS of the zone we are trying to resolve as listed in the parent whether its name resolves via IPv4 and/or IPv6. This is the case if:
1. The NS is outside the zone we are trying to resolve, the NS' zone has been recorded as resolving in the zone state file (via IPv4 and/or IPv6), and there are A/AAAA records with that zone's bailiwick for the NS.
2. The NS is in the zone we are resolving and there is an A/AAAA glue record for the name with the bailiwick of the zone's parent (only if an in-bailiwick NS is listed in the parent).
```
 1: zone_res ← {}
 2: ns_res ← {}
 3: prev_res_zones ← -1
 4: cur_res_zones ← 0
 5:
 6: while not (prev_res_zones == cur_res_zones) do
 7:     prev_res_zones ← cur_res_zones
 8:     cur_res_zones ← 0
 9:     for zone in input do
10:         if zone_res[zone.parent][res] then
11:             glue_resolve ← false
12:             zone_resolve ← false
13:             for NS in glue do
14:                 if NS in ns_res || (NS in zone && zone_parent has NS.ip) || (zone_res[ns_zone][res] && ns_zone has NS.ip) then
15:                     if zone_res[ns_zone][res] && ns_zone has NS.ip then
16:                         ns_res[NS] ← true
17:                     glue_resolve ← true
18:             for NS in zone do
19:                 if NS in ns_res || (NS in zone && zone has NS.ip) || (zone_res[ns_zone][res] && ns_zone has NS.ip) then
20:                     if zone_res[ns_zone][res] && ns_zone has NS.ip then
21:                         ns_res[NS] ← true
22:                     zone_resolve ← true
23:             zone_res[zone][glue_res] ← glue_resolve
24:             zone_res[zone][zone_res] ← zone_resolve
25:             if glue_resolve && zone_resolve then
26:                 zone_res[zone][res] ← true
27:                 cur_res_zones ← cur_res_zones + 1
```
**Algorithm 1** Resolve Zones from Passive Data
To ensure full resolution, we also have to check that the NS listed in the child resolve. For NS with names under the zone this is the case if the NS listed for this zone in the parent can be reached via IPv4/IPv6, see above, and they have A/AAAA records with the bailiwick of the zone itself. For out-of-bailiwick NS, this is again the case if their own zone resolves and they have A/AAAA records.
A single iteration of this process is not sufficient, as zones often rely on out-of-bailiwick NS. Hence, we continue iterating through the list of zones until the number of unresolved zones no longer decreases. For a simplified pseudo-code description, see Algorithm 1.
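The outer structure of Algorithm 1 is a plain fixed-point iteration; condensed into Python (with `parent_of` and the per-zone NS checks from above as assumed helper functions) it reads:

```python
# Condensed Python sketch of Algorithm 1's outer loop: keep sweeping the
# zone list until no additional zone becomes resolvable. `parent_of` and
# `ns_resolves` (encapsulating the glue/AAAA checks described above) are
# assumed helpers, not part of the original pseudocode.
def resolve_fixed_point(zones, parent_of, ns_resolves):
    resolvable = {".": True}          # the root is assumed resolvable
    changed = True
    while changed:
        changed = False
        for zone in zones:
            if resolvable.get(zone):
                continue
            if resolvable.get(parent_of(zone)) and ns_resolves(zone, resolvable):
                resolvable[zone] = True
                changed = True
    return resolvable
```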
### Active Measurements

To cross-validate the results we obtain from the passive dataset, we also run active measurements against a recent sample of zones. For this, we use our own measurement tool (see Section 1), which follows the DNS delegation chain of each zone, starting from the root of
the DNS tree. From there, we query all authoritative nameservers recorded in the parent on each layer of the DNS hierarchy, using IPv4 and IPv6 where possible, for the NS of that zone. Furthermore, we try to obtain any possibly available GLUE (A and AAAA) for in-bailiwick NS. For out-of-bailiwick NS, we try to resolve the NS, again starting from the root. If there is an inconsistency between parent and child, i.e., if we discover additional NS when querying the NS listed in the parent, we also perform all queries for this layer against these, noting that they were only present in the child.
To limit the amount of queries sent to each server, our implementation follows the underlying principles of QNAME minimization as described in RFC7816 [5]. By using the NS resource record type to query the parent zones we can directly infer zonecuts and store GLUE records from the additional section, if present. Note that RFC8020 [6] is still not implemented by all nameservers, thus we cannot rely on NXDOMAIN answers to infer that no further zones exist below the queried zone. Our measurement tool will retry queries using TCP on truncation and disable EDNS when it receives a FORMERR from the upstream server.
To further limit the number of queries sent, all responses, including error responses or timeouts, are cached. We limit the number of retries (4) as well as the rate (20 second wait time) at which they are sent. To further enrich the actively collected dataset, we query all authoritative nameservers of a zone for the NS, TXT, SOA and MX records of the given zone as well as the version of the used server software using the _version.bind_ in the CHAOS class. Queries and replies are recorded tied to the NS that provided them.
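The software-version lookup is a standard TXT query in the CHAOS class; a minimal dnspython sketch (the nameserver address is a placeholder from the documentation range):

```python
# Sketch: query a nameserver's software version via version.bind in the
# CHAOS class, using dnspython. 192.0.2.53 is a placeholder address.
import dns.message
import dns.query
import dns.rdataclass
import dns.rdatatype

q = dns.message.make_query("version.bind.", dns.rdatatype.TXT,
                           rdclass=dns.rdataclass.CH)
resp = dns.query.udp(q, "192.0.2.53", timeout=5)
for rrset in resp.answer:
    print(rrset)
```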
We ran these measurements between October \(10^{th}\) to \(14^{th}\) and \(22^{nd}\) to \(24^{th}\) 2022 against the Alexa Top1M from August \(15^{th}\) 2022 containing 476,242 zones, collecting responses to a total of 32M queries sent via IPv4 and 24M queries sent via IPv6. Our active measurement dataset (101GB of json data), and a tool implementing our measurement toolchain are publicly available at: [https://github.com/mutax/dns-v6-readyness](https://github.com/mutax/dns-v6-readyness)
### Ethical Considerations
The _Farsight Security Information Exchange_ (SIE) dataset [17] used in this work is collected by Farsight Inc. at globally distributed vantage points, co-located to recursive DNS resolvers. These sensors collect and aggregate DNS cache misses they encounter, i.e., outgoing queries of the recursors and the received answers. Only collecting cache misses is a conscious choice by Farsight to ensure PII is protected. The dataset also does not contain which sensors collected a specific entry. We specifically use a per-month aggregated version of the dataset, see Section 3.1. For details on the fields in the dataset, see Table 1. Data has been handled according to best practices in network measurement data handling as outlined by Allman and Paxson [2].
Before running the active measurements for validation purposes (cf. Section 3.4), we consult the Menlo report [30] as well as best measurement practices [19]. We limit our probing rate, send only well-formed DNS requests, and make use of dedicated servers which have informative rDNS names set. Additionally, we run a webserver providing additional information and contact details on the IP as well as on the rDNS name. We also focused our measurements on the Alexa Top 1M, i.e., sites for which the impact of additional requests at the scale of our measurements is not significant, while also limiting repeated requests using caching. During our active measurements, we did not receive any complaints. In summary, we conclude that this work does not raise any ethical issues.
## 4 Results
Here, we first provide an aggregate overview of the Farsight dataset. Subsequently, we present the results of our analysis of broken IPv6-delegation based on passive measurement data. Finally, we validate our passive measurement results against active measurements run from \(10^{th}\) to \(24^{th}\) of October 2022.
### Dataset Overview
Our passive dataset spans 7 years starting on January \(1^{st}\), 2015 and ending on August \(31^{st}\), 2022. During this period, the number of unique zones increased from \(126\,\mathrm{M}\) to \(368\,\mathrm{M}\). Similarly, the number of PSL \(2^{nd}\) level domains increased from \(116\,\mathrm{M}\) to \(326\,\mathrm{M}\). For a visualization see the gray line in Figure 3 (right y-axis). To highlight our findings, we present results for selected subsets of domains only. The full results for all domain subsets are shown in Appendix 0.A.
### IPv6 Resolution in DNS Over Time
In Figure 3 we show how the fraction of zones that is resolvable via IPv4-only, IPv6-only, both protocols, or fails to resolve, changes across time. We also show how the total number of zones changes (gray line). The figure shows data for all zones, the ICANN TLDs, PSL \(2^{nd}\) domains, zones deeper in the tree, Alexa Top-1K and Alexa Top-1M.
Overall, see Figure 3a, we find that \(11.4\%\) of all zones are IPv6-resolvable in January 2015. This is significantly higher than the sub \(1\%\) reported by Czyz et al. [13] in 2014. However, they only accounted for glue records, which does not consider zones with out of bailiwick NS. Over time IPv6 adoption steadily increases, with \(55.1\%\) of zones resolving via IPv6 in August 2022. A notable increase of IPv6-resolvable zones by \(17.3\%\) occurs in January 2017. Upon further investigation, we find that this increase relates to two major DNS providers--a PaaS provider and a webhoster--adding AAAA glue for their NS.
For ICANN TLDs, see Figure 3b, we find that the majority of zones are IPv6-resolvable. Throughout our observation period nearly all TLDs are IPv6-resolvable. The remaining not IPv6-resolvable zones are several vanity TLDs as well as smaller ccTLDs.
While PSL \(2^{nd}\) level domains, see Figure 3c, mirror the general trend of all zones, we find that zones deeper in the tree (Figure 3d) are generally less likely to be IPv6-resolvable. Still, we observe an upward trend. We attribute this to the
Figure 3: Per month: # of zones (gray line–right y-axis) and IPv4/IPv6 resolvability in % (left y-axis).
fact that the process of entering such domains into TLDs for \(2^{nd}\) level domains still receives oversight by NICs, e.g., regarding the RFC compliant use of at least two NS in different networks [21], while zones below \(2^{nd}\) level domains can be freely delegated by their domain owners. Also, for sub-domains, we observe three distinct spikes in Figure 3(d) which correspond to the spikes seen for all domains, recall Figure 3(a). These spikes occur when a single subtree of the DNS spawns millions of zones. These are artifacts due to specific configurations and highlight that lower layer zones may not be representative for the overall state of DNS.
Finally, comparing PSL \(2^{nd}\) level domains, see Figure 3(c), to the Alexa Top-1K domains, see Figure 3(e), we find that IPv6 adoption is significantly higher among popular domains, starting from 38.9% in 2015 and rising to 80.6% in 2021. There are two notable steps in this otherwise gradual increase, namely January 2017 and January 2018. These are due to a major webhoster and a major PaaS provider enabling IPv6 resolution (2017), and a major search engine provider common in the Alexa-Top-1K enabling IPv6 resolution (2018).
**Comparison with Active Measurements:** Evaluating zone resolvability from our active measurements, see Section 3.4, we find that 314,994 zones (66.14%) support dual stack DNS resolution, while 159,166 zones (33.42%) are only resolvable via IPv4.
A further 2066 zones (0.43%) could not be resolved during our active measurements, and 16 zones (\(\leq\)0.01%) were only resolvable via IPv6. In comparison to that, our passive measurements-see also Figure 3(f)-map closely: We find 66.18% (+0.04% difference) of zones in the Alexa Top 1M resolving via both, IPv4 and IPv6, and 32.23% (-1.19% difference) of zones only resolving via IPv4. Similarly, 1.16% (+0.73%) of zones do not resolve at all, and 0.42% (+0.42% difference) of zones only resolve via IPv6 according to our passive data. Hence, overall, we find our passive approach being closely aligned with the results of our active measurements for the latest available samples. The, in comparison, higher values for non-resolving and IPv6 only resolving zones are most likely rooted in the visibility limitations of the dataset, see Section 5.4. Nevertheless, based on the low deviation between two independent approaches at determining IPv6 resolvability of zones we have confidence in the results of our passive measurements.
### IPv6 Resolution Failure Types
Next, we take a closer look at zones that show some indication of IPv6 deployment, yet, are not IPv6-resolvable. These are zones where an NS has an AAAA record or an AAAA GLUE. To find them we consider NS entries within the zone as well as NSes for the zone in its parent. In Figure 5 we show how their absolute numbers evolve over time (gray line) as well as the failure reasons (in percentages).
We find that for all four subsets of zones shown--all zones, ICANN TLDs, Alexa Top-1K, Alexa Top-10K-100K--the most common failure case is missing resolution of NS in the parent. This occurs mostly when the NS is out-of-bailiwick
Figure 5: Per month: # of zones not IPv6-resolvable with AAAA or GLUE for NS (gray line–right y-axis) and causes for IPv6 resolution failure in % (left y-axis).
and _does_ have AAAA records, but the NS's zone itself is not IPv6-resolvable. Furthermore, there is a substantial number of zones per category--especially in the Alexa Top-1K--where the NS in the parent lacks AAAA while the NS listed in the zone has AAAA records, commonly due to missing GLUE. We also observe the inverse scenario, i.e., GLUE is present but no AAAA record exist for the NS within the zone itself. Both cases can also occur if NS sets differ between the parent and its child [42].
We see a major change around January 2017, i.e., a sharp increase in zones that are IPv6-resolvable, which is also visible in Figure 3: For all zones as well as for the Alexa Top 10K-100K, we observe that several million zones that had not been resolving via IPv6 since the start of the dataset, despite having NSes with AAAA records, now are IPv6-resolvable. The reason is that a major provider added missing glue records. Interestingly, we do not see this in the Alexa Top 1K.
In the Alexa Top 1K, and to a lesser degree in the Alexa Top 10K-100K, we observe a spike of zones that list AAAA records for their NS but are not IPv6-resolvable in Oct. 2016. This is the PaaS provider mentioned before, first rolling out AAAA records for their NS, and then three months later also adding IPv6 GLUE. Operationally, this approach makes sense, as they can first test the impact of handling IPv6 DNS queries in general. Moreover, reverting changes in their own zones is easier than reverting changes in the TLD zones-here the GLUE entries. Again, the major webhoster is less common among the _very_ popular domains, which is why its effect can be seen in Figures 5(a) and 5(d), but not in Figure 5(b). Also, this operator had AAAA records in place since the beginning of our dataset, as seen by the plateau in Figure 5(d). These observations have been cross-confirmed by inspecting copies of zonefiles for the corresponding TLDs and time-periods.
### Centralization and IPv6 Readiness
Finally, we focus on the nameservers hosting the most non-IPv6-resolvable zones. We first identify the top NS sets in terms of the number of hosted zones, aggregating NS names to their PSL \(2^{nd}\)-level domain and grouping known operators' NS that are spread across multiple well-patterned zones. Then, we compute a CDF over the number of zones per NS set for each time bin. Figure 7 shows how this CDF changed across time and highlights the impact of centralization within the DNS providers. Over 97.5% of the non-IPv6-resolvable zones are hosted by the Top 10% of NS sets.
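For illustration, the aggregation and CDF computation can be sketched as follows; `to_psl_2ld` is a crude stand-in for proper Public Suffix List handling (e.g., via the `publicsuffix2` package), and the input pairs are placeholders rather than measurement data.

```python
from collections import defaultdict

def to_psl_2ld(ns_name: str) -> str:
    # Crude placeholder: keep the last two labels of the NS name; a real
    # implementation must consult the Public Suffix List.
    return ".".join(ns_name.rstrip(".").split(".")[-2:])

def ns_set_cdf(zone_ns_pairs):
    # Group zones by aggregated NS set, then accumulate a CDF over the
    # per-set zone counts, largest sets first.
    zones_per_set = defaultdict(set)
    for zone, ns in zone_ns_pairs:
        zones_per_set[to_psl_2ld(ns)].add(zone)
    counts = sorted((len(z) for z in zones_per_set.values()), reverse=True)
    total = sum(counts)
    cdf, acc = [], 0
    for c in counts:
        acc += c
        cdf.append(acc / total)
    return cdf  # cdf[k] = share of zones hosted by the top k+1 NS sets

pairs = [("example.org", "ns1.bigdns.net"), ("example.com", "ns2.bigdns.net"),
         ("foo.net", "ns.small-host.example")]
print(ns_set_cdf(pairs))  # -> [0.666..., 1.0]
```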
Again, we see the impact of a change by a major webhoster in January 2017--it is the top NS set among all zones (Figure 7(a)). Similarly, the PaaS provider is pronounced among the Alexa Top-1K, i.e., part of the Top 10 of NS sets (Figure 7(c)) and the top NS set for the Alexa Top-10K-100K (Figure 7(d)). Finally, the major search engine operator's impact can especially be seen among TLDs (Figure 7(b)) and the Alexa Top-1K (Figure 7(c)), where--in both cases--this operator is the top NS set for non IPv6-resolvable zones.
Figure 7: Per month: # of zones not IPv6-resolvable (gray line–right y-axis) and distribution of zones over NS sets in % (left y-axis).
### Resolvability and Responsiveness of NS in Active Measurements
During our active measurements, we also had the opportunity to validate whether NS records listed in zones actually replied to DNS requests. During our evaluation of the Alexa Top 1M, we discovered a total of 176,207 NS records, of which 212 had invalid A or AAAA records associated, as for example :: as an AAAA record. Of the remaining 175,995 records, 116,504 needed glue, i.e., they were in-bailiwick NS for their own zones. Among these, 19,310 NS were dual-stack, while 94,192 only had A records associated with them, and a further 108 NS only had associated AAAA records. Furthermore, 85,213 (90.47%) of A-only NS needing glue had correct glue set. For dual-stack configured NS, 14,072 (72.87%) have complete (A and AAAA) glue. A further 3,932 (20.36%) NS only have A glue records, while 24 (0.12%) NS only have AAAA glue, despite generally having a dual-stack DNS configuration. Finally, of the 108 NS records only having AAAA records associated, 70 (64.81%) have correctly set AAAA glue.
Moving on to the reachability of these NS, we find that of the 169,547 NS that have an A record, 164,255 (96.88%) actually respond to queries. For IPv6, these values are slightly worse, with 30,193 of 32,285 NS (93.52%) responding to queries via IPv6. This highlights a potential accuracy gap of 3-6% for research work estimating DNS resolvability from passive data. Notably, this gap is larger for IPv6.
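A minimal sketch of such a per-NS check, using the `dnspython` package, could look as follows; it only tests AAAA presence and UDP reachability from a single (IPv6-connected) vantage point, and it omits the GLUE checks in the parent zone described earlier.

```python
import dns.message
import dns.query
import dns.resolver

def check_zone_ipv6(zone: str, timeout: float = 3.0) -> dict:
    """For each NS of `zone`: does it have AAAA, and does that address answer?"""
    results = {}
    for ns in dns.resolver.resolve(zone, "NS"):
        ns_name = ns.target.to_text()
        try:
            addrs = [r.address for r in dns.resolver.resolve(ns_name, "AAAA")]
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            results[ns_name] = "no AAAA"
            continue
        query = dns.message.make_query(zone, "SOA")
        status = "AAAA, no response"
        for addr in addrs:
            try:
                dns.query.udp(query, addr, timeout=timeout)
                status = "AAAA, responds"
                break
            except Exception:  # timeouts, unreachable networks, etc.
                pass
        results[ns_name] = status
    return results

print(check_zone_ipv6("example.com"))
```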
## 5 Discussion
In this section, we first state our key findings and then discuss their implications.
### The Impact of Centralization
Centralization is one of the big changes in the Internet over the last decade. This trend ranges from topology flattening [4, 7] to the majority of content being served by hypergiants [8] and--as we show--also applies to the DNS. An increasing number of zones are operated by a decreasing number of organizations. As such, an outage at one big DNS provider [44]--or missing support for IPv6--can disrupt name resolution for a very large part of the Internet as we highlight in Section 4. In fact, out-of-bailiwick NS not being resolvable via IPv6 is the most common misconfiguration in our study, often triggered by missing GLUE in a single zone. Given that _ten_ operators could enable IPv6 DNS resolution for 24.8% of not yet IPv6 resolving zones, we claim that large DNS providers have a huge responsibility for making the Internet IPv6 ready.
### IPv6 DNS Resolution and the Web
In general, as we travel down the delegation chain we find more misconfigurations and a smaller fraction of IPv6-resolvable zones. Given that common web assets--JavaScript, style sheets, or images--are often served from FQDNs further down the DNS hierarchy, we conjecture that this may have another huge, yet still hidden, impact on the IPv6 readiness of the web. We encourage operators to be mindful of this issue and will study its effect in future work.
### Implications for Future Research
Our findings demonstrate that it is not sufficient to test for the presence of AAAA records to assess the IPv6 readiness of a DNS zone. Instead, measurements have to assess whether the zones are IPv6-resolvable. The same applies to email setups and websites.
Furthermore, given the centralization we observe in the DNS, network measurements of IPv6 adoption should consider and quantify the impact of individual operators. More specifically, researchers should distinguish between effects caused by a small number of giants vs. the behavior of the Internet at large. Artifacts that can occur temporarily should be recognized and then excluded.
### Limitations
Since our dataset relies on DNS cache misses, we are missing domains that are not requested or not captured by the Farsight monitors in a given month. Moreover, our use of monthly aggregates may occlude short-term misconfigurations. To address this, we support major findings on misconfigurations with additional ground-truth data from authoritative TLD zone files.
Similarly, we use the Alexa List with its known limitations [39, 40]. Thus, we cluster the Alexa list into different rank tiers, which reduces fluctuations in the higher tiers. Furthermore, we only assess zones' configuration states, and not actual resolution, i.e., "lame delegation" for other reasons is out of scope.
Furthermore, we cannot make statements on whether the zones we measure _actually_ resolve, e.g., whether there is an authoritative DNS server listening on a configured IP address and returning correct results. Still, we have certainty that zones we measure as resolvable are at least sufficiently configured for resolution. Similarly, we cannot assess the impact of observed DNS issues on other protocols, e.g., HTTPS. To further address this limitation of our passive data source, we conducted active measurements, which validated the observations from our passive results and added further insights on the actual reachability of authoritative DNS servers for zones.
Naturally, our active measurements also have several limitations that have to be recorded. First, we conducted our measurements from a single vantage point. Given load balancing in CDNs via DNS [43], this may have led to a vantage-point-specific perspective. Nevertheless, we argue that misconfigurations [14] are likely to be consistent across an operator, i.e., the returned A or AAAA records may change, but not the issue of, e.g., missing GLUE. Furthermore, DNS infrastructure tends to be less dynamic than A and AAAA records.
Second, our measurements were limited to the Alexa Top 1M and associated domains. We consciously made this choice instead of, e.g., running active measurements on _all_ zones in the Farsight dataset, to reduce our impact on the Internet ecosystem.
In summary, our study provides an important first perspective on IPv6-only resolvability. We suggest complementing our study with active measurements of IPv6-only DNS resolution and, as future work, studying the impact of broken IPv6-delegation on the IPv6 readiness of the web due to asset dependencies.
## 6 Related work
Our related work broadly clusters into two segments: _i)_ Studies on IPv6 adoption and readiness, and _ii)_ Studies about DNS and DNS misconfigurations.
### IPv6 Adoption and Readiness
With the exhaustion of the IPv4 address space [38], IPv6 adoption has been a frequent topic of study. In 2014, Czyz et al. [13] conducted a primer study on IPv6 adoption, taking a multi-perspective approach that also covered DNS. Our measurements shed light on the time after their measurements which concluded in 2014. Furthermore, they estimate IPv6 adoption in DNS by only surveying AAAA glue records in net. and com., while we consider the full resolution path. Work by Foremski et al. [24] and Plonka & Berger [37] investigate IPv6 adoption at the edge, which is orthogonal to our work. In recent years, various researchers took country and domain specific perspectives on IPv6 adoption, e.g., [12, 25, 33].
### DNS and DNS Misconfiguration Studies
Since DNS is a core component of the Internet, it has been studied regularly over the past decades, including studies regarding the adoption of new protocol features, e.g., [9, 10, 11, 15, 16, 43]. Such studies use various active datasets, e.g., OpenINTEL [36], as well as passive datasets, e.g., the Farsight SIE dataset which we rely on, to, e.g., study operational aspects of the DNS [23]. Focusing more specifically on DNS (mis)configuration, Sommese et al. [42] study inconsistencies in parent and child NS sets, and Akiwate et al. [1] work on lame delegation. However, contrary to our work, the latter two either do not consider the IP part of DNS delegation (Sommese et al.) or explicitly focus on IPv4 (Akiwate et al.). More recently, Izhikevich et al. presented ZDNS, a tool for large-scale studies of the DNS ecosystem in the Internet [29]. Unfortunately, ZDNS is tailored towards IPv4 and does not support querying authoritative nameservers over IPv6. Therefore, we cannot make use of ZDNS in our study. Instead, we perform active DNS measurements with our own implementation of a DNS resolution methodology, which implements IPv6 resolution.
### Summary
We expand on earlier contributions regarding IPv6 adoption. We provide a more recent perspective on the IPv6 DNS ecosystem and take a more complete approach to assessing IPv6 readiness in an IPv6-only scenario. This focus on IPv6 is also our novelty in the context of earlier work on DNS measurements and DNS misconfigurations, which did not focus on how IPv6 affects DNS resolvability. Additionally, our active measurements for validating our passive measurement results also highlight that the presence of AAAA records does not necessarily imply IPv6 resolvability. Instead, to measure IPv6 resolvability, the resolution state of provided IPv6 resources has to be validated.
## 7 Conclusion
In this paper, we present a passive DNS measurement study on root causes for broken IPv6-delegation in an IPv6-only setting. While over time we see an increasing number of zones resolvable via IPv4 and IPv6, in August 2022 still 44.9% are not resolvable via IPv6. We identify non-resolvable NS records of the zone or its parent as the most common failure scenario. Our recommendations to operators include explicitly monitoring IPv6 across the entire delegation chain.
Additionally, we conducted a dedicated validation of our results using active measurements. This validation broadly confirmed our results from the passive measurements and further highlighted the importance of not only relying on the presence of specific records, as nameservers for which IPv6 addresses are listed in the DNS may not actually be responsive.
We plan to provide an open-source implementation of our measurement methodology along with the paper. Furthermore, we will provide a reduced implementation of our measurement toolchain, which will enable operators to explicitly check a given zone or FQDN for IPv6 resolvability. Similarly, we will provide the results of our active measurements as open data.
For future work, we suggest systematically expanding our active measurement campaign to assess resolvability, e.g., for websites including all web assets. Using active measurements, one can explicitly resolve a hostname and run active checks on the delegation chain, validating the responses of all authoritative nameservers and finding inconsistencies not only between a zone and its parent but also within the NS set. We conjecture that--especially given the widespread use of subdomains for web assets--the reduced IPv6 resolvability we observe may have a significant impact on the IPv6 readiness of the web, i.e., a website using assets on domains that do not resolve via IPv6 is not IPv6-ready.
## Acknowledgments
We thank Farsight Security, Inc. (now DomainTools) for providing access to the Farsight Security Information Exchange's passive DNS data feed. Without this data, the project would not have been possible. The authors express their gratitude to the anonymous reviewers for their thoughtful and encouraging input during the reviewing process. This work was partially funded by the German Federal Ministry of Education and Research under the projects PRIMEnet, grant 16KIS1370, and 6G-RIC, grant 16KISK027. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of Farsight Security, Inc., DomainTools, the German Federal Ministry of Education and Research, or the authors' host institutions and further affiliations. |
2310.11182 | On the Effectiveness of Creating Conversational Agent Personalities
Through Prompting | In this work, we report on the effectiveness of our efforts to tailor the
personality and conversational style of a conversational agent based on GPT-3.5
and GPT-4 through prompts. We use three personality dimensions with two levels
each to create eight conversational agents archetypes. Ten conversations were
collected per chatbot, of ten exchanges each, generating 1600 exchanges across
GPT-3.5 and GPT-4. Using Linguistic Inquiry and Word Count (LIWC) analysis, we
compared the eight agents on language elements including clout, authenticity,
and emotion. Four language cues were significantly distinguishing in GPT-3.5,
while twelve were distinguishing in GPT-4. With thirteen out of a total
nineteen cues in LIWC appearing as significantly distinguishing, our results
suggest possible novel prompting approaches may be needed to better suit the
creation and evaluation of persistent conversational agent personalities or
language styles. | Heng Gu, Chadha Degachi, Uğur Genç, Senthil Chandrasegaran, Himanshu Verma | 2023-10-17T11:59:39Z | http://arxiv.org/abs/2310.11182v1 | # On the Effectiveness of Creating Conversational Agent Personalities Through Prompting
###### Abstract.
In this work, we report on the effectiveness of our efforts to tailor the personality and conversational style of a conversational agent based on GPT-3.5 and GPT-4 through prompts. We use three personality dimensions with two levels each to create eight conversational agents archetypes. Ten conversations were collected per chatbot, of ten exchanges each, generating 1600 exchanges across GPT-3.5 and GPT-4. Using Linguistic Inquiry and Word Count (LIWC) analysis, we compared the eight agents on language elements including clout, authenticity, and emotion. Four language cues were significantly distinguishing in GPT-3.5, while twelve were distinguishing in GPT-4. With thirteen out of a total nineteen cues in LIWC appearing as significantly distinguishing, our results suggest possible novel prompting approaches may be needed to better suit the creation and evaluation of persistent conversational agent personalities or language styles.
LLMs, Conversational Agents, Sentiment Analysis, Personality
## 1. Introduction
Tailoring conversational agents (CAs) to express personality has long been a subject of interest for researchers. The perceived friendliness and (in)formality of a CA affect the human factors of user-chatbot interaction, including dimensions such as trust (Sundundar et al., 2016; Gu et al., 2017), engagement (Gundundar et al., 2018), and acceptance (Gundar et al., 2019). CAs with personality create a more consistent user experience, and can even improve the overall user experience (Sundar et al., 2019; Gu et al., 2019). Some researchers have even worked to create complementary alignments of CA-user personalities for improved user experience (Gundar et al., 2018; Gundar et al., 2018).
The link between personality and language style is well established. For example, introverted speakers tend more towards formal language, with fewer exaggerations and more hedging, or tentative, phrasing (Gundar et al., 2018; Gu et al., 2017; Gu et al., 2017; Gu et al., 2017). In prior work, such language markers have been used in a rule-based system to generate text expressing associated personalities with some success (Gundar et al., 2018). Language use has also been linked to outcomes of negotiation and persuasion: for instance, language style in solicitations for charitable giving has been shown to be a predictor of the donation or assistance given (Gu et al., 2017). It thus follows that personality expressed through language in CAs can influence the outcome of an end-user's discussion with the CA.
Recent advances in text generation using Large Language Models (LLMs) present a unique opportunity for non-rule-based, heuristic approaches to expressing personality through language. Yet, within this field, a conspicuous gap emerges when it comes to the capacity of LLMs to generate language styles corresponding to personality archetypes. Though the use of LLMs to power CAs is on the rise (Gu et al., 2017; Gu et al., 2017), only one study (Gu et al., 2017) begins to investigate the possibilities of personality generation in GPT-3.5, and little work quantifies LLMs' fidelity to, or linguistic variances influenced by, personality prompts over the length of a conversation.
There have been studies in persuasive technologies and language design, in fields such as public health communication and marketing (Gundar et al., 2018; Gundar et al., 2018). However, these predominantly concentrate on direct user feedback or discernible shifts in persuasion outcomes to evaluate the effectiveness of linguistic element tailoring, rather than on quantifying the actual variance and consistency of their chosen language cues.
In this work, we report on an approach to express personality through language in a goal-oriented CA. Specifically, we investigate the expression of personality communicated through language in the context of solicitation for charitable giving. We leverage prior research on charitable giving (Stein
As mentioned, one study (Zhou et al., 2017) so far has taken advantage of the rise of LLMs across the natural language generation space to investigate whether these models could be prompted to create agents with personality. In their work, Jiang et al. (Jiang et al., 2017) used Big-Five personality dimensions, as well as gender, to prompt a GPT-3.5 agent to create an 800-word childhood story analysed using LIWC language elements. Though the authors concluded that the personas they created were significantly different in the LIWC language categories they exhibited, they did not investigate whether this pattern would hold in naturalistic, longer human-agent interaction settings such as conversation. The authors of [7] use a somewhat similar methodology to evaluate the PaLM models (Luo et al., 2018) in their ability to embody personality, using prompt engineering to create personas and asking the CA to answer psychometric personality tests. They similarly concluded CAs to be capable of consistent personality embodiment.
## 3. Method
We prompt-engineered a series of charity solicitation CAs employing the GPT-3.5 and GPT-4 models; the two models, with similar architectures but different sizes and parameters, were chosen for comparison. While prior studies focus primarily on aptitudes across various human expertise categories, such as performance on standardized exams (Beng et al., 2015; Chen et al., 2015; Li et al., 2017; Li et al., 2017), we believe it novel to compare the CA outputs through the lens of linguistic outputs. These CAs, representing a fabricated charity organization named the "Wildlife Bridge Foundation," were designed to simulate a solicitation event with potential donors.
Building on prior research on effective charity solicitation, we integrated variations in the CA personality and solicitation strategy across three dimensions: _Attitude_, i.e., optimistic vs pessimistic (Zhou et al., 2017), _Authority_, i.e., authoritative vs submissive (Beng et al., 2015; Chen et al., 2015), and _Reasoning_, i.e., analytical vs affective (Zhou et al., 2017), reflected in the language used to persuade the potential donor. These three dimensions--each with two polar opposite attributes--combinatorially resulted in eight (2\({}^{3}\)) distinct CA personalities. We powered one set of all eight CA personalities with the GPT-3.5 model and another set with the GPT-4 model.
We collected the ten most, and least, popular petitions from Avaaz and Change.org, two popular online petition platforms, to analyse with LIWC, identifying the most frequently used language categories and psychological processes within. These characteristics included _clout_, _tone_, _authenticity_, _emotion_, _cognition_, and others. Once identified, we grouped these characteristics along the three personality scales derived from the literature. For example, authority was associated with the LIWC categories of clout and authenticity, and an authoritative text may have used more words and phrases such as "must", "have to", and "should", in contrast to a submissive text which used words and phrases such as "if it's okay with you", "maybe", and "if you don't mind". Details of each personality dimension and associated LIWC categories are described in Sec. 4.1.
### Prompt Engineering
We first designed a core prompt with modifiable slots for varying solicitor traits. Grounded in advanced prompt design methodologies (Zhou et al., 2017), this foundational prompt encompasses four principal elements:
1. **Task:** Act as a charity solicitor for...
2. **Goal:** Get speaker to donate...
3. **Rules:** Do not provide URLs, keep response short...
4. **Persona:** The solicitor's name is [NAME], personality: [optimistic/pessimistic], and [Authoritativeness/Submissiveness]. Only speak as Alex from now on. Use [Emotion/Logic-based reasoning] to convince the donor.
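For illustration, the eight archetype prompts can be assembled from this slotted template as sketched below; the wording of the core prompt here paraphrases the four elements above and is not the verbatim text used in the study.

```python
from itertools import product

CORE = (
    "Task: Act as a charity solicitor for the Wildlife Bridge Foundation. "
    "Goal: Get the speaker to donate. "
    "Rules: Do not provide URLs; keep responses short. "
    "Persona: The solicitor's name is {name}, personality: {attitude} and "
    "{authority}. Only speak as {name} from now on. "
    "Use {reasoning}-based reasoning to convince the donor."
)

ATTITUDE = ["optimistic", "pessimistic"]
AUTHORITY = ["authoritative", "submissive"]
REASONING = ["emotion", "logic"]

prompts = {
    f"{att}-{auth}-{rea}": CORE.format(name="Alex", attitude=att,
                                       authority=auth, reasoning=rea)
    for att, auth, rea in product(ATTITUDE, AUTHORITY, REASONING)
}
print(len(prompts))  # 8 archetypes (2^3)
```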
### Benchmark Test
To assess the performance of the 16 uniquely personalized CAs (8 GPT-3.5 and 8 GPT-4), we devised a standardized script composed of the same 10 dialogue excerpts, which were queried to each CA over 10 sessions, resulting in 160 unique conversations and 1,600 generated responses, 100 per CA.
This set of interactions was designed based on internal pilot tests simulating potential donor conversations. We distilled the most pertinent questions and responses to form this script.
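The benchmarking loop itself reduces to three nested iterations, as in the sketch below; `query_model` is a hypothetical stand-in for the chat-completion API call, and the script lines are placeholders rather than the authors' curated excerpts.

```python
SCRIPT = [f"donor line {i}" for i in range(10)]  # placeholder dialogue script

def query_model(model: str, system_prompt: str, history: list, user_msg: str) -> str:
    # Hypothetical helper standing in for a GPT-3.5/GPT-4 chat API call.
    return f"({model}) persona-conditioned reply to: {user_msg}"

def run_benchmark(models, prompts, sessions=10):
    responses = []
    for model in models:                                  # e.g., GPT-3.5 and GPT-4
        for archetype, system_prompt in prompts.items():  # 8 personas per model
            for _ in range(sessions):
                history = []
                for line in SCRIPT:
                    reply = query_model(model, system_prompt, history, line)
                    history += [("user", line), ("assistant", reply)]
                    responses.append((model, archetype, line, reply))
    return responses  # 2 models x 8 agents x 10 sessions x 10 lines = 1600 rows

rows = run_benchmark(["gpt-3.5-turbo", "gpt-4"],
                     {"optimistic-authoritative-logic": "..."})  # full set has 8
```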
## 4. Analysis
Following synthesis of this benchmarking sample dataset of generated responses, our primary objective was to evaluate their consistency relative to our prompts indicating the desired output qualities. We utilized LIWC to assess the cognitive and emotional attributes of the generated text, measured along dictionary categories that we deemed relevant to the personality dimensions.
### Measures
LIWC's psycholinguistic dictionary categories (Beng et al., 2015) provided us with the means to gauge the linguistic attributes of each agent's outputs.
Specifically, we focused on LIWC categories known to reflect our three chosen traits:
**Attitude:** For this trait, which could take the attribute of either _optimistic_ or _pessimistic_, we looked at linguistic elements pointing towards sentiment. In the current version of LIWC, _optimism_ is a subcategory of Affect (Zhou et al., 2017). This parallels the _Tone_ categories (_tone_pos_, _tone_neg_), which are indicative of the sentiment of the text as opposed to embodied emotions (Beng et al., 2015). Positive emotion (_Emo_Pos_) and tentativeness (_tent_) have also been used as indicators of optimism (Li et al., 2017). Additionally, based on the psychological correlates outlined by the LIWC guidelines, we also monitored _Future Tense_, representing future- and goal-oriented words, and markers of _Anxiety_ indicating future-oriented emotions (e.g., words like worried, fearful, nervous) (Beng et al., 2015; Li et al., 2017).
**Authority:** For this trait, the attributes of which could be _authoritative_ or _submissive_, we looked at linguistic elements pointing towards status, dominance, and social hierarchy. Clout serves as an aggregate category representing social status and power dynamics, with subcategories of personal pronouns (_ppron_) indicative of these dynamics associated with authority (Gutner et al., 2017; Gutner et al., 2018). Additionally, according to the psychometric guidelines, _certitude_ and absolutist language (_All-none_) can be indicators of authoritative language, and the _Assent_ category can be an indicator of submissiveness (Bordes et al., 2019).
**Reasoning:** In the context of solicitation traits, _analytical_ specifically refers to a solicitation strategy of providing statistical evidence, and _affective_ refers to providing individualized details (Krause et al., 2019; Krause et al., 2019). For this trait we looked at linguistic elements pointing to thinking styles and cognitive mechanism words (_Cognition_, _Analytic_, _Quantity_, _Numbers_), as well as the presence of emotional words (_Emotion_, _Affect_, _Authentic_) (Bordes et al., 2019; Gutner et al., 2018).
### Procedure
Given the selected traits, interaction effects between them are anticipated. Our objective was to discern the part-worth effect attributable to the prompted solicitor parameters of Attitude, Reasoning, and Authority. To accomplish this, we employed conjoint analysis, aiming to deconstruct the individual impact of these traits on the output linguistic quality. The conceived equation is a linear regression model which relates the LIWC category values to the factors of the Attitude, Authority, and Reasoning prompt components, along with their interaction effects.
The equation representing this relationship is:
\[\begin{split} M_{LIWC}=&\ \beta_{0}+\beta_{1}(\text{Attitude})+\beta_{2}(\text{Authority})\\ &+\beta_{3}(\text{Reasoning})+\beta_{4}(\text{Attitude}\times\text{Authority})\\ &+\beta_{5}(\text{Attitude}\times\text{Reasoning})\\ &+\beta_{6}(\text{Reasoning}\times\text{Authority})\\ &+\beta_{7}(\text{Attitude}\times\text{Authority}\times\text{Reasoning})+\epsilon\end{split} \tag{1}\]
Where:
* \(M_{LIWC}\) is the measure for a given LIWC category (e.g., Tone)
* \(\beta_{0}\) is the baseline effect present in the core prompt and LLM.
* \(\beta_{1},\beta_{2},\beta_{3}\) are coefficients representing the influence of the factors _Attitude_, _Authority_, and _Reasoning_, respectively, on the linguistic quality.
* \(\beta_{4},\beta_{5},\beta_{6},\beta_{7}\) represent the interaction effects between the factors (i.e., \(\beta_{4}\) represents the interaction between Attitude and Authority).
* \(\epsilon\) represents the error term.
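In Python, this model can be fit directly with `statsmodels`, whose formula `A * B * C` expands to exactly the main effects and interaction terms of Eq. (1); the data frame below is synthetic filler standing in for the per-response LIWC scores.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 800  # e.g., one row per generated response for one model
df = pd.DataFrame({
    "Attitude":  rng.integers(0, 2, n),   # 0 = pessimistic, 1 = optimistic
    "Authority": rng.integers(0, 2, n),   # 0 = submissive, 1 = authoritative
    "Reasoning": rng.integers(0, 2, n),   # 0 = affective, 1 = analytical
})
# Fake LIWC outcome with a main effect of Attitude, for demonstration only.
df["Tone"] = 50 + 10 * df["Attitude"] + rng.normal(0, 5, n)

fit = smf.ols("Tone ~ Attitude * Authority * Reasoning", data=df).fit()
print(fit.params)   # beta_0 ... beta_7 of Eq. (1)
print(fit.pvalues)  # which part-worth effects are significant
```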
## 5. Results
Table 1 shows the LIWC categories used per personality dimension. A category is correlated with a dimension if it is significantly different on the low and high ends of that personality scale, e.g., Tone is significantly correlated with Attitude if it is sufficiently different in optimistic and pessimistic chatbots. If a coefficient is positive, it implies that as the factor increases, the linguistic quality (in that particular category) increases.
## 6. Discussion
Our study evaluated the capacity of LLMs to manifest engineered prompted traits. The results offer several key insights:
### GPT Model Variation
GPT-4 appears to be more sensitive to the prompted traits across a broader range of LIWC categories compared to GPT-3.5. The wider range of significant categories in GPT-4 might suggest an enhanced capability to capture and reflect certain nuances, but it does not automatically imply superior overall performance in expressing the prompted traits. The explanation may also lie, to some extent, with the nature of LIWC dictionary categories.
Consider the following text generated by a CA powered by GPT-3.5 with the personality traits of optimistic-authoritative-analytical "As _someone_ who deeply _cares_ about the welfare of animals and the environment, I _found_ my _purpose_ in _helping_ organizations like the Wildlife Bridge Foundation." Compare this with another CA with the same GPT-3.5 model, but with the personality traits of optimistic-authoritative-affective: "As _someone_ who deeply _cares_ about the _well-being_ of animals, it breaks my heart to see their habitats _destroyed_ by urban expansion."
In both examples, the italicized terms correspond to not just LIWC categories corresponding to analytical and affective personalities, but also optimistic and pessimistic personalities. This is because LIWC dictionary categories are not mutually exclusive: the same word can appear in multiple categories. A more thorough evaluation of this approach would also need to incorporate a measure of overlapping words across relevant LIWC categories from the generated text.
### Conversation Benchmarking
Our demonstrated approach of iterative simulated conversations has potential to serve as a method for assessing the variability in LLM output within defined prompted traits. While this approach would need a more rigorous validation of the representation of these personalities, the use of expert-curated psycholinguistic dictionary categories--such as those in LIWC--could provide CA designers with a standard set of linguistic measures to use for specific applications.
### Trait Variability
Prompted traits have very different impacts on linguistic quality. _Attitude_ significantly influences linguistic qualities across GPT models. _Authority_ has a stronger impact on the GPT-4 model, while _Reasoning_ shows the least effect across models. Identifying which traits have stronger impacts allows developers to better tailor LLM outputs.
Furthermore, we wish to highlight the nuanced relationship between prompt engineering and resultant LLM outputs, especially when a dearth of good methods for evaluating output variability makes it doubly difficult to quantify this relationship. By demonstrating one method, we hope to bring attention to the remaining scope for refining evaluation and monitoring techniques for quality and consistency of generated content.
### Implications for personality-based CAs.
Given these results, we formulate the following design and development recommendations for effective and consistent personality-based CAs:
1. **Prompt Design:** We hypothesize that introducing more synonyms to a given prompt when creating personality archetypes may lend the personality element more weight and create more long-standing effects. For example, an optimistic agent may be described as optimistic, positive, and hopeful in a prompt instead of simply optimistic.
2. **Prompt Programming:** We suggest periodic re-injection of the persona prompt components during the conversation could enforce model adherence to the intended persona, potentially mitigating drift.
3. **LIWC-Annotated Prompt Programming:** By monitoring in real time the LIWC categories present in the conversation output, any significant deviation or drift from the desired personality could be detected. This can then create a feedback loop to nudge the model back to the desired personality traits, ensuring consistent alignment with the initial prompt; a minimal sketch of such a loop follows this list.
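As referenced in point 3 above, the feedback loop could be sketched as follows; `liwc_scores` is a hypothetical stand-in for scoring a reply against the LIWC dictionaries, and the target profile and threshold are illustrative values.

```python
def liwc_scores(text: str) -> dict:
    # Hypothetical scorer; a real implementation would call LIWC-22 or a
    # reimplementation of its dictionary matching.
    return {"Clout": 50.0, "Tone": 60.0}

def drifted_categories(reply: str, targets: dict, threshold: float = 15.0):
    scores = liwc_scores(reply)
    return [cat for cat, target in targets.items()
            if abs(scores.get(cat, 0.0) - target) > threshold]

# Desired profile for an authoritative-optimistic persona (illustrative).
targets = {"Clout": 70.0, "Tone": 75.0}
if drifted_categories("some model reply", targets):
    # Re-inject the persona prompt before the next exchange to nudge the
    # model back toward the intended personality.
    pass
```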
Future research could also explore the role of user input in conversation on steering agent personality over time, the generalizability of our domain-driven personality crafting approach, and expanding the set of LLM models evaluated in this work.
## 7. Conclusion
We demonstrate in this paper the feasibility of generating CA personalities through LLM prompt engineering. We craft personality archetypes from literature and charity solicitation domain knowledge, evaluating the effect of personality prompts in LLM CAs on language style using LIWC computational text analysis. We show that the performance of LLM CAs in this area is model-dependent and personality-dimension-dependent. Based on our results, we present design and development recommendations for fellow researchers, including recommendations on prompt engineering and future research directions.
|
2310.18818 | A thousand fermions in a 3D harmonic trap via Monte Carlo simulations | By use of a special wave function derived from similarly transformed
propagators, this work shows that the energy of a thousand spin-balanced
fermions in a three-dimensional harmonic potential can be accurately computed
using the Monte Carlo method. | Siu A. Chin | 2023-10-28T20:49:09Z | http://arxiv.org/abs/2310.18818v1 | # A thousand fermions in a 3D harmonic trap via Monte Carlo simulations
###### Abstract
By use of a special wave function derived from similarly transformed propagators, this work shows that the energy of a thousand spin-balanced fermions in a three-dimensional harmonic potential can be accurately computed using the Monte Carlo method.
Introduction
It is well known that the sign problem has plagued the Monte Carlo simulations of many-fermion systems in Path Integral [1], Diffusion [2] and Ground State Path Integral [3; 4; 5] Monte Carlo methods. Even when there is no sign problem, as in the Variational Monte Carlo (VMC) method with a trial wave function of the form
\[\Psi=\mathrm{det}_{\uparrow}|\phi_{k}(\mathbf{r}_{i})|\mathrm{det}_{\downarrow}|\phi_{k}(\mathbf{r}_{i})|\prod_{i<j}f(\mathbf{r}_{ij}), \tag{1.1}\]
where \(k=\{n\ell m\}\) specifies a set of single particle states and \(f(\mathbf{r}_{ij})\) is the Jastrow correlation function, it remains technically burdensome to sample such a wave function for a large number of fermions. For more than, say, 40 fermions, it is very tedious to specify the set of lowest energy single particle states, or to evaluate them in the Slater determinant \(\mathrm{det}|\phi_{k}(\mathbf{r}_{i})|\). Except for atoms, most finite fermion systems use the harmonic oscillator basis states. In this work, we show that there is a much simpler wave function for describing any number of non-interacting fermions in a harmonic oscillator, with _no need of knowing the analytical form of its single particle wave functions_. As shown in Sect.IV, the energy of up to a thousand spin-balanced fermions can be computed using this special wave function (1.1), suggesting the feasibility of doing very large scale VMC, or even Ground State Path Integral Monte Carlo [3; 4; 5] calculations on 3D fermion systems.
The discovery of this wave function (1.1) is rather circuitous. In order to account for the charge-density-wave (CDW) or Wigner crystal (WC) density distributions observed in earlier calculations [6; 7] on the fractional quantum Hall effect, Maki and Zotos [8] postulated an ansatz wave function of the form
\[\Psi(\mathbf{x}_{1},\mathbf{x}_{2}\ldots\mathbf{x}_{N})\propto\mathrm{det}\Big{|}\mathrm{exp}[-\frac{1}{2}(\mathbf{x}_{i}-\mathbf{s}_{j})^{2}]\Big{|}, \tag{1.2}\]
where each lowest Landau state's position \(\mathbf{x}_{i}\) is localized at a variational position \(\mathbf{s}_{i}\). Unfortunately, this wave function's ground state energy at filling factor \(\nu=1/3\) remains higher than that of Laughlin's wave function [9], even when \(\{\mathbf{s}_{i}\}\) is the optimal two-dimensional triangular lattice. However, this wave function remained useful for describing WC states in quantum dots with and without a magnetic field [10; 11] when \(\{\mathbf{s}_{i}\}\) can be determined by minimizing the classical electron potential energy.
For a given set of \(\{\mathbf{s}_{i}\}\), the ansatz wave function (1.2) breaks translational and rotational invariance and remains _ad hoc_, despite the fact that symmetry-breaking density distributions
similar to those produced by (1.2) can be seen in unrestricted Hartree-Fock calculations [12]. It was not until recently that a natural way of deriving such symmetry-breaking wave functions from similarly transformed propagators was found [5].
In this work we will first review, in Sect.II, how a wave function such as (1.2) can be derived. In Sect.III, we verify analytically that the special wave function (3.2) correctly gives the wave functions for up to four free fermions in a \(D\)-dimension harmonic oscillator. In Appendix A, we show that the special wave function (3.2) gives the correct wave function for any number of fermions in a 1D harmonic oscillator. In Sect.IV, the Monte Carlo method is used to evaluate the ground state energy of up to a thousand spin-balanced fermions in a 3D harmonic oscillator. Conclusions are stated in Sect.V.
## II Similarly transformed propagators
For completeness, we give here an expanded review of similarly transformed propagators [5]. The key insight is that the diagonal element of the imaginary time propagator
\[G({\bf x},{\bf x};\tau)=\langle{\bf x}|{\rm e}^{-\tau H}|{\bf x}\rangle, \tag{2.1}\]
needed for extracting the ground state energy and wave function of a \(D\)-dimension Hamiltonian
\[H=-\frac{1}{2}\nabla^{2}+V({\bf x}) \tag{2.2}\]
at the large \(\tau\) limit
\[\lim_{\tau\to\infty}G({\bf x},{\bf x};\tau)\longrightarrow\psi_{0}^{2}({\bf x}){\rm e}^{-\tau E_{0}}+\cdots, \tag{2.3}\]
is invariant under the similarity transformation
\[G({\bf x},{\bf x};\tau)=\phi({\bf x})\langle{\bf x}|{\rm e}^{-\tau H}|{\bf x}\rangle\phi^{-1}({\bf x})=\langle{\bf x}|\phi({\bf x}){\rm e}^{-\tau H}\phi^{-1}({\bf x})|{\bf x}\rangle=\langle{\bf x}|{\rm e}^{-\tau\widetilde{H}}|{\bf x}\rangle \tag{2.4}\]
and can be computed equally from the transformed Hamiltonian [13]
\[\widetilde{H}\rho=\phi({\bf x})H\phi^{-1}({\bf x})\rho=-\frac{1}{2}\nabla^{2}\rho+\nabla\cdot\left[{\bf v}({\bf x})\rho\right]+E_{L}({\bf x})\rho, \tag{2.5}\]
where the drift velocity \({\bf v}({\bf x})\) and local energy \(E_{L}({\bf x})\) are defined by
\[{\bf v}({\bf x})=\frac{\nabla\phi({\bf x})}{\phi({\bf x})}\qquad{\rm and}\qquad E_{L}({\bf x})=\frac{H\phi({\bf x})}{\phi({\bf x})}, \tag{2.6}\]
provided that \(\phi({\bf x})\neq 0\) at all \({\bf x}\).
As an example of this invariance, consider the case of the harmonic oscillator with \(V({\bf x})={\bf x}^{2}/2\). The exact propagator for \(H\) is known to be [14]
\[G({\bf x},{\bf x}_{0};\tau)=\frac{1}{[2\pi\sinh(\tau)]^{D/2}}\exp\left(-\frac{1}{2\sinh(\tau)}[\cosh(\tau)({\bf x}^{2}+{\bf x}_{0}^{2})-2{\bf x}\cdot{\bf x}_{0}]\right), \tag{2.7}\]
while that of \(\widetilde{H}\), with \(\phi({\bf x})=\psi_{0}({\bf x})=\exp(-{\bf x}^{2}/2)\), is the Ornstein-Uhlenbeck [15] propagator
\[\widetilde{G}({\bf x},{\bf x}_{0};\tau)=\frac{1}{[2\pi T(\tau)]^{D/2}}\exp \left[-\frac{1}{2T(\tau)}({\bf x}-{\bf x}_{0}{\rm e}^{-\tau})^{2}\right]{\rm e }^{-\tau E_{0}}, \tag{8}\]
where \(T(\tau)=(1-{\rm e}^{-2\tau})/2\) and \(E_{0}=D/2\). Despite their distinct appearances, they are indeed the same when \({\bf x}_{0}={\bf x}\):
\[G({\bf x},{\bf x};\tau) = \frac{1}{[2\pi T(\tau)]^{D/2}}{\rm e}^{-\tau E_{0}}\exp\left(-\frac{{\bf x}^{2}}{\sinh(\tau)}[\cosh(\tau)-1]\right), \tag{2.9}\] \[= \frac{1}{[2\pi T(\tau)]^{D/2}}{\rm e}^{-\tau E_{0}}\exp\left(-\frac{{\bf x}^{2}}{2T(\tau)}[1+{\rm e}^{-2\tau}-2{\rm e}^{-\tau}]\right),\] \[= \widetilde{G}({\bf x},{\bf x};\tau).\]
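This equality is easy to confirm numerically; the following minimal check, for \(D=1\), evaluates both diagonal forms at an arbitrary point.

```python
import numpy as np

def G_diag(x, tau):
    # Diagonal of the exact propagator (2.7) at x0 = x, for D = 1.
    s = np.sinh(tau)
    return (2 * np.pi * s) ** -0.5 * np.exp(-(np.cosh(tau) - 1) * x**2 / s)

def G_tilde_diag(x, tau):
    # Diagonal of the Ornstein-Uhlenbeck form (2.8) at x0 = x, E0 = 1/2.
    T = (1 - np.exp(-2 * tau)) / 2
    return (2 * np.pi * T) ** -0.5 * np.exp(
        -(x * (1 - np.exp(-tau))) ** 2 / (2 * T) - tau / 2)

x, tau = 1.3, 0.7
print(G_diag(x, tau), G_tilde_diag(x, tau))  # agree to rounding error
```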
The reason why one should consider the transformed propagator is that, while the exact propagator (2.4) can be computed using \(H\) or \(\widetilde{H}\) as above, their low order (in \(\tau\)) approximate propagators can be very different. The first-order propagators computed from \(H\) and \(\widetilde{H}\) are respectively
\[G_{1}({\bf x},{\bf x}_{0};\tau) = \frac{1}{[2\pi\tau]^{D/2}}\exp\left[-\frac{1}{2\tau}({\bf x}-{\bf x}_{0})^{2}\right]{\rm e}^{-\tau V({\bf x}_{0})}, \tag{2.10}\] \[\widetilde{G}_{1}({\bf x},{\bf x}_{0};\tau) = \frac{1}{[2\pi\tau]^{D/2}}\exp\left[-\frac{1}{2\tau}({\bf x}-{\bf x}(\tau))^{2}\right]{\rm e}^{-\tau E_{L}({\bf x}_{0})}, \tag{2.11}\]
where \({\bf x}(\tau)\) is the trajectory satisfying the drift equation
\[\frac{d{\bf x}(\tau)}{d\tau}={\bf v}({\bf x}(\tau))=\frac{\nabla\phi({\bf x}(\tau))}{\phi({\bf x}(\tau))} \tag{2.12}\]
having the same initial position \({\bf x}(0)={\bf x}_{0}\).
In the case of the harmonic oscillator, (2.10) bears no resemblance to (2.7). However, for \(\widetilde{H}\) with \(\phi({\bf x})=\psi_{0}({\bf x})=\exp(-{\bf x}^{2}/2)\), the drift equation
\[\frac{d{\bf x}(\tau)}{d\tau}=-{\bf x}(\tau) \tag{2.13}\]
yields the solution
\[{\bf x}(\tau)={\bf x}_{0}{\rm e}^{-\tau}, \tag{2.14}\]
resulting in the transformed first-order propagator
\[\widetilde{G}_{1}({\bf x},{\bf x}_{0};\tau)=\frac{1}{[2\pi\tau]^{D/2}}\exp\left[- \frac{1}{2\tau}({\bf x}-{\bf x}_{0}{\rm e}^{-\tau})^{2}\right]{\rm e}^{-\tau E_{ 0}} \tag{2.15}\]
which is already close to the exact propagator (2.8), differing only in having the variance \(\tau\) rather than \(T(\tau)\). As a matter of fact, its trace
\[\int d{\bf x}\,\widetilde{G}_{1}({\bf x},{\bf x};\tau)=[2\sinh(\tau/2)]^{-D}= \int d{\bf x}\,\widetilde{G}({\bf x},{\bf x};\tau)=\int d{\bf x}\,G({\bf x},{\bf x };\tau) \tag{2.16}\]
gives the same partition function as the exact propagator (2.7). If one regards the _variance_\(\tau\) in (2.15) as a variational parameter fixed at \(\tau=1\), then (2.15) will yield the correct wave function (not its square) as \(\tau\to\infty\)_in the trajectory equation_\({\bf x}(\tau)={\bf x}_{0}{\rm e}^{-\tau}\).
This is of course an ideal case where one can take \(\phi({\bf x})\) to be the exact ground state \(\psi_{0}({\bf x})\), but this illustrates the possibility of improving the propagator by exploiting the invariance (2.4) under \(\phi({\bf x})\). Historically, the transformed propagator (2.11) with an approximate ground state \(\phi({\bf x})\) has been used to accelerate the convergence of the Feynman-Kac path integral [16] and is the formal basis for doing Diffusion Monte Carlo with importance-sampling [13; 17; 18].
Generalizing to the case of \(N\)_interacting_ particles in a harmonic oscillator, if one knows the exact bosonic ground state \(\psi_{0}({\bf x}_{1},{\bf x}_{2}\cdots{\bf x}_{N})\), then an excellent approximation to the (unnormalized) single particle state would be
\[\psi_{i}({\bf x}_{i})=\exp\left[-\frac{1}{2}({\bf x}_{i}-{\bf s}_{i})^{2} \right], \tag{2.17}\]
where \({\bf s}_{i}={\bf x}_{i}(\tau\to\infty)\). If \({\bf s}_{i}\neq 0\), then it must be a _stationary_ point of (2.12), implying that
\[\nabla_{{\bf x}_{i}}\psi_{0}({\bf x}_{1},{\bf x}_{2}\cdots{\bf x}_{N})|_{{\bf s }_{i}}=0. \tag{2.18}\]
This means that \(\{{\bf s}_{i}\}\)_is a set of particle positions that maximizes the bosonic ground state wave function, rather than one that merely minimizes the classical potential energy_. Such a discrete set of non-zero \(\{{\bf s}_{i}\}\) automatically breaks translational and rotational symmetry even if \(\psi_{0}({\bf x}_{1},{\bf x}_{2}\cdots{\bf x}_{N})\) does not. This is because one must necessarily start with a discrete set of initial positions \(\{{\bf x}_{i0}\}\). The drift equation then evolves them toward the set of discrete stationary points \(\{{\bf s}_{i}\}\). Anti-symmetrizing the set of single particle states (2.17) then yields a Maki-Zotos type wave function (1.2).
A fundamental test of this transformed propagator approach of deriving the many-fermion wave function is whether for \(N\)_non-interacting bosons_ in a harmonic oscillator, the resulting
wave function (1.2) correctly describes \(N\)_non-interacting fermions_ in the same potential. We first verify analytically that this is indeed the case for up to four fermions in the next Section.
## III Analytical few-fermion wave functions
For \(N\) particles in a harmonic oscillator, the bosonic ground state wave function (ignoring normalization) is
\[\psi_{B}({\bf x}_{1},{\bf x}_{2}\cdots{\bf x}_{N})=\prod_{i=1}^{N}{\rm e}^{-{\bf x}_{i}^{2}/2}, \tag{3.1}\]
thereby yielding the same drift equation (2.13) and the same solution (2.14) for each particle: \({\bf x}_{i}(\tau)={\bf x}_{i0}{\rm e}^{-\tau}\). Anti-symmetrizing the single particle state (2.17) then yields the \(N\)-fermion wave function as
\[\Psi({\bf x}_{1},{\bf x}_{2}\ldots{\bf x}_{N})=\lim_{{\bf s}_{j}\to 0}\det\Bigl{(}\exp[-\frac{1}{2}({\bf x}_{i}-{\bf s}_{j})^{2}]\Bigr{)}. \tag{3.2}\]
It is crucial to note here that the formalism requires anti-symmetrizing the single particle states first, before taking the \({\bf s}_{j}\to 0\) limit. Note also that at a large finite \(\tau\), \({\bf s}_{j}={\bf x}_{j0}{\rm e}^{-\tau}\) is very close to zero, but never actually zero. Therefore analytically, the limit \({\bf s}_{j}\to 0\) means that one should expand and keep only the leading non-vanishing powers of \({\bf s}_{j}\).
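Numerically, the determinant is best handled in log form; below is a minimal sketch for evaluating (3.2) at small random offsets \({\bf s}_{j}\), with illustrative sizes.

```python
import numpy as np

def log_psi(x, s):
    """Sign and log|Psi| of (3.2) for positions x (N,D) and offsets s (N,D)."""
    d2 = ((x[:, None, :] - s[None, :, :]) ** 2).sum(axis=-1)  # |x_i - s_j|^2
    return np.linalg.slogdet(np.exp(-0.5 * d2))  # (sign, log|det M|)

rng = np.random.default_rng(1)
N, D, dx = 4, 3, 0.5
x = rng.normal(size=(N, D))            # particle positions
s = rng.uniform(-dx, dx, size=(N, D))  # small offsets mimicking s_j -> 0
print(log_psi(x, s))
```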
For \(N=2\), the unnormalized antisymmetrized wave function is
\[\Psi({\bf x}_{1},{\bf x}_{2})\;=\;\det\left(\begin{array}{cc}{\rm e}^{-({\bf x}_{1}-{\bf s}_{1})^{2}/2}&{\rm e}^{-({\bf x}_{1}-{\bf s}_{2})^{2}/2}\\ {\rm e}^{-({\bf x}_{2}-{\bf s}_{1})^{2}/2}&{\rm e}^{-({\bf x}_{2}-{\bf s}_{2})^{2}/2}\end{array}\right). \tag{3.3}\]
Factoring out \({\rm e}^{-{\bf x}_{1}^{2}/2}\) and \({\rm e}^{-{\bf x}_{2}^{2}/2}\) from rows one and two respectively and \({\rm e}^{-{\bf s}_{1}^{2}/2}\) and \({\rm e}^{-{\bf s}_{2}^{2}/2}\) from columns one and two respectively gives
\[\Psi({\bf x}_{1},{\bf x}_{2}) = \lim_{{\bf s}_{k}\to 0}{\rm e}^{-({\bf x}_{1}^{2}+{\bf x}_{2}^{2}+{\bf s}_{1}^{2}+{\bf s}_{2}^{2})/2}\det\left(\begin{array}{cc}{\rm e}^{{\bf x}_{1}\cdot{\bf s}_{1}}&{\rm e}^{{\bf x}_{1}\cdot{\bf s}_{2}}\\ {\rm e}^{{\bf x}_{2}\cdot{\bf s}_{1}}&{\rm e}^{{\bf x}_{2}\cdot{\bf s}_{2}}\end{array}\right), \tag{3.4}\] \[= {\rm e}^{-({\bf x}_{1}^{2}+{\bf x}_{2}^{2})/2}{\bf x}_{21}\cdot{\bf s}_{21}, \tag{3.5}\]
where we have expanded to first order in \({\bf s}_{j}\) and defined \({\bf x}_{ij}={\bf x}_{i}-{\bf x}_{j}\) and \({\bf s}_{ij}={\bf s}_{i}-{\bf s}_{j}\). This is the correct two-fermion wave function in a harmonic oscillator of any dimension. For example, in three dimensions, the two-fermion wave functions are three-fold degenerate, given by
\[\Psi({\bf x}_{1},{\bf x}_{2}){\rm e}^{({\bf x}_{1}^{2}+{\bf x}_{2}^{2})/2}\;\propto\;\det\left(\begin{array}{cc}1&x_{1}\\ 1&x_{2}\end{array}\right)\quad{\rm or}\quad\det\left(\begin{array}{cc}1&y_{1}\\ 1&y_{2}\end{array}\right)\quad{\rm or}\quad\det\left(\begin{array}{cc}1&z_{1}\\ 1&z_{2}\end{array}\right). \tag{3.6}\]
One sees that (3.5) is correctly a linear superposition of these three degenerate states with arbitrary coefficients \(({\bf s}_{2}-{\bf s}_{1})_{k}\). Note that one must have \({\bf s}_{1}\neq{\bf s}_{2}\), otherwise the wave function vanishes.
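This first-order expansion is easily verified symbolically; the short check below, in 1D for simplicity, truncates the \(2\times 2\) determinant to first order in each offset.

```python
import sympy as sp

x1, x2, s1, s2 = sp.symbols("x1 x2 s1 s2")
det = sp.exp(x1*s1) * sp.exp(x2*s2) - sp.exp(x1*s2) * sp.exp(x2*s1)

# Truncate to first order in s1 and s2; the cross terms cancel.
lead = det.series(s1, 0, 2).removeO().series(s2, 0, 2).removeO()
print(sp.factor(sp.expand(lead)))  # -> (s1 - s2)*(x1 - x2)
```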
Generalizing (3.4) to \(N=3\) fermions gives
\[\Psi({\bf x}_{1},{\bf x}_{2},{\bf x}_{3})\;=\;\lim_{{\bf s}_{k}\to 0}{\rm e}^{-({\bf x}_{1}^{2}+{\bf x}_{2}^{2}+{\bf x}_{3}^{2})/2}\det\left(\begin{array}{ccc}{\rm e}^{{\bf x}_{1}\cdot{\bf s}_{1}}&{\rm e}^{{\bf x}_{1}\cdot{\bf s}_{2}}&{\rm e}^{{\bf x}_{1}\cdot{\bf s}_{3}}\\ {\rm e}^{{\bf x}_{2}\cdot{\bf s}_{1}}&{\rm e}^{{\bf x}_{2}\cdot{\bf s}_{2}}&{\rm e}^{{\bf x}_{2}\cdot{\bf s}_{3}}\\ {\rm e}^{{\bf x}_{3}\cdot{\bf s}_{1}}&{\rm e}^{{\bf x}_{3}\cdot{\bf s}_{2}}&{\rm e}^{{\bf x}_{3}\cdot{\bf s}_{3}}\end{array}\right). \tag{3.7}\]
Multiplying the first row by \({\rm e}^{({\bf x}_{2}-{\bf x}_{1})\cdot{\bf s}_{1}}={\rm e}^{{\bf x}_{21}\cdot{\bf s}_{1}}\) and \({\rm e}^{{\bf x}_{31}\cdot{\bf s}_{1}}\), and subtracting the results from the second and third rows respectively, gives
\[\det\left(\begin{array}{ccc}{\rm e}^{{\bf x}_{1}\cdot{\bf s}_{1}}&{\rm e}^{{\bf x}_{1}\cdot{\bf s}_{2}}&{\rm e}^{{\bf x}_{1}\cdot{\bf s}_{3}}\\ {\rm e}^{{\bf x}_{2}\cdot{\bf s}_{1}}&{\rm e}^{{\bf x}_{2}\cdot{\bf s}_{2}}&{\rm e}^{{\bf x}_{2}\cdot{\bf s}_{3}}\\ {\rm e}^{{\bf x}_{3}\cdot{\bf s}_{1}}&{\rm e}^{{\bf x}_{3}\cdot{\bf s}_{2}}&{\rm e}^{{\bf x}_{3}\cdot{\bf s}_{3}}\end{array}\right)=\det\left(\begin{array}{ccc}{\rm e}^{{\bf x}_{1}\cdot{\bf s}_{1}}&{\rm e}^{{\bf x}_{1}\cdot{\bf s}_{2}}&{\rm e}^{{\bf x}_{1}\cdot{\bf s}_{3}}\\ 0&{\rm e}^{{\bf x}_{2}\cdot{\bf s}_{2}}-{\rm e}^{{\bf x}_{1}\cdot{\bf s}_{2}+{\bf x}_{21}\cdot{\bf s}_{1}}&{\rm e}^{{\bf x}_{2}\cdot{\bf s}_{3}}-{\rm e}^{{\bf x}_{1}\cdot{\bf s}_{3}+{\bf x}_{21}\cdot{\bf s}_{1}}\\ 0&{\rm e}^{{\bf x}_{3}\cdot{\bf s}_{2}}-{\rm e}^{{\bf x}_{1}\cdot{\bf s}_{2}+{\bf x}_{31}\cdot{\bf s}_{1}}&{\rm e}^{{\bf x}_{3}\cdot{\bf s}_{3}}-{\rm e}^{{\bf x}_{1}\cdot{\bf s}_{3}+{\bf x}_{31}\cdot{\bf s}_{1}}\end{array}\right)\]
\[={\rm e}^{{\bf x}_{1}\cdot{\bf s}_{1}}\det\left(\begin{array}{cc}{\rm e}^{{\bf x}_{2}\cdot{\bf s}_{2}}(1-{\rm e}^{-{\bf x}_{21}\cdot{\bf s}_{21}})&{\rm e}^{{\bf x}_{2}\cdot{\bf s}_{3}}(1-{\rm e}^{-{\bf x}_{21}\cdot{\bf s}_{31}})\\ {\rm e}^{{\bf x}_{3}\cdot{\bf s}_{2}}(1-{\rm e}^{-{\bf x}_{31}\cdot{\bf s}_{21}})&{\rm e}^{{\bf x}_{3}\cdot{\bf s}_{3}}(1-{\rm e}^{-{\bf x}_{31}\cdot{\bf s}_{31}})\end{array}\right). \tag{3.8}\]
In the \({\bf s}_{j}\to 0\) limit, to second-order in \({\bf s}_{j}\), the above determinant becomes
\[= \det\left(\begin{array}{cc}({\bf x}_{21}\cdot{\bf s}_{21})&({\bf x}_{21}\cdot{\bf s}_{31})\\ ({\bf x}_{31}\cdot{\bf s}_{21})&({\bf x}_{31}\cdot{\bf s}_{31})\end{array}\right) \tag{3.9}\] \[= ({\bf x}_{21}\cdot{\bf s}_{21})({\bf x}_{31}\cdot{\bf s}_{31})-({\bf x}_{31}\cdot{\bf s}_{21})({\bf x}_{21}\cdot{\bf s}_{31}) \tag{3.10}\] \[= ({\bf x}_{21}\times{\bf x}_{31})\cdot({\bf s}_{21}\times{\bf s}_{31}) \tag{3.11}\] \[= ({\bf x}_{1}\times{\bf x}_{2}+{\bf x}_{2}\times{\bf x}_{3}+{\bf x}_{3}\times{\bf x}_{1})\cdot({\bf s}_{1}\times{\bf s}_{2}+{\bf s}_{2}\times{\bf s}_{3}+{\bf s}_{3}\times{\bf s}_{1}) \tag{3.12}\]
The final form (3.12) shows that the cross-product \({\bf x}_{21}\times{\bf x}_{31}\) is anti-symmetric under any interchange \({\bf x}_{i}\leftrightarrow{\bf x}_{j}\) but symmetric in all \({\bf x}_{i}\).
In 3D, there are also three degenerate states for three fermions:
\[\det\left(\begin{array}{ccc}1&y_{1}&z_{1}\\ 0&y_{21}&z_{21}\\ 0&y_{31}&z_{31}\end{array}\right)=y_{21}z_{31}-y_{31}z_{21}=({\bf x}_{21} \times{\bf x}_{31})_{x}, \tag{3.13}\]
\[\det\left(\begin{array}{ccc}1&z_{1}&x_{1}\\ 0&z_{21}&x_{21}\\ 0&z_{31}&x_{31}\end{array}\right)=z_{21}x_{31}-z_{31}x_{21}=({\bf x}_{21}\times{\bf x}_{31})_{y},\]
\[\det\left(\begin{array}{ccc}1&x_{1}&y_{1}\\ 0&x_{21}&y_{21}\\ 0&x_{31}&y_{31}\end{array}\right)=x_{21}y_{31}-x_{31}y_{21}=({\bf x}_{21}\times{ \bf x}_{31})_{z}. \tag{3.14}\]
The most general three-fermion wave function is therefore a linear combination of these three states, as given by (3.10), with arbitrary coefficients \(({\bf s}_{21}\times{\bf s}_{31})_{k}\).
In 2D, only \(({\bf s}_{21}\times{\bf s}_{31})_{z}\) is possible, and therefore the three-fermion wave function is given by (3.14) only, with no degeneracy, because \(N=3\) is a closed-shell state in 2D.
In 1D, (3.9) vanishes, and one must expand (3.8) to third order in \({\bf s}_{j}\) to obtain the 3-fermion wave function. In Appendix A, we derive this wave function and show that (3.2) correctly produces the general \(N\)-fermion wave function in 1D.
For \(N=4\) fermions, similar steps in the limit of \({\bf s}_{k}\to 0\) give
\[\det\left(\begin{array}{cccc}{\bf e}^{{\bf x}_{1}\cdot{\bf s}_{1}}&{\bf e}^{ {\bf x}_{1}\cdot{\bf s}_{2}}&{\bf e}^{{\bf x}_{1}\cdot{\bf s}_{3}}&{\bf e}^{{ \bf x}_{1}\cdot{\bf s}_{4}}\\ {\bf e}^{{\bf x}_{2}\cdot{\bf s}_{1}}&{\bf e}^{{\bf x}_{2}\cdot{\bf s}_{2}}&{ \bf e}^{{\bf x}_{2}\cdot{\bf s}_{3}}&{\bf e}^{{\bf x}_{2}\cdot{\bf s}_{4}}\\ {\bf e}^{{\bf x}_{3}\cdot{\bf s}_{1}}&{\bf e}^{{\bf x}_{3}\cdot{\bf s}_{2}}&{ \bf e}^{{\bf x}_{3}\cdot{\bf s}_{3}}&{\bf e}^{{\bf x}_{3}\cdot{\bf s}_{4}}\\ {\bf e}^{{\bf x}_{4}\cdot{\bf s}_{1}}&{\bf e}^{{\bf x}_{4}\cdot{\bf s}_{2}}&{ \bf e}^{{\bf x}_{4}\cdot{\bf s}_{3}}&{\bf e}^{{\bf x}_{4}\cdot{\bf s}_{4}} \end{array}\right)\rightarrow\det\left(\begin{array}{cccc}({\bf x}_{21} \cdot{\bf s}_{21})&({\bf x}_{21}\cdot{\bf s}_{31})&({\bf x}_{21}\cdot{\bf s}_{ 41})\\ ({\bf x}_{31}\cdot{\bf s}_{21})&({\bf x}_{31}\cdot{\bf s}_{31})&({\bf x}_{31} \cdot{\bf s}_{41})\\ ({\bf x}_{41}\cdot{\bf s}_{21})&({\bf x}_{41}\cdot{\bf s}_{31})&({\bf x}_{41} \cdot{\bf s}_{41})\end{array}\right). \tag{3.15}\]
To evaluate this determinant, choose a coordinate system such that \({\bf s}_{21}\) is along the x-axis with \({\bf s}_{21}=(s_{21},0,0)\) and \({\bf s}_{31}\) at an angle \(\theta\neq 0\) relative to \({\bf s}_{21}\) with \({\bf s}_{31}=(s_{31}\cos\theta,s_{31}\sin\theta,0)\), and \({\bf s}_{41}=(s_{41x},s_{41y},s_{41z})\). Multiply the \({\bf s}_{21}\) column in (3.15) by \(s_{31}\cos\theta/s_{21}\) and subtract it from the \({\bf s}_{31}\) column so that that column now has \({\bf s}_{31}=(0,s_{31}\sin\theta,0)\). Multiply the \({\bf s}_{21}\) column by \(s_{41x}/s_{21}\) and subtract it from the \({\bf s}_{41}\) column, giving \({\bf s}_{41}=(0,s_{41y},s_{41z})\). Finally, multiply the \({\bf s}_{31}\) column by \(s_{41y}/(s_{31}\sin\theta)\) and subtract it from the \({\bf s}_{41}\) column, giving effectively \({\bf s}_{41}=(0,0,s_{41z})\). This then results in
\[\det(\cdots) = [s_{21}s_{31}\sin\theta s_{41z}]{\bf x}_{21}\cdot({\bf x}_{31} \times{\bf x}_{41}) \tag{3.16}\] \[= [({\bf s}_{21}\times{\bf s}_{31})\cdot{\bf s}_{41}]{\bf x}_{21} \cdot({\bf x}_{31}\times{\bf x}_{41})\] \[= [{\bf s}_{21}\cdot({\bf s}_{31}\times{\bf s}_{41})]{\bf x}_{21} \cdot({\bf x}_{31}\times{\bf x}_{41}),\]
which is exactly proportional to the closed-shell four-fermion wave function in 3D:
\[\det\left(\begin{array}{cccc}1&x_{1}&y_{1}&z_{1}\\ 1&x_{2}&y_{2}&z_{2}\\ 1&x_{3}&y_{3}&z_{3}\\ 1&x_{4}&y_{4}&z_{4}\end{array}\right)=\det\left(\begin{array}{cccc}1&x_{1}&y_ {1}&z_{1}\\ 0&x_{21}&y_{21}&z_{21}\\ 0&x_{31}&y_{31}&z_{31}\\ 0&x_{41}&y_{41}&z_{41}\end{array}\right)=\det\left(\begin{array}{cccc}x_{21} &y_{21}&z_{21}\\ x_{31}&y_{31}&z_{31}\\ x_{41}&y_{41}&z_{41}\end{array}\right)=\mathbf{x}_{21}\cdot({\bf x}_{31}\times{ \bf x}_{41}). \tag{3.17}\]
Since this closed-shell state is unique, it is multiplied only by a single coefficient \({\bf s}_{21}\cdot({\bf s}_{31}\times{\bf s}_{41})\) corresponding to the volume formed by any non-coplanar set of relative vectors \({\bf s}_{k1}\).
Since the triple product \({\bf s}_{21}\cdot({\bf s}_{31}\times{\bf s}_{41})\) vanishes in 2D, one must expand to fourth order in \({\bf s}_{i}\) to obtain the four-fermion wave function in 2D. We will not bother with this task here. However, for any dimension and any number of fermions, one can always evaluate the wave function (3.2) numerically. In the next Section, we show that the wave function (3.2) can be used in Monte Carlo calculations to obtain the ground state energy of up to 1000 spin-balanced fermions in a 3D harmonic oscillator.
## IV Numerical evaluation of fermion energies
In this work, we will only focus on the fermion part of the wave function (1.1) and leave the Jastrow correlation for later, more specific applications. The wave function can be rewritten as
\[\Psi({\bf x}_{1},{\bf x}_{2}\ldots{\bf x}_{2N})=\det{\bf M}_{\uparrow}({\bf x}_{1},\ldots{\bf x}_{N})\det{\bf M}_{\downarrow}({\bf x}_{N+1},\ldots{\bf x}_{N+L})=\exp(S_{\uparrow}+S_{\downarrow}), \tag{4.1}\]
with
\[S_{\uparrow}({\bf x}_{1},{\bf x}_{2}\ldots{\bf x}_{N})=\ln(\det{\bf M}_{\uparrow}) \tag{4.2}\]
where \({\bf M}_{\uparrow}\) is the \(N\times N\) matrix for particles \(i=1\) to \(N\)
\[M_{\uparrow ij}=\exp\left[-\frac{1}{2}({\bf x}_{i}-{\bf s}_{j})^{2}\right]. \tag{4.3}\]
Similarly, one can define \(S_{\downarrow}=\ln(\det{\bf M}_{\downarrow})\), where \({\bf M}_{\downarrow}\) is the \(L\times L\) matrix (4.3) for particles \(i=N+1\) to \(N+L\). The computer code evaluates the determinants of both the \(N\times N\) and \(L\times L\) matrices and can study various unequal spin cases with \(N\neq L\). Here, only results for the spin-balanced case of \(L=N\) are presented. The resulting energy is directly computed from two determinants and not from doubling the energy of a single determinant.
The local energy function for the spin-up fermions is then given by (see Appendix B)
\[E_{L}^{\uparrow} = \frac{H\Psi}{\Psi}=\frac{\sum_{i=1}^{N}[-\frac{1}{2}\nabla_{i}^{2}+V({\bf x}_{i})]\Psi}{\Psi} \tag{4.4}\] \[= \sum_{i=1}^{N}\left[-\frac{1}{2}[\nabla_{i}^{2}S_{\uparrow}+(\nabla_{i}S_{\uparrow})^{2}]+\frac{1}{2}{\bf x}_{i}^{2}\right]\] \[= N\frac{D}{2}-\frac{1}{2}\sum_{i=1}^{N}({\bf s}_{i}^{2}-\tilde{\bf s}_{i}^{2})-\frac{1}{2}\sum_{i=1}^{N}({\bf x}_{i}-\tilde{\bf s}_{i})^{2}+\frac{1}{2}\sum_{i=1}^{N}{\bf x}_{i}^{2},\]
where
\[\tilde{\bf s}_{i}=\sum_{k=1}^{N}{\bf s}_{k}M_{\uparrow ik}M_{\uparrow ki}^{-1}. \tag{4.5}\]
The spin-down local energy \(E_{L}^{\downarrow}\) is similarly defined for particles \(i=N+1\) to \(2N\). One then computes the energy expectation value
\[E_{2N}=\frac{\int d{\bf x}_{1}\cdots d{\bf x}_{2N}(E_{L}^{\uparrow}+E_{L}^{\downarrow})\Psi^{2}({\bf x}_{1},{\bf x}_{2}\ldots{\bf x}_{2N})}{\int d{\bf x}_{1}\cdots d{\bf x}_{2N}\Psi^{2}({\bf x}_{1},{\bf x}_{2}\ldots{\bf x}_{2N})} \tag{4.6}\]
using the Metropolis _et al._ algorithm [19]. Since one is sampling \(\Psi^{2}\), there is no sign problem in this calculation. The fermionic character of the problem is encapsulated in \(\tilde{\bf s}_{i}\), which requires computing the inverse matrix \({\bf M}^{-1}\).
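For illustration, the entire sampling scheme fits in a short script; the following is a minimal sketch for a single spin species with \(N=4\) in 3D, with illustrative step sizes, sweep counts, and offset width \(\Delta x\), rather than the settings of the production runs reported below.

```python
import numpy as np

rng = np.random.default_rng(2)
N, D, dx = 4, 3, 0.2              # fermions per spin, dimension, offset width
s = rng.uniform(-dx, dx, (N, D))  # fixed random offsets s_i

def logdet_M(x):
    d2 = ((x[:, None, :] - s[None, :, :]) ** 2).sum(-1)
    return np.linalg.slogdet(np.exp(-0.5 * d2))[1]

def local_energy(x):              # Eq. (4.4) for one spin species
    d2 = ((x[:, None, :] - s[None, :, :]) ** 2).sum(-1)
    M = np.exp(-0.5 * d2)
    W = M * np.linalg.inv(M).T    # W[i,k] = M_ik (M^-1)_ki; rows sum to 1
    st = W @ s                    # s-tilde of Eq. (4.5)
    return (N * D / 2 - 0.5 * ((s**2).sum() - (st**2).sum())
            - 0.5 * ((x - st)**2).sum() + 0.5 * (x**2).sum())

x = rng.normal(size=(N, D))
S, step, energies = logdet_M(x), 0.3, []
for sweep in range(20000):
    xp = x + step * rng.normal(size=(N, D))  # move all particles at once
    Sp = logdet_M(xp)
    if np.log(rng.random()) < 2 * (Sp - S):  # sample Psi^2 = e^{2S}
        x, S = xp, Sp
    if sweep > 2000:
        energies.append(local_energy(x))

# One spin species of N=4 in 3D has exact ground-state energy 9 (one fermion
# in the n=0 shell, three in n=1); the spin-balanced E_2N doubles it, and the
# estimate approaches the exact value from above as dx is reduced.
print(np.mean(energies), "+/-", np.std(energies) / len(energies) ** 0.5)
```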
To implement the wave function (10) in the limit of \({\bf s}_{i}\to 0\), one generates a set of random vectors \({\bf s}_{i}=(r_{i1},r_{i2},r_{i3})\), where each component is uniformly distributed about the origin as \(-\Delta x<r_{ik}<\Delta x\). One then decreases \(\Delta x\) to see how the exact energy is approached by \(E_{2N}\). This is shown in Fig. 1 for \(2N=100\) and \(200\). The energy convergence is clearly second order in \(\Delta x\), befitting the fact that a non-zero \(\Delta x\) implies a set of finite \({\bf s}_{i}\), which are first-order errors in the wave function. At large values of \(\Delta x\), with a small number of fermions, the set of random vectors \({\bf s}_{i}\) can clump, resulting in different energy values, as shown in two
Figure 1: (color online) The convergence of fermion energy in a 3D harmonic oscillator. Exact energies for \(2N=\)100, 200 spin-balanced fermions are 510 and 1280, as shown by the horizontal black line. The best calculated results are 510.024\(\pm\)0.003 and 1280.08\(\pm\)0.01, with systematic errors of 0.005% and 0.006%.
separate runs with different random-number sequences. When \(\Delta x\) is reduced, there is less room for clumping and less scatter in the energy values.
All particles are moved simultaneously in a single Metropolis trial step; sequential particle moves would be too slow. After on the order of \(10^{5}\) Metropolis steps, the statistical errors are very small. As \(\Delta x\) squeezes all \({\bf s}_{i}\) toward zero, the determinant approaches zero, the matrix-inversion subroutine fails, and the calculation cannot continue below that value of
Figure 2: (color online) Exact energies for \(2N\)=400 and 600 fermions are 3210 and 5498. The best calculated results are 3211.17\(\pm\)0.04, 5507.1\(\pm\)0.2, with systematic errors of 0.03% and 0.17%.

Figure 3: (color online) Exact energies for \(2N\)=800 and 1000 are 8070 and 10860. The best calculated results are 8076.6\(\pm\)0.3 and 10877\(\pm\)1, with systematic errors of 0.08% and 0.16%.
\(\Delta x\). The result is a systematic error that always lies above the exact energy values, since all \({\bf s}_{i}\) can be viewed as variational parameters. For \(2N=100\) and 200, this systematic error is too small to be of concern. The exact energy values are computed in Appendix C.
Fig. 2 and Fig. 3 show the results for \(2N\)=400, 600 and \(2N\)=800, 1000, respectively. For \(2N\)=600, 800 and 1000, \(\Delta x\) can no longer be reduced below one, and the systematic error becomes discernible on the absolute scale. However, on the relative scale, it remains below 0.2%. This calculation was done in Fortran using only double precision. If quad precision were used, this systematic error could be further reduced at smaller values of \(\Delta x\).
## V Conclusions
The thermodynamics of a system of harmonically confined fermions in 3D, including its ground-state energies, has been extensively studied by Brosens _et al._[21] using the exact harmonic propagator in the low-temperature limit. In that study, the ground-state energy is the final goal. Here, we have proposed a remarkably simple wave function (10) that can yield the ground-state energy directly, for up to a thousand harmonic fermions, without evaluating any single-particle wave function. Moreover, because it is a wave function, and not a propagator, it can be part of an initial trial function for very large scale VMC, or even Ground State Path Integral Monte Carlo[3; 4; 5] calculations on _interacting_ 3D fermions.
It is known that non-interacting fermions, due to Pauli's exclusion principle, can probe the excited states of any potential. Lyubartsev[22] has even suggested that, by antisymmetrization, one can also probe the excited states of an interacting, many-particle system. Mathematically, the wave function (10) can yield the spectrum of the harmonic oscillator in _any_ dimension. It seems miraculous that such a simple wave function can automatically account for the degeneracy of each harmonic state in \(D\) dimensions, which is the number of ways of writing a non-negative integer \(N\) as a sum of \(D\) non-negative integers.
The generalization of (10) to the Coulomb potential case of \(V(r)=-Z/r\) would seem to be
\[\Psi({\bf x}_{1},{\bf x}_{2}\ldots{\bf x}_{N})=\lim_{{\bf s}_{j}\to 0}\det \Bigl{(}\exp[-Z|{\bf x}_{i}-{\bf s}_{j}|]\Bigr{)}. \tag{11}\]
The wave function (10) works for the harmonic oscillator because all of its excited states share the same Gaussian factor \({\rm e}^{-r^{2}/2}\). For the Coulomb potential, excited states have different Slater orbitals \({\rm e}^{-Zr/n}\) depending on the principal quantum number \(n\). Therefore, (11) cannot
possibly be the exact wave function for \(N\) non-interacting fermions in a Coulomb potential. Thus the simple wave function (3.2) is unique to the harmonic oscillator.
## Appendix A The N-fermion wave function in one dimension
The \(N=2\) determinant in 1D can be computed alternatively as
\[\det\left(\begin{array}{cc}\mathrm{e}^{x_{1}s_{1}}&\mathrm{e}^{x_{1}s_{2}}\\ \mathrm{e}^{x_{2}s_{1}}&\mathrm{e}^{x_{2}s_{2}}\end{array}\right) = \mathrm{e}^{x_{1}s_{1}}\mathrm{e}^{x_{2}s_{2}}-\mathrm{e}^{x_{2}s_{1}}\mathrm{e}^{x_{1}s_{2}} \tag{30}\] \[= \sum_{n_{1}=0}^{\infty}\frac{1}{n_{1}!}x_{1}^{n_{1}}s_{1}^{n_{1}}\sum_{n_{2}=0}^{\infty}\frac{1}{n_{2}!}x_{2}^{n_{2}}s_{2}^{n_{2}}-\sum_{n_{1}=0}^{\infty}\frac{1}{n_{1}!}x_{2}^{n_{1}}s_{1}^{n_{1}}\sum_{n_{2}=0}^{\infty}\frac{1}{n_{2}!}x_{1}^{n_{2}}s_{2}^{n_{2}}\] \[= \sum_{n_{1}=0}^{\infty}\sum_{n_{2}=0}^{\infty}\frac{1}{n_{1}!n_{2}!}s_{1}^{n_{1}}s_{2}^{n_{2}}(x_{1}^{n_{1}}x_{2}^{n_{2}}-x_{2}^{n_{1}}x_{1}^{n_{2}})\] \[= \sum_{n_{1}=0}^{\infty}\sum_{n_{2}=0}^{\infty}\frac{1}{n_{1}!n_{2}!}s_{1}^{n_{1}}s_{2}^{n_{2}}\det\left(\begin{array}{cc}x_{1}^{n_{1}}&x_{1}^{n_{2}}\\ x_{2}^{n_{1}}&x_{2}^{n_{2}}\end{array}\right).\]
Since \(n_{1}\) and \(n_{2}\) are column indices and the determinant vanishes for \(n_{1}=n_{2}\), the sum runs only over \(n_{1}<n_{2}\) and \(n_{2}<n_{1}\). The latter case can be viewed as the first case with \(n_{1}\) interchanged with \(n_{2}\). This interchange corresponds to the original determinant with a negative sign, hence
\[\det\left(\begin{array}{cc}\mathrm{e}^{x_{1}s_{1}}&\mathrm{e}^{ x_{1}s_{2}}\\ \mathrm{e}^{x_{2}s_{1}}&\mathrm{e}^{x_{2}s_{2}}\end{array}\right) = \sum_{n_{1}<n_{2}}\frac{1}{n_{1}!n_{2}!}\left[s_{1}^{n_{1}}s_{2}^ {n_{2}}\det\left(\begin{array}{cc}x_{1}^{n_{1}}&x_{1}^{n_{2}}\\ x_{2}^{n_{1}}&x_{2}^{n_{2}}\end{array}\right)-s_{1}^{n_{2}}s_{2}^{n_{1}}\det \left(\begin{array}{cc}x_{1}^{n_{1}}&x_{1}^{n_{2}}\\ x_{2}^{n_{1}}&x_{2}^{n_{2}}\end{array}\right)\right] \tag{31}\] \[= \sum_{n_{1}<n_{2}}\frac{1}{n_{1}!n_{2}!}\det\left(\begin{array}[] {cc}s_{1}^{n_{1}}&s_{1}^{n_{2}}\\ s_{2}^{n_{1}}&s_{2}^{n_{2}}\end{array}\right)\det\left(\begin{array}{cc}x_{ 1}^{n_{1}}&x_{1}^{n_{2}}\\ x_{2}^{n_{1}}&x_{2}^{n_{2}}\end{array}\right).\]
This expansion was used in the study of the quantum Hall effect by Mikhailov[20].
The lowest order terms in \(s_{i}\) are therefore given by \(n_{1}=0\) and \(n_{2}=1\), resulting in
\[\det\left(\begin{array}{cc}\mathrm{e}^{x_{1}s_{1}}&\mathrm{e}^{x_{1}s_{2}}\\ \mathrm{e}^{x_{2}s_{1}}&\mathrm{e}^{x_{2}s_{2}}\end{array}\right)=\det\left( \begin{array}{cc}1&s_{1}\\ 1&s_{2}\end{array}\right)\det\left(\begin{array}{cc}1&x_{1}\\ 1&x_{2}\end{array}\right)=s_{21}x_{21}, \tag{32}\]
reproducing (3.5).
Similar manipulations give the \(N=3\) determinant,
\[\det\left(\begin{array}{cc}\mathrm{e}^{x_{1}s_{1}}&\mathrm{e}^{x_{1}s_{2}}& \mathrm{e}^{x_{1}s_{3}}\\ \mathrm{e}^{x_{2}s_{1}}&\mathrm{e}^{x_{2}s_{2}}&\mathrm{e}^{x_{2}s_{3}}\\ \mathrm{e}^{x_{3}s_{1}}&\mathrm{e}^{x_{3}s_{2}}&\mathrm{e}^{x_{3}s_{3}}\end{array} \right)=\sum_{n_{1}<n_{2}<n_{3}}^{\infty}\frac{1}{n_{1}!n_{2}!n_{3}!}\det \left(\begin{array}{ccc}s_{1}^{n_{1}}&s_{1}^{n_{2}}&s_{1}^{n_{3}}\\ s_{2}^{n_{1}}&s_{2}^{n_{2}}&s_{2}^{n_{3}}\\ s_{3}^{n_{1}}&s_{3}^{n_{2}}&s_{3}^{n_{3}}\end{array}\right)\det\left( \begin{array}{ccc}x_{1}^{n_{1}}&x_{1}^{n_{2}}&x_{1}^{n_{3}}\\ x_{2}^{n_{1}}&x_{2}^{n_{2}}&x_{2}^{n_{3}}\\ x_{3}^{n_{1}}&x_{3}^{n_{2}}&x_{3}^{n_{3}}\end{array}\right).\]
The first non-vanishing term in the above sum is the third-order term in \(s_{i}\), given by \(n_{1}=0\), \(n_{2}=1\), \(n_{3}=2\),
\[=\frac{1}{2}\det\left(\begin{array}{ccc}1&s_{1}&s_{1}^{2}\\ 1&s_{2}&s_{2}^{2}\\ 1&s_{3}&s_{3}^{2}\end{array}\right)\det\left(\begin{array}{ccc}1&x_{1}&x_{1} ^{2}\\ 1&x_{2}&x_{2}^{2}\\ 1&x_{3}&x_{3}^{2}\end{array}\right)=\frac{1}{2}s_{21}s_{31}s_{32}x_{21}x_{31}x_ {32}, \tag{10}\]
which correctly changes sign whenever \(x_{i}\leftrightarrow x_{j}\).
For \(N\) fermions, the above two cases generalize to the \(N\times N\) Vandermonde determinant:
\[\det\left(\begin{array}{ccccc}1&x_{1}&x_{1}^{2}&\cdots&x_{1}^{N-1}\\ 1&x_{2}&x_{2}^{2}&\cdots&x_{2}^{N-1}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&x_{N}&x_{N}^{2}&\cdots&x_{N}^{N-1}\end{array}\right)=\prod_{1\leq i<j\leq N}(x_{j}-x_{i}), \tag{11}\]
yielding the \(N\)-fermion wave function:
\[\Psi(x_{1},x_{2},\cdots x_{N})\propto\prod_{i<j}(s_{j}-s_{i})\prod_{i<j}(x_{j}-x_{i})\exp(-\sum_{i=1}^{N}x_{i}^{2}/2). \tag{12}\]
This is an example in which the determinant wave function can be given explicitly without evaluating any specific single-particle wave function, _i.e._, Hermite polynomials.
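The expansion above also fixes the leading-order coefficient to be \(1/(0!\,1!\cdots(N-1)!)\) times the two Vandermonde products, which is easy to check numerically; the following sketch (ours) compares the full determinant against the product formula for small \(s_{i}\):

```python
import numpy as np
from itertools import combinations
from math import factorial

rng = np.random.default_rng(1)
N, eps = 4, 1e-3
x = rng.normal(size=N)
s = eps * rng.normal(size=N)

det = np.linalg.det(np.exp(np.outer(x, s)))      # det(e^{x_i s_j})
vand = lambda v: np.prod([v[j] - v[i] for i, j in combinations(range(N), 2)])
lead = vand(s) * vand(x) / np.prod([factorial(m) for m in range(N)])
print(det / lead)    # -> 1 as eps -> 0, with O(eps) corrections
```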
## Appendix B The Hamiltonian energy estimator
Since the following applies equally to \({\bf M}_{\uparrow}\) and \({\bf M}_{\downarrow}\), we suppress the \(\uparrow\) and \(\downarrow\) labels for clarity. Given that
\[{\bf M}=M_{lk}=\exp\left[-\frac{1}{2}({\bf x}_{l}-{\bf s}_{k})^{2}\right], \tag{13}\]
one has
\[\nabla_{i}S = \nabla_{i}\ln({\rm det}{\bf M})={\rm Tr}[{\bf M}^{-1}\nabla_{i} {\bf M}]=\sum_{kl}M_{kl}^{-1}\nabla_{i}M_{lk}, \tag{14}\] \[= -\sum_{kl}M_{kl}^{-1}({\bf x}_{l}-{\bf s}_{k})\delta_{il}M_{lk}=- \sum_{k}M_{ki}^{-1}({\bf x}_{i}-{\bf s}_{k})M_{ik},\] \[= -({\bf x}_{i}-\sum_{k}{\bf s}_{k}M_{ik}M_{ki}^{-1})=-({\bf x}_{i} -{\bf\tilde{s}}_{i}),\]
and therefore
\[\nabla_{i}^{2}S = -\nabla_{i}\cdot({\bf x}_{i}-\sum_{k}{\bf s}_{k}M_{ik}M_{ki}^{-1}), \tag{10}\] \[= -D+\sum_{k}{\bf s}_{k}\cdot\nabla_{i}(M_{ik}M_{ki}^{-1}).\]
In the following, repeated indices \(l\) and \(n\) are summed over, but not for \(i\) or \(k\),
\[\nabla_{i}(M_{ik}M_{ki}^{-1}) = -({\bf x}_{i}-{\bf s}_{k})M_{ik}M_{ki}^{-1}-M_{ik}M_{kl}^{-1}( \nabla_{i}M_{ln})M_{ni}^{-1} \tag{11}\] \[= -\Big{[}({\bf x}_{i}-{\bf s}_{k})M_{ik}M_{ki}^{-1}-M_{ik}M_{ki}^{- 1}({\bf x}_{i}-{\bf s}_{n})M_{in}M_{ni}^{-1}\Big{]}\] \[= -\Big{[}{\bf x}_{i}M_{ik}M_{ki}^{-1}-{\bf s}_{k}M_{ik}M_{ki}^{-1}- M_{ik}M_{ki}^{-1}{\bf x}_{i}+M_{ik}M_{ki}^{-1}{\bf s}_{n}M_{in}M_{ni}^{-1} \Big{]}\] \[= -\Big{[}-{\bf s}_{k}M_{ik}M_{ki}^{-1}+M_{ik}M_{ki}^{-1}{\tilde{ \bf s}}_{i}\Big{]}.\]
Now the sum over \(k\) in (10) yields
\[\sum_{k}{\bf s}_{k}\cdot\nabla_{i}(M_{ik}M_{ki}^{-1}) = -\sum_{k}{\bf s}_{k}\cdot\Big{[}-{\bf s}_{k}M_{ik}M_{ki}^{-1}+M_{ ik}M_{ki}^{-1}{\tilde{\bf s}}_{i}\Big{]} \tag{12}\] \[= \sum_{k}{\bf s}_{k}^{2}M_{ik}M_{ki}^{-1}-{\tilde{\bf s}}_{i}^{2},\]
and the final sum over \(i\) gives
\[\sum_{i=1}^{N}\nabla_{i}^{2}S=-ND+\sum_{i=1}^{N}({\bf s}_{i}^{2}-{\tilde{\bf s }}_{i}^{2}). \tag{13}\]
The local energy is therefore
\[E_{L} = \sum_{i=1}^{N}\left[-\frac{1}{2}[\nabla_{i}^{2}S_{\uparrow}+( \nabla_{i}S_{\uparrow})^{2}]+\frac{1}{2}{\bf x}_{i}^{2}\right], \tag{14}\] \[= N\frac{D}{2}-\frac{1}{2}\sum_{i=1}^{N}({\bf s}_{i}^{2}-{\tilde{ \bf s}}_{i}^{2})-\frac{1}{2}\sum_{i=1}^{N}({\bf x}_{i}-{\tilde{\bf s}}_{i})^{2 }+\frac{1}{2}\sum_{i=1}^{N}{\bf x}_{i}^{2}.\]
If \({\bf M}\) were diagonal, as in the boson case, then \({\tilde{\bf s}}_{i}={\bf s}_{i}\), and the above is just
\[E_{L} = N\frac{D}{2}-\frac{1}{2}\sum_{i=1}^{N}({\bf x}_{i}-{\bf s}_{i})^{2}+ \frac{1}{2}\sum_{i=1}^{N}{\bf x}_{i}^{2}, \tag{15}\]
which correctly reproduces the \(N\)-boson energy of \(ND/2\) when \({\bf s}_{i}\to 0\).
## Appendix C Fermion energies in a 3D harmonic oscillator
The spectrum of the 3D harmonic oscillator is given by
\[E_{m}=(m-1)+\frac{3}{2}=\frac{2m+1}{2}, \tag{16}\]
where
\[m-1=n_{x}+n_{y}+n_{z}, \tag{10}\]
with the ground state corresponding to \(m=1\).
Given \(m\), \(n_{x}\) can take on \(m\) values from 0 to \((m-1)\). For \(n_{x}=(m-1)\), one must have \((n_{y},n_{z})=(0,0)\). For \(n_{x}=(m-1)-1\), one can have \((n_{y},n_{z})=(1,0)\) or \((0,1)\). For \(n_{x}=(m-1)-2\), \((n_{y},n_{z})=(2,0),(1,1),(0,2)\), etc. The total degeneracy of level \(E_{m}\) is therefore \(1+2+3+\cdots+m\), given by
\[g=\frac{1}{2}m(m+1). \tag{11}\]
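This counting is easy to verify by brute force; a small sketch (ours):

```python
from itertools import product

def degeneracy(m, D=3):
    """Number of (n_x, n_y, n_z) with n_x + n_y + n_z = m - 1."""
    return sum(1 for n in product(range(m), repeat=D) if sum(n) == m - 1)

for m in range(1, 8):
    assert degeneracy(m) == m * (m + 1) // 2   # matches the formula above
```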
Therefore, for closed-shell occupation up to and including the \(n^{th}\) level, we have total particle number and energy
\[N(n)=\sum_{m=1}^{n}\frac{1}{2}m(m+1),\]
\[E(n)=\sum_{m=1}^{n}\frac{1}{2}m(m+1)\frac{2m+1}{2}.\]
For equal numbers of spin-up and spin-down fermions, multiplying by 2 gives
\[N(n)=\sum_{m=1}^{n}m(m+1),\]
\[E(n)=\sum_{m=1}^{n}\frac{1}{2}m(m+1)(2m+1).\]
Since we have the power sums
\[\sum_{m=1}^{n}m = \frac{1}{2}n(n+1), \tag{12}\] \[\sum_{m=1}^{n}m^{2} = \frac{1}{6}n(n+1)(2n+1),\] (13) \[\sum_{m=1}^{n}m^{3} = \frac{1}{4}n^{2}(n+1)^{2}, \tag{14}\]
we have the following _closed shell_ results
\[N(n) = \frac{1}{2}n(n+1)+\frac{1}{6}n(n+1)(2n+1)=\frac{1}{3}n(n+1)(n+2), \tag{15}\] \[E(n) = \frac{1}{4}n(n+1)[n^{2}+3n+2]=\frac{1}{4}n(n+1)^{2}(n+2), \tag{16}\]
which agree with a previous, different derivation by Brosens _et al._[21], where their \(L=n-1\).
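Combined with the partial-shell rule described in the next paragraph, these closed-shell formulas reproduce the exact reference energies quoted in Figs. 1-3; a short sketch (ours):

```python
def exact_energy(M):
    """Ground-state energy of M spin-balanced fermions in a 3D oscillator:
    fill closed shells up to the largest n with N(n) <= M, then place the
    remaining M - N(n) particles in level m = n + 1."""
    n = 0
    while (n + 1) * (n + 2) * (n + 3) // 3 <= M:
        n += 1
    N_n = n * (n + 1) * (n + 2) // 3
    E_n = n * (n + 1)**2 * (n + 2) // 4
    return E_n + (M - N_n) * (2 * (n + 1) + 1) / 2

for M, E in [(100, 510), (200, 1280), (400, 3210),
             (600, 5498), (800, 8070), (1000, 10860)]:
    assert exact_energy(M) == E
```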
For any \(M\) between two closed shells \(n\) and \(n+1\), \(N(n)\leq M\leq N(n+1)\), the energy is \(E(n)\) plus the number of particles in excess of \(N(n)\), namely \([M-N(n)]\), times the energy of the level \(m=n+1\); that is,
\[E_{M}=E(n)+[M-N(n)]\left(\frac{2(n+1)+1}{2}\right). \tag{102}\] |
2304.14165 | An Algorithm for Computing with Brauer's Group Equivariant Neural
Network Layers | The learnable, linear neural network layers between tensor power spaces of
$\mathbb{R}^{n}$ that are equivariant to the orthogonal group, $O(n)$, the
special orthogonal group, $SO(n)$, and the symplectic group, $Sp(n)$, were
characterised in arXiv:2212.08630. We present an algorithm for multiplying a
vector by any weight matrix for each of these groups, using category theoretic
constructions to implement the procedure. We achieve a significant reduction in
computational cost compared with a naive implementation by making use of
Kronecker product matrices to perform the multiplication. We show that our
approach extends to the symmetric group, $S_n$, recovering the algorithm of
arXiv:2303.06208 in the process. | Edward Pearce-Crump | 2023-04-27T13:06:07Z | http://arxiv.org/abs/2304.14165v1 | # An Algorithm for Computing with Brauer's Group Equivariant Neural Network Layers
###### Abstract
The learnable, linear neural network layers between tensor power spaces of \(\mathbb{R}^{n}\) that are equivariant to the orthogonal group, \(O(n)\), the special orthogonal group, \(SO(n)\), and the symplectic group, \(Sp(n)\), were characterised in Pearce-Crump (2022b). We present an algorithm for multiplying a vector by any weight matrix for each of these groups, using category theoretic constructions to implement the procedure. We achieve a significant reduction in computational cost compared with a naive implementation by making use of Kronecker product matrices to perform the multiplication. We show that our approach extends to the symmetric group, \(S_{n}\), recovering the algorithm of Godfrey et al. (2023) in the process.
## 1 Introduction
There has been an increased focus in deep learning to develop neural network architectures that are equivariant to a symmetry group. When we use such a neural network, we know exactly how the output changes when a symmetry transformation is applied to the input. These neural networks come with additional benefits: they require less training data; the layers themselves have a high level of parameter sharing; and there is also a reduction in the time, effort and cost that is needed to search for a neural network architecture, since the form of the architectures is restricted by the symmetry group itself.
Pearce-Crump (2022b) recently characterised the learnable, linear neural network layers between tensor power spaces of \(\mathbb{R}^{n}\) that are equivariant to the orthogonal group, \(O(n)\), the special orthogonal group, \(SO(n)\), and the symplectic group, \(Sp(n)\). In particular, they found a spanning set of matrices that are indexed by certain sets of set partition diagrams for the learnable, linear, equivariant layer functions between such tensor power spaces in the standard basis of \(\mathbb{R}^{n}\) when the group is \(O(n)\) or \(SO(n)\), and in the symplectic basis of \(\mathbb{R}^{n}\) when the group is \(Sp(n)\). This overparameterization of the layer spaces makes it possible to learn the weights that appear in such neural networks.
The main contribution of this paper is that we present an algorithm for multiplying any input vector by any weight matrix for each of the groups in question. In particular, we apply the category theoretic constructions introduced in Pearce-Crump (2023a), which build a functorial correspondence between set partition diagrams and the spanning set matrices, to work with the set partition diagrams as a proxy for the matrices themselves. There are three key properties that we take advantage of to develop the algorithm. The first is that the functors are _full_. This means that, in our case, we can recover the spanning set from the corresponding set of set partition diagrams. The second is that the set partition categories, in which the set partition diagrams form the morphisms in the category, are (strict) _monoidal_. This means that not only is it possible to manipulate the connected components of the set partition diagrams as if they were strings, with the potential to form new set partition diagrams, but also certain set partition diagrams can be decomposed into a tensor product of smaller
set partition diagrams. We focus in particular on trying to construct _planar_ set partition diagrams - diagrams where no connected components intersect each other - as they decompose into the smallest possible set partition diagrams. The third is that the functors themselves are _monoidal_. Critically, this means that any tensor product decomposition of diagrams is respected when viewed as matrices; that is, under the functor, we obtain a Kronecker product of matrices, where each matrix is indexed by a set partition diagram. By using these properties, we construct an algorithm that achieves a significant reduction in computational cost compared with a naive implementation, since we use a Kronecker product of smaller sized matrices to perform the multiplication. We also show that our approach extends to the symmetric group, \(S_{n}\), recovering, with one key distinction, the algorithm of Godfrey et al. (2023) in the process.
## 2 Preliminaries
We choose our field of scalars to be \(\mathbb{R}\) throughout. Tensor products are also taken over \(\mathbb{R}\), unless otherwise stated. Also, we let \([n]\) represent the set \(\{1,\ldots,n\}\).
Recall that a representation of a group \(G\) is a choice of vector space \(V\) over \(\mathbb{R}\) and a group homomorphism
\[\rho_{V}:G\to GL(V) \tag{1}\]
Furthermore, recall that a map \(\phi:V\to W\) between two representations of \(G\) is said to be \(G\)-equivariant if, for all \(g\in G\) and \(v\in V\),
\[\phi(\rho_{V}(g)[v])=\rho_{W}(g)[\phi(v)] \tag{2}\]
We denote the set of all _linear_\(G\)-equivariant maps between \(V\) and \(W\) by \(\operatorname{Hom}_{G}(V,W)\). It can be shown that \(\operatorname{Hom}_{G}(V,W)\) is a vector space over \(\mathbb{R}\). See Segal (2014) for more details.
### Tensor Power Spaces as Group Representations
The groups \(O(n)\), \(Sp(n)\), and \(SO(n)\) are subgroups of \(GL(n)\). We use the symbol \(G\) to refer to any of these groups in the following. Recall that \(\mathbb{R}^{n}\) has a standard basis that is given by \(\{e_{i}\mid i\in[n]\}\), where \(e_{i}\) has a \(1\) in the \(i^{\text{th}}\) position and is \(0\) otherwise.
(Note that if \(G=Sp(n)\), then \(n=2m\), and we label and order the indices by \(1,1^{\prime},\ldots,m,m^{\prime}\), and call the standard basis of \(\mathbb{R}^{n}\) the symplectic basis.)
There exists a (left) action of \(G\) on \(\mathbb{R}^{n}\) that is given by left multiplication on the standard basis, which can be extended linearly to obtain a representation \(G\to GL(\mathbb{R}^{n})\).
Moreover, since the elements
\[e_{I}\coloneqq e_{i_{1}}\otimes e_{i_{2}}\otimes\cdots\otimes e_{i_{k}} \tag{3}\]
for all \(I\coloneqq(i_{1},i_{2},\ldots,i_{k})\in[n]^{k}\) form a basis of \((\mathbb{R}^{n})^{\otimes k}\), the \(k\)-tensor power space of \(\mathbb{R}^{n}\), there also exists a (left) action of \(G\) on \((\mathbb{R}^{n})^{\otimes k}\) that is given by
\[g\cdot e_{I}\coloneqq ge_{i_{1}}\otimes ge_{i_{2}}\otimes\cdots\otimes ge_{i_{ k}} \tag{4}\]
Again, this action can be extended linearly to obtain a representation \(\rho_{k}:G\to GL((\mathbb{R}^{n})^{\otimes k})\).
We are interested in the space of \(G\)-equivariant linear maps between any two tensor power spaces of \(\mathbb{R}^{n}\), \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\), since these maps are the linear layer functions in the group equivariant neural networks of interest.
Figure 1: Examples of \((7,5)\)–partition diagrams. b) is also a \((7,5)\)–Brauer diagram, and c) is also a \(12\backslash 6\)–diagram.
### Set Partition Categories
Pearce-Crump (2022a,b) showed that, for the groups \(G\) in question, \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\) can be constructed from certain set partitions of \([l+k]\), and in particular, from their corresponding set partition diagrams. Pearce-Crump (2023a) introduced a category theoretic framework around these set partition diagrams which allows us to better understand and work with the linear layer functions of the neural networks themselves. We assume throughout that \(n\in\mathbb{N}_{\geq 0}\).
For \(l,k\in\mathbb{N}_{\geq 0}\), consider the set \([l+k]:=\{1,\ldots,l+k\}\) having \(l+k\) elements. We can create a set partition of \([l+k]\) by partitioning it into a number of subsets. We call the subsets of a set partition _blocks_. Let \(\Pi_{l+k}\) be the set of all set partitions of \([l+k]\). Then, for each set partition \(\pi\) in \(\Pi_{l+k}\), we can associate to it a diagram \(d_{\pi}\), called a \((k,l)\)-partition diagram, consisting of two rows of vertices and edges between vertices such that there are
* \(l\) vertices on the top row, labelled left to right by \(1,\ldots,l\)
* \(k\) vertices on the bottom row, labelled left to right by \(l+1,\ldots,l+k\), and
* the edges between the vertices correspond to the connected components of \(\pi\).
As a result, \(d_{\pi}\) represents the equivalence class of all diagrams with connected components equal to the blocks of \(\pi\).
There are special types of \((k,l)\)-partition diagrams that we are interested in, namely:
* A \((k,l)\)-Brauer diagram \(d_{\beta}\) is a \((k,l)\)-partition diagram where the size of every block in \(\beta\) is exactly two.
* Given \(k\) and \(l\), an \((l+k)\backslash n\)-diagram \(d_{\alpha}\) is a \((k,l)\)-partition diagram where exactly \(n\) blocks in \(\alpha\) have size one, with the rest having exactly size two. The vertices corresponding to the blocks of size one are called free vertices.
We give examples of these set partition diagrams in Figure 1.
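Concretely, all that the later algorithms need of such a diagram is the list of blocks of the underlying set partition, with vertices \(1,\ldots,l\) labelling the top row and \(l+1,\ldots,l+k\) the bottom row. A sketch of one possible encoding (ours; the pairing shown is illustrative, not the one drawn in Figure 1):

```python
def is_brauer(blocks):
    """A (k, l)-Brauer diagram: every block has size exactly two."""
    return all(len(b) == 2 for b in blocks)

# An illustrative (7, 5)-Brauer diagram (l = 5, k = 7): six pairs on 12 vertices.
example = [(1, 6), (2, 3), (4, 5), (7, 8), (9, 12), (10, 11)]
assert is_brauer(example)
```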
From these special types of set partition diagrams, we can form a number of set partition categories, as follows.
_Definition 2.1_.: The Brauer category \(\mathcal{B}(n)\) is the category whose objects are the non-negative integers \(\mathbb{N}_{\geq 0}=\{0,1,2,\ldots\}\), and, for any pair of objects \(k\) and \(l\), the morphism space \(\operatorname{Hom}_{\mathcal{B}(n)}(k,l)\) is a vector space that is defined to be the \(\mathbb{R}\)-linear span of the set of all \((k,l)\)-Brauer diagrams.
_Definition 2.2_.: The Brauer-Grood category \(\mathcal{BG}(n)\) is the category whose objects are the same as those of \(\mathcal{B}(n)\) and, for any pair of objects \(k\) and \(l\), the morphism space \(\operatorname{Hom}_{\mathcal{BG}(n)}(k,l)\) is defined to be the \(\mathbb{R}\)-linear span of the set of all \((k,l)\)-Brauer diagrams together with the set of all \((l+k)\backslash n\)-diagrams.
These two categories come with a vertical composition operation on morphisms, a tensor product operation on objects and morphisms, and a unit object. The vertical composition operation can be found in (Pearce-Crump, 2023a, Section 2.2, Appendix A). The unit object in each category is the object \(0\). The tensor product operation on objects is given by the standard addition operation in \(\mathbb{N}_{\geq 0}\). Finally, the tensor product operation is defined on diagrams (morphisms) as follows:
* If \(d_{\beta_{1}}\) is a \((k,l)\)-Brauer diagram and \(d_{\beta_{2}}\) is a \((q,m)\)-Brauer diagram, then \(d_{\beta_{1}}\otimes d_{\beta_{2}}\) is defined to be the \((k+q,l+m)\)-Brauer diagram obtained by horizontally placing \(d_{\beta_{1}}\) to the left of \(d_{\beta_{2}}\) without any overlapping of vertices.
* If \(d_{\beta}\) is a \((k,l)\)-Brauer diagram and \(d_{\alpha}\) is an \((m+q)\backslash n\)-diagram, then \(d_{\beta}\otimes d_{\alpha}\) is defined to be the \((l+m+k+q)\backslash n\)-diagram obtained by horizontally placing \(d_{\beta}\) to the left of \(d_{\alpha}\) without any overlapping of vertices.
* See (Pearce-Crump, 2023a, Appendix A) for the definition of the tensor product of an \((l+k)\backslash n\)-diagram with an \((m+q)\backslash n\)-diagram.
\(\mathcal{B}(n)\) and \(\mathcal{BG}(n)\) are, in fact, strict \(\mathbb{R}\)-linear monoidal categories - see (Pearce-Crump, 2023a, Section 4.1) for more details. Morphisms in such categories can be represented using a diagrammatic language known as string diagrams. See (Pearce-Crump, 2023a, Section 3.2) for more details. This has the consequence that we can pull on and bend the connected components as if they were strings
and/or move the vertices to obtain new set partition diagrams, and hence new morphisms, in the appropriate categories.
### Group Equivariant Linear Layers
For each group \(G\), there is a spanning set for \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\) that is indexed by certain set partitions of \([l+k]\) that correspond to the special types of \((k,l)\)-partition diagrams that were introduced in Section 2.2. These spanning sets are expressed in the basis of matrix units for \(\operatorname{Hom}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\). We state here what these spanning sets are, leaving their explicit definitions to the Technical Appendix.
**Theorem 2.3** (Spanning set when \(G=O(n)\)).: _(Pearce-Crump, 2022b, Theorem 6.5) For any \(k,l\in\mathbb{N}_{\geq 0}\) and any \(n\in\mathbb{N}_{\geq 1}\), the set_
\[\{E_{\beta}\mid d_{\beta}\text{ is a }(k,l)\text{--Brauer diagram}\} \tag{5}\]
_is a spanning set for \(\operatorname{Hom}_{O(n)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{ \otimes l})\) in the standard basis of \(\mathbb{R}^{n}\)._
**Theorem 2.4** (Spanning set when \(G=Sp(n),n=2m\)).: _(Pearce-Crump, 2022b, Theorem 6.6) For any \(k,l\in\mathbb{N}_{\geq 0}\) and any \(n\in\mathbb{N}_{\geq 2}\) such that \(n=2m\), the set_
\[\{F_{\beta}\mid d_{\beta}\text{ is a }(k,l)\text{--Brauer diagram}\} \tag{6}\]
_is a spanning set for \(\operatorname{Hom}_{Sp(n)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{ \otimes l})\), for \(n=2m\), in the symplectic basis of \(\mathbb{R}^{n}\)._
**Theorem 2.5** (Spanning set when \(G=SO(n)\)).: _(Pearce-Crump, 2022b, Theorem 6.7) For any \(k,l\in\mathbb{N}_{\geq 0}\) and any \(n\in\mathbb{N}_{\geq 1}\), the set_
\[\{E_{\beta}\mid d_{\beta}\text{ is a }(k,l)\text{--Brauer diagram}\}\cup\{H_{\alpha}\mid d_{\alpha}\text{ is an }(l+k)\backslash n\text{--diagram}\} \tag{7}\]
_is a spanning set for \(\operatorname{Hom}_{SO(n)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{ \otimes l})\) in the standard basis of \(\mathbb{R}^{n}\)._
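The explicit definitions of \(E_{\beta}\), \(F_{\beta}\) and \(H_{\alpha}\) are deferred to the Technical Appendix. Purely as an illustration, assuming the standard construction for \(O(n)\) in which each block \(\{a,b\}\) of \(\beta\) contributes a Kronecker delta between the indices at vertices \(a\) and \(b\), a spanning set element could be materialised densely as follows (a sketch of ours; its \(n^{k+l}\) cost is precisely what the algorithm of Section 3 is designed to avoid):

```python
import numpy as np
from itertools import product

def E_beta_dense(blocks, k, l, n):
    """Dense n^l x n^k matrix of a (k, l)-Brauer diagram, under the
    assumption (E_beta)_{I,J} = prod over blocks {a,b} of
    delta(idx_a, idx_b), where vertex v in 1..l+k carries the index
    idx_v read off from the concatenation (I, J)."""
    E = np.zeros((n**l, n**k))
    for row, I in enumerate(product(range(n), repeat=l)):
        for col, J in enumerate(product(range(n), repeat=k)):
            idx = I + J
            if all(idx[a - 1] == idx[b - 1] for a, b in blocks):
                E[row, col] = 1.0
    return E
```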
For each group \(G\) in question, we can also define the following category.
_Definition 2.6_.: The category \(\mathcal{C}(G)\) consists of objects that are the \(k\)-order tensor power spaces of \(\mathbb{R}^{n}\), as representations of \(G\), and morphism spaces between any two objects that are the vector spaces \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{ \otimes l})\).
The vertical composition of morphisms is given by the usual composition of linear maps, the tensor product is given by the usual tensor product of linear maps, and the unit object is the one-dimensional trivial representation of \(G\).
\(\mathcal{C}(G)\) is a strict, \(\mathbb{R}\)-linear monoidal category: see (Pearce-Crump, 2023a, Appendix D) for more details.
### Full, Strict \(\mathbb{R}\)-Linear Monoidal Functors
(Pearce-Crump, 2023a, Section 4.2, Appendix D) showed that we have a number of _full, strict \(\mathbb{R}\)-linear monoidal_ functors between the set partition categories and the category \(\mathcal{C}(G)\) for the appropriate group \(G\). We reproduce the results below.
**Theorem 2.7**.: _There exists a full, strict \(\mathbb{R}\)-linear monoidal functor_
\[\Phi:\mathcal{B}(n)\to\mathcal{C}(O(n)) \tag{8}\]
_that is defined on the objects of \(\mathcal{B}(n)\) by \(\Phi(k)\coloneqq((\mathbb{R}^{n})^{\otimes k},\rho_{k})\) and, for any objects \(k,l\) of \(\mathcal{B}(n)\), the map_
\[\operatorname{Hom}_{\mathcal{B}(n)}(k,l)\to\operatorname{Hom}_{\mathcal{C}(O( n))}(\Phi(k),\Phi(l)) \tag{9}\]
_is given by_
\[d_{\beta}\mapsto E_{\beta} \tag{10}\]
_for all \((k,l)\)-Brauer diagrams \(d_{\beta}\), where \(E_{\beta}\) is given in Theorem 2.3._
**Theorem 2.8**.: _There exists a full, strict \(\mathbb{R}\)-linear monoidal functor_
\[X:\mathcal{B}(n)\to\mathcal{C}(Sp(n)) \tag{11}\]
_that is defined on the objects of \(\mathcal{B}(n)\) by \(X(k)\coloneqq((\mathbb{R}^{n})^{\otimes k},\rho_{k})\) and, for any objects \(k,l\) of \(\mathcal{B}(n)\), the map_
\[\operatorname{Hom}_{\mathcal{B}(n)}(k,l)\to\operatorname{Hom}_{\mathcal{C}(Sp(n))}(X(k),X(l)) \tag{12}\]
_is given by_
\[d_{\beta}\mapsto F_{\beta} \tag{13}\]
_for all \((k,l)\)-Brauer diagrams \(d_{\beta}\), where \(F_{\beta}\) is given in Theorem 2.4._
**Theorem 2.9**.: _There exists a full, strict \(\mathbb{R}\)-linear monoidal functor_
\[\Psi:\mathcal{BG}(n)\to\mathcal{C}(SO(n)) \tag{14}\]
_that is defined on the objects of \(\mathcal{BG}(n)\) by \(\Psi(k)\coloneqq((\mathbb{R}^{n})^{\otimes k},\rho_{k})\) and, for any objects \(k,l\) of \(\mathcal{BG}(n)\), the map_
\[\operatorname{Hom}_{\mathcal{BG}(n)}(k,l)\to\operatorname{Hom}_{\mathcal{C}(SO(n))}(\Psi(k),\Psi(l)) \tag{15}\]
_is given by_
\[d_{\beta}\mapsto E_{\beta} \tag{16}\]
_for all \((k,l)\)-Brauer diagrams \(d_{\beta}\), where \(E_{\beta}\) is given in Theorem 2.3, and_
\[d_{\alpha}\mapsto H_{\alpha} \tag{17}\]
_for all \((l+k)\backslash n\)-diagrams \(d_{\alpha}\), where \(H_{\alpha}\) is given in Theorem 2.5._
The key implications of these results going forward are as follows:
1. To understand and work with any matrix in \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{ \otimes l})\), it is enough to work with the subset of \((k,l)\)-partition diagrams that correspond to \(G\). This is because we can express any matrix in terms of the set of spanning set elements for \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{ \otimes l})\), given in Theorems 2.3 - 2.5, and these correspond bijectively with the subset of \((k,l)\)-partition diagrams that corresponds to \(G\). We can recover the matrix itself by applying the appropriate functor to the set partition diagrams because the functors are full.
2. We can manipulate the connected components and vertices of \((k,l)\)-partition diagrams like strings to obtain new set partition diagrams because the set partition categories are strict monoidal. Point 1. immediately implies that we will obtain new \(G\)-equivariant matrices between tensor power spaces of \(\mathbb{R}^{n}\) from the resulting set partition diagrams.
3. If a \((k,l)\)-partition diagram can be decomposed as a tensor product of smaller set partition diagrams, then the corresponding matrix can also be decomposed as a tensor product of smaller sized matrices, each of which is \(G\)-equivariant. This is because the functors are monoidal. It is this property that makes these specific functors so valuable in what follows, as without it, using the set partition diagrams to factor the matrices will not be possible.
4. In particular, \((k,l)\)-partition diagrams that are _planar_ - that is, none of the connected components in the diagram intersect each other - can be decomposed as a tensor product of smaller set partition diagrams.
## 3 Multiplication Algorithm
We can use the summary points given above to construct an algorithm for multiplying any vector \(v\in(\mathbb{R}^{n})^{\otimes k}\) by any matrix in \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{ \otimes l})\), expressed in the standard basis of \(\mathbb{R}^{n}\), for each of the groups \(G\) in question.
Since we have a spanning set of \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{ \otimes l})\) for each group \(G\), it is enough to describe an algorithm for how to multiply \(v\) by a spanning set element, since we can extend the result by linearity. The linearity is particularly nice as it allows for the computation with a generic matrix in \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{ \otimes l})\) to be executed in parallel on each of the spanning set elements that appear in its expression.
Algorithm 1 outlines a procedure MatrixMult for multiplying \(v\) by a spanning set element in \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\). We assume that we have the set partition diagram that is associated
with the spanning set element. (Note that in the description of the algorithm, we have used \(d_{\pi}\) to represent a generic \((k,l)\)-partition diagram; however, the type of set partition diagrams that we can use as input depends entirely upon the group \(G\).)
We describe each of the procedures that appear in Algorithm 1 in more detail below.
Factor is a procedure that takes as input a set partition diagram corresponding to \(G\) and uses the string-like property of these diagrams to output three diagrams whose composition is equivalent to the original input diagram. The first is a diagram that corresponds to a permutation \(\sigma\), where if \(i\) is a vertex in the top row, then the vertex in the bottom row that is connected to it (using the same labelling of the vertices as the top row) is \(\sigma(i)\); the second is another set partition diagram (of the same type as the input) that is planar, and the third is a diagram that corresponds to another permutation, to be interpreted in the same way as the first.
Permute takes as input a vector \(w\in(\mathbb{R}^{n})^{\otimes m}\), for some \(n,m\), that is expressed in the standard basis of \(\mathbb{R}^{n}\), and a permutation \(\sigma\) in \(S_{m}\), and outputs another vector in \((\mathbb{R}^{n})^{\otimes m}\) where only the indices of the basis vectors (and not the indices of the coefficients of \(w\)) have been permuted according to \(\sigma\). Expressed differently, Permute performs the following operation, which is extended linearly:
\[\sigma\cdot w_{I}e_{I}\coloneqq w_{I}(e_{i_{\sigma(1)}}\otimes e_{i_{\sigma(2 )}}\otimes\cdots\otimes e_{i_{\sigma(m)}}) \tag{18}\]
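With the vector stored as a flattened \(m\)-way array of side \(n\), Permute amounts to a reshape and an axis transpose. A NumPy sketch (ours; whether \(\sigma\) or its inverse appears in the transpose depends on the direction convention adopted):

```python
import numpy as np

def permute(w, sigma, n):
    """Permute the tensor factors of w in (R^n)^(tensor m) by sigma,
    as in Eq. (18). sigma is 0-indexed; here we transpose by its inverse."""
    m = len(sigma)
    inv = np.argsort(sigma)    # inverse permutation
    return w.reshape((n,) * m).transpose(inv).reshape(n**m)
```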
PlanarMult takes as input a planar set partition diagram and a vector, and performs a fast matrix multiplication on this vector. Since the set partition diagram is planar, we first use the monoidal property of the category in which it is a morphism to decompose it as a tensor product of smaller set partition diagrams. Next, we apply the appropriate monoidal functor to express the matrix that the planar set partition diagram corresponds to as a Kronecker product of smaller matrices. Finally, we perform matrix multiplication by applying these smaller matrices to the input vector from "right-to-left, diagram-by-diagram" - to be described in more detail for each group below - returning another vector as output.
The four-step procedure given in Algorithm 1 performs the matrix multiplication more quickly than using the matrix as given, since we take advantage of the Kronecker product decomposition of a matrix in PlanarMult to speed up the computation. We analyse the performance of the algorithm for each group \(G\) in the Technical Appendix.
We note that the implementation of the Factor and PlanarMult procedures vary according to the group \(G\) and the type of set partition diagrams that correspond to \(G\), although they share many commonalities. We describe the implementation of these procedures for each of the groups below.
### Orthogonal Group \(O(n)\)
In this case, we wish to perform matrix multiplication between \(E_{\beta}\in\operatorname{Hom}_{O(n)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{ R}^{n})^{\otimes l})\) and \(v\in(\mathbb{R}^{n})^{\otimes k}\).
Factor:The input is \(d_{\beta}\), the \((k,l)\)-Brauer diagram corresponding to \(E_{\beta}\). We drag and bend the strings representing the connected components of \(d_{\beta}\) to obtain a factoring of \(d_{\beta}\) into three diagrams whose composition is equivalent to \(d_{\beta}\): a \((k,k)\)-Brauer diagram that represents a permutation \(\sigma_{k}\) in the symmetric group \(S_{k}\); another \((k,l)\)-Brauer diagram that is planar; and a \((l,l)\)-Brauer diagram that represents a permutation \(\sigma_{l}\) in the symmetric group \(S_{l}\). To obtain the desired planar \((k,l)\)-Brauer diagram, we drag and bend the strings in any way such that
* the pairs that are solely in the bottom row of \(d_{\beta}\) are pulled up to be next to each other in the far right hand side of the bottom row of the planar \((k,l)\)-Brauer diagram,
* the pairs that are solely in the top row of \(d_{\beta}\) are pulled down to be next to each other in the far left hand side of the top row of the planar \((k,l)\)-Brauer diagram, and
* the pairs between vertices in different rows of \(d_{\beta}\) are bent to be in between the other vertices of the planar \((k,l)\)-Brauer diagram such that no two pairings in the planar diagram intersect each other.
We give an example of this procedure in Figure 2a).
PlanarMult: First, we take the planar \((k,l)\)-Brauer diagram that comes from Factor and express it as a tensor product of three types of Brauer diagrams. The right-most type is itself a tensor product of Brauer diagrams, where each diagram has only two connected vertices in the bottom row. The middle type is a Brauer diagram that consists of all of the pairs in the planar \((k,l)\)-Brauer diagram between vertices in different rows. The left-most type is a tensor product of Brauer diagrams having only two connected vertices in the top row. Figure 3a) shows an example of the tensor product decomposition for the planar \((5,5)\)-Brauer diagram that appears in Figure 2a).
The resulting tensor product decomposition corresponds to a Kronecker product of smaller matrices under the functor \(\Phi\), defined in Theorem 2.7, by the monoidal property of \(\Phi\). In order to perform the matrix multiplication, we would like to apply each smaller matrix to the input vector from right-to-left, diagram-by-diagram. To do this, we first deform the entire tensor product decomposition of Brauer diagrams by pulling each individual diagram up one level higher than the previous one, going from right-to-left, and then apply the functor \(\Phi\) at each level. The newly inserted strings correspond to an identity matrix, hence only the matrices corresponding to the original tensor product decomposition act on the input vector at each stage, as desired! We give an example in Figure 4 of how the computation takes place at each stage for the tensor product decomposition given in Figure 3a), using its equivalent diagram form. We provide full details of how this procedure is implemented in the Technical Appendix.
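The computational payoff comes from the standard trick of applying a Kronecker product one factor at a time rather than materialising it. A self-contained sketch (ours), checked against the explicit product:

```python
import numpy as np

def kron_apply(mats, x):
    """Compute (A_1 kron A_2 kron ... kron A_t) @ x factor by factor,
    i.e. right-to-left, without ever forming the full Kronecker product."""
    for A in reversed(mats):                 # rightmost factor acts first
        x = x.reshape(-1, A.shape[1]) @ A.T  # contract this factor's index
        x = x.T.reshape(-1)                  # rotate axes for the next factor
    return x

rng = np.random.default_rng(0)
A, B, C = rng.normal(size=(2, 3)), rng.normal(size=(3, 2)), rng.normal(size=(2, 2))
x = rng.normal(size=3 * 2 * 2)
assert np.allclose(kron_apply([A, B, C], x), np.kron(np.kron(A, B), C) @ x)
```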
_Remark 3.1_.: It is important to highlight that the implementation of Factor is very important to the overall performance of the entire algorithm. Specifically, we want to obtain a particular planar Brauer diagram, hence the use of the word _desired_ above. Firstly, we want the middle Brauer diagram to be planar in order to take advantage of the fact that it can be decomposed as a tensor product of smaller Brauer diagrams, as this corresponds to a Kronecker product of matrices under the functor \(\Phi\). However, when performing matrix multiplication with a Kronecker product of matrices, these matrices will perform different operations, and so, if we can choose their order, we should do so in order to execute them in the most efficient way possible. It is clear that ordering the matrices is equivalent to ordering the smaller diagrams, which is equivalent to obtaining a specific planar Brauer diagram!
Under the functor \(\Phi\), the right-most type of Brauer diagrams that are used in PlanarMult corresponds to tensor contraction (indexing and summation) operations. The middle type corresponds to index transfer operations, which, in this case, act as the identity transformation - hence no such operations are executed. Finally, the left-most type corresponds to indexing operations that perform copies. In particular, it is best to perform tensor contraction operations first before performing copying operations as we are reducing the number of elements that need to be copied; this is why we want Factor to return the particular planar Brauer diagram that it does. We analyse the performance of these operations further in the Technical Appendix.
### Symplectic Group \(Sp(n)\)
In this case, we wish to perform matrix multiplication between \(F_{\beta}\in\operatorname{Hom}_{Sp(n)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{ R}^{n})^{\otimes l})\) and \(v\in(\mathbb{R}^{n})^{\otimes k}\).
The implementation of the Factor procedure is the same as for the orthogonal group. The implementation of the PlanarMult procedure is also the same as for the orthogonal group, except we apply the functor \(X\), defined in Theorem 2.8, instead of the functor \(\Phi\), to perform the matrix multiplication. Note that the three types of diagrams correspond to operations of the same nature as for the orthogonal group; however, the horizontal pairs correspond to different matrices due to the
change in functor. We provide full details of how this procedure is implemented in the Technical Appendix.
### Special Orthogonal Group \(So(n)\)
We wish to perform matrix multiplication between either \(E_{\beta}\) or \(H_{\alpha}\in\mathrm{Hom}_{SO(n)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^ {\otimes l})\) and \(v\in(\mathbb{R}^{n})^{\otimes k}\), where \(d_{\beta}\) is a \((k,l)\)-Brauer diagram, and \(d_{\alpha}\) is a \((l+k)\backslash n\)-diagram. The \(E_{\beta}\) case is the same as for the orthogonal group by Theorem 2.5. We consider the \(H_{\alpha}\) case below.
Factor: The input is \(d_{\alpha}\). Again, we drag and bend the strings representing the connected components of \(d_{\alpha}\) to obtain a factoring of \(d_{\alpha}\) into the same three diagrams, except this time the middle diagram will be a planar \((l+k)\backslash n\)-diagram. To obtain the desired planar \((l+k)\backslash n\)-diagram we want to drag and bend the strings in any way such that
* the free vertices in the top row of \(d_{\alpha}\) are pulled down to the far right of the top row of the planar \((l+k)\backslash n\)-diagram, maintaining their order,
* the free vertices in the bottom row of \(d_{\alpha}\) are pulled up to the far right of the bottom row of the planar \((l+k)\backslash n\)-diagram, maintaining their order,
* the pairs in the bottom row of \(d_{\alpha}\) are pulled up to be next to each other in the right hand side of the bottom row of the planar \((l+k)\backslash n\)-diagram, but next to and to the left of the free vertices in the bottom row of the planar \((l+k)\backslash n\)-diagram,
* the pairs in the top row of \(d_{\alpha}\) are pulled down to be next to each other in the far left hand side of the top row of the planar \((l+k)\backslash n\)-diagram,
* the pairs connecting vertices in different rows of \(d_{\alpha}\) are ordered in the planar \((l+k)\backslash n\)-diagram in between the other vertices such that no two pairings in the planar diagram intersect each other.
We give an example of this procedure in Figure 2b).
PlanarMult: The implementation is slightly different. Again, we take the planar \((l+k)\backslash n\)-diagram that comes from Factor, but now express it as a tensor product of _four_ types of set partition diagrams. The right-most is a diagram consisting of all the free vertices. This corresponds to the evaluation operation \(\chi\) that is given in (Pearce-Crump, 2022b, Theorem 6.7). The other types of diagrams (and their corresponding operations) are exactly the same as for the orthogonal group. Figure 3b) shows an example of this tensor product decomposition.
The matrix multiplication step is very similar to the orthogonal group, in that to obtain the matrices we perform the same deformation of the tensor product decomposition of diagrams before applying
Figure 2: a) We use the string-like aspect of \((k,l)\)–Brauer diagrams to Factor them as a composition of a permutation in \(S_{k}\), a _planar_\((k,l)\)–Brauer diagram, and a permutation in \(S_{l}\). b) We perform the same procedure but on \((l+k)\backslash n\)–diagrams.

Figure 3: a) The tensor product decomposition of the planar \((5,5)\)–Brauer diagram appearing in Figure 2a). b) The tensor product decomposition of the planar \((4+5)\backslash 3\)–diagram appearing in Figure 2b).
the functor \(\Psi\), defined in Theorem 2.9, at each level. Note, in particular, that we need to attach identity strings to the free vertices appearing in the top row. We provide full details of how this procedure is implemented in the Technical Appendix.
_Remark 3.2_.: Similar to Remark 3.1, we want Factor to return a specific planar \((l+k)\backslash n\)-diagram in order to obtain the most efficient matrix multiplication possible. In this case, we want to pull the free vertices over to the far right hand side as such a diagram corresponds to an operation that zeroes out most terms in the input vector; in fact, by the definition of \(\chi\), the number of terms will decrease from \(n^{k}\) to \(n!\). As a consequence of the form of a planar \((l+k)\backslash n\)-diagram, the rest of the order is determined by the order for the orthogonal group case.
_Remark 3.3_.: The methods that we have used to construct the algorithm presented in this paper can be extended to the case where the group \(G\) is the symmetric group \(S_{n}\). In this case, we recover, in effect, the algorithm given in (Godfrey et al., 2023, Appendix C); however, we use an entirely different approach - involving monoidal categories - to obtain it. We provide full details of its implementation, as well as a discussion on some key differences between the two versions, in the Technical Appendix.
We also give examples of how to perform MatrixMult for each of the groups in question in the Technical Appendix.
## 4 Related Work
The motivation for our algorithm comes from the literature on permutation equivariant neural networks. Maron et al. (2019) were the first to classify the linear layer functions in such networks. Pearce-Crump (2022a) then established a connection between these layer functions and the partition algebra using Schur-Weyl duality. Pan and Kondor (2022) investigated the operations that are needed to perform matrix multiplication with these layers. Godfrey et al. (2023) implemented an algorithm to perform the matrix multiplication itself, and found that using a basis known as the diagram basis, first constructed by Jones (1994) in the case \(k=l\), is particularly beneficial for these computations.
Pearce-Crump (2022b) characterised the learnable, linear neural network layers between tensor power spaces of \(\mathbb{R}^{n}\) that are equivariant to the orthogonal group, \(O(n)\), the special orthogonal group, \(SO(n)\), and the symplectic group, \(Sp(n)\). The Brauer algebra, which appears in this characterisation, was first developed by Brauer (1937). Brown (1955, 1956) showed that the Brauer algebra is semisimple if and only if \(n\geq k-1\). Grood (1999) investigated the representation theory of the Brauer-Grood algebra. The Brauer category first appeared in Lehrer and Zhang (2012), and it is also
Figure 4: We show how matrix multiplication is implemented in PlanarMult for \(O(n),Sp(n)\) and \(SO(n)\) using the tensor product decomposition of the planar \((5,5)\)–Brauer diagram given in Figure 3a) as an example. Effectively, we perform the matrix multiplication by applying the matrices “right–to–left, diagram–by–diagram”. In reality, we perform the matrix multiplication as follows: first, we deform the entire tensor product decomposition diagram by pulling each individual diagram up one level higher than the previous one, going from right–to-left, and then we apply the functor that corresponds to the group at each level. Finally, we perform matrix multiplication at each level to obtain the final output vector.
discussed in Hu (2019). Lehrer and Zhang (2018) investigated the theory behind what we have termed the Brauer-Grood category. This category also appears in Comes (2020).
## 5 Conclusion
In this paper, we have introduced an algorithm for multiplying a vector by any weight matrix that appears in a group equivariant neural network where the layers are tensor power spaces of \(\mathbb{R}^{n}\), for the orthogonal, special orthogonal, and symplectic groups. Our implementation takes advantage of the properties of monoidal categories and monoidal functors, and results in a significant reduction in computational cost compared with a naive implementation. Ultimately, this algorithm reduces the time that is needed to train and run such neural networks.
## 6 Acknowledgments
The author would like to thank his PhD supervisor Professor William J. Knottenbelt for being generous with his time throughout the author's period of research prior to the publication of this paper.
This work was funded by the Doctoral Scholarship for Applied Research which was awarded to the author under Imperial College London's Department of Computing Applied Research scheme. This work will form part of the author's PhD thesis at Imperial College London.
|
2301.00916 | Individual Path Recommendation Under Public Transit Service Disruptions
Considering Behavior Uncertainty | This study proposes a mixed-integer programming formulation to model the
individual-based path (IPR) recommendation problem during public transit
service disruptions with the objective of minimizing system travel time and
respecting passengers' path choice preferences. Passengers' behavior
uncertainty in path choices given recommendations is also considered. We model
the behavior uncertainty based on the passenger's prior preferences and
posterior path choice probability distribution with two new concepts:
epsilon-feasibility and Gamma-concentration, which control the mean and
variance of path flows in the optimization problem. We show that these two
concepts can be seen as a way of approximating the recourse function (expected
system travel time) in a two-stage stochastic optimization. It is proved that
these two concepts help to bound the difference between the approximated
recourse function and the exact one. Additional theoretical analysis shows that
epsilon-feasibility and Gamma-concentration can be seen as an approximation of
expectation and chance constraints in a typical stochastic optimization
formulation, respectively. The proposed IPR problem with behavior uncertainty
is solved efficiently with Benders decomposition. The model is implemented in
the Chicago Transit Authority (CTA) system with a real-world urban rail
disruption as the case study. Results show that the proposed IPR model
significantly reduces the average travel times compared to the status quo and
outperforms the capacity-based benchmark path recommendation strategy. | Baichuan Mo, Haris N. Koutsopoulos, Zuo-Jun Max Shen, Jinhua Zhao | 2023-01-03T01:17:26Z | http://arxiv.org/abs/2301.00916v1 | Individual Path Recommendation Under Public Transit Service Disruptions Considering Behavior Uncertainty
###### Abstract
This study proposes a mixed-integer programming formulation to model the individual-based path (IPR) recommendation problem during public transit service disruptions with the objective of minimizing system travel time and respecting passengers' path choice preferences. Passengers' behavior uncertainty in path choices given recommendations is also considered. We model the behavior uncertainty based on the passenger's prior preferences and posterior path choice probability distribution with two new concepts: \(\epsilon\)-feasibility and \(\Gamma\)-concentration, which control the mean and variance of path flows in the optimization problem. We show that these two concepts can be seen as a way of approximating the recourse function (expected system travel time) in a two-stage stochastic optimization. It is proved that these two concepts help to bound the difference between the approximated recourse function and the exact one. Additional theoretical analysis shows that \(\epsilon\)-feasibility and \(\Gamma\)-concentration can be seen as an approximation of expectation and chance constraints in a typical stochastic optimization formulation, respectively. The proposed IPR problem with behavior uncertainty is solved efficiently with Benders decomposition. The model is implemented in the Chicago Transit Authority (CTA) system with a real-world urban rail disruption as the case study. Results show that the proposed IPR model significantly reduces the average travel times compared to the status quo and outperforms the capacity-based benchmark path recommendation strategy. We also show that incorporating behavior uncertainty with respect to responses to information achieves lower system travel times than assuming that all passengers would follow the recommendations. In terms of respecting people's preferences, results show that it is possible to make recommendations so that most of the passengers (e.g., more than 70%) use their preferred paths while only increasing the system travel time by 0.51%. Individualized recommendation; Mixed-integer programming; Behavior uncertainty.
## 1 Introduction
### Background and challenges
With aging systems and near-capacity operations, service disruptions often occur in urban public transit (PT) systems. These incidents may result in passenger delays, cancellation of trips, and economic losses (Cox et al., 2011).
During a significant disruption where the service is interrupted for a relatively long period of time (e.g., 1 hour), affected passengers usually need to find an alternative path or use other travel modes (such as transferring to another bus route). However, due to a lack of knowledge of the system (especially during incidents), the routes chosen by passengers may not be optimal and may even cause more congestion (Mo et al., 2022). For example, during a rail disruption, most passengers may choose bus routes that parallel the interrupted rail line as an alternative. However, given limited bus capacity, these parallel bus lines may become oversaturated, and passengers may have to wait a long time to board because of denied boardings (i.e., being left behind).
One of the strategies to better guide passengers is to provide path recommendations so that passenger flows are re-distributed in a better way and the system travel times are minimized. This can be seen as solving an optimal passenger flow distribution (or assignment) problem over a public transit network. However, different from the typical flow redistribution problem, there are several unique characteristics and challenges for the path recommendation problem under PT service disruptions.
* Passengers may have different preferences on different alternative paths. This heterogeneity suggests that we cannot treat a group of passengers simply as flows. Individualization is needed in the path recommendation design.
* Passengers may not follow the recommendation. When providing a specific path recommendation to a passenger, their actual path choice is uncertain (though the recommendation may change their preferences). This behavior uncertainty brings challenges to the recommendation system design and has not been considered in the path recommendation literature. In the context of individualization, the behavior uncertainty is also individual-specific, which requires a more granular modeling approach.
### Organization and contributions
To tackle these challenges, this study proposes an individual-based path recommendation model to reduce system congestion during public transit disruptions considering behavior uncertainty. We first formulate an optimal flow problem as a linear program based on the model of Bertsimas et al. (2020), which solves the optimal path flows for each OD pair and time interval that minimize the system travel time. Then, we add the recommendation decision variables, \(x_{p,r}\) (a binary variable indicating whether path \(r\) is recommended to passenger \(p\)), and associated constraints to capture the behavior uncertainty. The behavior uncertainty is modeled with a conditional path choice probability distribution for each passenger given their received path recommendation. We introduce
two new concepts: \(\epsilon\)-feasible flows and \(\Gamma\)-concentrated flows, to connect the optimal flow problem with the conditional path choice probabilities. We show that these two concepts can be seen as a way of approximating the recourse function (expected system travel time) in a two-stage stochastic optimization. It is proved that these two concepts help to bound the difference between the approximated recourse function and the exact one. Additional theoretical analysis shows that \(\epsilon\)-feasibility and \(\Gamma\)-concentration are approximations of expectation and chance constraints in a typical stochastic optimization formulation, respectively. The individual path recommendation problem with behavior uncertainty is a mixed-integer program. We solve it efficiently with Benders decomposition. The proposed approach is implemented in the Chicago Transit Authority (CTA) system with a real-world urban rail disruption as the case study.
The main contributions of this paper are as follows:
* To the best of the authors' knowledge, this is the first article dealing with individual path recommendations under public transit service disruptions considering behavior uncertainty. Previous studies only considered uncertainty in demand (Mo et al., 2022) or incident duration (Tan et al., 2020).
* To model behavior uncertainty, this paper proposes a framework with prior path utility and posterior path choice distribution given recommendations. We use two new concepts: \(\epsilon\)-feasibility and \(\Gamma\)-concentration, to control the mean and variance of path flows due to behavior uncertainty and transform these two requirements to linear constraints in the optimization model using Chebyshev's inequality.
* The proposed concepts can be seen as a way of approximating the recourse function (expected system travel time) in a two-stage stochastic optimization. It is proved that these two concepts help to bound the difference between the approximated recourse function and the exact one. Additional theoretical analysis shows that \(\epsilon\)-feasibility and \(\Gamma\)-concentration are approximations of expectation and chance constraints in a typical stochastic optimization formulation, respectively.
* Benders decomposition (BD) is used to solve the mixed-integer individual path recommendation problem efficiently. Under BD, the master problem becomes a small-scale integer program and the sub-problem reduces to a linear program. A series of theoretical analyses are provided to show the connections between the proposed concepts and typical stochastic and robust optimization.
The remainder of the paper is organized as follows. The literature review is discussed in Section 2. In Section 3, we describe the problem conceptually and analytically. Section 4 develops the solution methods, including the optimal flow problem formulation, modeling of the behavior uncertainty,
theoretical analysis of the proposed concepts, and Benders decomposition. In Section 5, we apply the proposed model on the CTA system as a case study. The model results are analyzed in Section 6. Finally, we conclude the paper and summarize the main findings in Section 7. All mathematical proofs can be found in appendices.
## 2 Literature review
### Individualized recommendations system
Individualized recommendation design is a popular topic in computer science and operations research, with many real-world applications such as ads ranking (Richardson et al., 2007; Khalid et al., 2013), mobile news recommendations (Yeung and Yang, 2010), and travel recommendations (Majid et al., 2013). Most of these recommendation systems focus on maximizing individual preferences, which, in return, can increase indicators of interest such as click-through rate (CTR) and conversion rate. However, in the context of path recommendations under disruptions, though respecting passengers' preferences is important, the ultimate goal is to minimize the system travel time and mitigate the impact of disruptions, which differs from the typical recommendation design literature. Another difference is that typical recommendation systems are usually designed with machine learning algorithms trained on real-world user-system interaction data because they have to learn users' preferences from their interaction histories. In this study, however, the system travel time can be evaluated using a network loading model. This implies that, instead of using machine learning models, we can use an optimization formulation to determine the individualized path recommendations that minimize system travel time.
In summary, different from the typical individualized recommendation system literature, this study focuses on system-level objectives instead of individual-level preferences. It leverages an optimization model to design the recommendation, rather than machine learning models.
### Path recommendations during disruptions
Most previous studies on path recommendation under incidents essentially design a "trip planner". That is, the main objective is to find available routes or the shortest path for a given OD pair when the network is interrupted by incidents. For example, Bruglieri et al. (2015) designed a trip planner to find the fastest path in the public transit network during service disruptions based on real-time mobility information. Bohmova et al. (2013) developed a routing algorithm in urban public transportation to find reliable journeys that are robust to system delays. Roelofsen et al. (2018) provided a framework for generating and assessing alternative routes in case of disruptions in urban
public transport systems. To the best of the authors' knowledge, none of the previous studies have considered path recommendations aiming to minimize the system-wide travel time.
Providing path recommendations during disruptions is similar to the topic of passenger evacuation under emergencies. The objective of evacuation is usually to minimize the total evacuation time. For example, Abdelgawad and Abdulhai (2012) developed an evacuation model with routing and scheduling of subway and bus transit to alleviate congestion during the evacuation of busy urban areas. Wang et al. (2019) proposed an optimal bus bridging design method under operational disruptions on a single metro line. Tan et al. (2020) proposed an evacuation model with urban bus networks as alternatives in the case of common metro service disruptions by jointly designing the bus lines and frequencies.
However, although these passenger evacuation studies focus on minimizing the system travel time, there are several differences from this paper. First, in our paper, the service disruption is not as severe as an emergency situation. We assume the service will recover after a period of time and passengers are allowed to wait. They do not necessarily need to cancel trips or follow an evacuation plan as assumed in previous evacuation studies. Second, in this article, we do not adjust operations on the supply side. Instead, we focus on providing information to passengers to better utilize the existing resources and capacities of the system. Third, as mentioned before, this paper considers passenger heterogeneity and focuses on individual-level path recommendations, while previous evacuation papers simply model passengers as flows. In addition, we assume that passengers may not follow the recommendation (i.e., behavior uncertainty), which has not been considered in any evacuation paper before.
### Behavior uncertainty
Behavior uncertainty is a well-known challenge in transportation modeling (Mahmassani, 1984). Typically, passengers' behavior is modeled using various econometric approaches (Ben-Akiva et al., 1985; Train, 2009; Mo et al., 2021) or machine learning models (Mirchevska, 2013; Wang et al., 2020). These models output a probability distribution over a passenger's possible behaviors. At the aggregate level, numerous studies have used predicted demand for different transportation applications while taking demand uncertainty into consideration, such as ride-sharing (Guo et al., 2021), transit route planning (Yoon and Chow, 2020), and supply chain management (Jung et al., 2004).
However, at the individual level, the number of studies is limited. The main reason is that individual-level decision-making is usually discrete. So it is challenging to use typical robust optimization to address discrete uncertain variables (Subramanyam et al., 2021). In terms of stochastic
optimization, the number of possible scenarios increases exponentially with the number of individuals in the system. Some studies use simulation to incorporate individual-level behavior uncertainty. For example, Horne et al. (2005) use a discrete choice model to simulate how different hybrid energy-economy policies can motivate users' responses. However, to incorporate behavior uncertainty in an optimization model (such as the individual path recommendation model in this paper), new modeling techniques are needed.
Another difference in this study compared to previous literature is that the behavior uncertainty (i.e., passengers' responses to the recommendation) makes the decision variables (i.e., passenger flows) become random variables. Typical robust or stochastic optimization usually assumes that the parameters of constraints are random variables, but not the decision variables themselves.
## 3 Problem description
### Conceptual description
Consider a service disruption in an urban rail system. During the disruption, some stations in the incident line (or the whole line) are blocked. Passengers in the blocked trains are usually offloaded to the nearest platforms. To respond to the incident, some operating changes are made, such as dispatching shuttle buses, rerouting existing services, short-turning in the incident line, headway adjustment, etc. Assume that all information about the operating changes is available. These changes define a new PT service network and available path sets. Our objective is to develop an individual-based path recommendation model that, when an incident happens, provides a recommended path to every passenger who uses their phones, websites, or electronic boards at stations to enter their origin, destination, and departure time. The recommendation considers the individual's preferences and behavioral histories. Hence, passengers with the same origin, destination, and departure time may get different recommended paths. The overall system aims to minimize the total travel time for all passengers, including passengers in nearby lines or bus routes without incidents (note that these passengers may experience additional crowding due to transfer passengers from the incident line).
Figure 1 shows a simple example of the path recommendation problem. In this example, Rail Line 1 has an incident and cannot provide service for a period of time. Both passengers at station A want to go to station C and request path recommendations. The alternative paths include using the bus route (blue dashed line), using Rail Line 2 (green dashed line), or waiting for the system to recover (i.e., still using Rail Line 1). Note that using either the bus route or Rail Line 2 will take away capacity from passengers who originally use these two services (i.e.,
the orange passengers in the figure). Hence, the model should consider the total travel time of all four passengers in the system to design recommendation strategies.
Moreover, as mentioned in the introduction, behavior uncertainty needs to be considered. In this example, if we recommend that a passenger use the bus route, he/she may not follow the recommendation and may choose Rail Line 2 instead.
### Analytical description
Let us divide the analysis period into several time intervals of equal length \(\tau\) (e.g., \(\tau=5\) min). Let \(t\) be the integer time index. \(t=1\) is the start of the incident and \(t\leq 0\) indicates the time before the incident. Let \(\mathcal{P}\) be the set of passengers that will receive path recommendations. We assume \(\mathcal{P}\) is known as we can obtain passengers' requests before running the model. Given the revised operation during the incident, let \(\mathcal{R}_{p}\) be the feasible path set for each passenger \(p\in\mathcal{P}\). Note that \(\mathcal{R}_{p}\) includes all feasible services that are provided by the PT operator. A path \(r\in\mathcal{R}_{p}\) may involve waiting for the system to recover (i.e., using the incident line), transferring to nearby bus lines, using shuttle services, etc. We do not consider non-PT modes such as TNC or driving for the following reasons: 1) This study aims to design a path recommendation system used by PT operators, whose major audience is PT users. Considering non-PT modes requires supply information for all other travel modes and even consideration of non-PT users (such as the impact of traffic congestion on drivers), which is beyond the scope of this study. Future research may consider a multi-modal path recommendation system. 2) Passengers using non-PT modes can simply be treated as demand reduction for the PT system, so their impact on the PT system can still be captured.
Given a passenger \(p\in\mathcal{P}\), we aim to determine \(x_{p,r}\) for each \(p\), where \(x_{p,r}\) indicates whether path \(r\in\mathcal{R}_{p}\) is recommended to passenger \(p\) or not. Assuming that only one path is recommended to each passenger, we have
\[\sum_{r\in\mathcal{R}_{p}}x_{p,r}=1\quad\forall p\in\mathcal{P} \tag{1}\]
Figure 1: Example of the individual path recommendation problem
Note that we can relax this assumption by designing the recommendation to a passenger as a "composition" including multiple paths or travel times. This generalization is discussed in Section B.1.
\(\mathcal{P}\) includes passengers with different origins, destinations, and departure times. If an incident ends at \(t^{\text{end}}\), the recommendation should consider a time horizon after \(t^{\text{end}}\) because there is remaining congestion in the system. Hence, we provide recommendations until time \(T^{D}>t^{\text{end}}\) (e.g., \(T^{D}\) can be one hour after \(t^{\text{end}}\)). Therefore, the departure times for passengers \(p\in\mathcal{P}\) lie in \([1,T^{D}]\) (\(T^{D}\) and \(t^{\text{end}}\) are both time indices).
The recommendation model will be solved at \(t=1\) and will generate the recommendation strategies \(\mathbf{x}=(x_{p,r})_{p\in\mathcal{P},r\in\mathcal{R}_{p}}\) for passengers who depart at time \(t\in[1,T^{D}]\). In reality, the model can be implemented in a rolling horizon manner. Specifically, at each time interval \(t\geq 1\), we first update the demand and supply information in the system, including new demand estimates, new to-be-recommended passenger set \(\mathcal{P}\), newly available path sets \(\mathcal{R}_{p}\), new service routes and frequencies, new incident duration estimates, new onboard passenger estimates, etc. Based on this information, we solve the model to obtain recommendations for passengers with departure time in \([t,T^{D}]\). But we only implement the recommendation strategies for passengers who depart at the current time \(t\).
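A minimal sketch of this rolling-horizon loop is given below; all function bodies are hypothetical placeholders for the steps just described, not part of the paper's implementation.

```python
def update_system_state(t):
    """Placeholder: gather new demand and supply estimates at interval t
    (passenger set P, path sets R_p, frequencies, incident duration, etc.)."""
    return {"t": t}

def solve_ipr(state, horizon):
    """Placeholder: solve the IPR model (Eq. 31) for departures in the horizon."""
    return {}

def apply_recommendations(x, departure_time):
    """Placeholder: implement only the recommendations for the current interval."""
    pass

def rolling_horizon(t_start=1, T_D=12):
    # Re-estimate the system and re-solve at every interval, but only push
    # recommendations to passengers departing at the current interval t.
    for t in range(t_start, T_D + 1):
        state = update_system_state(t)
        x = solve_ipr(state, horizon=(t, T_D))
        apply_recommendations(x, departure_time=t)
```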
Therefore, in the following formulation, we only focus on solving the model at \(t=1\), which is the start of the incident. The whole analysis period includes warm-up and cool-down periods to better estimate the system states (e.g., vehicle loads, passenger travel times, etc.). Therefore, the analysis period is defined as \([t^{\text{min}},T]\), where \(t^{\text{min}}<1\) (time before the incident) and \(T>T^{D}\). For example, \(t^{\text{min}}\) and \(T\) can be one hour before and after \(t=1\) and \(T^{D}\), respectively. We define all time intervals in the analysis period as \(\mathcal{T}=\{t^{\text{min}},t^{\text{min}}+1,...,T\}\). The overall path recommendation framework is summarized in Figure 2.
## 4 Formulation
In this section, we elaborate on the detailed formulation of the individual path recommendation model. Section 4.1 develops an optimization model to solve the optimal flow distribution over a public transit network with disruptions. Section 4.2 describes how passengers' behavior uncertainties (i.e., non-compliance to recommendation) are modeled based on a random utility maximization framework. Section 4.3 provides the overall formulation of the individual path recommendation model by combining the optimal flow model in Section 4.1 and the behavior uncertainty component
in Section 4.2. Section 4.4 discusses the theoretical properties of the proposed concepts and their connections to stochastic optimization. Section 4.5 shows how the individual path recommendation model can be solved efficiently using Benders decomposition.
The notations used in the paper are summarized in Appendix A.
### Optimal flow during disruptions
In this section, we formulate a linear programming (LP) model adapted from Bertsimas et al. (2020) to solve the optimal flow distribution in a public transit system with service disruptions. Consider an OD pair \((u,v)\) and departure time \(t\). Let \(\mathcal{R}^{u,v}\) be the set of feasible paths for OD pair \((u,v)\). Define \(q_{t}^{u,v,r}\) (resp. \(f_{t}^{u,v,r}\)) as the number of passengers **in** (resp. **not in**) \(\mathcal{P}\) with OD pair \((u,v)\) and departure time \(t\) who use path \(r\in\mathcal{R}^{u,v}\). Specifically, \(q_{t}^{u,v,r}\) represents the passenger flows that receive recommendations, while \(f_{t}^{u,v,r}\) represents those that do not. Hence, the total path flow on \(r\in\mathcal{R}^{u,v}\) is \(q_{t}^{u,v,r}+f_{t}^{u,v,r}\). Letting \(d_{t}^{u,v}\) be the total demand of OD pair \((u,v)\) at time \(t\), we have
\[q_{t}^{u,v,r}+f_{t}^{u,v,r}=d_{t}^{u,v}\quad\forall(u,v)\in\mathcal{W},t\in \mathcal{T} \tag{2}\]
where \(\mathcal{W}\) is the set of all OD pairs. As we focus on path recommendations for \(\mathcal{P}\), in this study, \(q_{t}^{u,v,r}\) is the decision variable while \(f_{t}^{u,v,r}\) is a known constant (i.e., the estimated demand information). For mathematical convenience, we define \(\mathcal{F}\) as the set of all triplets \((u,v,r)\) in the system. And the objective in this section is to find the optimal flows \(q_{t}^{u,v,r}\) (\(\forall(u,v,r)\in\mathcal{F},t\in\mathcal{T}\)) that minimize the total system travel time.
Consider a path \(r\) for OD pair \((u,v)\). A path may include multiple legs, where each leg is associated with the service in a rail or a bus line. For example, the path in Figure 3 (indicated by green arrows) has two legs: the first one in the rail line and the second in the bus line. Every leg has a boarding and an alighting station. For example, Leg 1 (resp. 2) in this example has boarding station A (resp. C) and alighting station B (resp. D). Let \(\mathcal{I}^{u,v,r}=\{1,...,|\mathcal{I}^{u,v,r}|\}\) be the set of legs
Figure 2: Problem description and model framework
for path \(r\). We use a four-element tuple \((u,v,r,i)\) to represent a leg \(i\) of path \(r\) for OD pair \((u,v)\), where \(i\in\mathcal{I}^{u,v,r}\).
Let \(\Delta_{t}^{u,v,r,i}\) (resp. \(\delta_{t}^{u,v,r,i}\)) be the travel time between the **terminal** and the **boarding** (resp. **alighting**) station of leg \((u,v,r,i)\) for a vehicle **departing** from the terminal at time \(t\). Hence, the vehicle's arrival time at the boarding (resp. alighting) station of leg \((u,v,r,i)\) is \(t+\Delta_{t}^{u,v,r,i}\) (resp. \(t+\delta_{t}^{u,v,r,i}\)). \(\delta_{t}^{u,v,r,i}-\Delta_{t}^{u,v,r,i}\) represents the total in-vehicle time of leg \((u,v,r,i)\) for the vehicle. Define \(z_{t}^{u,v,r,i}\) (decision variable) as the total number of onboard passengers in leg \((u,v,r,i)\) who board a vehicle that **had departed** from the terminal at time \(t\).
There are three types of constraints for the network flow description: 1) existing flow constraints, 2) vehicle capacity constraints, and 3) flow conservation constraints.
**Existing flows constraints:** Although the path recommendations start at time \(t=1\), there are passengers that already boarded the vehicles. Ignoring these existing flows may lead to an overestimation of the system's available capacity. To capture the existing onboard flows at \(t=1\), we define the set of onboard flow indices as
\[\Omega_{1}=\{(u,v,r,i,t):t+\Delta_{t}^{u,v,r,i}\leq 1\leq t+\delta_{t}^{u,v,r,i}\} \tag{3}\]
And the existing flow constraints can be expressed as
\[z_{t}^{u,v,r,i}=\hat{z}_{t}^{u,v,r,i}\quad\forall(u,v,r,i,t)\in\Omega_{1} \tag{4}\]
where \(\hat{z}_{t}^{u,v,r,i}\) are constants that capture the existing onboard flows when the incident happens. These flows can be directly obtained from a simulation model or real-time passenger counting data.
Figure 3: Definition of paths and legs
**Capacity constraints:** Transit vehicles have limited capacity. Consider a vehicle departing at time \(t\) on line \(l\) (referred to as vehicle \((l,t)\)). We denote its total number of onboard passengers at time \(t^{\prime}\) as \(O_{l,t,t^{\prime}}\). Specifically, \(O_{l,t,t^{\prime}}\) can be expressed as
\[O_{l,t,t^{\prime}}(\mathbf{z})=\sum_{\{(u,v,r,i,t)\in\texttt{Onboard}(l,t^{\prime}) \}}z_{t}^{u,v,r,i}\quad\forall l\in\mathcal{L},\forall t\in\mathcal{T},t^{ \prime}=t,t+1,...,T_{l,t} \tag{5}\]
where \(T_{l,t}\) is the time index that vehicle \((l,t)\) arrives at the last station of line \(l\), \(\mathbf{z}=(z_{t}^{u,v,r,i})_{t\in\mathcal{T},(u,v,r)\in\mathcal{F},i\in\mathcal{ I}^{u,v,r}}\). \(\texttt{Onboard}(l,t^{\prime})\) is the set of onboard flow indices for vehicle \((l,t)\), defined as
\[\texttt{Onboard}(l,t^{\prime})=\{(u,v,r,i,t):\text{Leg }(u,v,r,i)\text{ on line }l\text{, and }t+\Delta_{t}^{u,v,r,i}\leq t^{\prime}\leq t+\delta_{t}^{u,v,r,i}\} \tag{6}\]
Then the capacity constraint is:
\[O_{l,t,t^{\prime}}(\mathbf{z})\leq K_{l,t}\quad\forall l\in\mathcal{L},t\in \mathcal{T},t^{\prime}=t,t+1,...,T_{l,t} \tag{7}\]
where \(K_{l,t}\) is the capacity of the vehicle \((l,t)\). \(\mathcal{L}\) is the set of all lines.
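For concreteness, Eqs. 5-6 amount to summing the leg flows whose vehicles lie between their boarding and alighting stations at time \(t^{\prime}\). The following dictionary-based sketch illustrates this; the data structures are illustrative assumptions, not the paper's implementation.

```python
def build_onboard_set(legs_on_line, Delta, delta, t, t_prime):
    """Eq. 6 (sketch): legs (u, v, r, i) of a line whose vehicle departing at t
    is between its boarding and alighting stations at time t_prime.
    Delta/delta map (u, v, r, i, t) to arrival offsets at the boarding and
    alighting stations, respectively."""
    return {(u, v, r, i, t) for (u, v, r, i) in legs_on_line
            if t + Delta[(u, v, r, i, t)] <= t_prime <= t + delta[(u, v, r, i, t)]}

def onboard_load(z, onboard_set):
    """Eq. 5 (sketch): vehicle load O_{l,t,t'} as the sum of onboard leg flows.
    z maps (u, v, r, i, t) to the onboard passengers z_t^{u,v,r,i}."""
    return sum(z[idx] for idx in onboard_set)
```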
**Flow conservation constraint:** There are two different flow conservation constraints: 1) flow conservation at origin stations and 2) flow conservation at transfer stations. To ensure origin flow conservation, the cumulative number of arriving passengers should be larger than the cumulative number of boarding passengers at an origin at any time. This reflects that not all arriving passengers can board, because some may be left behind due to capacity constraints.
The number of arriving passengers (i.e., demand) for \((u,v,r)\) at time \(t\) is \(q_{t}^{u,v,r}+f_{t}^{u,v,r}\). And the number of boarding passengers at the origin station (i.e., \(u\)) at time \(t\) is \(z_{t^{\prime}}^{u,v,r,1}\) (i.e., the first leg) with \(t^{\prime}+\Delta_{t^{\prime}}^{u,v,r,1}=t\). \(t^{\prime}\) is the vehicle departure time from the terminal and \(t^{\prime}+\Delta_{t^{\prime}}^{u,v,r,1}\) is the time when the vehicle arrives at the boarding station. Therefore, the origin flow conservation constraint can be written as:
\[\sum_{\{t^{\prime}:t^{\text{min}}\leq t^{\prime}+\Delta_{t^{\prime}}^{u,v,r,1 }\leq t\}}z_{t^{\prime}}^{u,v,r,1}\leq\sum_{t^{\prime}=t^{\text{min}}}^{t}(f_{ t^{\prime}}^{u,v,r}+q_{t^{\prime}}^{u,v,r})\quad\forall(u,v,r)\in\mathcal{F},t\in \mathcal{T} \tag{8}\]
Now consider the flow conservation at a transfer station. All arriving passengers at a transfer station of a path are the onboard passengers from the previous leg. Therefore, we define the transfer flow conservation similarly: the cumulative number of onboard passengers from the previous leg should be larger than the cumulative number of boarding passengers at the transfer station. The number of boarding passengers at the transfer station is simply \(z_{t^{\prime}}^{u,v,r,i}\) with \(i\geq 2\). Hence, the flow conservation constraints at a transfer station are:
\[\sum_{\{t^{\prime}:t^{\text{min}}\leq t^{\prime}+\Delta_{t^{\prime}}^{u,v,r,i} \leq t\}}z_{t^{\prime}}^{u,v,r,i}\leq\sum_{\{t^{\prime}:t^{\text{min}}\leq t^{ \prime}+\delta_{t^{\prime}}^{u,v,r,i-1}\leq t\}}z_{t^{\prime}}^{u,v,r,i-1} \quad\forall(u,v,r)\in\mathcal{F},i\in\mathcal{I}^{(u,v,r)}\setminus\{1\},t\in \mathcal{T} \tag{9}\]
Note that \(z_{t^{\prime}}^{u,v,r,i}\) is defined as the onboard passengers for vehicles **departing** at time \(t^{\prime}\). Therefore, \(t^{\prime}+\delta_{t^{\prime}}^{u,v,r,i-1}\) is the alighting time for passengers at leg \(i-1\) (which is also the transfer demand arrival time at leg \(i\) as we assume transfer walk time is within a time interval \(\tau\) and is negligible). \(t^{\prime}+\Delta_{t^{\prime}}^{u,v,r,i}\) is the boarding time for passengers at leg \(i\).
The objective is to minimize the total travel time for all passengers in the system. Total travel time can be decomposed into waiting time and in-vehicle time.
**In-vehicle time:** Total in-vehicle time is simply the onboard flow multiplied by the travel time on each leg:
\[IVT(\mathbf{z})=\sum_{(u,v,r)\in\mathcal{F}}\sum_{i\in\mathcal{I}^{u,v,r}}\sum_{ t\in\mathcal{T}}z_{t}^{u,v,r,i}\cdot T_{u,v,r,i,t}^{\text{IVT}} \tag{10}\]
where \(T_{u,v,r,i,t}^{\text{IVT}}\) is the in-vehicle time of leg \((u,v,r,i)\) of the vehicle departing at time \(t\).
**Waiting time:** There are two causes of waiting time: 1) waiting time because of vehicle headways, and 2) waiting time resulting from being left behind. During a specific time interval \(t\), all left behind passengers would have a waiting time of \(\tau\). All boarding passengers, assuming uniform arrival, have an average waiting time that is half of the time interval (i.e., \(\frac{\tau}{2}\)). Therefore, the total waiting time for passengers at station \(s\) and time \(t\) can be formulated as
\[WT_{s,t}=\tau(AD_{s,t}+XD_{s,t}-BD_{s,t})+\frac{\tau}{2}(BD_{s,t+1}-BD_{s,t}) \tag{11}\]
where \(AD_{s,t}\) represents the **cumulative arriving demand** at station \(s\)**up to** time \(t\), \(XD_{s,t}\) represents the **cumulative transferring demand** at station \(s\)**up to** time \(t\), and \(BD_{s,t}\) represents the **cumulative boarded demand** at station \(s\)**up to** time \(t\). Hence, \((BD_{s,t+1}-BD_{s,t})\) represents the total number of boarding passengers at time \(t\) and station \(s\), and \((AD_{s,t}+XD_{s,t}-BD_{s,t})\) represents the total number of left behind passengers at station \(s\) and time \(t\). Finally, the total system waiting time is
\[WT(\mathbf{q},\mathbf{z})=\sum_{s\in\mathcal{S}}\sum_{t=1}^{T}WT_{s,t} \tag{12}\]
where \(\mathbf{q}=(q_{t}^{u,v,r})_{t\in\mathcal{T},(u,v,r)\in\mathcal{F}}\).
The cumulative arriving demand \(AD_{s,t}\) is simply all arriving passengers with origin \(s\) up to time \(t\):
\[AD_{s,t}=\sum_{\{(u,v,r):u=s\}}\sum_{t^{\prime}=t^{\text{min}}}^{t}(f_{t^{\prime }}^{u,v,r}+q_{t^{\prime}}^{u,v,r})\quad\forall s\in\mathcal{S},t\in\mathcal{T} \tag{13}\]
where \(\mathcal{S}\) is the set of all stations.
The cumulative transferring demand is all passengers alighting at station \(s\) from their previous leg \(i-1\) for their next leg \(i\):
\[XD_{s,t}=\sum_{\{(u,v,r,i)\in\texttt{Xth}(s)\}}\sum_{\{t^{\prime}:t^{\text{min}}\leq t^{\prime}+\delta_{t^{\prime}}^{u,v,r,i-1}\leq t\}}z_{t^{\prime}}^{u,v,r,i-1}\quad\forall s\in\mathcal{S},t=t^{\text{min}},...,T \tag{14}\]
where \(\texttt{Xth}(s)\) is the set of legs that transfer at station \(s\).
The cumulative boarded demand is all passengers that successfully board a vehicle at station \(s\) at time \(t\). Define \(\texttt{Bdat}(s)\) as the set of all legs with boarding station \(s\), we have
\[BD_{s,t}=\sum_{\{(u,v,r,i)\in\texttt{Bdat}(s)\}}\sum_{\{t^{\prime}:t^{\text{min}}\leq t^{\prime}+\Delta_{t^{\prime}}^{u,v,r,i}\leq t\}}z_{t^{\prime}}^{u,v,r,i}\quad\forall s\in\mathcal{S},t=t^{\text{min}},...,T \tag{15}\]
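The waiting time of Eq. 11 can be evaluated directly from the three cumulative curves, as in the following sketch; the array layout is an illustrative assumption.

```python
import numpy as np

def station_waiting_time(AD, XD, BD, tau=5.0):
    """Eq. 11 (sketch): total waiting time per station and interval.

    AD, XD, BD - arrays of shape (S, T+1): cumulative arriving, transferring,
                 and boarded demand (one extra column so BD[:, t+1] exists
                 for the last interval t).
    Left-behind passengers wait a full interval tau; boarding passengers wait
    tau/2 on average under the uniform-arrival assumption.
    """
    left_behind = AD[:, :-1] + XD[:, :-1] - BD[:, :-1]
    boarding = BD[:, 1:] - BD[:, :-1]
    return tau * left_behind + 0.5 * tau * boarding
```

Summing the returned array over stations and the intervals \(t=1,...,T\) gives Eq. 12.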
Taking everything into consideration, the total travel time in the system is \(WT(\mathbf{q},\mathbf{z})+IVT(\mathbf{z})\). The optimal flow problem is:
\[(OF)\quad\min_{\mathbf{q},\mathbf{z}}\quad WT(\mathbf{q},\mathbf{z})+IVT(\mathbf{z}) \tag{16a}\]
\[\text{s.t.}\quad O_{l,t,t^{\prime}}(\mathbf{z})\leq K_{l,t}\quad\forall l\in\mathcal{L},t\in\mathcal{T},t^{\prime}=t,t+1,...,T_{l,t} \tag{16b}\]
\[\sum_{\{t^{\prime}:t^{\text{min}}\leq t^{\prime}+\Delta_{t^{\prime}}^{u,v,r,1}\leq t\}}z_{t^{\prime}}^{u,v,r,1}\leq\sum_{t^{\prime}=t^{\text{min}}}^{t}(f_{t^{\prime}}^{u,v,r}+q_{t^{\prime}}^{u,v,r})\quad\forall(u,v,r)\in\mathcal{F},t\in\mathcal{T} \tag{16c}\]
\[\sum_{\{t^{\prime}:t^{\text{min}}\leq t^{\prime}+\Delta_{t^{\prime}}^{u,v,r,i}\leq t\}}z_{t^{\prime}}^{u,v,r,i}\leq\sum_{\{t^{\prime}:t^{\text{min}}\leq t^{\prime}+\delta_{t^{\prime}}^{u,v,r,i-1}\leq t\}}z_{t^{\prime}}^{u,v,r,i-1}\quad\forall(u,v,r)\in\mathcal{F},i\in\mathcal{I}^{u,v,r}\setminus\{1\},t\in\mathcal{T} \tag{16d}\]
\[z_{t}^{u,v,r,i}=\hat{z}_{t}^{u,v,r,i}\quad\forall(u,v,r,i,t)\in\Omega_{1} \tag{16e}\]
\[q_{t}^{u,v,r}\geq 0,\quad z_{t}^{u,v,r,i}\geq 0\quad\forall(u,v,r)\in\mathcal{F},i\in\mathcal{I}^{u,v,r},t\in\mathcal{T} \tag{16f}\]
As the objective function is minimizing the system travel time, this formulation will automatically load passengers to a train as long as there is available capacity (Bertsimas et al., 2020).
**Path travel time calculation:** It is worth noting that Eq. 16 does not explicitly output the travel time of different paths. The travel time of a path \((u,v,r)\) for trips departing at time \(t\) (denoted as \(TT_{t}^{u,v,r}\)) has to be obtained from the network flow patterns **after** solving Eq. 16. Specifically, consider the group of passengers using path \((u,v,r)\) and departing at time \(t\). Their arrival time at the destination (denoted as \(AT_{t}^{u,v,r}\)) can be calculated as
\[AT_{t}^{u,v,r} = \min\left\{\tilde{t}\in\mathcal{T}_{t}^{u,v,r}\colon\sum_{t^{ \prime}=t^{\text{min}}}^{t}\left(f_{t^{\prime}}^{u,v,r}+q_{t^{\prime}}^{u,v,r} \right)\leq\sum_{t^{\text{min}}\leq t^{\prime}+\delta_{t^{\prime}}^{u,v,r,| \mathcal{I}^{u,v,r}|}\leq\tilde{t}}z_{t^{\prime}}^{u,v,r,|\mathcal{I}^{u,v,r}|}\right\} \tag{17}\] \[\forall t\in\mathcal{T},(u,v,r)\in\mathcal{F}\]
where \(\mathcal{T}_{t}^{u,v,r}\) is the set of possible arrival time indices, defined as \(\mathcal{T}_{t}^{u,v,r}=\{t^{\prime}:t\leq t^{\prime}\leq T\}\). Eq. 17 represents the travel time calculation with cumulative demand curves at origins and destinations. \(\sum_{t^{\prime}=t^{\text{min}}}^{t}\left(f_{t^{\prime}}^{u,v,r}+q_{t^{\prime}}^{u,v,r}\right)\) is the cumulative demand up to time \(t\) at the origin. \(\sum_{t^{\text{min}}\leq t^{\prime}+\delta_{t^{\prime}}^{u,v,r,|\mathcal{I}^{u,v,r}|}\leq\tilde{t}}z_{t^{\prime}}^{u,v,r,|\mathcal{I}^{u,v,r}|}\) is the cumulative number of passengers arriving at the destination up to time \(\tilde{t}\). When the cumulative arrivals at the destination are greater than or equal to the cumulative demand at the origin (up to time \(t\)), all passengers have finished the trip. Hence, taking the minimum over \(\tilde{t}\) gives the arrival time for passengers departing at \(t\).
\[TT_{t}^{u,v,r}=AT_{t}^{u,v,r}-t\quad\forall t\in\mathcal{T},(u,v,r)\in \mathcal{F} \tag{18}\]
Figure 4 illustrates the travel time calculation.
Figure 4: Travel time calculation
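The cumulative-curve logic of Eqs. 17-18 can be sketched as follows; the array names and toy numbers are illustrative assumptions.

```python
import numpy as np

def path_arrival_time(dep_cum, arr_cum, t):
    """Eq. 17 (sketch): arrival time index for passengers departing at t.

    dep_cum[t]  - cumulative demand entering the path up to interval t
    arr_cum[t'] - cumulative passengers reaching the destination up to t'
    Returns the first t' >= t whose cumulative arrivals cover dep_cum[t].
    """
    for t_prime in range(t, len(arr_cum)):
        if arr_cum[t_prime] >= dep_cum[t]:
            return t_prime
    return None  # not all passengers finish within the analysis period

# Toy example: 5 passengers depart over the first intervals; capacity
# delays their arrivals at the destination.
dep_cum = np.array([2, 4, 5, 5, 5, 5])
arr_cum = np.array([0, 2, 3, 5, 5, 5])
t = 1
at = path_arrival_time(dep_cum, arr_cum, t)
print("arrival index:", at, "travel time (Eq. 18):", at - t)  # -> 3 and 2
```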
**Incident specification:** Eq. 16 is a general formulation of the optimal flow problem. Now we will introduce how the incident-specific information is incorporated into this problem. We assume the incident causes a service disruption in a specific line (if only several stations are interrupted, we can separate the line into multiple lines so that the assumption always holds). The service disruption in a line can be seen as stops of vehicles for a period of time. The vehicle stopping can be captured by the parameters \(\Delta_{t}^{u,v,r,i}\), \(\delta_{t}^{u,v,r,i}\), and \(K_{l,t}\). Specifically, a long stop due to an incident can be seen as an increase in travel time from the terminal to downstream stations (i.e., increase in \(\Delta_{t}^{u,v,r,i}\) and \(\delta_{t}^{u,v,r,i}\)). Moreover, since there is no vehicle dispatching during the incident, we set \(K_{l,t}=0\) for the corresponding time and line. In this way, we can model the incident without changing the formulation.
### Behavior uncertainty
Consider a passenger \(p\) with a path set \(\mathcal{R}_{p}\). Their inherent preference (utility) for using path \(r\) is denoted as \(V_{p}^{r}\). If path \(r^{\prime}\) is recommended, the impact of the recommendation on the utility of path \(r\) is denoted as \(I_{p,r^{\prime}}^{r}\). Hence, their overall utility of using path \(r\) can be represented as
\[U_{p}^{r}=V_{p}^{r}+\sum_{r^{\prime}\in\mathcal{R}_{p}}x_{p,r^{ \prime}}\cdot I_{p,r^{\prime}}^{r}+\xi_{p}^{r}\quad\forall r\in\mathcal{R}_{p},\,p\in\mathcal{P}. \tag{19}\]
where \(\xi_{p}^{r}\) is the random error. \(x_{p,r^{\prime}}=1\) if passenger \(p\) is recommended path \(r^{\prime}\), otherwise \(x_{p,r^{\prime}}=0\). Let \(\pi_{p,r^{\prime}}^{r}\) be the conditional probability that passenger \(p\) chooses path \(r\) given that the recommended path is \(r^{\prime}\). Assuming a utility-maximizing behavior, we have
\[\pi_{p,r^{\prime}}^{r}=\mathbb{P}(V_{p}^{r}+I_{p,r^{\prime}}^{r}+\xi_{p}^{r}\geq V_{p}^{r^{\prime\prime}}+I_{p,r^{\prime}}^{r^{\prime\prime}}+\xi_{p}^{r^{\prime\prime}},\,\forall r^{\prime\prime}\in\mathcal{R}_{p}) \tag{20}\]
Different assumptions for the distribution of \(\xi_{p}^{r}\) lead to different expressions. For example, if the \(\xi_{p}^{r}\) are i.i.d. Gumbel distributed, the choice probability reduces to the multinomial logit model (Train, 2009), and we have
\[\pi_{p,r^{\prime}}^{r}=\frac{\exp(V_{p}^{r}+I_{p,r^{\prime}}^{r})}{\sum_{r^{\prime\prime}\in\mathcal{R}_{p}}\exp(V_{p}^{r^{\prime\prime}}+I_{p,r^{\prime}}^{r^{\prime\prime}})} \tag{21}\]
The values of \(V_{p}^{r}\) and \(I_{p,r^{\prime}}^{r}\) can be calibrated using data from individual-level surveys or smart card data, which deserves separate research. When developing the individual path recommendation model, we assume \(\pi_{p,r^{\prime}}^{r}\) is known. Figure 5 shows an example of the conditional probability matrix; the specific values assume that recommended paths are more likely to be chosen.
The conditional probability \(\pi_{p,r^{\prime}}^{r}\) captures the individual's inherent preference for different paths as well as the response to the recommendation system. It varies across individuals and reflects their
behavioral uncertainties. This study focuses on designing path recommendation systems based on the value of \(\pi^{r}_{p,r^{\prime}}\).
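Under the logit assumption of Eq. 21, a conditional probability matrix like the one in Figure 5 can be computed as in the following sketch; the utilities and the recommendation boost are illustrative assumptions.

```python
import numpy as np

def choice_prob_matrix(V, I):
    """Eq. 21 (sketch): conditional choice probabilities for one passenger.

    V - array (R,): inherent path utilities V_p^r
    I - array (R, R): I[k, r] is the utility impact on path r when path k
        is recommended
    Returns pi with pi[k, r] = P(choose r | path k recommended).
    """
    U = V[None, :] + I                    # utility of each path per recommendation
    U = U - U.max(axis=1, keepdims=True)  # numerical stabilization
    expU = np.exp(U)
    return expU / expU.sum(axis=1, keepdims=True)

# Toy example with 3 paths; a recommendation adds 1.0 utility to its path.
V = np.array([0.5, 0.2, 0.0])
I = 1.0 * np.eye(3)
print(choice_prob_matrix(V, I).round(2))
```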
### Individual path recommendation
Let \(\mathbb{I}^{r}_{p,r^{\prime}}\) be the indicator random variable representing whether passenger \(p\) actually chooses path \(r\) or not given that he/she is recommended path \(r^{\prime}\). By definition, \(\mathbb{I}^{r}_{p,r^{\prime}}\) is a Bernoulli random variable with \(\mathbb{E}[\mathbb{I}^{r}_{p,r^{\prime}}]=\pi^{r}_{p,r^{\prime}}\) and \(\text{Var}[\mathbb{I}^{r}_{p,r^{\prime}}]=\pi^{r}_{p,r^{\prime}}\cdot(1-\pi^{r}_{p,r^{\prime}})\).
Therefore, the actual flow for path \((u,v,r)\) at time \(t\) is
\[Q^{u,v,r}_{t}=\sum_{p\in\mathcal{P}^{u,v}_{t}}\sum_{r^{\prime}\in\mathcal{R}^ {u,v}}x_{p,r^{\prime}}\cdot\mathbb{I}^{r}_{p,r^{\prime}} \tag{22}\]
\(Q^{u,v,r}_{t}\) is also a random variable. \(\mathcal{P}^{u,v}_{t}\subseteq\mathcal{P}\) is the set of passengers with OD pair \((u,v)\) arriving at the system at time interval \(t\) that receive path recommendations. \(\mathcal{R}^{u,v}\) is the set of paths of OD pair \((u,v)\). The mean and variance of the actual flow are
\[\mu^{u,v,r}_{t}(\mathbf{x}) :=\mathbb{E}\left[Q^{u,v,r}_{t}\right]=\sum_{p\in\mathcal{P}^{u,v}_{t}}\sum_{r^{\prime}\in\mathcal{R}^{u,v}}x_{p,r^{\prime}}\cdot\pi^{r}_{p,r^{\prime}} \tag{23}\] \[(\sigma^{u,v,r}_{t}(\mathbf{x}))^{2} :=\text{Var}\left[Q^{u,v,r}_{t}\right]=\sum_{p\in\mathcal{P}^{u,v}_{t}}\sum_{r^{\prime}\in\mathcal{R}^{u,v}}x_{p,r^{\prime}}\cdot\pi^{r}_{p,r^{\prime}}\cdot(1-\pi^{r}_{p,r^{\prime}}) \tag{24}\]
Note that Eq. 24 follows from the facts that \(x^{2}_{p,r^{\prime}}=x_{p,r^{\prime}}\) and \(\text{Cov}[\mathbb{I}^{r}_{p,r^{\prime}},\mathbb{I}^{r}_{p,r^{\prime\prime}}]=0\) if \(r^{\prime}\neq r^{\prime\prime}\).
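A small sketch of Eqs. 23-24 for one OD pair and time interval follows; the array layout is an illustrative assumption.

```python
import numpy as np

def flow_mean_var(x, pi):
    """Eqs. 23-24 (sketch): mean and variance of the actual path flow Q.

    x  - binary array (P, R): x[p, k] = 1 if path k is recommended to p
    pi - array (P, R, R): pi[p, k, r] = P(p chooses r | path k recommended)
    Returns (mu, var), each of shape (R,).
    """
    # Probability that each passenger chooses each path under strategy x.
    p_choose = np.einsum("pk,pkr->pr", x, pi)
    mu = p_choose.sum(axis=0)                        # Eq. 23
    var = (p_choose * (1.0 - p_choose)).sum(axis=0)  # Eq. 24 (independent Bernoullis)
    return mu, var

# Example: recommend path 0 to both of two passengers.
x = np.array([[1, 0], [1, 0]])
pi = np.array([[[0.8, 0.2], [0.3, 0.7]],
               [[0.6, 0.4], [0.1, 0.9]]])
print(flow_mean_var(x, pi))  # mu = [1.4, 0.6], var = [0.4, 0.4]
```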
In an optimization model, we cannot use a random variable (e.g., the actual flow) as a decision variable. Therefore, let us treat \(\mathbf{q}\) in Eq. 16 as a **realization** of the random flow \(\mathbf{Q}=(Q^{u,v,r}_{t})_{t\in\mathcal{T},(u,v,r)\in\mathcal{F}}\). To make \(\mathbf{q}\) a reasonable realization, constraints linking the value of \(\mathbf{q}\) and the distribution of \(\mathbf{Q}\) are needed. We define two new concepts: "\(\epsilon\)-feasibility" and "\(\Gamma\)-concentration".
Figure 5: Example of conditional path choice probability
**Definition 1** (\(\epsilon\)-feasible flows): A flow \(q_{t}^{u,v,r}\) is \(\epsilon\)-feasible if and only if
\[|q_{t}^{u,v,r}-\mu_{t}^{u,v,r}(\boldsymbol{x})|\leq\epsilon_{t}^{u,v,r},\quad \forall(u,v,r)\in\mathcal{F},t=t^{\text{min}},...,T \tag{25}\]
where \(\epsilon_{t}^{u,v,r}\) is a small positive constant. This means that \(\boldsymbol{q}\) is close to the expectation of the actual flow under recommendation strategy \(\boldsymbol{x}\).
**Definition 2** (\(\Gamma\)-concentrated flows): A flow \(q_{t}^{u,v,r}\) is \(\Gamma\)-concentrated if and only if it is \(\epsilon\)-feasible and for any constant \(a>\epsilon_{t}^{u,v,r}\), we have
\[\mathbb{P}\left[|Q_{t}^{u,v,r}-q_{t}^{u,v,r}|\geq a\right]\leq\left(\frac{ \Gamma_{t}^{u,v,r}}{a-\epsilon_{t}^{u,v,r}}\right)^{2}\quad\forall(u,v,r)\in \mathcal{F},t=t^{\text{min}},...,T \tag{26}\]
where \(\Gamma_{t}^{u,v,r}\) is a small positive constant. This means that the probability that \(Q_{t}^{u,v,r}\) and \(q_{t}^{u,v,r}\) are very different (i.e., with difference greater than \(a\)) is bounded from above, suggesting that \(Q_{t}^{u,v,r}\) is concentrated around \(q_{t}^{u,v,r}\).
**Remark 1**: The logic of using \(\boldsymbol{q}\) as the decision variable and defining the above two concepts is as follows. The objective of this study is to find the best recommendation strategy \(\boldsymbol{x}\) that minimizes the system travel time. The system travel time is a function of network flows. Given a recommendation strategy \(\boldsymbol{x}\), the actual flow \(\boldsymbol{Q}\) is a random variable, which cannot be directly used in the optimization model (as a decision variable) to evaluate the system travel time. Hence, we assume that \(\boldsymbol{q}\) in Eq. 16 is a realization of the actual flow (a deterministic variable). We also add two constraints on \(\boldsymbol{q}\) so that \(\boldsymbol{q}\) is close to the mean of the actual flow and the distribution of the actual flow is concentrated around \(\boldsymbol{q}\). Then, using \(\boldsymbol{q}\) to evaluate the system travel time is similar to using the actual flows (or taking the expectation). In Section 4.4.2, we show that these two concepts help to bound the difference between the expected system travel time and the system travel time evaluated using \(\boldsymbol{q}\).
Note that one may argue that we can directly use \(\mu_{t}^{u,v,r}(\boldsymbol{x})\) as decision variables to represent network flows and eliminate \(\boldsymbol{q}\). This idea is essentially equivalent to setting \(\epsilon_{t}^{u,v,r}=0\) while not considering the concentration property (i.e., \(\Gamma_{t}^{u,v,r}=+\infty\)), which is a special case of our framework. Our framework has the advantage of controlling the variance. Specifically, ignoring the \(\Gamma\)-concentration may make the recommendation strategies meaningless. Consider an extreme scenario: there is a recommendation strategy \(\boldsymbol{x}\) under which the actual flow is uniformly distributed in \([0,1]\). Further, assume that the system travel time is simply a linear function of the actual flow, say with factor 1 (i.e., the system travel time is also uniformly distributed in \([0,1]\)). Suppose that the recommendation strategy \(\boldsymbol{x}\) minimizes the expected system travel time (here the expected system travel time is \(1\times\mu_{t}^{u,v,r}(\boldsymbol{x})=0.5\)). However, as the actual flow can be any value between
0 and 1 with equal probability, the actual system travel time can also be any value between 0 and 1. Hence, the recommendation strategy \(\mathbf{x}\), though minimizing the expected system travel time, is meaningless because there is too much variation in the actual system travel time under this recommendation. \(\Gamma\)-concentration is an important property to ensure that the distribution of actual flows is not too dispersed\({}^{1}\) so that the recommendation strategy \(\mathbf{x}\) is solved based on a reliable estimate of the system travel time. In Section 4.4.2, we show that controlling the variance is important if we wish to minimize the expected system travel time. In Sections 4.4.1 and 4.4.3, we also elaborate on how these two concepts relate to typical stochastic optimization methods.
Footnote 1: In reality, as the actual flow is the sum of many Bernoulli random variables, the coefficient of variation shrinks as the number of passengers increases. Hence, with a large number of passengers, \(\Gamma\)-concentration should be naturally satisfied.
We will therefore incorporate \(\epsilon\)-feasibility and \(\Gamma\)-concentration as constraints into the optimization formulation (Eq. 16). It turns out that both of them can be modeled as linear constraints. \(\epsilon\)-feasibility (Eq. 25) can be easily transformed into a linear constraint by eliminating the absolute value. To incorporate \(\Gamma\)-concentration (Eq. 26), the following Proposition is used:
Proposition 1: _The \(\Gamma\)-concentration inequality (Eq. 26) holds if the variance of \(Q_{t}^{u,v,r}\) is bounded from above by \((\Gamma_{t}^{u,v,r})^{2}\). Mathematically:_
\[\sum_{p\in\mathcal{P}_{t}^{u,v}}\sum_{r^{\prime}\in\mathcal{R}^{u,v}}x_{p,r^{\prime}}\cdot\pi_{p,r^{\prime}}^{r}\cdot(1-\pi_{p,r^{\prime}}^{r})\leq(\Gamma_{t}^{u,v,r})^{2} \tag{27}\]
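The intuition behind Proposition 1 can be sketched as follows (indices suppressed; the formal proof is in the appendix). By \(\epsilon\)-feasibility and the triangle inequality, \(|Q-q|\geq a\) implies \(|Q-\mu|\geq|Q-q|-|q-\mu|\geq a-\epsilon\), so Chebyshev's inequality and the variance bound of Eq. 27 yield

\[\mathbb{P}\left[|Q-q|\geq a\right]\leq\mathbb{P}\left[|Q-\mu|\geq a-\epsilon\right]\leq\frac{\text{Var}[Q]}{(a-\epsilon)^{2}}\leq\left(\frac{\Gamma}{a-\epsilon}\right)^{2}\]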
For modeling convenience, we set \(\epsilon_{t}^{u,v,r}=\epsilon\cdot\mu_{t}^{u,v,r}(\mathbf{x})\) and \(\Gamma_{t}^{u,v,r}=\Gamma\cdot d_{t}^{u,v}\), where \(\epsilon\) and \(\Gamma\) are hyper-parameters determining how close and concentrated the actual flow should be. Then the final constraint becomes:
\[(1-\epsilon)\sum_{p\in\mathcal{P}_{t}^{u,v}}\sum_{r^{\prime}\in \mathcal{R}^{u,v}}x_{p,r^{\prime}}\cdot\pi_{p,r^{\prime}}^{r}\leq q_{t}^{u,v, r}\leq(1+\epsilon)\sum_{p\in\mathcal{P}_{t}^{u,v}}\sum_{r^{\prime}\in \mathcal{R}^{u,v}}x_{p,r^{\prime}}\cdot\pi_{p,r^{\prime}}^{r} \tag{28}\]
and
\[\sum_{p\in\mathcal{P}_{t}^{u,v}}\sum_{r^{\prime}\in\mathcal{R}^{u,v}}x_{p,r^{\prime}}\cdot\pi_{p,r^{\prime}}^{r}\cdot(1-\pi_{p,r^{\prime}}^{r})\leq(\Gamma\cdot d_{t}^{u,v})^{2} \tag{29}\]
Both constraints are linear and can be added into Eq. 16.
Besides the total system travel time, many recommendation systems also aim to respect passengers' preferences. That is, if possible, a path with high inherent utility \(V_{p}^{r}\) should be recommended. Hence the following term is added to the objective function.
\[\max\sum_{p\in\mathcal{P}}\sum_{r\in\mathcal{R}_{p}}x_{p,r}\cdot V _{p}^{r}\Longleftrightarrow\min\sum_{p\in\mathcal{P}}\sum_{r\in\mathcal{R}_{p }}-x_{p,r}\cdot V_{p}^{r} \tag{30}\]
The final individual path recommendation (IPR) model can be formulated as:
\[(IPR)\quad\min_{\mathbf{x},\mathbf{q},\mathbf{z}}\quad WT(\mathbf{q},\mathbf{z})+IVT(\mathbf{z})+\Psi\sum_{p\in\mathcal{P}}\sum_{r\in\mathcal{R}_{p}}-x_{p,r}\cdot V_{p}^{r} \tag{31a}\]
\[\text{s.t.}\quad\text{Constraints (16b)--(16f)} \tag{31b}\]
\[(1-\epsilon)\sum_{p\in\mathcal{P}_{t}^{u,v}}\sum_{r^{\prime}\in\mathcal{R}^{u,v}}x_{p,r^{\prime}}\cdot\pi_{p,r^{\prime}}^{r}\leq q_{t}^{u,v,r}\leq(1+\epsilon)\sum_{p\in\mathcal{P}_{t}^{u,v}}\sum_{r^{\prime}\in\mathcal{R}^{u,v}}x_{p,r^{\prime}}\cdot\pi_{p,r^{\prime}}^{r}\quad\forall t\in\mathcal{T},(u,v,r)\in\mathcal{F} \tag{31c}\]
\[\sum_{p\in\mathcal{P}_{t}^{u,v}}\sum_{r^{\prime}\in\mathcal{R}^{u,v}}x_{p,r^{\prime}}\cdot\pi_{p,r^{\prime}}^{r}\cdot(1-\pi_{p,r^{\prime}}^{r})\leq(\Gamma\cdot d_{t}^{u,v})^{2}\quad\forall t\in\mathcal{T},(u,v,r)\in\mathcal{F} \tag{31d}\]
\[\sum_{r\in\mathcal{R}_{p}}x_{p,r}=1\quad\forall p\in\mathcal{P} \tag{31e}\]
\[x_{p,r}\in\{0,1\}\quad\forall p\in\mathcal{P},r\in\mathcal{R}_{p} \tag{31f}\]
where \(\Psi\) is a hyper-parameter to adjust the scale and balance the trade-off between system efficiency and passenger preferences.
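The \(\epsilon\)-feasibility and \(\Gamma\)-concentration constraints (31c)-(31d) are plain linear constraints in \((\mathbf{x},\mathbf{q})\), as the following minimal PuLP sketch illustrates on a toy one-OD-pair, one-interval instance. The data, the stand-in objective, and the solver choice are illustrative assumptions; the full model of Eq. 31 additionally embeds the network flow constraints and is solved with Benders decomposition (Section 4.5).

```python
import pulp

# Toy instance: 3 passengers, 2 paths, one OD pair and one time interval.
P, R = 3, 2
pi = [[[0.8, 0.2], [0.3, 0.7]],   # pi[p][k][r] = P(p chooses r | k recommended)
      [[0.7, 0.3], [0.2, 0.8]],
      [[0.6, 0.4], [0.1, 0.9]]]
eps, Gamma, d = 0.05, 0.5, 3.0

m = pulp.LpProblem("IPR_sketch", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (range(P), range(R)), cat="Binary")
q = pulp.LpVariable.dicts("q", range(R), lowBound=0)

for p in range(P):                                    # Eq. 31e: one path per passenger
    m += pulp.lpSum(x[p][k] for k in range(R)) == 1

for r in range(R):
    mu_r = pulp.lpSum(x[p][k] * pi[p][k][r] for p in range(P) for k in range(R))
    m += q[r] >= (1 - eps) * mu_r                     # Eq. 31c: epsilon-feasibility
    m += q[r] <= (1 + eps) * mu_r
    var_r = pulp.lpSum(x[p][k] * pi[p][k][r] * (1 - pi[p][k][r])
                       for p in range(P) for k in range(R))
    m += var_r <= (Gamma * d) ** 2                    # Eq. 31d: Gamma-concentration

m += q[0] + 2 * q[1]    # stand-in for WT + IVT; the real objective is Eq. 31a
m.solve(pulp.PULP_CBC_CMD(msg=False))
print([[int(x[p][k].value()) for k in range(R)] for p in range(P)])
```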
### Discussions on \(\epsilon\)-feasibility and \(\Gamma\)-concentration
#### 4.4.1 Connections to two-stage stochastic optimization
As the problem needs to solve both recommendation strategy \(\mathbf{x}\) and path flow \(\mathbf{q}\), a possible alternative formulation is a two-stage stochastic optimization, where the first stage is to determine the recommendation and the second stage is to determine the path flow:
\[\min_{\mathbf{x}}\quad\Psi\sum_{p\in\mathcal{P}}\sum_{r\in\mathcal{R}_{p}}-x_{p,r}\cdot V_{p}^{r}+\mathbb{E}_{\mathbf{Q}|_{\mathbf{x}}}[STT(\mathbf{Q}|_{\mathbf{x}})] \tag{32a}\]
\[\text{s.t.}\quad\text{Constraints (31e)--(31f)} \tag{32b}\]
where \(\mathbf{Q}|_{\mathbf{x}}\) is the path flow (random variable) given recommendation strategy \(\mathbf{x}\) and \(STT(\mathbf{Q}|_{\mathbf{x}})\) is the actual system travel time (STT) defined as
\[STT(\mathbf{Q}|_{\mathbf{x}})=\min_{\mathbf{q},\mathbf{z}\in\mathcal{X}^{\text{OF}}(\mathbf{Q}|_{ \mathbf{x}})}WT(\mathbf{q},\mathbf{z})+IVT(\mathbf{z}) \tag{33}\]
where \(\mathcal{X}^{\text{OF}}(\mathbf{Q}|_{\mathbf{x}})=\{(\mathbf{q},\mathbf{z}):\text{Constraints (16b)--(16f)},\,\mathbf{q}=\mathbf{Q}|_{\mathbf{x}}\}\) is the feasible region of the optimal flow problem (Eq. 16) with the additional constraint of fixing \(\mathbf{q}\) to the value of \(\mathbf{Q}|_{\mathbf{x}}\).
Our formulation differs from the typical two-stage stochastic optimization problem because the distribution of the uncertain parameter \(\mathbf{Q}|_{\mathbf{x}}\) also depends on the first-stage decision \(\mathbf{x}\), whereas typical two-stage stochastic optimization assumes the uncertain parameters in the second stage have some predetermined exogenous distribution (Ahmed, 2010). The difference arises because \(\mathbf{q}=\mathbf{Q}|_{\mathbf{x}}\) essentially makes the decision variable a random variable, as mentioned in Section 4.3. Section 4.4.3 discusses how to deal with random decision variables in stochastic optimization and how that connects with \(\epsilon\)-feasibility and \(\Gamma\)-concentration.
A typical way to solve the two-stage stochastic optimization is to construct an approximation \(S\hat{T}T(\mathbf{x})\) for \(\mathbb{E}_{\mathbf{Q}|_{\mathbf{x}}}[STT(\mathbf{Q}|_{\mathbf{x}})]\). Then we solve
\[\min_{\mathbf{x}}\quad\Psi\sum_{p\in\mathcal{P}}\sum_{r\in\mathcal{R}_{p}}-x_{p,r}\cdot V_{p}^{r}+S\hat{T}T(\mathbf{x}) \tag{34a}\]
\[\text{s.t.}\quad\text{Constraints (31e)--(31f)} \tag{34b}\]
for the first stage instead\({}^{2}\). From this perspective, we can treat \(\epsilon\)-feasibility and \(\Gamma\)-concentration as a way of constructing \(S\hat{T}T(\mathbf{x})\), that is,
Footnote 2: The approximation is then updated based on the second-stage solutions.
\[S\hat{T}T(\mathbf{x})=\min_{\mathbf{q},\mathbf{z}}\quad WT(\mathbf{q},\mathbf{z})+IVT(\mathbf{z}) \tag{35a}\]
\[\text{s.t.}\quad\text{Constraints (16b)--(16f)} \tag{35b}\]
\[\text{Constraints (31c)--(31d)} \tag{35c}\]
Therefore, combining Eqs. 34 and 35 as a one-stage optimization problem yields our IPR formulation (Eq. 31).
However, a natural question would be how good the approximation is. Section 4.4.2 discusses the system travel time difference bounds that partially answer this question.
#### 4.4.2 Difference between the expected system travel time and the IPR objective function
Let \((\mathbf{q}^{\star},\mathbf{z}^{\star},\mathbf{x}^{\star})\) be the optimal solution of Eq. 31. Given the optimal recommendation strategy \(\mathbf{x}^{\star}\), we denote the random variable of path flow as \(\mathbf{Q}|_{\mathbf{x}^{\star}}\). Let \(\mathbb{P}_{\mathbf{Q}|_{\mathbf{x}^{\star}}}(\cdot)\) be the density function of \(\mathbf{Q}|_{\mathbf{x}^{\star}}\). The expectation of \(STT(\mathbf{Q}|_{\mathbf{x}^{\star}})\) is
\[\mathbb{E}_{\mathbf{Q}|_{\mathbf{x}^{\star}}}[STT(\mathbf{Q}|_{\mathbf{x}^{\star}})]=\sum_{\hat{\mathbf{q}}\in\mathcal{Q}(\mathbf{x}^{\star})}\left[\min_{\mathbf{q},\mathbf{z}\in\mathcal{X}^{\text{OF}}(\hat{\mathbf{q}})}WT(\mathbf{q},\mathbf{z})+IVT(\mathbf{z})\right]\cdot\mathbb{P}_{\mathbf{Q}|_{\mathbf{x}^{\star}}}(\hat{\mathbf{q}}) \tag{36}\]
where
\[\mathcal{Q}(\mathbf{x}^{\star})=\Big\{\hat{\mathbf{q}}\geq 0:\hat{q}_{t}^{u,v,r}=\sum_{p\in\mathcal{P}_{t}^{u,v}}\sum_{r^{\prime}\in\mathcal{R}^{u,v}}x_{p,r^{\prime}}^{\star}\cdot\hat{\mathbb{I}}_{p,r^{\prime}}^{\,r},\;\hat{\mathbb{I}}_{p,r^{\prime}}^{\,r}\in\{0,1\},\;\sum_{r\in\mathcal{R}_{p}}\hat{\mathbb{I}}_{p,r^{\prime}}^{\,r}=1,\;\forall(u,v,r)\in\mathcal{F},t\in\mathcal{T}\Big\} \tag{37}\]
is the set of all possible values of network flows given recommendation \(\mathbf{x}^{*}\).
\(\mathbb{E}_{\mathbf{Q}|_{\mathbf{x}^{*}}}[STT(\mathbf{Q}|_{\mathbf{x}^{*}})]\) is usually the indicator of system performance. However, our model is optimized with Eq. 31, where the minimization is conducted over the model-evaluated system travel time with the optimal realized flow \(\mathbf{q}^{*}\):
\[STT(\mathbf{q}^{*})=\min_{\mathbf{q},\mathbf{z}\in\mathcal{X}^{\text{OF}}(\mathbf{q}^{*})}WT( \mathbf{q},\mathbf{z})+IVT(\mathbf{z}) \tag{38}\]
It is worth analyzing the relationship between the model-evaluated system travel time (\(STT(\mathbf{q}^{*})\)) and the expected system travel time (\(\mathbb{E}_{\mathbf{Q}|_{\mathbf{x}^{*}}}[STT(\mathbf{Q}|_{\mathbf{x}^{*}})]\)). This analysis tells us how well our proposed approach can approximate the real system performance indicator.
We first introduce a Lemma based on Berge's Maximum Theorem (Berge, 1957).
**Lemma 1**: \(STT(\hat{\mathbf{q}})=\min_{\mathbf{q},\mathbf{z}\in\mathcal{X}^{\text{OF}}(\hat{\mathbf{q}})}WT(\mathbf{q},\mathbf{z})+IVT(\mathbf{z})\) _is continuous in \(\hat{\mathbf{q}}\) if the set of optimal flows is bounded (i.e., there are a limited number of flow patterns that permit the optimal system travel time)._
Lemma 1 implies that a small change in the path flows only results in small changes in system travel time. Since the system travel time is usually bounded from above given a finite-scale transit network, unit flow changes should not yield infinite changes in the system travel time. Hence, \(STT(\hat{\mathbf{q}})\) should have a bounded gradient. Combined with the continuity property in Lemma 1, we conclude that \(STT(\hat{\mathbf{q}})\) is Lipschitz continuous. That is, there exists a constant \(L\) such that, for any network flows \(\mathbf{q}_{1}\) and \(\mathbf{q}_{2}\), we have
\[|STT(\mathbf{q}_{1})-STT(\mathbf{q}_{2})|\leq L\cdot\left\|\mathbf{q}_{1}-\mathbf{q}_{2}\right\| _{1} \tag{39}\]
**Proposition 2**: _Let \((\mathbf{q}^{*},\mathbf{z}^{*},\mathbf{x}^{*})\) be the optimal solution of Eq. 31. \(\mathbf{Q}|_{\mathbf{x}^{*}}\) is the random variable of path flows. The difference between the model-evaluated system travel time and the expected system travel time is bounded from above if_
* _the set of optimal flows is bounded (to implement Lemma_ 1_),_
* _the network flows are bounded from above (i.e., there exists \(\mathbf{q}^{\text{Max}}<\infty\) such that \(\hat{\mathbf{q}}\leq\mathbf{q}^{\text{Max}}\ \forall\hat{\mathbf{q}}\in\mathcal{Q}(\mathbf{x}^{*})\)), and_
* \(|\mathbf{q}^{*}-\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]|\leq\mathbf{\epsilon}\) _(_\(\epsilon\)_-feasibility) and_ \(\text{Var}[\mathbf{Q}|_{\mathbf{x}^{*}}]\leq\mathbf{\Gamma}^{2}\) _(_\(\Gamma\)_-concentration)_
_where \(\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]=(\mathbb{E}[Q_{i}|_{\mathbf{x}^{*}}])_{i\in \mathcal{F}\times\mathcal{T}}\) is the element-wise expectation vector, \(\mathbf{\epsilon}=(\epsilon_{i})_{i\in\mathcal{F}\times\mathcal{T}}\), \(\mathbf{\Gamma}^{2}=(\Gamma_{i}^{2})_{i\in\mathcal{F}\times\mathcal{T}}\). The bound of the difference is determined by both \(\mathbf{\epsilon}\) and \(\mathbf{\Gamma}\). Mathematically,_
\[\left|\mathbb{E}_{\mathbf{Q}|_{\mathbf{x}^{*}}}[STT(\mathbf{Q}|_{\mathbf{x}^{*}})]-STT(\mathbf{q}^{*})\right|\leq 2L\cdot\left\|\mathbf{\epsilon}\right\|_{1}+L\cdot\big{(}\left\|\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]\right\|_{1}+\left\|\mathbf{q}^{\text{Max}}\right\|_{1}+2\left\|\mathbf{\epsilon}\right\|_{1}\big{)}\cdot\left\|\mathbf{\Gamma}\right\|_{2}^{2} \tag{40}\]
**Remark 2**: Proposition 2 shows that even if the model is optimized on a realization of the system travel time (not the expectation), as long as we impose the \(\epsilon\)-feasibility and \(\Gamma\)-concentration, the model-evaluated system travel time and the expected system travel will be similar if \(\epsilon\) and \(\Gamma\) are small.
#### 4.4.3 Random decision variables in stochastic optimization
The proposed \(\epsilon\)-feasibility and \(\Gamma\)-concentration are used to solve optimization problems with random decision variables. Though typical stochastic optimization methods generally deal with random parameters (either in the objective function or in the constraints), the methods may also apply to cases where the decision variables are random. In this section, we analyze a general optimization problem with random decision variables from the stochastic optimization point of view, in order to construct connections between the two proposed concepts and typical stochastic optimization methods.
Consider a general optimization problem where the decision variable \(\mathbf{Y}=(Y_{i})_{i=1,\ldots,n}\in\mathbb{R}^{n}\) is a random variable with density function \(f(\cdot\mid\mathbf{\theta})\) (Eq. 41).
\[\min_{\mathbf{Y}\sim f(\cdot\mid\mathbf{\theta})} g(\mathbf{Y})\] (41a) s.t. \[h_{j}(\mathbf{Y})\leq b_{j}\quad\forall j\in\mathcal{J} \tag{41b}\]
where \(\mathbf{\theta}\) is the parameter of the density function, \(g(\cdot):\mathbb{R}^{n}\rightarrow\mathbb{R}\) is the objective function, \(h_{j}(\cdot):\mathbb{R}^{n}\rightarrow\mathbb{R}\) is the constraint function, \(b_{j}\in\mathbb{R}\) is the constraint parameter, and \(\mathcal{J}\) is the set of constraint indices. The typical way to transform this problem into a deterministic one is to take the expectation of the objective function and constraints (or to consider a probability guarantee for the constraints with a pre-defined parameter \(\eta\), such as \(\eta=0.95\)), as shown in Eq. 42. That is, instead of solving for the random variable \(\mathbf{Y}\), we treat the distribution parameters \(\mathbf{\theta}\) as the decision variables (Hernandez, 2018), which are deterministic.
\[\min_{\boldsymbol{\theta}}\quad\mathbb{E}_{\mathbf{Y}\sim f(\cdot\mid\boldsymbol{\theta})}[g(\mathbf{Y})\mid\boldsymbol{\theta}] \tag{42a}\]
\[\text{s.t.}\quad\mathbb{E}_{\mathbf{Y}\sim f(\cdot\mid\boldsymbol{\theta})}[h_{j}(\mathbf{Y})]\leq b_{j}\quad\forall j\in\mathcal{J}\quad\text{(Expectation constraints)} \tag{42b}\]
\[\text{or/and}\quad\mathbb{P}_{\mathbf{Y}\sim f(\cdot\mid\boldsymbol{\theta})}[h_{j}(\mathbf{Y})\leq b_{j}]\geq\eta\quad\forall j\in\mathcal{J}\quad\text{(Chance constraints)} \tag{42c}\]
However, the formulations in Eq. 42 are in general hard to solve unless we have closed-form expressions for \(\mathbb{E}_{\mathbf{Y}\sim f(\cdot\mid\mathbf{\theta})}[\cdot]\) and \(\mathbb{P}_{\mathbf{Y}\sim f(\cdot\mid\mathbf{\theta})}[\cdot]\) (or use some approximation techniques for the constraints).
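To make the reformulation in Eq. 42 concrete, the following minimal Python sketch solves a toy instance by sample average approximation: the distribution parameter \(\mathbf{\theta}\) becomes the (deterministic) decision variable, and the expectation objective and constraint are estimated by Monte Carlo with common random numbers. The quadratic \(g\), linear \(h\), normal \(\mathbf{Y}\), and all numbers are illustrative assumptions, not part of our model.

```python
# A toy sample-average-approximation sketch of Eq. 42 (assumed forms of
# g, h, and Y; not the transit model itself).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
Z = rng.standard_normal((10_000, 2))           # fixed noise: common random numbers

def samples(theta):                            # Y ~ N(theta, 0.2^2 I); theta = decision
    return np.asarray(theta) + 0.2 * Z

g = lambda Y: ((Y - 2.0) ** 2).sum(axis=1)     # objective g(Y) (assumption)
h = lambda Y: Y.sum(axis=1)                    # constraint function h(Y) (assumption)
b = 1.5                                        # constraint parameter b_j

res = minimize(
    lambda th: g(samples(th)).mean(),                             # E[g(Y) | theta] (Eq. 42a)
    x0=np.array([0.0, 0.0]),
    constraints=[{"type": "ineq",
                  "fun": lambda th: b - h(samples(th)).mean()}],  # E[h(Y)] <= b (Eq. 42b)
)
print(res.x)   # ~ (0.75, 0.75): the expectation constraint binds
```

Chance constraints (Eq. 42c) could be estimated analogously by an indicator mean, but the resulting problem is non-smooth; this is exactly the difficulty that motivates the \(\Gamma\)-concentration surrogate discussed next.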
In this study, we propose two concepts, \(\epsilon\)-feasibility and \(\Gamma\)-concentration, to model random decision variables. The following propositions discuss how these two concepts are related to the stochastic optimization formulation (Eq. 42).
**Proposition 3**: _The \(\epsilon\)-feasibility constraint is an approximation to the expectation constraints (Eq. 42b). Define:_
\[G_{\text{SO}}=\min_{\boldsymbol{\theta}}\{\mathbb{E}[g(\boldsymbol{ Y})]:\,\mathbb{E}[h_{j}(\boldsymbol{Y})]\leq b_{j},\,\forall j\in\mathcal{J}\} \tag{43}\] \[G_{\text{EP}}(\boldsymbol{\epsilon})=\min_{\boldsymbol{y}, \boldsymbol{\theta}}\{g(\boldsymbol{y}):\,h_{j}(\boldsymbol{y})\leq b_{j},\, \forall j\in\mathcal{J},|\boldsymbol{y}-\mathbb{E}[\boldsymbol{Y}]|\leq \boldsymbol{\epsilon}\} \tag{44}\]
_where \(\boldsymbol{y}=(y_{i})_{i=1,\ldots,n}\in\mathbb{R}^{n}\) is a realization of \(\boldsymbol{Y}\). \(\boldsymbol{\epsilon}=(\epsilon_{i})_{i=1,\ldots,n}\in\mathbb{R}^{n}\) is a vector of small constants (i.e., \(\epsilon\)-feasibility). All expectations are taken over \(\boldsymbol{Y}\sim f(\cdot\mid\boldsymbol{\theta})\). \(\mathbb{E}[\boldsymbol{Y}]=(\mathbb{E}[Y_{i}])_{i=1,\ldots,n}\) is the element-wise expectation vector of \(\boldsymbol{Y}\). \(G_{\text{SO}}\) is the optimal solution of the stochastic optimization problem with expectation constraints. \(G_{\text{EP}}\) is the optimal solution of the proposed approach with the \(\epsilon\)-feasibility constraint. If \(\boldsymbol{\epsilon}=0\), and \(g(\cdot)\) and \(h_{j}(\cdot)\) are both convex functions (corresponding to the convex optimization), we have_
\[G_{\text{EP}}(\boldsymbol{\epsilon}=0)\leq G_{\text{SO}} \tag{45}\]
_That is, the proposed approach is a lower bound of the stochastic optimization problem._
**Remark 3**: _The proposition is related to the certainty-equivalent (or mean-field) variant of a stochastic optimization problem. Consider a special case where both \(g(\cdot)\) and \(h_{j}(\cdot)\) are linear; then we would have \(G_{\text{EP}}(\boldsymbol{\epsilon}=0)=G_{\text{SO}}\). Note that Eq. 31 is an integer linear program. Hence, setting \(\boldsymbol{\epsilon}=0\) makes Eq. 31 a special version of the stochastic optimization formulation with expectation constraints._
**Proposition 4**: _The \(\Gamma\)-concentration constraint is an approximation for the chance constraints (Eq. 42c). Define:_
\[\tilde{G}_{\text{SO}}= \min_{\boldsymbol{\theta}}\{\mathbb{E}[g(\boldsymbol{Y})]:\, \mathbb{E}[h_{j}(\boldsymbol{Y})]\leq b_{j},\,\mathbb{P}[h_{j}(\boldsymbol{Y} )\leq b_{j}]\geq\eta,\,\forall j\in\mathcal{J}\} \tag{46}\] \[\tilde{G}_{\text{EP}}= \min_{\boldsymbol{\theta}}\{\mathbb{E}[g(\boldsymbol{Y})]:\, \mathbb{E}[h_{j}(\boldsymbol{Y})]\leq b_{j},\,\forall j\in\mathcal{J},\text{ Var}[\boldsymbol{Y}]\leq\boldsymbol{\Gamma}^{2}\} \tag{47}\]
_where \(\text{Var}[\boldsymbol{Y}]=(\text{Var}[Y_{i}])_{i=1,\ldots,n}\in\mathbb{R}^{n}\) is the element-wise variance vector of \(\boldsymbol{Y}\). \(\boldsymbol{\Gamma}^{2}=(\Gamma_{i}^{2})_{i=1,\ldots,n}\in\mathbb{R}^{n}\) is a vector of squared small constants (i.e., \(\Gamma\)-concentration). If \(h_{j}(\cdot)\) is Lipschitz continuous (i.e., there exists a positive constant \(C\) such that, for all \(\mathbf{y_{1}}\) and \(\mathbf{y_{2}}\), \(|h_{j}(\mathbf{y_{1}})-h_{j}(\mathbf{y_{2}})|\leq C\left\|\mathbf{y_{1}}-\mathbf{y_{2}}\right\|_{2}\)), and \(\mathbf{\Gamma}>0\) is sufficiently small, we have_
\[\text{Var}[\mathbf{Y}]\leq\mathbf{\Gamma}^{2}\Rightarrow\mathbb{P}[h_{j}(\mathbf{Y})\leq b _{j}]\geq\eta \tag{48}\]
_That is, the \(\Gamma\)-concentration constraints can lead to chance constraints._
**Remark 4**: Proposition 4 does not require \(h_{j}(\cdot)\) to be convex; we only need it to be Lipschitz continuous. If \(\mathbf{\Gamma}\) is not small enough, \(\text{Var}[\mathbf{Y}]\leq\mathbf{\Gamma}^{2}\) does not yield the chance constraint itself; we can only obtain a weaker (looser) condition. Details can be found in Appendix G.
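As a numerical sanity check of the mechanism behind Proposition 4 (made precise in Appendix G, Eq. 88), the sketch below draws a concentrated \(\mathbf{Y}\), uses an \(h\) that is \(C\)-Lipschitz in the \(\ell_{2}\) norm, and verifies empirically that the Chebyshev slack \(a=C\|\mathbf{\Gamma}\|_{2}/\sqrt{1-\eta}\) yields the claimed coverage. The Gaussian distribution, the particular \(h\), and all numbers are assumptions for the demonstration only.

```python
# Empirical check of the Chebyshev slack in Eq. 88 (illustrative numbers).
import numpy as np

rng = np.random.default_rng(0)
n, eta, C = 3, 0.95, 1.0
Gamma = np.full(n, 0.1)                           # per-component std bound (assumption)
mu = np.array([1.0, 2.0, 0.5])                    # mean of Y (assumption)
Y = rng.normal(mu, Gamma, size=(200_000, n))      # Var[Y_i] = Gamma_i^2 exactly

h = lambda Ys: C * np.linalg.norm(Ys, axis=1)     # C-Lipschitz in the l2 norm
b = h(Y).mean()                                   # pick b_j so E[h(Y)] <= b_j holds
a = C * np.linalg.norm(Gamma) / np.sqrt(1 - eta)  # slack from Eq. 88

print((h(Y) <= b + a).mean(), ">=", eta)          # empirical coverage vs. target eta
```

The Chebyshev bound is loose; in this Gaussian example the empirical coverage is essentially 1, consistent with Remark 4: a small \(\mathbf{\Gamma}\) makes the slack \(a\) negligible and recovers the chance constraint.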
#### 4.4.4 Connections to robust optimization
Essentially, \(\epsilon\)-feasibility and \(\Gamma\)-concentration define a set for the uncertain path flow \(\mathbf{Q}\). The uncertainty set is
\[\Lambda(\mathbf{x})=\{\mathbf{q}\geq 0:|\mathbf{q}-\mathbf{\mu}(\mathbf{x})|\leq\mathbf{\epsilon},\ \text{Var}[\mathbf{Q}|_{\mathbf{x}}]\leq\mathbf{\Gamma}^{2}\}. \tag{49}\]
This set is determined by the endogenous decision variable \(\mathbf{x}\), and the optimization is conducted over the uncertainty set \(\Lambda(\mathbf{x})\). This concept is related to two-stage robust optimization with decision-dependent uncertainties (Zeng and Wang, 2022), where the uncertainty set in the second stage is determined by the first-stage decision variables. However, in our model, the randomness is in the decision variables rather than in the parameters. Hence, instead of optimizing under the worst case in \(\Lambda(\mathbf{x})\), we directly optimize over \(\Lambda(\mathbf{x})\) (equivalent to the "best case"), which avoids the complexity of the min-max formulation in robust optimization.
### Solving the problem by Benders decomposition
Eq. 31 is a mixed-integer linear programming (MILP). The structure of Eq. 31 allows us to efficiently solve it by Benders decomposition (BD) (Benders, 1962). The basic idea of BD is to decompose the problem into a master problem and a subproblem and solve these problems iteratively. The decision variables are divided into difficult variables, which in our case are the binary variables \(\mathbf{x}\), and a set of easier variables, the continuous \(\mathbf{q}\) and \(\mathbf{z}\). At each iteration, the master problem determines one possible leader decision \(\mathbf{x}\). This solution is used in the subproblem to generate optimality-cuts or feasibility-cuts, which are added to the master problem.
Interestingly, in this study, the master problem decides the recommendation strategies, which is a MILP of a smaller scale and can be solved efficiently using existing solvers. The subproblem reduces to the optimal flow problem (Eq. 16) with one more linear constraint (still a linear program). This structure makes BD an appropriate algorithm for the original problem.
#### 4.5.1 Subproblem
The subproblem is derived by fixing the decision variables \(\mathbf{x}\), and only considering the components including \(\mathbf{q}\) and \(\mathbf{z}\).
\[[SP(\mathbf{x})]\quad\min_{\mathbf{q},\mathbf{z}}\quad WT(\mathbf{q},\mathbf{z})+IVT(\mathbf{z})\] (50a) s.t. \[\text{Constraints }(16b)-(16f) \tag{50b}\] \[\text{Constraint }(31c) \tag{50c}\]
The objective of the dual problem of Eq. 50 is
\[D(\mathbf{\alpha},\mathbf{\beta},\mathbf{\gamma},\mathbf{\iota},\mathbf{\kappa}, \mathbf{\rho};\mathbf{x})= \sum_{l\in\mathcal{L}}\sum_{t\in\mathcal{T}}\sum_{t^{\prime}=t}^{ T_{l,t}}K_{l,t}\alpha_{l,t,t^{\prime}}+\sum_{(u,v,r)\in\mathcal{F}}\sum_{t \in\mathcal{T}}\sum_{t^{\prime}=t^{\text{min}}}^{t}f_{t^{\prime}}^{u,v,r}\beta _{t}^{u,v,r}\] \[+\sum_{(u,v,r,t)\in\Omega_{1}}\hat{z}_{t}^{u,v,r,i}\gamma_{t}^{u, v,r,i}+\sum_{(u,v)\in\mathcal{W}}\sum_{t\in\mathcal{T}}d_{t}^{u,v}\iota_{t}^{u,v}\] \[+\sum_{(u,v,r)\in\mathcal{F}}\sum_{t\in\mathcal{T}}\kappa_{t}^{u,v,r}\cdot(1-\epsilon)\sum_{p\in\mathcal{P}_{t}^{u,v}}\sum_{r^{\prime}\in \mathcal{R}^{u,v}}x_{p,r^{\prime}}\cdot\pi_{p,r^{\prime}}^{r}\] \[+\sum_{(u,v,r)\in\mathcal{F}}\sum_{t\in\mathcal{T}}\rho_{t}^{u, v,r}\cdot(1+\epsilon)\sum_{p\in\mathcal{P}_{t}^{u,v}}\sum_{r^{\prime}\in \mathcal{R}^{u,v}}x_{p,r^{\prime}}\cdot\pi_{p,r^{\prime}}^{r} \tag{51}\]
where \(\mathbf{\alpha},\mathbf{\beta},\mathbf{\gamma},\mathbf{\iota}\) are the dual variables associated with constraints 16b, 16c, 16f, 16e, respectively. \(\mathbf{\kappa}\), \(\mathbf{\rho}\) are the dual variables associated with constraint 31c. Let \(\mathbf{\Theta}:=(\mathbf{\alpha},\mathbf{\beta},\mathbf{\gamma},\mathbf{\iota},\mathbf{\kappa},\mathbf{ \rho})\). If the dual problem of Eq. 50 is feasible and bounded with a solution \(\mathbf{\Theta}^{*}\), the following optimality cut is added to the master problem:
\[Z\geq D(\mathbf{\Theta}^{*};\mathbf{x}) \tag{52}\]
where \(Z\) is a decision variable in the master problem. If the dual problem of Eq. 50 is unbounded, and \(\mathbf{\Theta}^{*}\) is an optimal extreme ray of the dual, the following feasibility cut is added to the master problem:
\[D(\mathbf{\Theta}^{*};\mathbf{x})\leq 0 \tag{53}\]
#### 4.5.2 Master problem
Let \(\mathcal{A}^{\text{O}}\) be the set of solutions \(\mathbf{\Theta}^{*}\) of optimality cuts and \(\mathcal{A}^{\text{F}}\) be the set of solutions \(\mathbf{\Theta}^{*}\) of feasibility cuts. At each iteration of the BD, a cut based on the solution of the subproblem is added to the respective set, and the corresponding master problem is defined as follows:
\[[MP(\mathcal{A}^{\text{O}},\mathcal{A}^{\text{F}})]\quad\min_{ \mathbf{x},Z}\quad\Psi\sum_{p\in\mathcal{P}}\sum_{r\in\mathcal{R}_{p}}-x_{p,r} \cdot V_{p,r}+Z \tag{54a}\]
\[\text{s.t.}\quad Z\geq D(\mathbf{\Theta}^{*};\mathbf{x})\quad\forall\mathbf{\Theta}^{*}\in\mathcal{A}^{\text{O}} \tag{54b}\] \[\quad D(\mathbf{\Theta}^{*};\mathbf{x})\leq 0\quad\forall\mathbf{\Theta}^{*}\in\mathcal{A}^{\text{F}}\] (54c) \[\quad\text{Constraints }(31d)-(31f) \tag{54d}\]
Note that the master problem has a smaller scale compared to the original problem (because there are no \(\mathbf{z}\) and \(\mathbf{q}\)), which can be solved efficiently.
#### 4.5.3 Convergence
Let \((\mathbf{x}^{(k)},Z^{(k)})\) and \((\mathbf{q}^{(k)},\mathbf{z}^{(k)})\) be the solutions of the master problem and subproblem, respectively, in the \(k\)-th iteration. Then, the upper (\(UB^{(k)}\)) and lower (\(LB^{(k)}\)) bounds at the \(k\)-th iteration are given by:
\[UB^{(k)} =\Psi\sum_{p\in\mathcal{P}}\sum_{r\in\mathcal{R}_{p}}-x_{p,r}^{( k)}\cdot V_{p,r}+WT(\mathbf{q}^{(k)},\mathbf{z}^{(k)})+IVT(\mathbf{z}^{(k)}) \tag{55}\] \[LB^{(k)} =\Psi\sum_{p\in\mathcal{P}}\sum_{r\in\mathcal{R}_{p}}-x_{p,r}^{( k)}\cdot V_{p,r}+Z^{(k)} \tag{56}\]
\(LB^{(k)}\) will keep increasing as \(k\) increases because more cuts are added to the master problem. \(UB^{(k)}\) does not necessarily decrease at every iteration. The convergence criterion is
\[\text{Gap}^{(k)}=\frac{UB^{(k)}-LB^{(k)}}{LB^{(k)}}\leq\text{Predetermined threshold} \tag{57}\]
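To illustrate the loop defined by Eqs. 50-57, the following Python sketch runs Benders decomposition on a tiny generic MILP with complete recourse (so, as in our case study, only optimality cuts arise). It is not the recommendation model: the data are arbitrary, the dual subproblem is solved with SciPy, and enumeration over the four binary vectors stands in for a MILP master solver.

```python
# Minimal Benders decomposition loop (optimality cuts only; Eqs. 52, 54-57)
# on a toy problem: min c.x + q.y  s.t.  A y >= rhs(x), y >= 0, x binary.
import itertools
import numpy as np
from scipy.optimize import linprog

c = np.array([5.0, 4.0])                 # costs of the "difficult" binaries x
q = np.array([2.0, 3.0])                 # costs of the "easy" continuous y
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])               # subproblem constraint rows (>= form)
rhs = lambda x: np.array([6.0 - 2.0 * x[0] - 3.0 * x[1], 1.0 - x[0]])

def dual_subproblem(x):
    # Dual of SP(x): max rhs(x).lam  s.t.  A^T lam <= q, lam >= 0 (Eq. 51 analogue)
    res = linprog(-rhs(x), A_ub=A.T, b_ub=q, bounds=[(0, None)] * 2)
    assert res.success
    return -res.fun, res.x               # SP(x) value and dual solution lam*

cuts, UB = [], np.inf
for k in range(20):
    # Master (Eq. 54): min c.x + Z with Z >= rhs(x).lam* for all stored cuts;
    # brute force over x here since there are only four candidates.
    vals = []
    for x in itertools.product([0, 1], repeat=2):
        Z = max((rhs(x) @ lam for lam in cuts), default=0.0)  # SP >= 0 since q, y >= 0
        vals.append((c @ np.array(x) + Z, np.array(x, dtype=float)))
    LB, x_k = min(vals, key=lambda v: v[0])
    sp_val, lam_star = dual_subproblem(x_k)
    UB = min(UB, c @ x_k + sp_val)                   # Eqs. 55-56 analogues
    gap = (UB - LB) / max(abs(LB), 1e-9)             # Eq. 57
    print(f"iter {k}: x={x_k}, LB={LB:.2f}, UB={UB:.2f}, gap={gap:.1e}")
    if gap <= 1e-8:
        break
    cuts.append(lam_star)                            # optimality cut (Eq. 52)
```

As in our runs, the lower bound is monotone while the upper bound need not be; feasibility cuts (Eq. 53) would be generated from dual extreme rays when the subproblem is infeasible, which complete recourse rules out here.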
## 5 Case study
#### 5.1.1 CTA Blue Line disruption
For the case study, we consider an actual incident in the Blue Line of the Chicago Transit Authority (CTA) urban rail system (Figure 6). The incident starts at 8:14 AM and ends at 9:13 AM on Feb 1st, 2019 due to infrastructure issues between Harlem and Jefferson Park stations (the red X in the figure) that led to a whole Blue Line suspension. During the disruption (morning hours), the destination for most of the passengers is the "Loop" in the CBD area in Chicago. There are four alternative paths to the Loop: 1) using the Blue Line (i.e., waiting for the system to recover), 2) using the parallel bus lines, 3) using the North-South (NS) bus lines to transfer to the Green Line, and 4) using the West-East (WE) bus lines to transfer to the Brown Line. Based on the service structure, the route sets \(\mathcal{R}^{(u,v)}\) for each OD pair \((u,v)\) can be constructed.
In the case study, we divide the time into \(\tau=5\) mins equal-length intervals, and focus on solving the problem at \(t=1\) (i.e., beginning of the incident). We assume that the set of passengers to receive recommendations (\(\mathcal{P}\)) consists of all passengers with their intended origins at the Blue Line and destinations in the Loop. A simulation model (Mo et al., 2020) is used to get the system state up
to time \(t=1\) (i.e., the incident time 8:14 AM) and generate \(\hat{z}_{t}^{u,v,r,i}\) and \(\Omega_{1}\). The recommendation strategy covers passengers departing between \(t=1\) and \(T^{D}=23\), approximately one hour after the end of the incident (9:13 AM). The analysis period is set as \(t^{\text{min}}=-13\) and \(T=34\), approximately one hour before \(t=1\) and after \(T^{D}\), providing enough buffer (warm-up and cool-down time) for passengers in \(\mathcal{P}\) to finish their trips. As demand and incident duration predictions are out of the scope of this paper, we simply use the actual demand and incident duration for all experiments. Our other work (Mo et al., 2022) proposes to use robust and stochastic optimization to address demand and incident duration uncertainty, respectively.
#### 5.1.2 Conditional probability matrix \(\pi\)
In this section, we describe how to generate the synthetic conditional probability matrix \(\pi\) used for the case study. During the incident, CTA does not provide specific path recommendation information. For every individual, we assume that their actual path choices (referred to as the "status quo" choices) reflect their inherent preferences. Appendix H presents the method of inferring passengers' status quo choices during the disruption using smart card data (Mo et al., 2022). The basic idea is to track their tap-in records when entering the Blue Line and nearby bus routes and compare them with their historical travel records to obtain the transfer information.
Given the status quo choices, we assume that the "true" passenger \(p\)'s inherent preference for path \(r\) is given by
\[V_{p}^{r}=\begin{cases}1+v_{p}^{r}&\quad\text{if $r$ is $p$'s actual path choice}\\ v_{p}^{r}&\quad\text{otherwise},\end{cases}\quad\forall\,p\in\mathcal{P},r\in \mathcal{R}_{p} \tag{58}\]
Figure 6: Case study network
where \(v_{p}^{r}\) is drawn uniformly from \(\mathcal{U}[0,1]\). Eq. 58 indicates that every path has a random utility \(v_{p}^{r}\) normalized to \([0,1]\), and the chosen path has an additional utility value of 1. We assume that the impact of recommending \(r^{\prime}\) on the utility of path \(r\) is
\[I_{p,r^{\prime}}^{r}=\begin{cases}\text{Drawn from }\mathcal{U}[0,5]&\text{if }r=r^{ \prime}\\ 0&\text{otherwise},\end{cases}\quad\forall\,p\in\mathcal{P},r,r^{\prime}\in \mathcal{R}_{p} \tag{59}\]
Eq. 59 means that the utility of the recommended path (i.e., \(r=r^{\prime}\)) receives an additional positive impact drawn uniformly from \(\mathcal{U}[0,5]\), while the utilities of paths not being recommended (\(r\neq r^{\prime}\)) do not change. Given Eqs. 58 and 59, we can generate the conditional probability \(\boldsymbol{\pi}\) using Eq. 21; a sketch of this procedure is given below. It is worth mentioning that the above assumptions for generating synthetic passenger prior preferences are based on two reasonable principles: 1) a passenger's actual chosen path has a higher inherent utility; 2) recommending a path increases its probability of being chosen.
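The sketch below generates the synthetic preferences for a single passenger. It assumes, purely as an illustration, that Eq. 21 maps total utilities to choice probabilities through a multinomial-logit transform; the utility draws follow Eqs. 58 and 59.

```python
# Synthetic preferences (Eqs. 58-59) and conditional probabilities
# pi^r_{p,r'}; the logit form of Eq. 21 is an assumption of this sketch.
import numpy as np

rng = np.random.default_rng(42)
n_paths, chosen = 4, 2                       # path set size; p's status quo path

v = rng.uniform(0.0, 1.0, n_paths)           # random utility component v_p^r
V = v.copy()
V[chosen] += 1.0                             # Eq. 58: chosen path gets +1

def pi_given(r_prime):
    I = np.zeros(n_paths)
    I[r_prime] = rng.uniform(0.0, 5.0)       # Eq. 59: boost only the recommended path
    u = V + I
    return np.exp(u) / np.exp(u).sum()       # assumed logit transform (Eq. 21)

for r_prime in range(n_paths):
    print(r_prime, np.round(pi_given(r_prime), 3))
```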
### Parameter settings
The \(\epsilon\)-feasibility and \(\Gamma\)-concentration parameters are set as \(\epsilon=0.05\) and \(\Gamma=0.3\), indicating 5% and 30% variation constraints in mean and variance. The convergence gap threshold for Benders decomposition is set as \(1\times 10^{-8}\). The post-adjustment updating step is set as \(\lambda_{k}=\frac{1}{4}\) based on numerical tests.
### Benchmark models
There are two benchmark path choice scenarios we use for comparison purposes:
**Status-quo path choices**. This scenario provides the status quo situation which does not include any recommendations. It represents the worst case. In this scenario, no behavior uncertainty is considered because this is based on the actual path choices realized by passengers.
**Capacity-based path recommendations**. The capacity-based path recommendations aim to recommend passengers to different paths according to the available capacity of paths. Specifically, for a path in OD pair \((u,v)\) and time \(t\), its capacity is the total available capacity of all vehicles passing through the first boarding station of the path during the time period. For example, for a path consisting of an NS bus route and the Green Line, the path capacity is the total available capacity of all buses at the boarding station of the NS bus route during time interval \(t\). The available capacity can be obtained from a simulation model using historical demand as the input or using historical passenger counting data. The available capacity for the Blue Line (the incident line) depends on the modified operations during the incident (i.e., the service suspension is considered). When no vehicles operate on the Blue Line during time interval \(t\), the path capacity is zero.
### System travel time evaluation
Given a recommendation strategy \(\mathbf{x}\), as mentioned above, the actual system travel time is a random variable because of the passenger behavior uncertainty. To obtain the mean and standard deviation of the system travel time, we generate multiple passenger choice realizations based on \(\mathbf{\pi}\) and \(\mathbf{x}\). For each generated passenger choice (\(\hat{\mathbb{1}}_{p,r^{\prime}}^{r}\)), the realized path flows are
\[\hat{q}_{t}^{u,v,r}=\sum_{p\in\mathcal{P}_{t}^{u,v}}\sum_{r^{\prime}\in \mathcal{R}^{u,v}}x_{p,r^{\prime}}\cdot\hat{\mathbb{1}}_{p,r^{\prime}}^{r}\quad \forall(u,v,r)\in\mathcal{F},t\in\mathcal{T}. \tag{60}\]
The system travel time for the above passenger choice realization is calculated by solving the optimal flow problem (Eq. 16) with the constraints \(q_{t}^{u,v,r}=\hat{q}_{t}^{u,v,r}\) for all \((u,v,r)\in\mathcal{F}\) and \(t\in\mathcal{T}\). This process is repeated with multiple realizations, providing the sample mean and standard deviation of the system travel time under recommendation strategy \(\mathbf{x}\).
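The sampling step of this evaluation is sketched below for a toy instance: each passenger's realized path is drawn from \(\mathbf{\pi}\) conditional on the recommended path, and the draws are aggregated into realized flows as in Eq. 60; each sampled flow vector would then be fixed in the optimal flow problem to obtain one travel-time realization. The paths, probabilities, and passenger counts are illustrative assumptions.

```python
# Monte Carlo sampling of realized path flows (Eq. 60) for a toy instance.
import numpy as np

rng = np.random.default_rng(1)
paths = ["blue_wait", "parallel_bus", "ns_bus_green"]
# pi[r_prime][r]: probability of actually choosing path r when recommended r_prime
pi = {"blue_wait":    [0.70, 0.20, 0.10],
      "parallel_bus": [0.15, 0.75, 0.10],
      "ns_bus_green": [0.10, 0.15, 0.75]}
recommendation = ["parallel_bus"] * 60 + ["ns_bus_green"] * 40   # x_{p,r'}

def realized_flows():
    q_hat = dict.fromkeys(paths, 0)
    for r_prime in recommendation:
        r = rng.choice(len(paths), p=pi[r_prime])   # one passenger's actual choice
        q_hat[paths[r]] += 1
    return q_hat                                    # one realization of Eq. 60

samples = [realized_flows() for _ in range(1000)]
print({p: np.mean([s[p] for s in samples]) for p in paths})
# each sample would be fixed in the optimal flow LP (Eq. 16) to get one STT draw
```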
### Experimental design
As this paper considers various components (such as optimal flow optimization, passengers' path preferences, behavior uncertainty, etc.), it is useful to test different components separately to identify the impact of each one. Hence, we design the following test cases, each one with specific parameter settings to systematically evaluate the impacts of each component.
**Model performance compared to benchmark models**. The most straightforward model validation is to evaluate the effect of reducing system travel time. In this test case, we set \(\Psi=0\), meaning that we ignore the passengers' preferences and focus only on minimizing system travel time. The results of this test case are discussed in Section 6.2.
**The benefit of considering behavior uncertainty**. In this test case, we evaluate the importance of incorporating behavior uncertainty in the model. The model without behavior uncertainty assumes that passengers take the recommended path. The recommendation strategy is obtained by solving Eq. 31 with \(\pi_{p,r^{\prime}}^{r}=1\) if \(r=r^{\prime}\). Similarly, we set \(\Psi=0\). Note that, when we evaluate the recommendation strategy, the behavior uncertainty is still considered in generating the system travel time (see Section 5.4). The results of this test case are shown in Section 6.3.
**Impact of considering passenger preference**. In all the above tests, \(\Psi=0\) is used, focusing on the system travel time. In this test case, we evaluate the model performance under different values of \(\Psi\) in order to assess the impact of considering passenger preferences. The results of this test case are discussed in Section 6.4.
## 6 Results
### Model convergence
Figure 7 shows the convergence of the BD algorithm. As expected, the lower bound of the model keeps increasing, while the upper bound, after dropping significantly in early iterations, exhibits some fluctuations. The model converges after 28 iterations with a relative gap of less than \(1\times 10^{-8}\). The number of optimality cuts was 28, and no feasibility cut was generated.
Table 1 compares the computational time of the Benders decomposition and off-the-shelf solvers. The BD algorithm was implemented in Julia 1.6 with the Gurobi 9.1 solver (Gurobi Optimization, LLC 2021) on a personal computer with an i9-9900K CPU. The total computational time is 17.8 seconds (master problem 8.2 seconds + subproblem 9.6 seconds), which is more efficient than directly using mixed-integer programming (MIP) solvers, including Gurobi (Gurobi Optimization, LLC 2021), CPLEX (Cplex 2009), GLPK (GNU Linear Programming Kit) (Makhorin 2008), and CBC (Coin-or branch and cut) (Forrest and Lougee-Heimer 2005).
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Solver & CPU time (sec) & Gap & Solver & CPU time (sec) & Gap \\ \hline BD & 17.8 & 0.000\% & Gurobi & 55.1 & 0.000\% \\ CPLEX & 65.7 & 0.000\% & CBC & 425.4 & 0.000\% \\ GLPK & 562.6 & 0.000\% & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Computational time comparison
Figure 7: Convergence of the Benders decomposition
### Model performance compared to benchmark models
In this section, we compare the system travel time under the proposed individual path recommendations (without post-adjustment) and two benchmark models. All travel times (except for the status quo, which is deterministic) are calculated based on 10 replications using randomly sampled actual path choices given the recommendation (see Section 5.4).
Table 2 shows that the proposed model (IPR) significantly reduces the average travel time in the system compared to the status quo. Specifically, there is a 6.6% reduction in the travel times of all passengers in the system, and for passengers in the incident line (i.e., passengers who received the recommendation, \(\mathcal{P}\)), the average travel time reduction is 19.0%. Our model also outperforms the capacity-based benchmark path recommendation strategy, which reduces the travel time of all passengers by 2.5% and of incident line passengers by 15.9%. It is also worth noting that the standard deviation is small, meaning that variations due to behavior uncertainty are not significant.
### Benefits of considering behavior uncertainty
In this section, we aim to compare the model with and without considering the behavior uncertainty. The model without behavior uncertainty assumes that all passengers follow the recommended path when designing the recommendation (but they may not in reality).
Table 3 shows the comparison of average travel times for the two models. As expected, considering behavior uncertainty in the path recommendation design achieves smaller travel times for all passengers and for incident line passengers. Note that, though the 0.93% reduction (around 15 seconds saved per passenger) is relatively small, considering the large number of passengers in the system, the total travel time savings are still significant.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{2}{c}{Average travel time (all passengers)} & \multicolumn{2}{c}{Average travel time (incident line passengers)} \\ \cline{2-5} & Mean (min) & Std. (min) & Mean (min) & Std. (min) \\ \hline Status quo & 28.318 & N.A. & 40.255 & N.A. \\ Capacity-based & 27.609 (-2.5\%) & 0.033 & 33.848 (-15.9\%) & 0.165 \\ IPR model & 26.457 (-6.6\%) & 0.018 & 32.626 (-19.0\%) & 0.187 \\ \hline \hline \end{tabular}
* Numbers in parentheses represent percentage travel time reduction compared to the status quo
\end{table}
Table 2: Average travel time comparison for different models
### Impact of respecting passenger's prior preferences
In this section, we evaluate the impact of different values of \(\Psi\) in terms of respecting passengers' prior preferences. Besides the system travel time, we also evaluate the total utility, defined as the sum of the prior utilities of the recommended path:
\[TU(\mathbf{x})=\sum_{p\in\mathcal{P}}\sum_{r\in\mathcal{R}_{p}}x_{p,r}\cdot V_{p,r}. \tag{61}\]
Note that the maximum value of \(TU(\mathbf{x})\) is achieved when every passenger is recommended with their preferred path (i.e., the path with the highest prior utility, \(V_{p,r}\)). Denote this maximum value as \(TU^{\text{max}}\). The relative ratio of total utility, \(\frac{TU(\mathbf{x})}{TU^{\text{max}}}\), represents the fraction of the total (prior) utility that the recommendation has achieved.
Another indicator is the number of passengers recommended their preferred path (denoted as \(NP(\mathbf{x})\)). Similarly, we also define the proportion of passengers recommended their preferred path (i.e., \(\frac{NP(\mathbf{x})}{|\mathcal{P}|}\), where \(|\mathcal{P}|=5{,}827\) in the case study).
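For concreteness, the two indicators can be computed as in the short sketch below; the utilities and recommendations are random stand-ins, not the case-study data.

```python
# Computing TU(x)/TU_max (Eq. 61) and NP(x)/|P| for synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(7)
V = rng.uniform(0.0, 2.0, size=(5827, 4))    # prior utilities V_{p,r} (stand-in)
rec = rng.integers(0, 4, size=5827)          # recommended path index per passenger

TU = V[np.arange(len(rec)), rec].sum()       # Eq. 61
TU_max = V.max(axis=1).sum()                 # everyone gets their preferred path
NP = int((rec == V.argmax(axis=1)).sum())    # passengers recommended their favorite
print(TU / TU_max, NP / len(rec))
```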
Figure 8 shows the results for different values of \(\Psi\). The x-axis is plotted on a log scale. In Figure 8(a), the average travel time for all passengers and incident-line passengers increases with \(\Psi\), as expected, because a larger value of \(\Psi\) means that the recommendation generation focuses more on satisfying passengers' inherent preferences rather than minimizing the system travel time. Similarly, in Figure 8(b), as expected, both \(TU(\mathbf{x})\) and \(NP(\mathbf{x})\) increase with \(\Psi\). When \(\Psi=10^{5}\), the average travel time of the incident line passengers increases by 21.3%, which is close to the status quo scenario. This is because we generate passengers' prior utilities based on the status quo choices. Figure 8(b) shows that nearly all passengers in \(\mathcal{P}\) are recommended their preferred path when \(\Psi=10^{5}\).
Figure 8 illustrates the trade-off between respecting passengers' preferences and reducing system congestion. When the value of \(\Psi\) is relatively small (e.g., less than \(10^{3}\)), increasing \(\Psi\) can effectively increase the total utility and number of passengers recommended with their preferred
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{2}{c}{Average travel time (all passengers)} & \multicolumn{2}{c}{Average travel time (incident line passengers)} \\ \cline{2-5} & Mean (min) & Std. (min) & Mean (min) & Std. (min) \\ \hline IPR model (w.o. BU) & 26.706 & 0.026 & 32.852 & 0.122 \\ IPR model (w. BU) & 26.457 (-0.93\%) & 0.018 & 32.626 (-0.69\%) & 0.187 \\ \hline \hline \end{tabular}
* Numbers in parentheses represent percentage travel time reduction compared to the IPR model w.o. BU
\end{table}
Table 3: Average travel time comparison with and without behavior uncertainty (BU)
path, while the system travel time only slightly increases. When \(\Psi\) is large (e.g., greater than \(10^{4}\)), however, increasing \(\Psi\) significantly increases the system travel time while its impact on passenger utility is limited. The reason may be that some passengers' preferred paths are not at the capacity bottlenecks. Hence, when \(\Psi\) is small, the optimal solution recommends those passengers their preferred paths without significantly impacting the system travel time. When \(\Psi\) is large, passengers are recommended their preferred paths even if these paths are highly congested, causing a significant increase in the system travel time. The results imply that a reasonable value of \(\Psi\) should be relatively small. With a small \(\Psi\), most passengers (e.g., more than 70%) are recommended their preferred paths without significantly reducing system efficiency.
## 7 Conclusion and discussion
This study proposes a mixed-integer programming formulation to model the individual-based path recommendation problem during PT service disruptions with the objective of minimizing total system travel time and respecting passengers' path choice preferences. Passengers' behavior uncertainty in path choices given recommendations is also considered in the formulation. We first formulate the optimal flow distribution problem in PT systems as a linear program, which outputs the optimal path flows for each OD pair and time interval that minimize the total system travel time. Then, we model the behavior uncertainty based on passengers' prior preferences and posterior path choice probability distributions with two new concepts: \(\epsilon\)-feasible flows and \(\Gamma\)-concentrated flows, which control the mean and variance of path flows in the optimization problem. We show
Figure 8: Impact of different values of \(\Psi\) on results. The percentage change in Figure (a) is compared with the scenario of \(\Psi=0\). The percentage in parentheses in Figure (b) represents the relative ratio of total utility and proportion of passengers recommended with their preferred path, respectively.
that these two concepts can be transformed into linear constraints using Chebyshev's inequality. Besides, we show that these two concepts can be seen as a way of approximating the recourse function (expected system travel time) in a two-stage stochastic optimization. It is proved that these two concepts help to bound the difference between the approximated recourse function and the exact one. Additional theoretical analysis shows that \(\epsilon\)-feasibility and \(\Gamma\)-concentration are approximations of expectation and chance constraints in a typical stochastic optimization formulation, respectively. The individual path recommendation problem with behavior uncertainty is solved efficiently using Benders decomposition (BD). The master problem of BD is a small-scale integer program, and the subproblem of BD reduces to the optimal flow problem, which is a linear program. BD is more efficient than many off-the-shelf MIP solvers.
The proposed approach is demonstrated in a case study using data from a real-world urban rail disruption in the CTA system. The results show that the proposed IPR model significantly reduces the average travel times in the system compared to the status quo. Specifically, there is a 6.6% reduction in travel times for all passengers in the system. Passengers in the incident line (i.e., passengers who received the recommendation), experience a 19.0% average travel time reduction. Our model also outperforms the capacity-based benchmark path recommendation strategy. Compared to the model that assumes all passengers would follow the recommendations, considering behavior uncertainty in the path recommendation design can achieve smaller system travel time. In terms of respecting passengers' preferences, we show that it is possible that most of the passengers (e.g., more than 70%) are recommended with their preferred paths while only increasing the system travel time by 0.51%.
Following the discussion in Appendix B, future studies can be pursued in the following directions. First, as shown in Appendix B.1, it is possible to extend the current framework with more complex recommendation compositions. The challenges in implementing the more general framework stem from the quantification of the posterior path choice probabilities. Future studies may conduct corresponding surveys to calibrate passengers' responses to the recommendations. Besides, future studies may consider different sources of uncertainty (including incident duration, in-vehicle time, demand, etc.) for a more realistic modeling framework.
## 8 Acknowledgement
The authors would like to thank the Chicago Transit Authority (CTA) for their support and data availability for this research.
## Appendix A Notation
## Appendix B Model extension
In this section, we discuss several extensions of the model to accommodate more realistic/general scenarios.
### Generalization of recommendations
In this study, we assume the information given to passengers is a recommended path. In reality, the recommendation system may provide a bundle of recommended paths with information like estimated in-vehicle time, waiting time, travel cost, etc. The proposed framework can be extended to handle different recommendation typologies. Figure 9 shows an example where the recommendation system will provide a composition of path and travel time information, where each composition can include different paths, different estimated waiting/in-vehicle times, etc. Then, we can change \(x_{p,r}\) to \(x_{p,c}\), where \(x_{p,c}\) indicates whether we will present composition \(c\) to passenger \(p\). Similarly, each \(c\) is associated with a conditional probability \(\pi_{p,c}^{r}\) as shown in Figure 9 (the probability for passenger \(p\) to choose path \(r\) given that he/she is recommended composition \(c\)).
In this way, we only need to calibrate \(\pi^{r}_{p,c}\) and predetermine the composition set \(\mathcal{C}_{p}\) for each passenger \(p\). The overall framework proposed above can be easily adapted to the new recommendation typology by replacing \(x_{p,r}\) and \(\pi^{r}_{p,r^{\prime}}\) with \(x_{p,c}\) and \(\pi^{r}_{p,c}\), respectively.
### Feedback and rolling-horizon
As mentioned in Section 3.2, the whole path recommendation problem should be solved in a rolling-horizon manner. At each time interval \(t\geq 1\), we update the demand, supply, and system state information, and solve the proposed framework above to get a recommendation strategy \(\mathbf{x}\). But we only implement the \(x_{p,r}\) for \(p\in\mathcal{P}^{u,v}_{t}\), \(\forall(u,v)\) (i.e., passengers departing at current time \(t\)).
The rolling horizon requires updating the estimated demand and system state information. The recommendation system can ask for passenger feedback to facilitate the estimation. For example, after providing a recommendation, we can ask the passenger to respond whether he/she will actually use it or not. This feedback can be used to update the demand predictions.
## Appendix C Proof of Proposition 1
From the triangle inequality, we have:
\[\underbrace{|Q^{u,v,r}_{t}-q^{u,v,r}_{t}|}_{\text{LHS}} \leq|Q^{u,v,r}_{t}-\mu^{u,v,r}_{t}(\mathbf{x})|+|\mu^{u,v,r}_{t}(\mathbf{x })-q^{u,v,r}_{t}|\] \[\leq\underbrace{|Q^{u,v,r}_{t}-\mu^{u,v,r}_{t}(\mathbf{x})|+\epsilon ^{u,v,r}_{t}}_{\text{RHS}} \tag{62}\]
As LHS \(\leq\) RHS, the probability measure satisfies (for all \(a>\epsilon^{u,v,r}_{t}\)):
\[\mathbb{P}[\text{LHS}\geq a]\leq\mathbb{P}[\text{RHS}\geq a] \tag{63}\]
Notice that
\[\mathbb{P}[\text{RHS}\geq a]=\mathbb{P}\left[|Q^{u,v,r}_{t}-\mu^{u,v,r}_{t}( \mathbf{x})|\geq a-\epsilon^{u,v,r}_{t}\right]\leq\frac{(\sigma^{u,v,r}_{t}(\mathbf{ x}))^{2}}{(a-\epsilon^{u,v,r}_{t})^{2}} \tag{64}\]
Eq. 64 is based on Chebyshev's inequality. Therefore,
\[\mathbb{P}[\text{LHS}\geq a]=\mathbb{P}[|Q^{u,v,r}_{t}-q^{u,v,r}_{t}|\geq a] \leq\frac{(\sigma^{u,v,r}_{t}(\mathbf{x}))^{2}}{(a-\epsilon^{u,v,r}_{t})^{2}} \tag{65}\]
Comparing Eqs. 65 and 26, we know that to satisfy Eq. 26, we only need \(\sigma^{u,v,r}_{t}(\mathbf{x})\leq\Gamma^{u,v,r}_{t}\), which completes the proof.
Figure 9: Illustration of the generalized recommendation typology. \(\mathcal{C}_{p}\) is the predetermined recommendation composition sets for passenger \(p\)
## Appendix D Proof of Lemma 1
\(STT(\hat{\mathbf{q}})\) is obtained by solving a linear program in which \(\hat{\mathbf{q}}\) is a parameter in the constraints. For every \(\hat{\mathbf{q}}\geq 0\), the problem is feasible because of the physical meaning of the optimal flow problem (i.e., flows can always be assigned to the network as long as the system has enough capacity, that is, dispatches enough vehicles). Hence, the lemma directly follows from Theorem 1 in Martin (1975), which applies Berge's Maximum Theorem to parametric linear programming.
## Appendix E Proof of Proposition 2
\[\left|\mathbb{E}_{\mathbf{Q}|_{\mathbf{x}^{*}}}\left[STT(\mathbf{Q}|_{\mathbf{x}^{*}})\right]-STT(\mathbf{q}^{*})\right|\leq\sum_{\hat{\mathbf{q}}\in\mathcal{Q}(\mathbf{x}^{*})}\left|STT(\hat{\mathbf{q}})-STT(\mathbf{q}^{*})\right|\cdot\mathbb{P}_{\mathbf{Q}|_{\mathbf{x}^{*}}}(\hat{\mathbf{q}})\leq\sum_{\hat{\mathbf{q}}\in\mathcal{Q}(\mathbf{x}^{*})}L\cdot\|\hat{\mathbf{q}}-\mathbf{q}^{*}\|_{1}\cdot\mathbb{P}_{\mathbf{Q}|_{\mathbf{x}^{*}}}(\hat{\mathbf{q}}) \tag{66}\]
Let us divide the support of the random variable \(\mathbf{Q}|_{\mathbf{x}^{*}}\) into three mutually exclusive subsets:
\[\mathcal{Q}(\mathbf{x}^{*})^{\text{Leq}} =\mathcal{Q}(\mathbf{x}^{*})\cap\{\hat{\mathbf{q}}:\,0\leq\hat{\mathbf{q}}<\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]-\mathbf{\epsilon}\} \tag{67}\] \[\mathcal{Q}(\mathbf{x}^{*})^{\text{Mid}} =\mathcal{Q}(\mathbf{x}^{*})\cap\{\hat{\mathbf{q}}:\,\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]-\mathbf{\epsilon}\leq\hat{\mathbf{q}}\leq\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]+\mathbf{\epsilon}\}\] (68) \[\mathcal{Q}(\mathbf{x}^{*})^{\text{Geq}} =\mathcal{Q}(\mathbf{x}^{*})\cap\{\hat{\mathbf{q}}:\,\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]+\mathbf{\epsilon}<\hat{\mathbf{q}}\leq\mathbf{q}^{\text{Max}}\} \tag{69}\]
where \(\mathcal{Q}(\mathbf{x}^{*})=\mathcal{Q}(\mathbf{x}^{*})^{\text{Leq}}\cup\mathcal{Q}( \mathbf{x}^{*})^{\text{Mid}}\cup\mathcal{Q}(\mathbf{x}^{*})^{\text{Geq}}\)
We can calculate the summation over these three subsets separately:
**(1) Bounds on the summation over \(\mathcal{Q}(\mathbf{x}^{*})^{\text{Leq}}\)**:
\[\sum_{\hat{\mathbf{q}}\in\mathcal{Q}(\mathbf{x}^{*})^{\text{Leq}}}L\cdot\|\hat{\mathbf{q}}-\mathbf{q}^{*}\|_{1}\cdot\mathbb{P}_{\mathbf{Q}|_{\mathbf{x}^{*}}}(\hat{\mathbf{q}})\leq\sum_{\hat{\mathbf{q}}\in\mathcal{Q}(\mathbf{x}^{*})^{\text{Leq}}}L\cdot\left(\|\hat{\mathbf{q}}-\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]\|_{1}+\|\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]-\mathbf{q}^{*}\|_{1}\right)\cdot\mathbb{P}_{\mathbf{Q}|_{\mathbf{x}^{*}}}(\hat{\mathbf{q}}) \tag{70}\]
which follows from the triangle inequality. Notice that
\[\sum_{\hat{\mathbf{q}}\in\mathcal{Q}(\mathbf{x}^{*})^{\text{Leq}}}L\cdot \|\hat{\mathbf{q}}-\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]\|_{1}\cdot\mathbb{P}_{\mathbf{Q} _{|\mathbf{x}^{*}}}(\hat{\mathbf{q}})\leq L\cdot\sum_{\hat{\mathbf{q}}\in\mathcal{Q}(\mathbf{x }^{*})^{\text{Leq}}}\|\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]\|_{1}\cdot\mathbb{P}_{ \mathbf{Q}_{|\mathbf{x}^{*}}}(\hat{\mathbf{q}})\] \[=L\cdot\|\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]\|_{1}\cdot\mathbb{P} \big{[}\mathbf{Q}|_{\mathbf{x}^{*}}\leq\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]-\mathbf{\epsilon} \big{]}\leq L\cdot\|\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]\|_{1}\cdot\|\mathbf{\Gamma}\| _{2}^{2} \tag{71}\]
where the last inequality is the result of the following:
\[\mathbb{P}\left[\mathbf{Q}|_{\mathbf{x}^{*}}\leq\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]-\mathbf{\epsilon}\right]\leq\mathbb{P}\left[\exists i:\,\left|Q_{i}|_{\mathbf{x}^{*}}-\mathbb{E}[Q_{i}|_{\mathbf{x}^{*}}]\right|\geq\epsilon_{i}\right]\leq\sum_{i\in\mathcal{F}\times\mathcal{T}}\mathbb{P}\left[\left|Q_{i}|_{\mathbf{x}^{*}}-\mathbb{E}[Q_{i}|_{\mathbf{x}^{*}}]\right|\geq\epsilon_{i}\right]\leq\sum_{i\in\mathcal{F}\times\mathcal{T}}\Gamma_{i}^{2}=\|\mathbf{\Gamma}\|_{2}^{2} \tag{72}\]
where the inequalities follow from the union bound and the \(\Gamma\)-concentration property. Similarly, we have
\[\sum_{\hat{\mathbf{q}}\in\mathcal{Q}(\mathbf{x}^{*})^{\text{Leq}}}L\cdot\| \mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]-\mathbf{q}^{*}\|_{1}\cdot\mathbb{P}_{\mathbf{Q}_{|\mathbf{x }^{*}}}(\hat{\mathbf{q}})\leq\sum_{\hat{\mathbf{q}}\in\mathcal{Q}(\mathbf{x}^{*})^{\text{Leq }}}L\cdot\|\mathbf{\epsilon}\|_{1}\cdot\mathbb{P}_{\mathbf{Q}_{|\mathbf{x}^{*}}}(\hat{\mathbf{q}})\] \[=L\cdot\|\mathbf{\epsilon}\|_{1}\cdot\mathbb{P}\big{[}\mathbf{Q}|_{\mathbf{x }^{*}}\leq\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]-\mathbf{\epsilon}\big{]}\leq L\cdot\| \mathbf{\epsilon}\|_{1}\cdot\|\mathbf{\Gamma}\|_{2}^{2} \tag{73}\]
where the first inequality is due to the \(\epsilon\)-feasibility.
Therefore, combining Eqs. 71 and 73 leads to
\[\sum_{\hat{\mathbf{q}}\in\mathcal{Q}(\mathbf{x}^{*})^{\text{Leq}}}L\cdot\|\hat{\mathbf{q}}- \mathbf{q}^{*}\|_{1}\cdot\mathbb{P}_{\mathbf{Q}_{|\mathbf{x}^{*}}}(\hat{\mathbf{q}})\leq L\cdot \big{(}\left\|\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]\|_{1}+\|\mathbf{\epsilon}\|_{1} \right)\cdot\|\mathbf{\Gamma}\|_{2}^{2} \tag{74}\]
(2) Bounds on the summation over \(\mathcal{Q}(\mathbf{x}^{*})^{\text{Mid}}\):
\[\sum_{\hat{\mathbf{q}}\in\mathcal{Q}(\mathbf{x}^{*})^{\text{Mid}}}L\cdot\|\hat{\mathbf{q}}-\mathbf{q}^{*}\|_{1}\cdot\mathbb{P}_{\mathbf{Q}|_{\mathbf{x}^{*}}}(\hat{\mathbf{q}})\leq\sum_{\hat{\mathbf{q}}\in\mathcal{Q}(\mathbf{x}^{*})^{\text{Mid}}}L\cdot\left(\|\hat{\mathbf{q}}-\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]\|_{1}+\|\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]-\mathbf{q}^{*}\|_{1}\right)\cdot\mathbb{P}_{\mathbf{Q}|_{\mathbf{x}^{*}}}(\hat{\mathbf{q}})\leq\sum_{\hat{\mathbf{q}}\in\mathcal{Q}(\mathbf{x}^{*})^{\text{Mid}}}L\cdot\left(\|\mathbf{\epsilon}\|_{1}+\|\mathbf{\epsilon}\|_{1}\right)\cdot\mathbb{P}_{\mathbf{Q}|_{\mathbf{x}^{*}}}(\hat{\mathbf{q}})\leq 2L\cdot\|\mathbf{\epsilon}\|_{1} \tag{75}\]
(3) Bounds on the summation over \(\mathcal{Q}(\mathbf{x}^{*})^{\text{Geq}}\):
Similar to the proof of \(\mathcal{Q}(\mathbf{x}^{*})^{\text{Leq}}\), notice that
\[\sum_{\hat{\mathbf{q}}\in\mathcal{Q}(\mathbf{x}^{*})^{\text{Geq}}}L\cdot\|\hat{\mathbf{q}}-\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]\|_{1}\cdot\mathbb{P}_{\mathbf{Q}|_{\mathbf{x}^{*}}}(\hat{\mathbf{q}})\leq L\cdot\sum_{\hat{\mathbf{q}}\in\mathcal{Q}(\mathbf{x}^{*})^{\text{Geq}}}\left\|\mathbf{q}^{\text{Max}}\right\|_{1}\cdot\mathbb{P}_{\mathbf{Q}|_{\mathbf{x}^{*}}}(\hat{\mathbf{q}})=L\cdot\left\|\mathbf{q}^{\text{Max}}\right\|_{1}\cdot\mathbb{P}\left[\mathbf{Q}|_{\mathbf{x}^{*}}\geq\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]+\mathbf{\epsilon}\right]\leq L\cdot\left\|\mathbf{q}^{\text{Max}}\right\|_{1}\cdot\left\|\mathbf{\Gamma}\right\|_{2}^{2} \tag{76}\]
Combining Eq. 76 with the \(\epsilon\)-feasibility bound on \(\|\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]-\mathbf{q}^{*}\|_{1}\) (as in Eq. 73), we have
\[\sum_{\hat{q}\in\mathcal{Q}(\mathbf{x}^{*})^{\text{Geq}}}L\cdot\|\hat {\mathbf{q}}-\mathbf{q}^{*}\|_{1}\cdot\mathbb{P}_{\mathbf{Q}_{\mathbf{\mid}\mathbf{x}^{*}}}(\hat{ \mathbf{q}}) \leq\sum_{\hat{q}\in\mathcal{Q}(\mathbf{x}^{*})^{\text{Geq}}}L\cdot( \left\|\hat{\mathbf{q}}-\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]\right\|_{1}+\left\| \mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]-\mathbf{q}^{*}\right\|_{1})\cdot\mathbb{P}_{\mathbf{ Q}_{\mathbf{\mid}\mathbf{x}^{*}}}(\hat{\mathbf{q}})\] \[\leq L\cdot\big{(}\left\|\mathbf{q}^{\text{Max}}\right\|_{1}+\left\| \mathbf{\epsilon}\right\|_{1}\big{)}\cdot\left\|\mathbf{\Gamma}\right\|_{2}^{2} \tag{77}\]
In summary, combining the summations over the three mutually exclusive sets, we have:
\[\left|\mathbb{E}_{\mathbf{Q}|_{\mathbf{x}^{*}}}[STT(\mathbf{Q}|_{\mathbf{x}^{*}})]-STT(\mathbf{q}^{*})\right|\leq 2L\cdot\left\|\mathbf{\epsilon}\right\|_{1}+L\cdot\left(\left\|\mathbb{E}[\mathbf{Q}|_{\mathbf{x}^{*}}]\right\|_{1}+\left\|\mathbf{q}^{\text{Max}}\right\|_{1}+2\left\|\mathbf{\epsilon}\right\|_{1}\right)\cdot\left\|\mathbf{\Gamma}\right\|_{2}^{2} \tag{78}\]
## Appendix F Proof of Proposition 3
When \(\mathbf{\epsilon}=0\), we have \(\mathbf{y}=\mathbb{E}[\mathbf{Y}]\). Then:
\[G_{\text{EP}}(\mathbf{\epsilon}=0)=\min_{\mathbf{\theta}}\{g(\mathbb{E}[\mathbf{Y}]):\,h_{j}(\mathbb{E}[\mathbf{Y}])\leq b_{j},\,\forall j\in\mathcal{J}\} \tag{79}\]
According to Jensen's inequality, we have:
\[g(\mathbb{E}[\mathbf{Y}])\leq\mathbb{E}[g(\mathbf{Y})],\quad h_{j}(\mathbb{E}[\mathbf{Y}] )\leq\mathbb{E}[h_{j}(\mathbf{Y})],\,\forall j\in\mathcal{J} \tag{80}\]
Therefore, the proposed approach has a smaller objective function and a larger feasible space, which makes it a lower bound of the stochastic optimization problem (Eq. 43).
## Appendix G Proof of Proposition 4
**Step 1:** We first show that if \(\text{Var}[\mathbf{Y}]\) is bounded, then \(\text{Var}[h_{j}(\mathbf{Y})]\) is also bounded.
Notice that for any random variable \(X\), we have \(\text{Var}[X]=\mathbb{E}[X^{2}]-(\mathbb{E}[X])^{2}\leq\mathbb{E}[X^{2}]\). Hence, if we take \(X=h_{j}(\mathbf{Y})-h_{j}(\mathbb{E}[\mathbf{Y}])\), we get
\[\text{Var}[h_{j}(\mathbf{Y})-h_{j}(\mathbb{E}[\mathbf{Y}])]=\text{Var}[h_{j}(\mathbf{Y})] \leq\mathbb{E}[(h_{j}(\mathbf{Y})-h_{j}(\mathbb{E}[\mathbf{Y}]))^{2}] \tag{81}\]
From the Lipschitz continuity of \(h_{j}(\cdot)\), we have
\[\left|h_{j}(\mathbf{Y})-h_{j}(\mathbb{E}[\mathbf{Y}])\right|\leq C\left\|\mathbf{Y}- \mathbb{E}[\mathbf{Y}]\right\|_{2} \tag{82}\]
which further yields:
\[\mathbb{E}[(h_{j}(\mathbf{Y})-h_{j}(\mathbb{E}[\mathbf{Y}]))^{2}]\leq C^{2}\mathbb{E}[ \left\|\mathbf{Y}-\mathbb{E}[\mathbf{Y}]\right\|_{2}^{2}]=C^{2}\cdot\sum_{i=1}^{n} \mathbb{E}\left[(Y_{i}-\mathbb{E}[Y_{i}])^{2}\right]=C^{2}\sum_{i=1}^{n}\text{ Var}[Y_{i}] \tag{83}\]
Combining with Eq. 81, we have
\[\text{Var}[h_{j}(\mathbf{Y})]\leq C^{2}\sum_{i=1}^{n}\text{Var}[Y_{i}]\leq C^{2}\cdot \left\|\mathbf{\Gamma}\right\|_{2}^{2} \tag{84}\]
**Step 2:** We then show that \(\text{Var}[h_{j}(\mathbf{Y})]\leq C^{2}\cdot\left\|\mathbf{\Gamma}\right\|_{2}^{2}\) can lead to an approximation for the chance constraint if \(\mathbf{\Gamma}\) is sufficiently small.
Consider Chebyshev's inequality, for a given positive number \(a>0\):
\[\mathbb{P}[\left|h_{j}(\mathbf{Y})-\mathbb{E}[h_{j}(\mathbf{Y})]\right|>a]\leq\frac{ \text{Var}[h_{j}(\mathbf{Y})]}{a^{2}} \tag{85}\]
Eq. 85 implies
\[\mathbb{P}[\left|h_{j}(\mathbf{Y})-\mathbb{E}[h_{j}(\mathbf{Y})]\right|\leq a]\geq 1- \frac{\text{Var}[h_{j}(\mathbf{Y})]}{a^{2}}\,\Rightarrow\,\mathbb{P}[h_{j}(\mathbf{Y} )\leq a+\mathbb{E}[h_{j}(\mathbf{Y})]]\geq 1-\frac{\text{Var}[h_{j}(\mathbf{Y})]}{a^{2}} \tag{86}\]
Since we know that \(\mathbb{E}[h_{j}(\mathbf{Y})]\leq b_{j}\), Eq. 86 yields:
\[\mathbb{P}[h_{j}(\mathbf{Y})\leq a+b_{j}]\geq 1-\frac{\text{Var}[h_{j}(\mathbf{Y})]}{ a^{2}}\geq 1-\left(\frac{C}{a}\right)^{2}\cdot\left\|\mathbf{\Gamma}\right\|_{2}^{2} \tag{87}\]
Let us pick \(a=\frac{C\left\|\mathbf{\Gamma}\right\|_{2}}{\sqrt{1-\eta}}\); then we have
\[\mathbb{P}\left[h_{j}(\mathbf{Y})\leq\frac{C\left\|\mathbf{\Gamma}\right\|_{2}}{\sqrt {1-\eta}}+b_{j}\right]\geq\eta \tag{88}\]
Therefore, when \(\mathbf{\Gamma}\) is sufficiently small, we would have \(\mathbb{P}\left[h_{j}(\mathbf{Y})\leq\frac{C\left\|\mathbf{\Gamma}\right\|_{2}}{\sqrt {1-\eta}}+b_{j}\right]\approx\mathbb{P}[h_{j}(\mathbf{Y})\leq b_{j}]\). In this case, we derive the chance constraints from the \(\Gamma\)-concentration constraints.
## Appendix H Inference of status quo choices
The status quo path choice inference method is based on our previous study (Mo et al., 2022), which is also similar to the trip-train method used for destination inference in open public transit systems (i.e., no tap-out).
**[In the system when the incident happens]**: Consider a passenger \(p\in\mathcal{P}\) with an incident line tap-in record before the end of the incident, meaning that he/she was in the transit system when the incident happened. We then track his/her next tap-in record. If the next tap-in is a transfer at a nearby bus or rail station, we can identify the chosen path based on the transfer station. We can also identify a waiting passenger if he/she continues to use the incident line to the intended destination inferred from the next tap-in records.
**[Out of the system when the incident happens]**: A passenger \(p\in\mathcal{P}\) with only a tap-in record at nearby bus or rail stations may have been affected by the incident and changed the tap-in station, or may simply have been using the service as a normal commute. To identify whether he/she was affected, we extract his/her travel histories on previous days without incidents to get the normal commute trajectories. If the tap-in time and location on the incident day have never appeared in the historical records before, we treat him/her as a passenger affected by the incident and identify the chosen path based on the tap-in station.
For passengers in \(\mathcal{P}\) without next tap-in records or travel histories, we randomly assign him/her a status quo path based on the proportion of inferred passengers. |
2307.05538 | Advancements in Scientific Controllable Text Generation Methods | The previous work on controllable text generation is organized using a new
schema we provide in this study. Seven components make up the schema, and each
one is crucial to the creation process. To accomplish controlled generation for
scientific literature, we describe the various modulation strategies utilised
to modulate each of the seven components. We also offer a theoretical study and
qualitative examination of these methods. This insight makes possible new
architectures based on combinations of these components. Future research will
compare these methods empirically to learn more about their strengths and
utility. | Arnav Goel, Medha Hira, Avinash Anand, Siddhesh Bangar, Rajiv Ratn Shah | 2023-07-08T15:22:29Z | http://arxiv.org/abs/2307.05538v1 | # Advancements in Scientific Controllable Text Generation Methods
###### Abstract
The previous work on controllable text generation is organized using a new schema we provide in this study. Seven components make up the schema, and each one is crucial to the creation process. To accomplish controlled generation for scientific literature, we describe the various modulation strategies utilised to modulate each of the seven components. We also offer a theoretical study and qualitative examination of these methods. This insight makes possible new architectures based on combinations of these components. Future research will compare these methods empirically to learn more about their strengths and utility.
**Keywords:** Controllable text generation, Neural text generation, Natural Language Processing, Sequence to Sequence models, Transformers
Footnote 1: These authors contributed equally to this work.
## 1 Introduction
The amount of free text on the Internet is enormous, many orders of magnitude greater than the number of labelled benchmark data sets. Modern language models (LMs) are trained at a huge scale using unsupervised Web data. When generating samples from a language model, we have minimal influence over the resulting text's desired topic, style, sentiment, and other properties. Controlled text creation means producing coherent sentences while maintaining
control over various properties. These properties encompass elements such as style, sentiment, formality and intent; demographic aspects like gender or age and the organization of events or information, such as the arrangement of plot summaries. By manipulating multiple text generation properties, diverse objectives can be accomplished. Areas of focus in the field of dialogue response generation include the control of persona, where efforts have been made to manipulate and shape the character or identity expressed in the generated dialogues [1; 2], controlling various response characteristics such as politeness [3], formality, intent [4; 5], etc., response grounding in fixed information [6; 7; 8], and controlling topic sequence [9; 10].
Despite the large volume of past research on controllable text generation, there is no overarching topic or subject that embraces it all. Each study focuses on specific tasks within particular contexts. Consequently, the challenge lies in determining how to guide a potent unconditioned language model in accordance with personal preferences and desired outcomes. In this study, we provide a novel paradigm that links earlier research and sheds light on many facets of controlled text synthesis. The schema comprises 7 modules that span the pipeline; it explains how each part affects the generating process and describes each technique and method in a way that might provide better insights. We also offer an analysis of the parallels between earlier work and the particular schema elements we present here.
The proposed schema in this study, illustrated in Figure 1, comprises seven modules designed to facilitate the production process. The initial module, labeled as **(1) External Inputs**, serves as the starting point for the generation process. At each generational time step \(t\), the input is sourced from the **(2) Sequential Inputs** module. Furthermore, this input can be simultaneously shared and trained with the **(3) Discriminator** module to acquire valuable feedback on the utilized inputs. However, it is important to note that the discriminator should not be employed indefinitely during the development of new architectures.
The **(4) Encoding Operations** module is responsible for performing consistent operations or computations on each input at every time step. These computations are then propagated forward, generating relevant outputs through the **(5) Decoding Strategies** module. Subsequently, the **(6) Output** is projected onto the vocabulary space in order to predict the next token. Finally, the **(7) Training Objectives** module handles the necessary loss functions for training the generator.

Figure 1: Modules Schema for the scientific controllable text generation process.
In summary, the proposed schema consists of interconnected modules, each contributing to a different aspect of the generation process. The flow begins with external inputs and progresses through sequential inputs, encoding operations, decoding strategies, and output projection, ultimately guided by training objectives.
This schema explains the contributions of the various modules as well as the techniques and methods for controlling text production. This work focuses on using this schema to explain controlled text production, specifically emphasising the application of auto-regressive and uncontrollable language models. Through this effort, new model architectures based on the schema may be designed. This may be accomplished by selecting promising approaches and techniques for each module before fusing them together. This research also presents the related work in text creation, spotlighting earlier work while introducing the techniques.
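As a concrete, if deliberately simplistic, illustration of how the schema's modules map onto an auto-regressive decoding loop, the sketch below runs an off-the-shelf GPT-2 and injects control only at the decoding stage via a toy logit boost for a chosen token. Everything about the control signal here is an assumption for demonstration, not a method from the surveyed literature.

```python
# Toy mapping of the schema onto a GPT-2 greedy decoding loop; the logit
# boost is an illustrative stand-in for module (5) Decoding Strategies.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "Recent work on controllable text generation"          # (1) external input
ids = tok(prompt, return_tensors="pt").input_ids                 # (2) sequential inputs
boost = tok(" scientific", add_special_tokens=False).input_ids   # tokens to favour

with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits[0, -1]                        # (4) encoding operations
        logits[boost] += 4.0                                     # (5) toy decoding control
        next_id = torch.argmax(logits).view(1, 1)                # (6) output projection
        ids = torch.cat([ids, next_id], dim=1)

print(tok.decode(ids[0]))
```

A discriminator (module 3) or a training objective (module 7) would enter, respectively, as a feedback signal on the model's representations and as an extra loss term during fine-tuning; both are omitted in this decoding-only sketch.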
## 2 Related Work
The initial study on related work generation by Hoang et al. [11] predates the widespread use of neural networks. Their work focused on summarization, specifically the development of ReWoS (Related Work Summarization). ReWoS employed two distinct strategies for generating summaries: General Content Summarization (GCSumm) and Specific Content Summarization (SCSumm). As a heuristic-based system, ReWoS effectively mapped these strategies to the topic tree structure, which served as the input for the summarization process.
In a subsequent study, Wang et al. [12] explored the explicit extraction of Cited Text Spans (CTS) from cited publications. These CTSs were specific text fragments within the referenced work that were closely related to a particular citation. In addition to utilizing a topic model, the researchers employed a two-layered ensemble model for classifying and extracting the CTS. They employed a greedy algorithm to select candidate phrases, creating connected parts of the text and forming coherent sections. This approach aimed to enhance the controllability and accuracy of the collected and retrieved data.
Addressing the challenge of automatically generating citation texts within academic works, Xing et al. [13] proposed a novel approach. Due to a scarcity of training data, the researchers annotated a dataset and trained an implicit citation extraction algorithm. They suggested the use of a multi-source pointer-generation network with cross-attention mechanisms to effectively tackle this issue. This method allowed for more precise and efficient automatic generation of citation texts.
In contrast to the work by Ge et al. [14] and Chen et al. [15], Luu et al. [16] aims to utilize citation sentences as a form of partial supervision for
elucidating the connections between two scientific articles. They explore two approaches in their study. Firstly, they fine-tune the GPT2 model on scientific texts using a specific dataset and employ a neural network to generate concise summaries. Secondly, they establish a direct relationship between the target papers and the articles that both cite them and are cited by them by extracting the citation sentences.
When it comes to constructing the related work section, Chen et al. [15], building upon the configuration proposed by Hu et al. [17], utilize the title's keywords to identify relevant themes. They leverage a discriminative feature graph to select pertinent sentences. To overcome the computational complexity of the set cover problem, they introduce a greedy approximation technique that prioritizes phrases containing the most uncovered information at each step. This technique facilitates the efficient construction of the related work section while ensuring the inclusion of critical details from the available sources.
Ge et al. [14] extended the approach proposed by Xing et al. [13] by incorporating a graph attention network (GAT) to encode the citation network information. They also utilized a hierarchical bidirectional LSTM to store the citation context and the abstracts of the cited papers.
Beltagy et al. [18] introduced SciBERT, a pre-trained language model specifically designed for scientific text based on BERT. This model was trained and evaluated on various scientific tasks and datasets from diverse fields. In evaluations, SciBERT outperformed BERT-Base on several tasks, particularly in computer science, achieving new state-of-the-art (SOTA) results with improvements of +3.55 and +0.49 F1 scores through fine-tuning. The performance of SciBERT even surpassed the results of the previously published BioBERT [19] on biomedical tasks.
Abura et al. [20] employ a straightforward sequence-to-sequence strategy, utilising traditional architectures to generate citation sentences from the title and abstract of the referenced work. For this objective, they use the Transformer, OpenNMT-py, and the Pointer-Generator Network (PGN); the PGN incorporates copy-attention and coverage mechanisms.
Jaidka et al. [21] examined 20 literature reviews obtained from journal articles published in JASIST to establish fundamental knowledge and comprehension of the referenced papers, source papers, and literature review papers. To grasp and emulate the intricacies of human writing in literature reviews, the researchers meticulously analysed linguistic and content attributes present in reviews extracted from esteemed journals within the field of information science. The study aimed to gain a deeper understanding of the stylistic and substantive aspects of this scholarly discourse by subjecting the reviews to a thorough evaluation. This endeavour sought to pave the way for the development of automated systems that can mimic the quality and essence of human-authored literature reviews.
Galactica [22] is an LLM capable of retaining, integrating, and applying scientific data through its reasoning abilities. This proficiency was developed through extensive training utilizing scientific publications, books, databases,
and other resources. Galactica performs better on various scientific problems than the current models. Galactica's corpus is excellent and carefully managed, unlike other language models, which rely on a crawl-based paradigm that is not properly vetted. Without overfitting, they could train on it for several epochs. Upstream and downstream performance increases with the usage of repeated tokens.
Wu et al. [23], Jung et al. [5] and Gu et al. [4] improve upon the limitations in existing citation scientific text generation systems such as those proposed by Xing et al. [13] and Luu et al. [16]. The limitations include not accommodating the ability of authors to summarise multiple studies into one citation sentence and the ability of the authors to control the intent of the citation.
According to Cohan et al. [24], the classification of citation texts involves categorizing them into three intent classes: "Background," "Method," and "Result." In this context, when paper \(B\) cites paper \(A\), "Background" indicates that \(B\) is utilizing \(A\)'s idea as a foundational element for its own work. "Method" refers to \(B\) adopting \(A\)'s methodology, such as experimentation or dataset preparation. On the other hand, "Result" signifies \(B\)'s comparison of its findings with those obtained by \(A\). These classifications are incorporated in the Sci-Cite dataset, and a structural scaffold is employed to accomplish the categorization.
Gu et al. [4] provide the SciCCG dataset, where each citation text is between 5 and 200 words. To prepare the baselines, an intent was randomly chosen from among the three, keywords were chosen using KeyBERT [25], and relevant sentences using SentenceBERT [26]; both are extraction techniques based on BERT token embeddings. This baseline is then compared with the attribute suggestor's keyword and sentence extractor, which performs much better due to the triplet loss used while fine-tuning it. The overall citation-generation system achieved very high accuracy compared to ground-truth citations when all three attributes, along with the contextual input, were fed in. In human evaluation, the model was preferred over other existing systems and showed appropriate usage of the suggested keywords and sentences.
## 3 External Inputs
In this section, we go through the many methods that may be used to modify the initialization of the generator so as to regulate the encoding and decoding process. In the conventional generation method, external inputs are equivalent to sequential inputs.
### Decompose
It is possible to divide the encoder representation into several subspaces, each representing a property that may be manipulated. The encoder representation is divided into two parts by Liu et al. [27], one of which reflects the document's structure and the other of which contains its semantic content. Balachandran et al. [28] utilized this approach to manage structure in abstractive summarization. This work splits the encoder representation along its dimensions: the first \(n\) dimensions represent the document's structure and the last \(n\) dimensions represent its meaning. Additionally, Balachandran et al. [28] provide quantitative and qualitative analyses of the various document structures that may be learned using this method.
For scientific citation text generation, a document is represented by its abstract and title. The encoder representation is divided into subspaces for different permutations of the citing and cited paper. Xing et al. [13] decompose the input across two different encoders, using only the citing paper's context and the cited paper's abstract to generate relevant target citations; both are treated as separate sequences of words, encoded by the two encoders into two separate hidden-state vectors. Jung et al. [5] encode the two different documents into the same input text, varying the presence of tokens representing the abstract and title of the citing paper and thus encoding the two separately. Gu et al. [4] decompose the encoder representation into several subspaces, adding a subspace to store the local context of the citing paper, where local context refers to the tokens of the five sentences before the target citation. Encoder representations can thus store local context to improve text generation at specific target places.
### Arithmetic and Linear Transform
Concatenating a control vector to the encoder's output is one of the simplest ways to regulate generation. The external input of the decoder will then be [encoder output; control vector], where \([a;\ b]\) denotes concatenation. Here, the control vector delivers a potent signal to the generator to direct the generating process.
In Fu et al. [29], the encoder creates style-free representations that are solely kept content. The encoder representation is then joined to the style control vector to initialize the decoder. The approach described here is commonly employed to merge information obtained from external sources with the context of a discussion to generate coherent dialogue responses [6; 7; 8]. This method is also used to add controls to consider user preferences. Gu et al. [4] prepends the attribute tokens such as intent, keywords and relevant sentences to encoder input before the local and global context to guide the citation generation process. Wu et al. [23] and Jung et al. [5] similarly prepend intent tokens and corresponding control codes to ensure intent-controlled citation generation for research papers.
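A minimal sketch of this concatenation scheme is shown below; the module names and dimensions are hypothetical, not taken from any of the cited systems:

```python
import torch
import torch.nn as nn

# Hypothetical dimensions and module names, for illustration only.
d_enc, d_style, n_styles = 512, 64, 2
style_embedding = nn.Embedding(n_styles, d_style)

def decoder_init(encoder_output: torch.Tensor, style_id: torch.Tensor) -> torch.Tensor:
    """Build the decoder's initial state as [encoder output; control vector]."""
    s = style_embedding(style_id)                   # control vector, shape [B, d_style]
    return torch.cat([encoder_output, s], dim=-1)   # shape [B, d_enc + d_style]
```

The decoder then simply consumes this enlarged vector as its initial hidden state (after a linear projection if the dimensions do not match).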
### Stochastic Changes
The variational auto-encoder, introduced by Kingma et al. [30], makes it possible to stochastically draw a continuous latent variable from a Gaussian distribution. This latent variable serves as the foundation for the initialization of the generator. Bowman et al. [31] use this idea to produce sentences from such a continuous latent representation. A Kullback-Leibler (KL) divergence term must be included in the training objective when the encoder state is modified in this way.
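A minimal sketch of the stochastic step and the KL term, assuming a diagonal Gaussian posterior and leaving the encoder and decoder abstract:

```python
import torch

def sample_latent(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def kl_term(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian."""
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
```

The total objective is then the reconstruction loss plus (a possibly weighted) `kl_term`, and the sampled `z` initializes the generator.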
## 4 Sequential Inputs
In this part, we go through various methods for influencing the sequential input to the decoder at each time step:
### Arithmetic and Linear Transform
Sequential inputs can undergo the same procedure as the arithmetic and linear transforms for external inputs. By incorporating a control vector \(s\), we can modify the input to the decoder in the same way as we alter the initialization: the data at each time step is concatenated with the control vector. During training, the generator commonly employs the teacher forcing technique [32]. At each time step \(t\), the decoder predicts the word to be generated, \(y_{t}\), based on the input word embedding \(x_{t}\), where \(x_{t}\) corresponds to the previous word \(y_{t-1}\). To add control, the input \(x_{t}\) is concatenated with the control vector \(s\) at each time step \(t\), resulting in \(\hat{x}_{t}=[x_{t};\;s]\).
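A minimal sketch of this per-step concatenation under teacher forcing; the embedding table, GRU cell, projection layer, and all dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical modules: an embedding table, a GRU cell, and an output projection.
embed = nn.Embedding(10000, 256)
cell = nn.GRUCell(256 + 64, 512)      # input = [x_t; s] with a 64-dim control vector
proj = nn.Linear(512, 10000)

def teacher_forced_loss(tgt_ids, s, hidden):
    """tgt_ids: [B, T] gold tokens; s: [B, 64] control vector; hidden: [B, 512]."""
    loss = 0.0
    for t in range(tgt_ids.size(1) - 1):
        x_hat = torch.cat([embed(tgt_ids[:, t]), s], dim=-1)   # \hat{x}_t = [x_t; s]
        hidden = cell(x_hat, hidden)
        loss = loss + F.cross_entropy(proj(hidden), tgt_ids[:, t + 1])
    return loss
```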
## 5 Discriminator
This part discusses the discriminator's function in creating controlled text. Although the discriminator is not always required for training the architecture, its inclusion can offer helpful feedback and enhance the resulting text's content.
### External Feedback
Controlling the external input to the generator is frequently done with a regularizer. A common external-feedback approach is to employ an adversarial loss to alter the latent space. This effectively controls the encoder's latent space, which is then given to the generator as its initialization. A multi-layer perceptron is employed in Fu et al. [29] to predict the input style labels. Similarly, Wang et al. [12] also use an adversarial loss to regulate the latent representation of style characteristics. To ensure that the meaning representation remains devoid of style indicators, a dedicated loss function is employed in the study conducted by Romanov et al. [33]: a discriminator takes a representation as input and assesses whether it incurs the specified loss, and, like the adversarial loss, an analogous motivational loss verifies that the style representation truly includes the stylistic information. The cross-entropy loss is used by Gu et al. [4] to fine-tune SciBERT [18] to steer the generation process depending on intent, and a triplet loss is utilised to fine-tune and direct generation based on user-suggested keywords and sentences. This use of loss functions as external feedback ensures that the model generates citation text that exhibits these features and reads more naturally than other generated texts.
## 6 Encoding Options
One step in the generator process is the choice of encoding. We will briefly review some of the encoding choices most likely applied to developing controlled text generation.
### Gradient Based Search
_Auto Prompt [34]:_
A novel technique is employed to automate the generation of prompts for various tasks through the use of gradient-based search. This method involves creating a prompt that incorporates task-specific inputs and trigger tokens within a predefined template. These trigger tokens, which are shared among all inputs, possess universal applicability.
To identify the universal trigger tokens, a gradient-guided search approach similar to the one utilized in Wallace et al. [35] is adopted. By optimizing the trigger tokens with respect to the desired output using a universal configuration, all inputs from a given dataset can contribute to the process. Initially, each trigger token's embedding is set to a default value, which is subsequently adjusted to minimize the first-order Taylor expansion of the task-specific loss at the current token embedding.
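The token-swap scoring step can be sketched as follows; the variable names are hypothetical, and this shows only the first-order ranking heuristic, not the full AutoPrompt pipeline:

```python
import torch

def candidate_triggers(embedding_matrix: torch.Tensor,
                       trigger_grad: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Rank replacement tokens for one trigger position.

    First-order Taylor: loss(e_w) ~ loss(e_trig) + (e_w - e_trig) . grad,
    so tokens minimizing e_w . grad are the most promising swaps.
    embedding_matrix: [V, d]; trigger_grad: [d] = dL/de at the current trigger.
    """
    scores = -(embedding_matrix @ trigger_grad)   # higher = larger estimated loss drop
    return torch.topk(scores, k).indices
```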
_Prefix Tuning:_
Intelligent prompt design creates effective context, which may result in the desired completion. Motivated by this phenomenon, Li et al. [36] proposed the notion of Prefix-Tuning. This method initialises a limited set of trainable parameters at the start of an input sequence, known as the "prefix," which is used to guide a language model toward more controlled and focused outputs.
_P-Tuning:_
P-Tuning [37] is a technique that trains continuous prompt embeddings, with its own choices of trainable parameters and architecture. Unlike Prefix-Tuning, P-Tuning inserts the trainable prompts only at the input rather than at every layer, and it addresses optimization challenges related to discreteness and association by encoding the prompt with a small prompt encoder.
_Prompt Tuning:_
Prompt tuning [38] simplifies the concept of prefix tuning by limiting the number of changeable tokens prepended to the input text for each downstream operation to a maximum of \(k\). This approach achieves results similar to model fine-tuning, even for large models with billions of parameters. This finding is noteworthy since optimizing and executing large models during inference can be computationally expensive. Prompt tuning proves beneficial for transfer learning when adapting to new domains with learned task-specific parameters,
outperforming fine-tuning in addressing domain shift concerns. Additionally, the study demonstrates that prompt ensembling, which involves combining multiple prompts for the same task, further improves performance.
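A minimal sketch of soft-prompt tuning in the spirit of [38]; only the prompt parameters would be updated while the backbone model stays frozen, and all names and dimensions are assumptions:

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """k trainable prompt vectors prepended to the frozen model's input embeddings."""
    def __init__(self, k: int, d_model: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(k, d_model) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: [B, T, d_model] -> [B, k + T, d_model]
        p = self.prompt.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return torch.cat([p, input_embeds], dim=1)
```

During training, the optimizer would receive only `SoftPrompt.parameters()`, which is what makes the method cheap for very large backbones.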
### Recurrent Neural Networks
Recurrent neural networks (RNNs) operate sequentially, processing elements one by one while considering previous computations. This iterative nature enables them to maintain a form of memory, facilitating the propagation of contextual information across the network. Although RNNs can theoretically handle arbitrarily long sequences of data, in practice, they often struggle with dependencies beyond a few time steps. To address this limitation, Long Short-Term Memory (LSTM) [39] units were introduced as a variant of RNNs, incorporating specialized "memory cells" in addition to the standard units. These memory cells enable the retention of information over extended periods, and a set of gates control the input, output, and forgetting of information within the memory. This architectural design allows LSTMs to capture longer-term dependencies effectively. The vanishing gradient problem commonly encountered in RNNs is mitigated by these advancements.
Another variation of RNNs, known as gated recurrent units (GRUs), employs a similar foundational structure to LSTMs to capture correlations on dynamically adapting time scales, likewise using a gating mechanism to regulate the flow of information. In contrast to LSTMs, which utilize a separate memory cell and more gates, GRUs achieve comparable functionality without a separate memory cell and with fewer gates.
Researchers such as Wen et al. [40] have made modifications to LSTMs, introducing methods to control the generation process by incorporating dialogue-act information. In various text generation tasks, RNNs, LSTMs, and GRUs are commonly employed, as evidenced by the works of Prabhumoye et al. [41], Rao et al. [42], See et al. [43], Zhou et al. [6], and Fu et al. [29]. Despite the utility of these models, their ability to handle long sequences remains a challenge, often necessitating the integration of attention mechanisms over the original sequence to enhance performance.
### Transformer
The utilization of the Transformer model, as described by Vaswani et al. [44], enables the establishment of global connections between input and output by leveraging attention mechanisms. Within the Transformer architecture, both the encoder and decoder components consist of multiple layers of self-attention and fully connected layers. Specifically, the encoder comprises N identical layers, each containing two sublayers: a self-attention mechanism with multiple heads in the first sublayer, and a position-wise fully connected feed-forward network in the second sublayer. Layer normalization and residual connections are applied to each of these sublayers. Additionally, the decoder includes an additional third sublayer that performs multi-head attention over the output of the encoder stack.
By employing an attention mechanism as its core component, the decoder in the Transformer model can selectively focus on any position within the input sequence. Consequently, computations across the sequence can be parallelized, resulting in improved efficiency. However, it is important to note that, unlike RNN computing units, the Transformer's computing units have not been extensively explored with specialized modifications and parameters for controlling features such as style, dialogue act, and so on.
### Pre-trained Language Model
Newly pre-trained conditional language models, including XLNet [45], GPT [46], GPT-2 [47], and GPT-3 [48], have been widely utilized in text generation tasks. Researchers have made efforts to enhance these pre-trained models for specific controlled text generation tasks in various studies [7; 49; 50]. However, a recent model called Galactica [22] has outperformed the more recent GPT-3 model in technical knowledge-related evaluations, achieving a higher score of 68.2% compared to 49.0%.
While pre-trained models often generate fluent and grammatically correct content, adapting them for sequence-to-sequence applications like machine translation and abstractive summarization can be challenging. One model that excels in text generation when modified is the denoising autoencoder BART [51], which utilizes a sequence-to-sequence architecture. On the other hand, T5 [52] treats every natural language processing (NLP) problem as a "text-to-text" problem, where it takes text as input and produces new text as output, making it well-suited for controlled text generation tasks.
Another approach for controlled language generation is the Plug and Play Language Model (PPLM) introduced by Dathathri et al. [53]. PPLM combines a pre-trained language model with one or more attribute classifiers, eliminating the need for training from scratch and allowing it to drive text generation based on specific attributes.
Large Language Models (LLMs) are trained on a large corpus of general text and have been performing extremely well on text-generation tasks, giving promising performance on downstream tasks such as complex reasoning, problem-solving, and question-answering. PaLM [54] is a 540-billion-parameter, dense decoder-only transformer model; combined with fine-tuning techniques like Chain-of-Thought Prompting [55], it shows remarkable performance on reasoning- and understanding-based text generation tasks. LLaMA [56] is a collection of LLMs with parameters ranging from 7 billion to 65 billion. It has been fine-tuned to create Alpaca [57], trained on 52K instruction-following demonstrations generated with the technique called self-instruct [58]. This technique uses a smaller policy language model to create demonstrations similar to those in an existing dataset, producing a larger corpus of data with which the bigger language model can be fine-tuned.
These large language models perform well on text generation tasks because they are trained on a large corpus of texts using attention-based transformers, which helps ensure fluency and richness in the output text. Additionally, they can be fine-tuned using techniques such as prompting to show remarkable improvements on domain-specific downstream tasks. However, these language models often suffer from problems such as bias, toxicity, and hallucination, which can severely degrade accuracy in certain cases.
## 7 Decoding Strategies
These strategies are not employed as training objectives; many of them are post-hoc decoding techniques applied at inference time. In this section, we discuss decoding methods such as top-k sampling, nucleus sampling, and variants of beam search.
Greedy Search:A straightforward approximation is greedy search, which chooses the most likely word at each step of the output sequence. The decoder's output is projected onto the entire vocabulary space, and we compute the softmax probabilities for each candidate token \(y_{t}\) at time step \(t\); we then choose the token with the highest softmax probability, i.e., the word with maximum likelihood at each step \(t\)[5]. Another technique, given by [4], finds keywords by ranking candidate keywords' embeddings based on their cosine similarity with the contextual text embeddings. It uses this ranking to fine-tune SciBERT on a triplet loss [59] and chooses the most highly ranked keyword as the output. These are all greedy methods of decoding and choosing outputs, also known as **greedy decoding**. They have the advantage of being very fast, but the quality of the final output sequences might not be at its best.
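A minimal greedy-decoding loop, assuming a HuggingFace-style model whose forward pass returns `.logits`; the names are illustrative:

```python
import torch

@torch.no_grad()
def greedy_decode(model, input_ids, max_new_tokens=50, eos_id=2):
    """Append the argmax token at each step until EOS or the length limit."""
    for _ in range(max_new_tokens):
        logits = model(input_ids).logits[:, -1, :]          # last-step distribution
        next_id = logits.argmax(dim=-1, keepdim=True)       # most likely token
        input_ids = torch.cat([input_ids, next_id], dim=-1)
        if (next_id == eos_id).all():
            break
    return input_ids
```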
Beam Search:The greedy search solution might not output the best-quality sentence because it only makes locally optimal choices. Hence, the beam search technique is used. With a constrained bandwidth, it effectively performs a breadth-first search, one token per tree level. Beam search extends all descendants of the best candidates at each level of the search tree and keeps only a fixed number of best candidates (referred to as the "beam width") at each level. Beam search may stop extending a node when it encounters the EOS (end-of-sentence) token. However, high-quality generation is not guaranteed by maximization-based decoding.
Top-K Sampling:In their work, Fan et al. [60] presented a straightforward yet highly effective sampling technique known as Top-K sampling. This approach distributes the
probability mass exclusively among the top \(K\) words predicted to have the highest likelihood. GPT2 incorporated this sampling strategy, which played a significant role in enhancing its performance in narrative generation.
In Figure 2, we limit the selection pool to 4 words in both sampling steps with \(K=4\). While the first step covers roughly two-thirds of the entire probability mass, the second step encompasses practically all of it. Figure 3 depicts the removal of unnecessary terms such as 'little', 'large', and 'not'.
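A minimal sketch of the Top-K filtering-and-sampling step (the value `k=4` mirrors the example above; names are assumptions):

```python
import torch

def top_k_sample(logits: torch.Tensor, k: int = 4) -> torch.Tensor:
    """Redistribute probability mass over the k most likely tokens, then sample."""
    topk_vals, _ = torch.topk(logits, k)
    cutoff = topk_vals[..., -1, None]
    filtered = logits.masked_fill(logits < cutoff, float('-inf'))
    return torch.multinomial(torch.softmax(filtered, dim=-1), num_samples=1)
```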
_Top-P (Nucleus) Sampling:_
Holtzman et al. [61] introduced the concept of Top-p sampling, which differs from traditional methods by selecting words based on their cumulative probability exceeding a threshold value, denoted as \(p\), instead of solely considering the \(K\) most likely words. This approach applies a renormalized probability distribution to the selected set of words. Consequently, the size of the word set, or the number of words within it, can vary dynamically depending on the probability distribution associated with the subsequent word.
Top-p sampling selects the fewest words whose cumulative probability surpasses \(p\); with \(p=0.91\), the words are chosen as top-p candidates once the probability mass surpasses \(91\%\). In Figure 4, the approach includes the five most likely terms, whereas in Figure 5 it only has to select the top three words to surpass \(91\%\). It can be seen that the method retains a large set of candidate words when the next word is arguably less predictable, such as \(input=\) 'The', and just a few words when the next word is more predictable, such as \(input=\) 'The car'.
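A minimal sketch of nucleus filtering, using the same threshold \(p=0.91\) as the example above; variable names are assumptions:

```python
import torch

def top_p_sample(logits: torch.Tensor, p: float = 0.91) -> torch.Tensor:
    """Keep the smallest set of tokens whose cumulative probability exceeds p."""
    sorted_logits, sorted_idx = torch.sort(logits, descending=True)
    probs = torch.softmax(sorted_logits, dim=-1)
    cum = torch.cumsum(probs, dim=-1)
    remove = cum - probs > p                    # drop tokens beyond the nucleus
    sorted_logits[remove] = float('-inf')
    filtered = torch.full_like(logits, float('-inf')).scatter(-1, sorted_idx, sorted_logits)
    return torch.multinomial(torch.softmax(filtered, dim=-1), num_samples=1)
```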
Penalized Sampling:A unique sampling technique was introduced by the CTRL study [62] to address the issue of repetitions in generated text. This approach penalizes the scores of previously generated tokens, effectively discouraging the generation of duplicate substrings and mitigating the common failure scenario associated with such repetitions.
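A sketch of such a repetition penalty follows; note that the sign-aware handling of negative logits is an assumption borrowed from common implementations, whereas CTRL itself simply discounts the scores of already-generated tokens:

```python
import torch

def apply_repetition_penalty(logits: torch.Tensor, generated_ids, theta: float = 1.2):
    """Discount the scores of tokens that were already generated (logits: [V])."""
    for tid in set(generated_ids):
        logits[tid] = logits[tid] / theta if logits[tid] > 0 else logits[tid] * theta
    return logits
```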
Guided Decoding:Traditional decoding methods rely on sampling tokens solely based on their likelihood without considering any additional information. However, customizing the candidate ranking score can influence the generated samples based on specific preferences related to subject matter or attitude. By incorporating a selected set of feature discriminators, the ranking score for token selection at each decoding step can be personalized. These discriminators can be designed to evaluate human preferences through heuristics [63], supervised learning [64], or real-world testing using reinforcement learning [65].
Trainable Decoding:Gu et al. [66] proposed a trainable greedy decoding strategy that enables the sampling of sequences from a trained language model in order to optimize a given objective. This method, known as Noisy Parallel Decoding (NPAD), is based on the concept of approximation. To address potential performance degradation, NPAD introduces unstructured noise into the hidden states of the model and performs multiple parallel noisy decodings. Taking this concept further, trainable greedy decoding replaces the unstructured noise with a learnable random variable. A reinforcement learning (RL) agent is employed to predict this random variable using context, previously decoded tokens, and prior hidden states as inputs. In essence, this decoding approach trains an RL actor to manipulate the model's hidden states to achieve desired outcomes.
In a related work, Grover et al. [67] trained a binary classifier to distinguish between samples generated by the data distribution and the generative model. By employing a likelihood-free importance weighting (LFIW) technique, this classifier determines the significance weights required for generating a new unnormalized distribution.
## 8 Output
The output of the generator module is projected into the vocabulary space to forecast the next token during the normal generating process. There are numerous techniques of modifying the sequential output before it is projected into the vocabulary space at each time step \(t\).
### Attention
Attention is a widely employed mechanism that guides the generation process by directing the focus towards the source sequence [68]. In the context of the generator, the attention module takes the current hidden state and aims to identify relevant source-side information, encapsulated in a context vector, to aid in token prediction. In the case of global attention, the encoder's hidden states are taken into account when computing the context vector. However, it is important to note that this approach incurs a high computational cost, particularly for longer source sequences such as documents.
### External Feedback
External feedback can be leveraged to manipulate the latent space of the generator's output. Adversarial loss is one such technique that affects both the output latent space, denoted as \(s\), and the external input \(x\). Logeswaran et al. [69] employ an adversarial loss to encourage the generation of words that are both plausible and compatible with desired attributes. The objective of the adversarial loss is to estimate the distribution of sentence and attribute vector pairs \((x,s)\), where the sentence can be either real or intentionally generated.
## 9 Training Objectives
This section explores various methods for regulating the generation process using objective functions. At each generation step, a linear transformation is applied to project the output into the vocabulary space. By applying a softmax function to the transformed output and selecting the token with the highest probability, a token from the vocabulary can be predicted. The predicted token is then compared to the reference token using a loss function. By manipulating the loss function, the generated text can exhibit desired control properties.
### General Loss
Cross Entropy Loss: Every text generation mechanism uses this fundamental loss to compare the created tokens to the reference tokens. The generator must anticipate a token from the vocabulary at each time step. Therefore, it might be viewed as a classification issue whereby the number of classes equals the vocabulary size.
For intent-controlled scientific text generation, the following is used:
\[L=-\log\frac{\exp(x_{\text{intent}}(i_{\text{true}}))}{\sum_{i\in\text{intents}}\exp(x_{\text{intent}}(i))}\]
Here \(i\) represents the different intents:
* \(i=1\) refers to "background" when one paper summarizes the related work and concepts of the other paper.
* \(i=2\) refers to "method" when one paper uses a certain method or dataset of the other paper.
* \(i=3\) refers to "result" when one paper compares its results with those of the other paper.
\(x_{\text{intent}}(i)\) is the output obtained when the last hidden state of a prepend token, used to connect the local and global context, is input into the intent prediction header.
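A minimal sketch of this intent loss; `intent_logits` stands in for \(x_{\text{intent}}(i)\), and the tensor shapes are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

# Hypothetical: logits from the intent prediction header for the three intents,
# with `labels` holding i_true for each example in a batch of 8.
intent_logits = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
per_example_loss = -F.log_softmax(intent_logits, dim=-1)[torch.arange(8), labels]
loss = per_example_loss.mean()   # -log softmax(x_intent)_{i_true}, averaged
```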
### Prompt Tuning Loss
Prompting is a new technique for fine-tuning large language models (LLMs). One line of work explores graph pre-training frameworks, with the goal of smoothly integrating pre-training with downstream tasks for graph neural networks. A major aspect of this technique is formulating a loss function based on a standardised sub-graph similarity template, which allows adaptive prompts to be optimised through task-specific, prompt-assisted sub-graph representations. The loss function is defined as follows:
\[\mathcal{L}_{\text{prompt}}(p_{t})=-\sum_{(x_{i},y_{i})\in\tau_{t}}\ln(\frac{ \exp{(\text{sim}(s_{(t,x_{i})},\tilde{s}_{(t,y_{i})})/\tau)}}{\sum_{c\in Y} \exp{(\text{sim}(s_{(t,x_{i})},\tilde{s}_{(t,c)})/\tau)}})\]
* \(\tau_{t}\): labelled training set for task \(t\), represented as a set of pairs \((x_{i},y_{i})\)
* \(x_{i}\): an instance or a node in a graph
* \(y_{i}\in Y\): the class label for the corresponding \(x_{i}\)
* \(s_{(t,x_{i})}\): prompt-assisted subgraph representation of instance \(x_{i}\) for task \(t\)
* \(\tilde{s}_{(t,c)}\): representation of the prototypical subgraph for class \(c\)
* \(\tau\): a temperature hyperparameter scaling the similarities (distinct from the training set \(\tau_{t}\))
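Since the loss above is a temperature-scaled softmax over similarities, it can be sketched compactly; the similarity matrix is assumed to be precomputed, and `F.cross_entropy` averages over the batch rather than summing:

```python
import torch
import torch.nn.functional as F

def prompt_tuning_loss(sim: torch.Tensor, labels: torch.Tensor, tau: float = 0.07):
    """sim[i, c] = sim(s_(t, x_i), s~_(t, c)); labels[i] = y_i.

    F.cross_entropy over sim / tau computes the same -log-softmax term
    as the formula above.
    """
    return F.cross_entropy(sim / tau, labels)
```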
### Triplet Loss
The triplet loss function is used in Gu et al. [4] to fine-tune SciBERT, a language model, with the goal of increasing keyword extraction from text. They hoped to improve scientific text production by recommending appropriate features by utilising the triplet loss. The anchor sample \((x_{i})\) is matched with a comparable positive sample \((x_{p})\) and a dissimilar negative sample \((x_{n})\) in the triplet loss formulation. The goal is to reduce the distance between the anchor and the positive samples while increasing the distance between the anchor and the negative samples:
\[\mathcal{L}_{\text{triplet}}=\sum_{i=1}^{N}\max{(d(x_{i},x_{p})-d(x_{i},x_{n}) +\alpha,0)}\]
\(d(a,b)\) represents a distance metric, typically Euclidean distance or cosine similarity.
\(\alpha\) is a margin that defines the desired difference between positive and negative samples.
The term within the \(\max\) function guarantees that the loss is zero only when the distance between the anchor and negative samples exceeds the distance between the anchor and positive samples by at least the margin \(\alpha\); otherwise, a penalty is incurred. There are other varieties of triplet loss, such as semi-hard or hard triplet loss, which try to pick negative samples that are more difficult to discriminate from the anchor sample, resulting in better convergence and more discriminative feature embeddings.
The use of triplet loss has found widespread applications when the primary goal is to get a succinct feature space that can successfully discriminate between various entities, such as face recognition and person re-identification.
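A direct transcription of the formula above (distance metric, margin, and sum reduction as stated; all names are illustrative):

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin: float = 1.0):
    """anchor/positive/negative: [N, d] batches of embeddings."""
    d_ap = F.pairwise_distance(anchor, positive)   # d(x_i, x_p)
    d_an = F.pairwise_distance(anchor, negative)   # d(x_i, x_n)
    return torch.clamp(d_ap - d_an + margin, min=0.0).sum()
```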
### Validation Loss
Validation loss assesses the model during the validation phase. It measures the difference between expected and actual outputs on a separate validation dataset. Validation loss provides insight into the model's performance beyond the training data by examining how well the model generalises to new, unknown data. It aids in detecting over-fitting and under-fitting by guiding changes in model architecture, hyperparameters, and regularisation strategies. Monitoring validation loss aids in developing models that perform effectively in real-world circumstances.
### Training Loss
_Unlikelihood Loss_
A pool of negative candidates is maintained by including frequent tokens and recurring tokens (or n-grams), and this set is continuously updated as each token is generated [70]. The objective is to minimize repetition across generations, accomplished at both the individual token and sequence levels. This approach can be applied to any task during training alongside the primary objective of maximizing the probability of the target.
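A minimal token-level sketch of the unlikelihood term, assuming a single-step logit vector and a precomputed set of negative-candidate token ids:

```python
import torch

def unlikelihood_loss(logits: torch.Tensor, neg_candidates: torch.Tensor,
                      eps: float = 1e-8) -> torch.Tensor:
    """Penalize probability assigned to negative candidates: -log(1 - p(c)).

    logits: [V] one-step distribution; neg_candidates: LongTensor of token ids
    drawn from the negative pool (frequent or previously generated tokens).
    """
    p = torch.softmax(logits, dim=-1)
    return -torch.log(torch.clamp(1.0 - p[neg_candidates], min=eps)).sum()
```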
_KL Divergence Loss_
The Kullback-Leibler (KL) Divergence is a measure of dissimilarity between two probability distributions. It quantifies the difference between distributions \(Q\) and \(P\), denoted as \(KL(P\|Q)\), where the notation \(\|\) represents the divergence of \(P\) from \(Q\). It is important to note that KL Divergence is not symmetric, i.e., \(KL(P\|Q)\neq KL(Q\|P)\).
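A minimal discrete-case sketch, which also illustrates the asymmetry numerically:

```python
import numpy as np

def kl_divergence(p, q, eps: float = 1e-12) -> float:
    """KL(P || Q) = sum_i p_i * log(p_i / q_i) for discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

kl_divergence([0.7, 0.3], [0.5, 0.5])   # ~0.082
kl_divergence([0.5, 0.5], [0.7, 0.3])   # ~0.087 (not symmetric)
```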
_Classifier Loss_
The purpose of this loss is to ensure that the generated tokens possess the desired control attributes. It is important to differentiate this loss, which operates at the token level, from the external feedback loss that operates on latent or hidden representations. The classifier loss employed here is distinct from the external feedback loss utilized in the external input and output modules.
### Task Specific Loss
_Strategy Loss:_
A conversation strategy-based objective is used by Zhou et al. [71] to create replies for negotiation tasks, where ground-truth strategies lead to more effective negotiations. Given the dialogue history, this loss represents the likelihood that a certain strategy will be used in the subsequent utterance. It directs the generator to match specific strategies with specific answers.
_Coverage Loss:_
Text generation systems often face the challenge of producing repeated words or phrases, particularly in tasks involving multi-sentence text generation like abstractive document summarization. To address this issue, See et al. [43] introduced a coverage loss that discourages the model from repeatedly attending to the same locations in the source document.
_Structure Loss:_
In the context of abstractive document summarization, Li et al. [72] proposed two additional loss targets based on sentence-level attention. These objectives, namely structural compression and structural coverage, are specifically designed to improve summarization performance. Structural compression aims to generate a summary sentence by consolidating several distinct source sentences into a concise form. On the other hand, structural coverage focuses on capturing important features from the original text. These loss objectives leverage the structural properties of the document summarization task, evaluating how effectively the generative model can produce shorter and more precise summaries.
## 10 Fine Tuning Models
_Conditional Training:_
Fan et al. [60] trained a conditional language model for two-step story generation: a sketch is generated first, and a narrative writing model then develops a tale based on that sketch. The sketch conditioning is implemented with a fusion model architecture; since the fusion model imposes a form of residual learning, the story-writing model can concentrate on figuring out what the first sketch-generation model lacks. [73] experimented with a story-generation LM conditioned on the valence of the ending.
_RL Fine Tuning:_
It has already been established that fine-tuning sequential models using RL is successful for any arbitrary, non-differentiable reward function [74]. The teacher-forcing strategy in sequence generation has various drawbacks that may be overcome by RL fine-tuning. During training with teacher forcing, the model minimises the maximum-likelihood loss at each decoding step, but during testing it is expected to create the entire sequence from scratch. This mismatch between training and testing can result in exposure bias and cumulative errors. RL fine-tuning helps overcome these challenges by allowing the model to refine its predictions through reinforcement learning, mitigating the discrepancies between the training and testing phases. For example, BLEU for translation [74; 75; 76], ROUGE for summarization [74; 77; 78], and a custom measure for story generation [79] may all be directly optimized by RL fine-tuning.
Li et al. [80] tries to use a policy language model (LM) to fine-tune a black-box LLM on downstream tasks using RL. ROUGE scores are used for making the reward function. This generates better keywords and improves performance on summarization tasks. Peng et al. [81] propose an LLM-Augmenter to enhance language model (LM) performance. Acting as a plug-and-play (PnP) module, it accesses evidence from external databases to improve response generation. Given the impracticality of adjusting the multitude of parameters in large LMs, PnP modules serve as a means to provide automatic feedback and external knowledge, enhancing performance while minimizing computational costs. The suggested approach employs RL to develop a policy function that determines whether to query for evidence, generate a candidate response from the LLM, or deliver the response to the user. This policy augments model-generated responses by comparing them with evidence from the external database. Consequently, the performance of LLMs in addressing queries requiring external databases, such as specific historical questions, is significantly improved.
_RL Fine Tuning with Human Preferences:_
In order to ascertain human preferences, the utilization of reward learning is crucial. While quantitative measurement tools such as BLEU or ROUGE are commonly used to compute the overlap of words and n-gram phrases across sequences, they do not necessarily align with the judgments of human evaluators regarding quality. To address this limitation, the approach of reward learning from human input [82] provides a superior method of aligning evaluation metrics with actual priorities. This approach has been successfully applied in various applications, including narrative production [83] and summarization [84; 85; 86], where human input is used to train a reward function.
Yi et al. [83] collected binary human feedback in four categories for a given dialogue pair (user utterance, system answer), evaluating whether the system response is (1) comprehensive, (2) on topic, (3) interesting, and (4) conducive to continuing the discussion. An evaluator is then trained to anticipate human input and re-rank the beam search samples, thereby improving the model or performing both tasks simultaneously. It is important to note that in their
work, supervised fine-tuning rather than reinforcement learning (RL) fine-tuning was employed, and a discriminator loss derived from the evaluator was utilized.
Prompt-Based Fine-Tuning:Prompting is a technique that enhances model performance and accuracy on specific downstream tasks without adjusting model weights [87]. Zero-shot prompting involves directly supplying a query to the model and observing its response. In few-shot prompting, the model is provided with \(n-1\) query-response examples as a prompt, and in the \(n^{th}\) example, only the query is given. This makes the model follow the given query responses as a template and generate the response similarly. Chain-of-Thought (CoT) prompting has shown advancements in LLM performance for complex reasoning tasks [55]. CoT prompting aims to generate concise sequences of logical statements that progressively lead to problem-solving. However, preparing human-based annotations for CoT prompting is laborious and challenging.
To address this, Shum et al. [88] propose an Augment-Prune-Select strategy to automate the generation of CoT prompts. This approach generates multiple chains of thoughts and prunes them based on their ability to lead to the correct answer. Zhang et al. [89] tries to find patterns in mistakes made by the model and clusters them. It thus prevents the frequency of the same type of error demonstration by the model. Chen et al. [90] proposes Mixture of Soft Prompts (MSP). In-context learning is used to prompt a large black-box language model. Thus, they use the model as a data augmentation tool rather than directly predicting the answer. The exemplars generated from the LLM train a smaller policy LM to generate the final answers. Taori et al. & Li et al. [57; 80] use a policy LM to fine-tune a larger language model by prompting techniques. Using LMs or LLMs to fine-tune LLMs is a well-known technique which has improved their accuracy on various downstream tasks and helped automate the process of creating large datasets.
Unlikelihood Training:During language model training, the maximization of log-likelihood loss can lead to an imbalanced token distribution, which cannot be adequately rectified through decoding techniques alone. Specifically, when deterministic decoding is employed, these models tend to generate high-frequency words while excessively neglecting low-frequency terms. In essence, they exhibit overconfidence in their predictions. To address this issue, the training objective is explicitly modified to penalize less desirable content through unlikelihood training. This approach mitigates the bias towards high-frequency words and promotes more balanced and accurate generation [91].
## 11 Evaluation
Evaluating the generated text is an essential phase in any automated generation process. In this section, we examine how the proposed systems are assessed relative to their baselines.
ROUGE:ROUGE is a widely adopted evaluation method used for assessing the quality of automated summarization. It quantifies the level of overlap between candidate summaries and reference summaries authored by humans. The ROUGE measures encompass various statistics, including ROUGE-L, ROUGE-W, ROUGE-N (e.g., ROUGE-1, ROUGE-2), and ROUGE-S.
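A simplified, recall-oriented ROUGE-N can be sketched in a few lines; real ROUGE implementations add stemming, multi-reference handling, and F-measure variants, so this is illustrative only:

```python
from collections import Counter

def rouge_n_recall(candidate: str, reference: str, n: int = 1) -> float:
    """Simplified ROUGE-N: n-gram recall of the candidate against one reference."""
    def ngrams(text):
        toks = text.split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum((cand & ref).values())        # clipped n-gram matches
    return overlap / max(sum(ref.values()), 1)
```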
BLEU:BLEU is an algorithm commonly employed for evaluating the accuracy of machine-translated content across different natural languages. Typically, individual translated segments or sentences are scored by comparing them against a set of accurate reference translations. The resulting scores are then averaged to provide an approximation of the overall translation quality. It is important to note that BLEU evaluation does not consider grammar or higher-level semantic aspects. The output of BLEU is a value between 0 and 1, where higher scores indicate greater similarity between the candidate text and the reference texts [101].
F1 ScoresThe F1 score is a widely used evaluation metric in machine learning for quantifying the accuracy of a model's predictions on a given dataset. It combines precision and recall scores to provide a comprehensive assessment of the model's performance.
In the context of evaluating generated responses in the Wiki-QA dataset, the model's outputs are evaluated using token-level precision, recall, and F1 scores, which are calculated by comparing them to annotated answers.
BLEURT ScoresThe BLEURT Score is another evaluation metric employed in assessing Natural Language Generation tasks. It takes a pair of sentences as input and produces a score that indicates the fluency of the generated text with respect to a reference text. It also assesses whether the generated text can effectively convey the same meaning as the reference text. The BLEURT Score has demonstrated state-of-the-art agreement with human judgments on machine translation benchmarks.
SciBERT ScoreThe SciBERT Score is designed specifically for evaluating the quality of generated text in the scientific domain, considering its relevance and coherence within that domain. This evaluation metric leverages the domain-specific
knowledge captured by the SciBERT language model [18] by comparing the generated text to a reference text or a set of annotated reference texts. The SciBERT Score serves as a statistical measure for evaluating the effectiveness of scientific and citation text generation methods.
By providing an informative assessment of the generated content's quality and its alignment with the scientific domain, the SciBERT Score enables researchers and developers to compare different models and drive advancements in text generation systems tailored specifically for scientific applications.
MeteorThe evaluation of translation quality using the Metric for Evaluation of Translation with Explicit Ordering (METEOR) involves the calculation of similarity between a machine-generated translation and a reference translation. This similarity assessment is based on comparing n-grams, which are consecutive sequences of n words or tokens within a text. N-grams have proven valuable in various applications, including text generation, text analysis, and sentiment analysis.
When evaluating machine-generated answers, the METEOR metric functions by computing a score that considers the matching of words between the generated output and a provided reference. In cases where multiple references are available, the generated output is independently scored against each reference. Consequently, the pair with the highest score is selected as the most suitable match.
Additional Metrics
* Shum et al. [88] uses **Exact Match Accuracy**. This is done after removing special characters and symbols and checking if the generated citation text has the exact tokens as the ground truth.
* Other measures often used to assess model performance include sentence-BLEU, BERTScore, and COMET.
## 12 Future Work and Recommendations
Although transformer-based language models have significantly improved scientific text generation and other downstream tasks, they continue encountering issues like hallucinations, bias, and inaccuracy. Furthermore, most language models trained on broad web-based corpora (LLMs) struggle with domain-specific tasks and cannot incorporate substantial user input for controllability. Here, we comment on future advancements in controllable scientific text generation and present our recommendations based on the findings of our survey:
Controllable Text Generation:Gu et al. [4] presents a comprehensive approach to multi-system controllable text generation. They propose identifying specific attributes that users can
control to guide the text generation process according to their requirements. These attributes include intent, keywords, and relevant sentences. To enhance the diversity of choices, we suggest fine-grained divisions of intent. For example, the "method" intent can be subdivided into using the dataset and the model described in the cited paper. Moreover, instead of relying on relevant sentences for context, we propose using the specific passage from the referenced paper associated with the intent to generate target citations that provide contextual information. Fine-tuning techniques like Reinforcement Learning from Human Feedback (RLHF) will play an important role here. Currently, proposed systems cannot generate a citation sentence citing multiple documents [23]. This can be added as a feature that the author can control and influence through a prompt-based system.
Language Models:Large Language Models (LLMs) can achieve improved accuracy over pre-existing systems on domain-specific tasks such as scientific text generation when fine-tuned with curated corpora. With the advent of various open-source models with varying numbers of parameters, future work will involve their quantisation to improve access and democratisation. Newer models such as GPT-4, Llama, Alpaca, PaLM [54; 56; 57; 93], and models specifically fine-tuned on scientific texts should be given higher preference when designing systems. Model architectures can also be revisited by opting for different encoding and decoding strategies; newer strategies not solely based on greedy methods promise to make these processes more efficient and accurate.
Prompting:As mentioned earlier, prompting is a valuable technique for fine-tuning large language models to enhance performance without requiring weight updates. One such technique, Chain of Thought Prompting [55] along with other discussed methods, helps address hallucinations and improves the model's reasoning abilities. Recent studies by Long et al. & Zhou et al. [94; 95] introduce novel prompting techniques, namely Least-to-Most Prompting and Tree of Thoughts Prompting, respectively. These techniques build upon the limitations of Chain of Thought Prompting and further improve model performance. By treating text generation as a reasoning task, these techniques hold promise in enhancing the controllability of language models. In a chain-of-thoughts-like manner, keywords and relevant sentences can be incorporated to guide the model towards generating citation text that closely resembles the ground truth.
Retrieval:Lazaridou et al. [96] look at document retrieval to augment LLMs, proposing the use of web search for this purpose; such retrieval has previously been applied to closed-book question answering. Training LLMs on a specific and fixed corpus of data can be challenging. Alternatively, employing retrieval-augmentation methods, as
examined by Soong et al. [97], within a specific domain can simplify the task. These methods retrieve relevant context from domain-specific corpora based on user queries. Subsequently, this extracted information is used as context to seed the LLM, constraining the model's responses to the retrieved text. This innovative technique shows promise in improving model performance and reducing hallucinations by limiting the model's domain space without incurring the time and resource expenses associated with training LLMs.
## 13 Conclusion
The previous work on controllable text generation is organised using a new schema we provide in this study. Seven components make up the schema, and each one is crucial to the creation process. To accomplish controlled generation for scientific literature, we describe the various modulation strategies utilised to modulate each of the seven components. We also offer a theoretical study and qualitative examination of these methods. This insight makes new architectures based on combinations of these components possible. Future research will compare these methods empirically to learn more about their strengths and utility.
|
2304.04376 | ICDAR 2023 Video Text Reading Competition for Dense and Small Text | Recently, video text detection, tracking, and recognition in natural scenes
are becoming very popular in the computer vision community. However, most
existing algorithms and benchmarks focus on common text cases (e.g., normal
size, density) and single scenarios, while ignoring extreme video text
challenges, i.e., dense and small text in various scenarios. In this
competition report, we establish a video text reading benchmark, DSText, which
focuses on dense and small text reading challenges in the video with various
scenarios. Compared with the previous datasets, the proposed dataset mainly
include three new challenges: 1) Dense video texts, a new challenge for video
text spotter. 2) High-proportioned small texts. 3) Various new scenarios, e.g.,
Game, sports, etc. The proposed DSText includes 100 video clips from 12 open
scenarios, supporting two tasks (i.e., video text tracking (Task 1) and
end-to-end video text spotting (Task 2)). During the competition period (opened
on 15th February 2023 and closed on 20th March 2023), a total of 24 teams
participated in the three proposed tasks with around 30 valid submissions,
respectively. In this article, we describe detailed statistical information of
the dataset, tasks, evaluation protocols and the results summaries of the ICDAR
2023 on DSText competition. Moreover, we hope the benchmark will promise video
text research in the community. | Weijia Wu, Yuzhong Zhao, Zhuang Li, Jiahong Li, Mike Zheng Shou, Umapada Pal, Dimosthenis Karatzas, Xiang Bai | 2023-04-10T04:20:34Z | http://arxiv.org/abs/2304.04376v1 | # ICDAR 2023 Video Text Reading Competition for Dense and Small Text
###### Abstract
Recently, video text detection, tracking and recognition in natural scenes are becoming very popular in the computer vision community. However, most existing algorithms and benchmarks focus on common text cases (_e.g.,_ normal size, density) and a single scenario, while ignoring extreme video text challenges, _i.e.,_ dense and small text in various scenarios. In this competition report, we establish a video text reading benchmark, named DSText, which focuses on the dense and small text reading challenge in videos with various scenarios. Compared with previous datasets, the proposed dataset mainly includes three new challenges: 1) dense video texts, a new challenge for video text spotters; 2) high-proportioned small texts; 3) various new scenarios, _e.g.,_ 'Game', 'Sports', etc. The proposed DSText includes 100 video clips from 12 open scenarios, supporting two tasks (_i.e.,_ video text tracking (Task 1) and end-to-end video text spotting (Task 2)). During the competition period (opened on 15th February, 2023 and closed on 20th March, 2023), a total of 24 teams participated in the proposed tasks with around 30 valid submissions. In this article, we describe detailed statistical information of the dataset, tasks, evaluation protocols, and the result summaries of the ICDAR 2023 DSText competition. Moreover, we hope the benchmark will promote video text research in the community.
## I Introduction
Video text spotting [1] has received increasing attention due to its numerous applications in computer vision, _e.g.,_ video understanding [2], video retrieval [3], video text translation, and license plate recognition [4]. There already exist some video text spotting benchmarks, which focus on easy cases, _e.g.,_ normal text size and density in a single scenario. ICDAR2015 (Text in Videos) [5], the most popular benchmark, was introduced during the ICDAR Robust Reading Competition in 2015 and focuses on wild scenarios: walking outdoors, searching for a shop in a shopping street, etc. YouTube Video Text (YVT) [6] contains 30 videos from YouTube; its text categories mainly include overlay text (captions) and scene text (_e.g.,_ driving signs, business signs). RoadText-1K [7] provides 1,000 driving videos, which promote driver assistance and self-driving systems. LSVTD [8] proposes 100 text videos, covering 13 indoor (_e.g.,_ bookstore, shopping mall) and 9 outdoor (_e.g.,_ highway, city road) scenarios, and supports two languages, _i.e.,_ English and Chinese. BOVText [9] establishes a large-scale, bilingual video text benchmark including abundant text types, _i.e.,_ title, caption, or scene text.
However, the above benchmarks still suffer from some limitations: 1) Most text instances present normal text size without much challenge, _e.g.,_ ICDAR2015 (video), YVT, and BOVText. 2) Sparse text density in a single scenario, _e.g.,_ RoadText-1K and YVT, which cannot effectively evaluate the robustness of algorithms to small and dense text. 3) Except for ICDAR2015 (video), most benchmarks present unsatisfactory maintenance: YVT, RoadText-1K, and BOVText did not launch a corresponding competition or release an open-source evaluation script, and the download links of YVT have even become invalid. Poor maintenance does not help the development of video text tasks in the community. To break these limitations, we establish one new benchmark, which focuses on dense and small texts in various scenarios, as shown in Fig. 1. The benchmark mainly supports two tasks, _i.e.,_ video text tracking and end-to-end _video text spotting_, and includes 100 videos with 56k frames and 671k text instances.
Therefore, we organize the ICDAR 2023 Video Text Reading competitive for dense and small text, which generates a large-scale video text database, and proposes video text tracking, spotting tasks, and corresponding evaluation methods. This competition can serve as a standard benchmark for assessing the robustness of algorithms that are designed for video text spotting in complex natural scenes, which is more challenging. The proposed competition and dataset will enhance the related direction (Video OCR) of the ICDAR community from two main aspects:
* Compared to the current existing video text reading datasets, the proposed DSText has some special features and challenges, including 1) abundant scenarios, 2) a higher proportion of small text, and 3) dense text distribution. Tab. I, Fig. 2, Fig. 3, and Fig. 5 present detailed statistical comparisons and analysis.

Fig. 1: **Visualization of DSText.** Different from previous benchmarks, DSText focuses on the dense and small text challenge.
* The competition supports two tasks: video text tracking and end-to-end video text spotting. And we provide comprehensive evaluation metrics, including \(\mathrm{ID_{P}}\), \(\mathrm{ID_{R}}\), \(\mathrm{ID_{F1}}\)[11], MOTA, and MOTP. These metrics are widely used in previous video text benchmarks, such as ICDAR2015 [12, 10]; a minimal sketch of the core tracking metrics is given below. We are proud to report the successful completion of the competition, which has garnered over 25 submissions and attracted wide interest. The submissions have inspired new insights, ideas, and approaches, which promise to advance the state of the art in video text analysis.
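For reference, the core tracking metrics reduce to simple ratios once the error counts are accumulated; this sketch assumes the counts (false negatives, false positives, identity switches, identity-level matches) have already been computed by a matching procedure:

```python
def mota(fn: int, fp: int, id_switches: int, num_gt: int) -> float:
    """CLEAR-MOT accuracy: 1 - (FN + FP + IDSW) / total ground-truth boxes."""
    return 1.0 - (fn + fp + id_switches) / num_gt

def id_f1(idtp: int, idfp: int, idfn: int) -> float:
    """Identity F1 from identity-level true/false positives and false negatives."""
    return 2 * idtp / (2 * idtp + idfp + idfn)
```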
## II Competition Organization
The ICDAR 2023 video text reading competition for dense and small text is organized by a joint team, including Zhejiang University, University of Chinese Academy of Sciences, Kuaishou Technology, National University of Singapore, Computer Vision and Pattern Recognition Unit, Computer Vision Centre, Universitat Autonoma de Barcelona, and Huazhong University of Science and Technology. We organize the competition on the Robust Reading Competition Website 1, which provides the corresponding download links for the datasets, user interfaces for participants, and a submission page for their results.
Footnote 1: [https://rc.cvc.uab.es/?ch=22&com=introduction](https://rc.cvc.uab.es/?ch=22&com=introduction)
### _Competition Schedule_
The official competition schedule is as follows:
* December 1, 2022: Website online.
* February 1, 2023: Sample training videos available.
* February 15, 2023: Release of full training set and ground truth.
* March 15, 2023: Test set is available, and website opens for results submission.
* March 20, 2023: Deadline of the competition; result submission closes.
* March 31, 2023: Submission deadline for 1-page competition report, and the final ranking will be released after checking the results.
* August 21-26, 2023: Presentation of results at the ICDAR 2023 Conference.
Overall, after removing duplicate submissions, we received 30 valid submissions from 24 teams from both research communities and industries for the two tasks.
## III Dataset
### _Dataset and Annotations_
**Dataset Source.** The videos in DSText are collected from three parts: 1) 30 videos sampled from the large-scale video text dataset BOVText [9]. BOVText, the largest video text dataset with various scenarios, includes a mass of small and dense text videos. We select the top \(30\) videos with small and dense texts via the average text area of the video and the average number of texts per frame. 2) \(10\) videos of driving scenes collected from RoadText-1k [7]. As shown in Tab. I, RoadText-1k contains abundant small texts; thus we select \(10\) videos to enrich the driving scenario. 3) \(60\) videos of street-view scenes collected from YouTube. In addition to BOVText and RoadText-1k, we obtain \(60\) videos with dense and small texts from YouTube, which mainly cover street-view scenarios. In total, we obtain \(100\) videos with \(56\)k video frames, as shown in Table I. The dataset is then divided into two parts: the training set with \(29,107\) frames from \(50\) videos, and the testing set with \(27,234\) frames from \(50\) videos.

Fig. 2: **The Data Distribution for 12 Open Scenarios.** “%” denotes the percentage of each scenario's data over the whole data.
**Annotation.** For the videos from BOVText, we simply adopt the original annotation, which includes four kinds of description information: the rotated bounding box for detection, the tracking identification (ID) of the same text, the content of the text for recognition, and the category of text, _i.e.,_ caption, title, scene text, or others. As for the videos from RoadText-1k and YouTube, we hired a professional annotation team to label each text in each frame, with the same annotation format as BOVText. One mentionable point is that the videos from RoadText-1k only provide the upright bounding box (two points); thus we abandon the original annotation and re-annotate these videos with oriented bounding boxes. Because part of the videos are sourced from BOVText and RoadText-1k, it is not allowed to use BOVText and RoadText-1k as extra data for training in this competition. As a _labor-intensive_ job, the whole labeling process took **30** annotators one month, _i.e.,_ around **4,800** man-hours, to complete the annotation of the 70 newly collected videos. As shown in Fig. 6, it is quite time-consuming and expensive to annotate a mass of text instances in each frame.
### _Dataset Comparison and Analysis_
The statistical comparison and analysis are presented in three figures and one table. Tab. I presents an overall comparison of the basic information, _e.g.,_ the number of videos, frames, and texts, and the supported scenarios. In comparison with previous works, the proposed DSText shows a denser text distribution per frame (_i.e.,_ \(23.5\) texts per frame on average) and a smaller text size (_i.e.,_ \(1,984\) pixels of text area on average).
**Video Scenario Attribute**. As shown in Fig. 2, we present the distribution of videos, frames, and texts over \(11\) open scenarios and an "Unknown" scenario in DSText. The 'Street View (Outdoor)' and 'Sport' scenarios present the largest numbers of videos and texts, respectively, while the frame number of each scenario is almost the same. We also present more visualizations for the 'Game', 'Driving', 'Sports', and 'Street View' scenarios in Fig. 4.
**Higher Proportion of Small Text**. Fig. 3 presents the proportions of different text areas. The proportion of big text (more than \(1,000\) pixels of area) in our DSText is lower than that of BOVText and ICDAR2015(video) by at least \(20\%\). Meanwhile, DSText presents a higher proportion of small texts (less than 400 pixels), by around \(22\%\). As shown in Table I, RoadText-1k [7] and LSVTD [8] also show low average text areas, but their text density is quite sparse (only \(0.75\) and \(5.12\) texts per frame, respectively), and RoadText-1k only focuses on the driving domain, which limits the evaluation of other scenarios.
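Concretely, the text-area statistics above can be reproduced from the rotated-box annotations with the shoelace formula. The snippet below is our own illustrative sketch (not the official tooling); it assumes each annotation is a list of polygon vertices and rescales the image so that its shorter side is 720 pixels, as in Fig. 3.

```python
def polygon_area(pts):
    """Shoelace formula; pts is a list of (x, y) vertices of the rotated box."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def scaled_text_area(pts, img_w, img_h):
    """Text area in pixels after rescaling the shorter image side to 720."""
    scale = 720.0 / min(img_w, img_h)
    return polygon_area([(x * scale, y * scale) for (x, y) in pts])

# e.g., a 40x10 box in a 1280x720 frame keeps its 400-pixel area
print(scaled_text_area([(0, 0), (40, 0), (40, 10), (0, 10)], 1280, 720))
```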
**Dense Text Distribution**. Fig. 5 presents the distribution of text density per frame. Frames with more than \(15\) text instances occupy \(42\%\) of our dataset, at least a \(30\%\) improvement over previous work, presenting denser text scenarios. Besides, the proportion of frames with fewer than \(5\) text instances is just half that of previous benchmarks, _i.e.,_ BOVText and ICDAR2015(video). Therefore, the proposed DSText poses the challenge of dense text tracking and recognition. More visualizations can be found in Fig. 4 (visualization of various scenarios) and Fig. 6 (a representative case with around 200 texts per frame).
**WordCloud.** We also visualize the word cloud of the text content in Fig. 7. All words in the cloud contain at least \(3\) characters, as we consider shorter words usually insignificant, _e.g.,_ 'is'.
## IV Tasks and Evaluation Protocols
The competition includes two tasks: 1) video text tracking, where the objective is to localize and track all words in the video sequences, and 2) end-to-end video text spotting, where the objective is to localize, track, and recognize all words in the video sequence.
**Task 1: Video Text Tracking.** In this task, all the videos (50 train videos and 50 test videos) will be provided as MP4 files. Similar to ICDAR2015 Video [10], the ground truth will be provided as a single XML file per video. A single compressed (zip or rar) file should be submitted containing all the result files for all the videos of the test set.
The task requires one network to detect and track text over the video sequence simultaneously. Given an input video, the network should produce two results: a rotated detection box and tracking IDs of the same text. For simplicity, we adopt the evaluation method from the ICDAR 2015 Video Robust Reading competition [10] for this task. The evaluation is based on an adaptation of the MOTChallenge [11] for multiple object tracking. For each method, MOTChallenge provides three different metrics: the Multiple Object Tracking Precision (MOTP), the Multiple Object Tracking Accuracy (MOTA), and the IDF1. See the 2013 competition report [12] and MOTChallenge [11] for details about these metrics. In our competition, we reuse the evaluation scripts from the 2015 video text reading competition [10] and convert our annotation format to that of ICDAR2015 Video.
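For reference, a minimal sketch of how the three metrics are computed from accumulated matching counts, following the MOTChallenge definitions [11], is given below (this is our own illustration, not the official evaluation script):

```python
def mot_metrics(num_gt, num_fn, num_fp, num_idsw,
                match_overlaps, num_idtp, num_idfp, num_idfn):
    """num_gt: total ground-truth boxes over all frames;
    match_overlaps: IoU values of all matched detection/GT pairs;
    num_id*: identity-level true/false positives and negatives."""
    mota = 1.0 - (num_fn + num_fp + num_idsw) / num_gt   # tracking accuracy
    motp = sum(match_overlaps) / len(match_overlaps)     # mean matched overlap
    idf1 = 2.0 * num_idtp / (2.0 * num_idtp + num_idfp + num_idfn)
    return mota, motp, idf1
```

Note that MOTA can become negative when the number of errors exceeds the number of ground-truth boxes, which explains the negative scores in Table III.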
Fig. 3: **The distribution of different text size ranges on different datasets.** “%” denotes the percentage of each text size region over the whole data. Text area (# pixels) is calculated while the shorter side of the image is 720 pixels.

**Task 2: End-to-End Video Text Spotting.** The Video Text Spotting (VTS) task requires simultaneously detecting, tracking, and recognizing text in the video. The word recognition performance is evaluated simply by whether a word recognition result is completely correct, and the word recognition evaluation is case-insensitive and accent-insensitive. All non-alphanumeric characters are not taken into account, including decimal points; for example, '1.9' is transferred to '19' in our GT. Similarly, the evaluation method (_i.e.,_ IDF1, MOTA, and MOTP) from the ICDAR 2015 Robust Reading competition is also adopted for this task. In the training set, we provide the detection coordinates, tracking IDs, and transcription results.
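A transcription-normalization step consistent with the rules above can be sketched as follows; this is an illustrative implementation of ours, and the accent folding via unicodedata is our assumption about one reasonable realization:

```python
import re
import unicodedata

def normalize_transcription(word: str) -> str:
    """Case-insensitive, accent-insensitive, non-alphanumerics removed."""
    word = unicodedata.normalize("NFKD", word)
    word = "".join(c for c in word if not unicodedata.combining(c))  # drop accents
    return re.sub(r"[^0-9a-zA-Z]", "", word).lower()  # '1.9' -> '19'

assert normalize_transcription("1.9") == "19"
assert normalize_transcription("Café") == normalize_transcription("CAFE")
```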
Note: Since 2020, the ICDAR 2015 Robust Reading competition online evaluation 2 has updated the evaluation method and added the new ID metrics (\(ID_{F1}\)) [13, 14]. Similarly, we adopted the updated metrics for the two tasks.
Footnote 2: [https://rrc.cvc.uab.es/?ch=3&com=evaluation&task=1](https://rrc.cvc.uab.es/?ch=3&com=evaluation&task=1)
## V Baseline: TransDETR
To help participants engage in our competition more easily, we also provide a corresponding baseline algorithm on the competition website 3, _i.e.,_ TransDETR [15], including the corresponding training and inference code for the competition. TransDETR is a novel, simple, end-to-end video text DEtection, Tracking, and Recognition framework that views the video text spotting task as a direct long-sequence temporal modeling problem.
Footnote 3: [https://rrc.cvc.uab.es/?ch=22&com=downloads](https://rrc.cvc.uab.es/?ch=22&com=downloads)
## VI Submissions
The results for Task 1 and Task 2 are presented in Table II and Table III, respectively.
Fig. 4: **More Qualitative Video Text Visualization of DSText.** DSText covers small and dense texts in various scenarios, which is more challenging.
Fig. 5: **Comparison of the frame percentage of different text numbers.** “%” denotes the percentage of the corresponding frames over the whole data.
Fig. 6: **One Case with around 200 Texts per Frame.** DSText includes huge amounts of small and dense text scenarios, which is a new challenge.
Fig. 7: **Wordcloud visualizations for DSText.**
### _Top 3 Submissions in Task 1_
**Tencent-OCR team.** The top-1 solution follows the framework of Cascade Mask R-CNN [16]. Multiple backbones, including HRNet [17] and InternImage [18], are used to enhance the performance. On the text tracking task, the team designed four different metrics to compare the matching similarity between the current frame's detection boxes and the existing text trajectories, _i.e.,_ box IoU, text content similarity, box size proximity, and text geometric neighborhood relationship measurement. These matching confidence scores are used as a weighted sum for the matching cost between the currently detected box and each tracklet. When there is a time difference between the current detection box and the last appearance of a tracklet, the IoU and box size metrics are divided by the corresponding frame-number difference to prioritize matching with the latest detection boxes in the trajectory set. They construct a cost matrix for the detected boxes and existing trajectories in each frame, on which the Kuhn-Munkres algorithm is used to obtain matching pairs. When the metrics are less than a certain threshold, their corresponding costs are set to 0. Finally, they perform a grid search to find better hyperparameters. Referring to ByteTrack [19], boxes with high detection/recognition scores are prioritized for matching, followed by boxes with lower detection/recognition scores. Each box that is not linked to an existing trajectory is only considered as the starting point of a new trajectory when its detection/recognition score is high enough. Finally, they remove low-quality trajectories with low text confidence scores and noise trajectories with only one detection box.
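The association step described above can be sketched as follows. This is a simplified illustration of ours: the four similarity functions and the weights are placeholders, since the team's exact formulation is not public, but the gap-normalized weighted cost and the Kuhn-Munkres assignment follow the description.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(dets, tracklets, weights=(0.4, 0.3, 0.2, 0.1), thr=0.2):
    """dets/tracklets expose pairwise similarities in [0, 1]; returns
    index pairs (det, tracklet) produced by the Kuhn-Munkres algorithm."""
    cost = np.zeros((len(dets), len(tracklets)))
    for i, d in enumerate(dets):
        for j, t in enumerate(tracklets):
            gap = max(t.frames_since_seen, 1)        # age of the tracklet
            sims = (d.box_iou(t) / gap,              # IoU, decayed by the gap
                    d.text_similarity(t),            # text content similarity
                    d.size_proximity(t) / gap,       # box-size closeness
                    d.geometric_neighborhood(t))     # layout relationship
            score = sum(w * s for w, s in zip(weights, sims))
            cost[i, j] = -score if score > thr else 0.0  # below thr: no link
    rows, cols = linear_sum_assignment(cost)         # minimizes total cost
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 0.0]
```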
Strong data augmentation strategies are adopted, such as photometric distortions, random motion blur, random rotation, random crop, and random horizontal flip. IC13 [12], IC15 [5], IC15 Video [5], and Synth800k [20] are involved during the training phase. Furthermore, they treat non-alphanumeric characters as negative samples and regard text instances labeled "##DONT#CARE#" as ignored ones during the training phase. In the inference phase, they use multiple resolutions of 600, 800, 1000, 1333, 1666, and 2000.
**Guangzhou Shiyuan Technology team.** The team utilized Mask R-CNN [21] and DBNet [22] as their base architectures. These were trained separately, and their predicted polygons were fused through non-maximum suppression. For the tracking stage, VideoTextSCM [23] was adopted, with Bot-SORT [24] replacing the tracker in the VideoTextSCM model. Bot-SORT is an enhanced multi-object tracker that leverages MOT bag-of-tricks to achieve robust association. It combines the strengths of motion and appearance information and also incorporates camera-motion compensation and a more accurate Kalman filter state vector. The public datasets COCO-Text [25], RCTW17 [26], ArT [27], LSVT [28], and LSVTD [8] were used in the training stage. RandomHorizontalFlip, RandomRotate, ColorJitter, MotionBlur, and GaussNoise were used for data augmentation.
**AI Lab, Du Xiaoman Financial.** The team selected TransDETR [15] as the baseline and employed the public datasets COCO-Text V2.0 [25] and SynthText [20] as the pre-training data. To enhance the model's capacity to detect small texts, additional small texts were added to the SynthText images. Furthermore, HRNet [17] was employed as the new backbone, which demonstrated superiority in identifying faint text objects. The team modified the original hyper-parameters of TransDETR to detect more texts in a single frame: when loading the training data, the maximum number of text instance queries of the Transformer module is set to 400.

TABLE II: **Task 1: Video Text Tracking Results.** - denotes missing descriptions in affiliations.

| User ID | Rank | MOTA | MOTP | ID\({}_{\mathrm{F1}}\)/% | Mostly Matched | Partially Matched | Mostly Lost | Affiliations |
|---|---|---|---|---|---|---|---|---|
| Tencent-OCR | 1 | 62.56% | 79.88% | 75.87% | 8114 | 1800 | 2663 | Tencent |
| DA | 2 | 50.52% | 78.33% | 70.99% | 7121 | 2405 | 3051 | Guangzhou Shiyuan Electronic Technology Company Limited |
| Tianyu Zhang | 3 | 43.52% | 78.15% | 62.27% | 4980 | 2264 | 5333 | AI Lab, Du Xiaoman Financial |
| Liu Hongen | 4 | 36.87% | 79.24% | 48.99% | 2123 | 3625 | 6829 | Tianjin University |
| Yu Hao | 5 | 31.01% | 78.00% | 50.39% | 2361 | 1767 | 8449 | - |
| Hujin | 6 | 28.92% | 78.46% | 43.96% | 1385 | 1186 | 10006 | Beijing University of Posts & Telecommunications |
| Cecl | 7 | 27.55% | 78.40% | 44.28% | 1583 | 1103 | 9891 | CQUT |
| MiniDragon | 8 | 25.75% | 74.03% | 50.22% | 3302 | 2806 | 6469 | - |
| FanZhengDu | 9 | 23.41% | 75.54% | 49.66% | 5216 | 3578 | 3783 | - |
| zjb | 10 | 19.85% | 71.98% | 39.87% | 2815 | 3354 | 6408 | - |
| dunachao | 11 | 19.84% | 73.82% | 31.18% | 924 | 1765 | 9888 | China Mobile Communications Research Institute |
| JiangQing | 12 | 13.83% | 75.75% | 58.41% | 6924 | 2622 | 3031 | South China University of Technology; Shanghai AI Laboratory; KingSoft Office CV&D Department |
| Kebin Liu | 13 | 7.49% | 75.62% | 45.68% | 5403 | 3835 | 3339 | Beijing University of Posts and Telecommunications |
| TungLX | 14 | 0% | 0% | 0% | 0 | 0 | 0 | - |

TABLE III: **Task 2: End-to-End Video Text Spotting Results.** - denotes missing descriptions in affiliations.

| User ID | Rank | MOTA | MOTP | ID\({}_{\mathrm{F1}}\)/% | Mostly Matched | Partially Matched | Mostly Lost | Affiliations |
|---|---|---|---|---|---|---|---|---|
| Tencent-OCR | 1 | 22.44% | 80.82% | 56.45% | 5062 | 1075 | 6440 | Tencent |
| DA | 2 | 10.51% | 78.97% | 53.45% | 4629 | 1392 | 6556 | Guangzhou Shiyuan Electronic Technology Company Limited |
| dunachao | 3 | 5.54% | 74.61% | 24.25% | 528 | 946 | 11103 | China Mobile Communications Research Institute |
| cnn_lin | 4 | 0% | 0% | 0% | 0 | 0 | 0 | South China Agricultural University |
| Hu Jijin | 5 | 0% | 0% | 0% | 0 | 0 | 0 | Beijing University of Posts and Telecommunications |
| XUE CHUHUI | 6 | 0% | 0% | 0% | 0 | 0 | 0 | - |
| MiniDragon | 7 | -25.09% | 74.95% | 26.38% | 1388 | 1127 | 10062 | - |
| JiangQing | 8 | -27.47% | 76.59% | 43.61% | 4090 | 1471 | 7016 | South China University of Technology; Shanghai AI Laboratory; KingSoft Office CV&D Department |
| Tianyu Zhang | 9 | -28.58% | 80.36% | 26.20% | 1556 | 543 | 10478 | AI Lab, Du Xiaoman Financial |
### _Top 3 Submissions in Task 2_
**Tencent-OCR team.** To enable end-to-end video text spotting, two methods, namely Parseq [29] and ABINet [30], were utilized in the recognition stage. Both methods were trained on a dataset of 20 million samples, extracted from various open-source datasets, including ICDAR-2013 [12], ICDAR-2015 [5], COCO-Text [25], SynthText [20], among others.
During the end-to-end text spotting stage, different recognition methods are applied to predict all the detected boxes of a trajectory. The final text result corresponding to the trajectory is selected based on confidence and character length. Trajectories with low-quality text results, indicated by low scores or containing only one character, are removed.
**Guangzhou Shiyuan Technology team.** The text tracking task was addressed in a similar manner to Task 1. To recognize text, the PARSeq method was employed, which involves learning an ensemble of internal autoregressive (AR) language models with shared weights using Permutation Language Modeling. This approach unifies context-free non-AR and context-aware AR inference, along with iterative refinement using bidirectional context. The recognition model was trained using several extra public datasets, including COCO-Text [25], RCTW17 [26], ArT [27], LSVT [28], and LSVTD [8].
**China Mobile Communications Research Institute team.** The team used CoText [31] as the baseline and utilized ABINet [30] to enhance the recognition head. The CoText model was trained using the ICDAR2015 [5], ICDAR2015 Video [5], and ICDAR2023 DSText datasets for text detection and tracking. The recognition part employed a pretrained ABINet model based on the MJSynth and SynthText [20] datasets.
## VII Discussion
**Text tracking task.** In this task, most participants first employ powerful backbones to enhance the performance, _e.g.,_ HRNet, Res2Net, and SENet. With multiple backbones, the Tencent-OCR team achieves the best scores on the three main metrics, _i.e.,_ MOTA, MOTP, and ID\({}_{\mathrm{F1}}\), as shown in Table II. For text tracking, based on ByteTrack, the team designed four different metrics to compare the matching similarity between the current frame's detection boxes and the existing text trajectories, _i.e.,_ box IoU, text content similarity, box size proximity, and text geometric neighborhood relationship. To further enhance the results, most participants use various data augmentations, _e.g.,_ random motion blur, random rotation, and random crop. Besides, various public datasets, _e.g.,_ COCO-Text [25], RCTW17 [26], ArT [27], LSVT [28], and LSVTD [8], are used for joint training.
**End-to-End Video Text Spotting task.** To enhance end-to-end text spotting, most participants adopted advanced recognition models, _i.e.,_ Parseq [29] and ABINet [30]. Large synthetic datasets (_e.g.,_ SynthText [20]) are first used to pretrain the models, which are then finetuned on the released training dataset (DSText). With various data augmentations, large public datasets, powerful network backbones, and model ensembles, the Tencent-OCR team achieves the best scores on the three main metrics, as shown in Table III.
Overall, while many participants implemented various improvement techniques such as using extra datasets and data augmentation, the majority of their results were unsatisfactory, with MOTA scores below \(25\%\) and ID\({}_{\mathrm{F1}}\) scores below \(70\%\). As a result, there is still a significant amount of room for improvement in this benchmark and many technical challenges to overcome. It is worth mentioning that many of the top ranking methods utilize an ensemble of multiple models and large public datasets to enhance their performance. However, these pipelines tend to be complex and the corresponding inference speeds are slow. Simplifying the pipeline and accelerating inference are also important considerations for the video text spotting task. Additionally, it is noteworthy that many of the submitted methods adopted different ideas and strategies, providing the community with new insights and potential solutions. We expect that more innovative approaches will be proposed following this competition.
## VIII Conclusion
Here, we present a new video text reading benchmark, which focuses on dense and small video text. Compared with previous datasets, the proposed dataset mainly includes two new challenges for dense and small video text spotting, and the high proportion of small texts is a new challenge for existing video text methods. Meanwhile, we organize the corresponding competition on the Robust Reading Competition Website 4, where we received around 30 valid submissions from 24 teams. These submissions provide the community with new insights and potential solutions. Overall, we believe and hope that the benchmark, as a standard benchmark, will develop and improve the video text tasks in the community.
Footnote 4: [https://rrc.cvc.uab.es/?ch=22&com=introduction](https://rrc.cvc.uab.es/?ch=22&com=introduction)
Footnote 5: [https://github.com/ageitgey/face_recognition](https://github.com/ageitgey/face_recognition)
## IX Potential Negative Societal Impacts and Solution
Similar to BOVText [9], we blur the human faces in DSText in two steps. Firstly, we detect human faces in each frame with _face recognition_5, an easy-to-use open-source face recognition project with complete development documents and application cases. Secondly, after obtaining the detection boxes, we blur the faces with the Gaussian blur operation in OpenCV6.
Footnote 6: [https://www.tutorialspoint.com/opencv/opencv_gaussian_blur.htm](https://www.tutorialspoint.com/opencv/opencv_gaussian_blur.htm)
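A minimal sketch of this two-step pipeline is given below; the kernel size and sigma are illustrative choices of ours, not the exact values used for DSText.

```python
import cv2
import face_recognition

def blur_faces(frame_bgr):
    """Detect faces and blur them in place; returns the anonymized frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    # Step 1: each detected location is a (top, right, bottom, left) tuple.
    for top, right, bottom, left in face_recognition.face_locations(rgb):
        roi = frame_bgr[top:bottom, left:right]
        # Step 2: Gaussian-blur the detected face region.
        frame_bgr[top:bottom, left:right] = cv2.GaussianBlur(roi, (51, 51), 30)
    return frame_bgr
```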
## X Acknowledgements
This competition is supported by the National Natural Science Foundation (NSFC#62225603).
Competition Organizers
The benchmark is mainly created by Weijia Wu and Yuzhong Zhao while they were research interns at Kuaishou Technology, and its establishment is supported by the annotation team of Kuaishou Technology. Prof. Xiang Bai at Huazhong University of Science and Technology, Prof. Dimosthenis Karatzas at the Universitat Autonoma de Barcelona, Prof. Umapada Pal at the Indian Statistical Institute, and Asst. Prof. Mike Shou at the National University of Singapore, as the four main supervisors, provided many valuable suggestions and comments, _e.g.,_ the annotation format suggestion from Prof. Xiang Bai, the competition schedule plan from Prof. Dimosthenis Karatzas, the submission plan and suggestions for the proposal from Prof. Umapada Pal, and the statistical analysis from Asst. Prof. Mike Shou. Therefore, our team mainly includes eight people from seven institutions.
|
2306.05611 | Observation of local atomic displacements intrinsic to the double zigzag
chain structure of 1T-MTe2 (M = V, Nb, Ta) | We describe the existence of local distortion discovered in the synchrotron
x-ray single-crystal structure analysis of layered ditelluride 1T-MTe2 (M = V,
Nb, Ta). In 1T-TaTe2, the double zigzag chain structure of Ta is deformed at
about 170 K, and heptamer molecules are formed periodically at low
temperatures. We found that some of the Ta atoms that compose the double zigzag
chain structure appearing at high temperatures are locally displaced, resulting
in local dimerization. This tendency weakens when Ta is replaced by V or Nb.
Our results indicate that the local distortion persistently survives in these
ditellurides, where the electronic degrees of freedom, including orbitals, are
weakened. We further discuss the origin of local distortion in these
ditellurides, which is different from many usual material systems where
molecular formation occurs at low temperatures. | N. Katayama, Y. Matsuda, K. Kojima, T. Hara, S. Kitou, N. Mitsuishi, H. Takahashi, S. Ishiwata, K. Ishizaka, H. Sawa | 2023-06-09T01:10:34Z | http://arxiv.org/abs/2306.05611v1 | Observation of local atomic displacements intrinsic to the double zigzag chain structure of 1\(T\)-\(M\)Te\({}_{2}\) (\(M\) = V, Nb, Ta).
###### Abstract
We describe the existence of local distortion discovered in the synchrotron X-ray single crystal structure analysis of layered ditelluride 1\(T\)-\(M\)Te\({}_{2}\) (\(M\) = V, Nb, Ta). In 1\(T\)-TaTe\({}_{2}\), the double zigzag chain structure of Ta is deformed at about 170 K, and heptamer molecules are formed periodically at low temperatures. We found that some of the Ta atoms that compose the double zigzag chain structure appearing at high temperatures are locally displaced, resulting in local dimerization. This tendency weakens when Ta is replaced by V or Nb. Our results indicate that the local distortion persistently survives in these ditellurides, where the electronic degrees of freedom, including orbitals, are weakened. We further discuss the origin of local distortion in these ditellurides, which is different from many usual material systems where molecular formation occurs at low temperatures.
## I Introduction
Among transition metal compounds with orbital degrees of freedom, there are many substances whose atoms assemble to form "molecules" at low temperatures [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. Examples include dimers in LiRh\({}_{2}\)O\({}_{4}\)[1] and CuIr\({}_{2}\)S\({}_{4}\)[2], trimers in LiVO\({}_{2}\)[3; 4] and LiVS\({}_{2}\)[5], and a heptamer (trimer/tetramer pair) in AlV\({}_{2}\)O\({}_{4}\)[6; 14]. Although these molecules that form spontaneously in crystals have been thought to disappear at high temperatures, where regular lattices are realized, recent local-structure studies have revealed that local lattice distortions appear in various forms in a precursory manner [14; 15; 16; 17; 18; 19; 20]. For example, in Li\({}_{2}\)RuO\({}_{3}\)[15] and LiRh\({}_{2}\)O\({}_{4}\)[16], the dimers that appear at low temperatures persist as short-range order at high temperatures; in CuIr\({}_{2}\)S\({}_{4}\)[18], tetragonal distortions appear locally at high temperatures; in LiVS\({}_{2}\), short-range order of zigzag chains that is unrelated to the trimers of the low-temperature phase appears and slowly fluctuates at high temperatures [19]. These can be interpreted as local nematic states in which the spontaneous symmetry lowering of the electronic system is strongly coupled to the lattice system [18], and the search for various distortion patterns and the elucidation of their mechanisms are important research themes that go beyond the category of molecule-forming systems and have a broad impact on physical properties in general.
Layered transition metal ditellurides \(M\)Te\({}_{2}\) provide a unique playground for such studies. In these material systems, Te-Te covalent bonds derived from the large tellurium ions occur, which cause the formal valence of Te to shift from 2- and transfer additional electrons to the transition metal element \(M\). Depending on the amount of charge transfer, a variety of molecular formation patterns coupled with the charge degrees of freedom appear at low temperatures. For example, in IrTe\({}_{2}\), a charge-ordered stripe state between Ir\({}^{3+}\) and Ir\({}^{4+}\) occurs at low temperatures, forming an Ir\({}^{4+}\)-Ir\({}^{4+}\) dimer state [21; 22], and superconductivity appears when this dimer phase is suppressed by Pt doping [23]. In 1\(T\)-\(M\)Te\({}_{2}\) (\(M\) = V, Nb, Ta), the transition metal elements form quasi-one-dimensional double zigzag chains, named "ribbon chains", as shown in Fig. 1(a) [24; 25; 26]. Each chain is formed from multiple linear trimers, with each \(M\) element offering 2/3 of an electron for the formation of one trimer [26; 27]. An interesting feature of this system is that only in 1\(T\)-TaTe\({}_{2}\) does the charge modulation change at low temperatures below \(T_{c}\sim 170\) K, where the double zigzag chain changes to Ta heptamers, as shown in Fig. 1(b) [28]. This transformation is different from the situation in conventional material systems, where molecular formation occurs from a regular lattice in which the electron degrees of freedom are expected to be highly degenerate. Do unique local distortions appear in such ditellurides at high temperatures? In addition, clarifying whether the nature of local distortion differs between 1\(T\)-TaTe\({}_{2}\), where heptamerization occurs, and 1\(T\)-NbTe\({}_{2}\) and 1\(T\)-VTe\({}_{2}\), where the double zigzag chain is maintained at low temperatures, will provide important insights into the background physics that generates local distortion.
In this article, we report on the structural analysis of 1\(T\)-\(M\)Te\({}_{2}\) (\(M\) = V, Nb, Ta) single crystals using
synchrotron radiation X-rays. The anisotropy of the atomic displacement parameters (ADPs) obtained from the structural analysis suggests that the \(M\) atom at the center of the double zigzag chain of these ditellurides is locally displaced in the zigzag chain direction. Structural analysis using the split-site model shows that local distortion toward heptamerization occurs, and this tendency is strongest in 1\(T\)-TaTe\({}_{2}\). This indicates that local distortions appear universally, as in many molecule-forming systems, even if the average structure of the high-temperature phase is in a state where the degeneracy of the orbital degrees of freedom has been resolved.
## II Results and discussions
### Sample Preparation and experimental details
Single crystal samples of 1\(T\)-\(M\)Te\({}_{2}\) (\(M\) = V, Nb, Ta) were synthesized by a conventional solid-state reaction method. The mixture of the constituent elements in their stoichiometric ratios was vacuum-sealed and sintered for 15 hours at 1000 °C for \(M\) = Nb and Ta, and at 850 °C for \(M\) = V. The obtained samples are basically powders, but some of them contain tiny single crystals of several tens of micrometers on a side, which were used for diffraction experiments using synchrotron radiation x-rays. X-ray diffraction (XRD) experiments were conducted using the BL02B1 beamline at SPring-8 at an x-ray energy of 40 keV. The typical dimensions of the 1\(T\)-\(M\)Te\({}_{2}\) (\(M\) = V, Nb, Ta) single crystals used for the XRD experiments were 20 \(\times\) 20 \(\times\) 20 \(\mu\)m\({}^{3}\). A He-gas-blowing device was employed to cool the samples to 100 K. A 2D CdTe PILATUS detector was utilized with the diffractometer. The CrysAlisPro program was used to integrate the diffraction profiles. Diffraction intensity averaging and refinement of the structural parameters were performed using the Jana2006 program [29]. Crystal structures were visualized using VESTA [30]. The obtained powder diffraction data were indexed using Conograph [31], and the analysis was performed using Rietan-FP [32].
### Structural studies of 1\(T\)-TaTe\({}_{2}\)
Fig. 1(c) and (d) show single-crystal XRD patterns of 1\(T\)-TaTe\({}_{2}\) in the high-temperature (300 K) and low-temperature (125 K) phases. Although the Bragg peaks remain sharp, the presence of merohedral domains sharing the 20\(\bar{1}\) plane, as shown in Fig. 1(e), is confirmed. In the following, we show the results of the analysis in which only one domain component is extracted. The diffraction pattern changes below the phase transition at around 170 K, and the superlattice peaks indicated by the circle are observed at low temperatures.
The obtained crystal structures are shown in Figs. 1(a) and (b), both of which are consistent with the previously reported structures [25; 28; 33]. Details of the structure analysis results are summarized in the Appendix of this paper. It is noteworthy that the Ta-Ta distances constituting the linear trimer change significantly between the high- and low-temperature phases. In the high-temperature phase, the linear trimer is composed of an equally spaced Ta2-Ta1-Ta2 array, whereas in the low-temperature phase, the displacement of the Ta1b ions is accompanied by a large difference between the adjacent Ta-Ta distances inside the Ta2a-Ta1b-Ta2b array. This indicates that the Ta2a-Ta1b-Ta2b array does not form a linear trimer in the low-temperature phase, but rather transforms into a Ta2a-Ta1b dimer and an isolated Ta2b ion. Although the contraction of the Ta-Ta distance upon heptamerization is greatest between Ta1a and Ta1b, the Ta1a-Ta1b distance of \(\sim\)3.34 Å in the low-temperature phase is still much longer than the Ta2a-Ta1b distance of \(\sim\)3.18 Å, indicating that the Ta2a-Ta1b bond is more essential. The large change in the Ta1a-Ta1b distance associated with the phase transition should be a side effect of the \(b\)-axis displacement of Ta1b, which forms dimers with two Ta2a ions simultaneously.
Figure 1: (a-b) The lattice structure of Ta ions at (a) 300 K and (b) 125 K. (c-d) Single crystal XRD patterns of 1\(T\)-TaTe\({}_{2}\) at (c) 300 K and (d) 125 K. Both the high and low temperature phases could be refined using the previously reported structure. Details are shown in the Appendix of this paper. (e) Geometrical arrangement of merohedral domains sharing the 20\(\bar{1}\) plane in the crystal structure of the high-temperature phase monoclinic \(C2/m\).

Structural analysis revealed that the atomic positions of the high-temperature phase are almost the same as those previously reported [25], and the anisotropic ADP at the Ta1 site shows an anomalous elongation in the double zigzag chain direction, as shown in the inset of Fig. 2(a) and as indicated in a previous report [28]. Fig. 2(a) shows that the \(U_{22}\) parameter has an unusually large value compared to \(U_{11}\) and \(U_{33}\). It is important to note that the \(U_{11}\) and \(U_{33}\) parameters extrapolate to zero toward 0 K, but the \(U_{22}\) parameter clearly reaches a finite value. This indicates that the anomalous increase in \(U_{22}\) is not simply anisotropic strong thermal oscillation, but rather reflects a local distortion at the Ta1 site. This is consistent with the possibility of dynamic disorder discussed in the previous report [28].
In order to clarify the local displacement of the ions at the Ta1 site, we performed a structural analysis using a split-site model. This is an analytical method that examines the change in the reliability factor \(R\) with the distance \(r\), assuming that the Ta1-site ions are not at the central position, but that two ions with occupancy 0.5 exist at positions \(+r\) and \(-r\) along the \(b\) axis, as shown in the inset of Fig. 2(b). If the atomic displacement represented by this model does not actually occur, the \(R\) value will show a minimum at \(r=0\); if it occurs, the \(R\) value will show a minimum at a finite \(r\). As shown in Fig. 2(b), the analysis shows that the \(R\) value varies significantly with \(r\), reaching a minimum at about \(r=0.13\) Å. This indicates that there is an intrinsic local atomic displacement in the high-temperature phase of \(1T\)-TaTe\({}_{2}\). The local atomic displacement \(r\) that minimizes the \(R\) value is almost independent of temperature, and consequently the Ta1-Ta2 bond splits into two types, as shown in Fig. 2(c). It may seem strange that such local distortion is independent of temperature, but it should be noted that similar behavior has been observed at high temperatures in AlV\({}_{2}\)O\({}_{4}\)[14] and Li\({}_{2}\)RuO\({}_{3}\)[15], where molecular formation occurs at low temperature.
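The logic of the split-site scan can be illustrated with a one-dimensional toy model: "observed" structure factors are generated from a chain whose central site is locally displaced by \(\pm r_{true}\), and the reliability factor \(R_{1}=\sum||F_{obs}|-|F_{calc}||/\sum|F_{obs}|\) is evaluated for trial split distances \(r\). All numbers below are illustrative, not the refined parameters of \(1T\)-TaTe\({}_{2}\).

```python
import numpy as np

b = 3.63               # toy lattice constant along the chain (angstrom)
r_true = 0.13          # assumed "true" local displacement (angstrom)
ks = np.arange(1, 21)  # Miller indices k of the toy reflections

def F_split(k, r):
    """Structure factor of one site modeled as two half-occupied
    positions at +r and -r; reduces to cos(2*pi*k*r/b)."""
    return 0.5 * (np.exp(2j * np.pi * k * r / b) +
                  np.exp(-2j * np.pi * k * r / b))

# "Observed" amplitudes: Bragg intensities probe the spatially averaged
# density, so the disordered local displacement enters as the average.
F_obs = np.abs(F_split(ks, r_true))

# Scan the trial split distance and compute R1 for each r.
r_scan = np.linspace(0.0, 0.25, 51)
R1 = [np.sum(np.abs(F_obs - np.abs(F_split(ks, r)))) / np.sum(F_obs)
      for r in r_scan]
print("r minimizing R1: %.3f angstrom" % r_scan[np.argmin(R1)])  # 0.130
```

In this toy model, \(R_{1}\) has its minimum at the finite input displacement rather than at \(r=0\), which is the signature used in Fig. 2(b).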
### Structural studies of \(1T\)-NbTe\({}_{2}\) and \(1T\)-VTe\({}_{2}\)
Will similar local atomic displacements appear in \(1T\)-NbTe\({}_{2}\) and \(1T\)-VTe\({}_{2}\), where the double zigzag chain structure is maintained down to low temperatures? The anisotropic ADPs of the high-temperature phases of these two materials are shown in the inset of Fig. 3(a). Although a larger \(U_{22}\) is realized compared to \(U_{11}\) and \(U_{33}\), its value is smaller than that of \(1T\)-TaTe\({}_{2}\). As summarized in the Appendix, the lattice constants and bond lengths are close among \(1T\)-TaTe\({}_{2}\), \(1T\)-NbTe\({}_{2}\), and \(1T\)-VTe\({}_{2}\); thus, the difference in the magnitude of \(U_{22}\) indicates a clear difference in local structure between these ditellurides. Also, as shown in Fig. 3(a), the \(U_{22}\) parameter decreases with decreasing temperature and extrapolates to zero at 0 K. Fig. 3(b) shows the results of the split-site model analysis for these two materials: \(1T\)-NbTe\({}_{2}\) exhibits a minimum \(R\) value at finite \(r\), but the value of \(r\) that minimizes \(R\) at 300 K is about 0.09 Å, which is smaller than that of \(1T\)-TaTe\({}_{2}\). The change of \(R\) with \(r\) is also very small compared to \(1T\)-TaTe\({}_{2}\). This trend is more pronounced for \(1T\)-VTe\({}_{2}\), where the \(R\) value is almost constant over a wide \(r\) range. As shown in Fig. 3(c), the local atomic displacements, if any, of \(1T\)-NbTe\({}_{2}\) and \(1T\)-VTe\({}_{2}\) are much smaller than those of \(1T\)-TaTe\({}_{2}\) and decrease upon cooling to zero at 0 K.
### Discussion based on the single-crystal X-ray diffraction experimental results
Figure 2: (a) Temperature dependence of the atomic displacement parameters \(U_{11}\), \(U_{22}\), and \(U_{33}\) of the Ta1 site in the high-temperature phase. The equation relating the temperature factor \(T\) to these atomic displacement parameters is given in the Appendix of this paper. \(U_{11}\), \(U_{22}\), and \(U_{33}\) are the mean square amplitudes \(\langle u^{2}\rangle\) in the reciprocal lattice vector \(a^{*}\), \(b^{*}\), and \(c^{*}\) directions. The inset shows the thermal oscillation ellipsoid (99% probability) of the Ta1 site. (b) Temperature dependence of the \(R\) values at each \(r\) obtained by the split-site model. The inset shows a schematic picture of the split-site model. (c) Temperature dependence of the Ta-Ta distances for the average structure and the split-site model, respectively.

These experimental results raise two questions. The first question is what is the origin of these local atomic displacements. In the high-temperature phase of 1\(T\)-TaTe\({}_{2}\), we can predict that two dimer patterns with equal lattice energy can be realized, as shown in the right side of Fig. 4(a); these fluctuate thermally, resulting in a linear trimer in the average structure, as shown in the left side of Fig. 4(a). In the three-center, two-electron bonding state derived from the linear trimer, three orbitals are formed: a bonding orbital, a non-bonding orbital, and an anti-bonding orbital, with two electrons stored in the bonding orbital, which is energetically stabilized, as shown in the left energy scheme of Fig. 4(a). When local atomic displacement occurs, the linear trimer is transformed into a dimer/isolated-atom pair, which should result in the formation of a bonding orbital and an anti-bonding orbital derived from the dimer, and an isolated atomic orbital, as shown in the right energy scheme of Fig. 4(a). The three energetically equally spaced orbitals formed by the linear trimer and by the dimer/isolated-atom pair are similar at first glance. However, if the inter-atomic distance of the dimer is sufficiently shorter than that of the linear trimer, the bonding orbital should be lower in energy for the dimer by \(\Delta\). If the energy difference \(\Delta\) is larger than the energy loss due to lattice distortion, \(\varepsilon\), the local dimer is expected to be stabilized. Since the energies of the two dimer patterns are equal, we expect the two dimer patterns to appear in a thermally fluctuating manner, as shown in Fig. 4(a). The transition process between the two dimer patterns proceeds via a trimer state, and the low energy of the bonding orbital in this trimer state should facilitate the transition between the two patterns. The long-range ordering of this local distortion at low temperatures leads to the formation of heptamer clusters, as shown in Fig. 1(b).
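This orbital-energy argument can be made concrete with a minimal Hückel-type calculation, sketched below. The hopping amplitudes are illustrative assumptions; the linear trimer gives levels \(-\sqrt{2}t\), 0, \(+\sqrt{2}t\), while the dimer plus isolated atom gives \(-t^{\prime}\), 0, \(+t^{\prime}\), so the dimer's bonding orbital drops below the trimer's only when \(t^{\prime}>\sqrt{2}t\), i.e., when the dimer bond is sufficiently short.

```python
import numpy as np

t, tp = 1.0, 1.6  # illustrative hoppings; tp > sqrt(2)*t favors the dimer

# Linear trimer: three sites with nearest-neighbor hopping t.
H_trimer = -t * np.array([[0., 1., 0.],
                          [1., 0., 1.],
                          [0., 1., 0.]])
# Dimer + isolated atom: hopping tp on one bond only.
H_dimer = np.zeros((3, 3))
H_dimer[0, 1] = H_dimer[1, 0] = -tp

E_trimer = np.sort(np.linalg.eigvalsh(H_trimer))  # [-sqrt(2)t, 0, +sqrt(2)t]
E_dimer = np.sort(np.linalg.eigvalsh(H_dimer))    # [-tp, 0, +tp]

# Two electrons occupy the bonding orbital in either configuration.
Delta = 2 * (E_trimer[0] - E_dimer[0])  # electronic gain upon dimerization
print(E_trimer, E_dimer, "Delta = %.3f" % Delta)  # Delta > 0 here
```

In this picture, local dimerization wins whenever \(\Delta>\varepsilon\), which is the condition discussed above.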
Figure 3: (a) Temperature dependence of the \(U_{11}\), \(U_{22}\), and \(U_{33}\) parameters of \(1T\)-NbTe\({}_{2}\) and \(1T\)-VTe\({}_{2}\). The inset shows a schematic picture of the thermal oscillation ellipsoids of \(1T\)-NbTe\({}_{2}\) and \(1T\)-VTe\({}_{2}\) compared to that of \(1T\)-TaTe\({}_{2}\). (b) Temperature dependence of the \(R\) values at each \(r\) obtained by the split-site model. The arrows in the figure indicate the minimum \(R\) value. (c) Temperature dependence of the local distortion \(r\) for \(1T\)-NbTe\({}_{2}\) and \(1T\)-VTe\({}_{2}\). The \(r\) range from the smallest \(R\) value to a 0.5 % increase was defined as the error bar.

Figure 4: (a) Energy schemes in the linear trimer state and in the dimer and isolated-atom states. (b) Schematic diagram of the elemental \(M\) dependence of the relationship between the bonding orbital energy difference \(\Delta\) and the lattice-system energy loss \(\varepsilon\) in the linear trimer state and the dimer with isolated atom states, and the temperature dependence of the local distortion \(r\) caused by the difference in \(M\).

Another question is why different behaviors appear for \(1T\)-TaTe\({}_{2}\), \(1T\)-NbTe\({}_{2}\), and \(1T\)-VTe\({}_{2}\). As shown in Fig. 3(c), the local atomic displacements are largest for Ta(5\(d\)) and small for Nb(4\(d\)) and V(3\(d\)). In \(1T\)-TaTe\({}_{2}\), with its large \(5d\) orbitals, the electronic energy gain \(\Delta\) is expected to be the largest because the orbitals overlap more upon dimerization than in \(1T\)-NbTe\({}_{2}\) and \(1T\)-VTe\({}_{2}\). If \(\varepsilon<\Delta\) is realized in \(1T\)-TaTe\({}_{2}\), the overall energy of the system, including the lattice energy, behaves as a function of the atomic position as shown in Fig. 4(b). Since the atomic displacement and the associated local dimerization stabilize the system, and this trend does not change when the temperature is lowered, the atomic displacement \(r\) shows no temperature dependence, and the local distortion is expected to survive until just above the phase transition. On the other hand, for \(4d\) Nb and \(3d\) V, the orbital overlap is small and the energy is not greatly stabilized by local dimerization. Therefore, \(\varepsilon\geq\Delta\) is realized, and the energy of the whole system, including the lattice energy, has a flat shape with respect to \(r\); the system drops to the low-energy central position when the temperature is lowered. Because of these differences in energy schemes, it is thought that among the three tellurides, only \(1T\)-TaTe\({}_{2}\) produces large local distortions, which are maintained at low temperatures and develop into heptamers. From the above discussion, it seems likely that in \(1T\)-NbTe\({}_{2}\) and \(1T\)-VTe\({}_{2}\), when the distance between adjacent transition metals becomes shorter, the orbital overlap becomes larger and dimer (heptamer) states such as those in \(1T\)-TaTe\({}_{2}\) are formed. It should be noted that applying pressure to \(1T\)-NbTe\({}_{2}\) has been reported to induce a structural evolution from the trimeric to the dimeric structure [34], confirming the validity of this argument.
An argument similar to that for the local distortion of \(1T\)-TaTe\({}_{2}\) might be applied to other material systems that exhibit molecular formation at low temperatures, such as the spinel compound AlV\({}_{2}\)O\({}_{4}\). It has been argued that vanadium in AlV\({}_{2}\)O\({}_{4}\) spontaneously forms a heptamer molecule at about 700 K. Based on the Curie paramagnetic component found in the low-temperature phase by magnetization measurements, the heptamer was revealed to be composed of 9 bonds formed by 18 electrons [6]. However, recent structural analysis of synchrotron XRD data with the split-site model has revealed that the heptamer is actually composed of trimer/tetramer pairs [14]. The two structures are compared in Fig. 25 of the paper by D.I. Khomskii and S.V. Streltsov [35]. The difference between the two types of molecular structures proposed for AlV\({}_{2}\)O\({}_{4}\) can be interpreted as whether the three bonds connecting the upper and lower triangular trimers are linear trimers consisting of three-center two-electron bonds or pairs of dimers and isolated atoms. This is very similar to the present case. An interesting difference is that such local distortions appear in the low-temperature ordered phase in AlV\({}_{2}\)O\({}_{4}\), whereas they appear only in the high-temperature phase in \(1T\)-TaTe\({}_{2}\). This indicates that unconventional orbital ordering states that appear in the low-temperature phase can universally appear as local distortions in the high-temperature phase, even in systems like \(1T\)-TaTe\({}_{2}\), where no short-range order remains in the low-temperature ordered phase.
The presence of local distortion in the high-temperature phase not only has important implications for the mechanism of the phase transition, but is also important for understanding the thermodynamics of the high-temperature phase. For example, while high thermal conductivity is generally realized in the metallic state due to itinerant electrons, it has been observed in CuIr\({}_{2}\)S\({}_{4}\), which undergoes an insulating transition with molecular formation at low temperatures, that the thermal conductivity of the high-temperature metallic phase is lower than that of the insulating low-temperature phase. It is argued that local distortion suppresses phonon thermal conduction [36; 37; 38]. In addition, the multiple degrees of freedom of the electrons are expected to play an important role in the entropy changes associated with molecular formation [5; 8]. The presence of local distortion adds a new factor, not previously considered, to the concept of electron degrees of freedom in the high-temperature phase. In this respect, the differences in local structure found in this study across the systematics of \(M\) = V, Nb, and Ta should provide an attractive stage for discussing the role of local distortion in physical properties through comparison.
###### Acknowledgements.
The work leading to these results has received funding from the Grant in Aid for Scientific Research (Nos. JP20H02604, JP21K18599, JP21J21236) and Research Foundation for the Electrotechnology of Chubu. This work was carried out under the Visiting Researcher's Program of the Institute for Solid State Physics, the University of Tokyo, and the Collaborative Research Projects of Laboratory for Materials and Structures, Institute of Innovative Research, Tokyo Institute of Technology. Single-crystal and powder XRD experiments were conducted at the BL02B1 and BL02B2 of SPring-8, Hyogo, Japan (Proposals No. 2019B1085, 2021A0070, 2021B1261, 2021B1136, 2021B1261, 2022A0304, 2022B0607, 2022B1570, 2022B1582, and 2022B1862), and at the BL5S2 of Aichi Synchrotron Radiation Center, Aichi Science and Technology Foundation, Aichi, Japan (Proposals No. 202202037, 202201033, 202105170, 202104111, 2021L3002, 2021L2002, 2021L1002 and 2020L4002).
## Appendix A Single crystal X-ray diffraction analysis results
Here, the temperature factor \(T\) is expressed as a function of the atomic displacement parameters \(U_{11}\), \(U_{22}\), \(U_{33}\), \(U_{12}\), \(U_{13}\) and \(U_{23}\), using the following equation,
\[T=\exp\left\{-2\pi^{2}\left(h^{2}a^{*2}U_{11}+k^{2}b^{*2}U_{22}+l^{2}c^{*2}U_{33}+2hka^{*}b^{*}U_{12}+2hla^{*}c^{*}U_{13}+2klb^{*}c^{*}U_{23}\right)\right\}.\]
\(U_{11}\), \(U_{22}\) and \(U_{33}\) are the mean square amplitudes \(\langle u^{2}\rangle\) in the reciprocal lattice vector \(a^{*}\), \(b^{*}\) and \(c^{*}\) directions.
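For reference, a direct evaluation of this temperature factor is sketched below; the numerical \(U_{ij}\) and reciprocal-lattice values in the example are hypothetical placeholders, not refined parameters.

```python
import numpy as np

def temperature_factor(h, k, l, astar, bstar, cstar, U):
    """T(hkl) from the equation above; U = (U11, U22, U33, U12, U13, U23)
    in angstrom^2, with a*, b*, c* in 1/angstrom."""
    U11, U22, U33, U12, U13, U23 = U
    q = (h**2 * astar**2 * U11 + k**2 * bstar**2 * U22
         + l**2 * cstar**2 * U33
         + 2 * h * k * astar * bstar * U12
         + 2 * h * l * astar * cstar * U13
         + 2 * k * l * bstar * cstar * U23)
    return np.exp(-2.0 * np.pi**2 * q)

# Hypothetical example: a site with U22 much larger than U11 and U33
# damps reflections with large k much more strongly.
print(temperature_factor(0, 4, 0, 0.073, 0.276, 0.115,
                         (0.004, 0.012, 0.004, 0.0, 0.001, 0.0)))
```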
Table 1: Summary of crystallographic data of 1\(T\)-TaTe\({}_{2}\).

| | 110 K (low \(T\) phase) | 175 K (high \(T\) phase) |
|---|---|---|
| Wavelength (Å) | 0.31007 | 0.31007 |
| Crystal dimension (\(\mu\)m\({}^{3}\)) | 20\(\times\)20\(\times\)20 | 20\(\times\)20\(\times\)20 |
| Space group | \(C2/m\) | \(C2/m\) |
| \(a\) (Å) | 14.7669(3) | 14.7408(4) |
| \(b\) (Å) | 10.8702(2) | 3.62940(10) |
| \(c\) (Å) | 9.2926(2) | 9.3287(2) |
| \(\beta\) (°) | 110.630(8) | 110.864(8) |
| \(V\) (Å\({}^{3}\)) | 1395.99(5) | 466.36(3) |
| \(Z\) | 18 | 6 |
| \(F(000)\) | 3186 | 1062 |
| (sin\(\theta/\lambda\))\({}_{Max}\) (Å\({}^{-1}\)) | 1.3513 | 1.3512 |
| \(N_{Total,obs}\) | 92313 | 32489 |
| \(N_{Unique,obs}\) | 13372 | 4637 |
| Average redundancy | 6.9 | 7.0 |
| Completeness | 0.904 | 0.894 |

Structural analysis using anisotropic displacement parameters:

| | 110 K | 175 K |
|---|---|---|
| \(R_{1}\) [# of reflections] | 0.066 [12475] | 0.0467 [4259] |
| \(R_{1}\) (\(I>4\sigma\)) [# of reflections] | 0.0445 [9365] | 0.0358 [3586] |
| GOF [# of reflections] | 1.035 [12475] | 1.057 [4259] |

Structural analysis without using anisotropic displacement parameters\({}^{*}\):

| | 110 K | 175 K | 175 K |
|---|---|---|---|
| Split-site model | Not used | Use | Not used |
| \(R_{1}\) [# of reflections] | 0.0673 [12475] | 0.0468 [4259] | 0.1318 [4259] |
| \(R_{1}\) (\(I>4\sigma\)) [# of reflections] | 0.0456 [9365] | 0.0359 [3586] | 0.1152 [3586] |
| GOF [# of reflections] | 1.037 [12475] | 1.048 [4259] | 1.093 [4259] |

\({}^{*}\) In this analysis, isotropic ADP is used only for the Ta1 site, and anisotropic ADP is used for the remaining sites.

Table 2: Structural parameters of 1\(T\)-TaTe\({}_{2}\) at 175 K without the split-site model.

| site | Wyck. | Occ. | \(x/a\) | \(y/b\) | \(z/c\) | \(U_{eq}\) (Å\({}^{2}\)) |
|---|---|---|---|---|---|---|
| Ta1 | \(2d\) | 1 | 1/2 | 1/2 | 0 | 0.01177(4) |
| Ta2 | \(4i\) | 1 | 0.63944(2) | 0 | 0.29083(2) | 0.00437(2) |
| Te1 | \(4i\) | 1 | 0.79655(2) | 1/2 | 0.37790(2) | 0.00419(3) |
| Te2 | \(4i\) | 1 | 0.49515(2) | -1/2 | -0.30921(2) | 0.00431(3) |
| Te3 | \(4i\) | 1 | 0.64883(2) | 0 | 0.01094(2) | 0.00551(3) |

Table 3: Anisotropic atomic displacement parameters of 1\(T\)-TaTe\({}_{2}\) at 175 K without the split-site model.

Table 4: Structural parameters of 1\(T\)-TaTe\({}_{2}\) at 175 K with the split-site model.

| site | Wyck. | Occ. | \(x/a\) | \(y/b\) | \(z/c\) | \(U_{eq}\) (Å\({}^{2}\)) |
|---|---|---|---|---|---|---|
| Ta1 | \(4g\) | 0.5 | 1/2 | -0.53555(7) | 0 | 0.00430(3) |
| Ta2 | \(4i\) | 1 | 0.63944(2) | 0 | 0.29083(2) | 0.00441(2) |
| Te1 | \(4i\) | 1 | 0.79655(2) | 1/2 | 0.37788(2) | 0.00425(3) |
| Te2 | \(4i\) | 1 | 0.49516(2) | -1/2 | -0.30919(2) | 0.00432(2) |
| Te3 | \(4i\) | 1 | 0.64882(2) | 0 | 0.01091(2) | 0.00551(3) |

Table 5: Structural parameters of 1\(T\)-TaTe\({}_{2}\) at 110 K without the split-site model.

| site | Wyck. | Occ. | \(x/a\) | \(y/b\) | \(z/c\) | \(U_{eq}\) (Å\({}^{2}\)) |
|---|---|---|---|---|---|---|
| Ta1a | \(2d\) | 1 | 0 | 1/2 | 1/2 | 0.00234(3) |
| Ta1b | \(4h\) | 1 | 0 | 0.19316(2) | 1/2 | 0.00205(2) |
| Ta2a | \(8j\) | 1 | 0.85918(2) | 0.33810(2) | 0.21004(2) | 0.00196(2) |
| Ta2b | \(4i\) | 1 | 0.86247(2) | 0 | 0.20874(2) | 0.00229(2) |
| Te1a | \(8j\) | 1 | 0.70510(2) | -0.16609(2) | 0.12266(2) | 0.00228(2) |
| Te1b | \(4i\) | 1 | 0.79909(2) | 0 | -0.12305(3) | 0.00223(3) |
| Te2a | \(4i\) | 1 | 0.99543(2) | 1/2 | 0.18923(3) | 0.00216(3) |
| Te2b | \(8j\) | 1 | 0.99419(2) | 0.17096(2) | 0.19070(2) | 0.00226(2) |
| Te3a | \(4i\) | 1 | 0.86348(2) | 0 | 0.49175(3) | 0.00207(3) |
| Te3b | \(8j\) | 1 | 0.84603(2) | 0.33834(2) | 0.49008(2) | 0.00215(2) |
\begin{table}
\begin{tabular}{c c c c c c c c} & \multicolumn{6}{c}{atomic coordinates} \\ site & Wyck. & Occ. & \(x/a\) & \(y/b\) & \(z/c\) & \(U_{eq}\) (Å\({}^{2}\)) \\ \hline Nb1 & \(4g\) & 0.5 & 1/2 & -0.52321(19) & 0 & 0.00812(5) \\ Nb2 & \(4i\) & 1 & 0.63892(2) & 0 & 0.29044(2) & 0.00830(3) \\ Te1 & \(4i\) & 1 & 0.79723(2) & 1/2 & 0.37831(2) & 0.00889(3) \\ Te2 & \(4i\) & 1 & 0.50338(2) & -1/2 & 0.30954(2) & 0.00868(3) \\ Te3 & \(4i\) & 1 & 0.64941(2) & 0 & 0.00948(2) & 0.00865(3) \\ \end{tabular}
\end{table}
Table 1: Structural parameters of 1T-NbTe\({}_{2}\) at 300 K with the split-site model.
\begin{table}
\begin{tabular}{|c|c|} \hline Temperature (K) & 300 \\ \hline Wavelength (Å) & 0.31011 \\ \hline Crystal dimension (\(\mu\)m\({}^{3}\)) & 20\(\times\)20\(\times\)10 \\ \hline space group & \(C2/m\) \\ \hline \(a\) (Å) & 14.6619(5) \\ \hline \(b\) (Å) & 3.63760(10) \\ \hline \(c\) (Å) & 9.3144(3) \\ \hline \(\beta\) (\({}^{\circ}\)) & 110.070(8) \\ \hline \(V\) (Å\({}^{3}\)) & 466.61(3) \\ \hline \(Z\) & 6 \\ \hline \(F(000)\) & 870 \\ \hline (sin\(\theta\)/\(\lambda\))\({}_{Max}\) (Å\({}^{-1}\)) & 1.2499 \\ \hline \(N_{Total,obs}\) & 29479 \\ \hline \(N_{Unique,obs}\) & 3618 \\ \hline Average redundancy & 8.1 \\ \hline Completeness & 0.878 \\ \hline \hline Structural analysis using anisotropic displacement parameters \\ \hline \hline \(R_{1}\) [\# of reflections] & 0.0434 [3494] \\ \hline \(R_{1}\) (\(I>4\sigma\)) [\# of reflections] & 0.0290 [2778] \\ \hline GOF [\# of reflections] & 0.981 [3494] \\ \hline \hline Structural analysis without using anisotropic displacement parameters\({}^{*}\) \\ \hline \hline split-site model & Use & Not used \\ \hline \(R_{1}\) [\# of reflections] & 0.0441 [3494] & 0.0564 [3494] \\ \hline \(R_{1}\) (\(I>4\sigma\)) [\# of reflections] & 0.0294 [2778] \\ \hline GOF [\# of reflections] & 0.998 [3494] & 1.061 [3494] \\ \hline \end{tabular}
\end{table}
Table 7: Summary of crystallographic data of 1\(T\)-NbTe\({}_{2}\) at 300 K.
\begin{table}
\begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{atomic coordinates} \\ site & Wyck. & Occ. & \(x/a\) \\ \hline Nb1 & \(2d\) & 1 & 1/2 & -1/2 & 0 & 0.01070(4) \\ Nb2 & \(4i\) & 1 & 0.638910(14) & 0 & 0.29045(2) & 0.00829(3) \\ Te1 & \(4i\) & 1 & 0.797228(11) & 1/2 & 0.378315(18) & 0.00888(3) \\ Te2 & \(4i\) & 1 & 0.503383(11) & -1/2 & 0.309539(18) & 0.00867(3) \\ Te3 & \(4i\) & 1 & 0.649411(11) & 0 & 0.009461(18) & 0.00865(3) \\ \end{tabular}
\end{table}
Table 8: Anisotropy atomic displacement parameters of 1\(T\)-NbTe\({}_{2}\) at 300 K without the split-site model.
\begin{table}
\begin{tabular}{c c c c c c c}
 & \multicolumn{6}{c}{atomic coordinates} \\
site & Wyck. & Occ. & \(x/a\) & \(y/b\) & \(z/c\) & \(U_{eq}\) (Å\({}^{2}\)) \\ \hline
Nb1 & \(2d\) & 1 & 1/2 & -1/2 & 0 & 0.00464(4) \\
Nb2 & \(4i\) & 1 & 0.63853(2) & 0 & 0.20955(2) & 0.00385(3) \\
Te1 & \(4i\) & 1 & 0.79750(2) & 1/2 & 0.37881(2) & 0.00392(3) \\
Te2 & \(4i\) & 1 & 0.50338(11) & -1/2 & 0.30954(2) & 0.00391(3) \\
Te3 & \(4i\) & 1 & 0.64991(2) & 0 & 0.00941(2) & 0.00385(3) \\
\end{tabular}
\end{table}
Table 11: Structural parameters of 1\(T\)-NbTe\({}_{2}\) at 100 K without the split-site model.
\begin{table}
\begin{tabular}{|c|c|}
\hline
Temperature (K) & 100 \\ \hline
Wavelength (Å) & 0.31011 \\ \hline
Crystal dimension (\(\mu\)m\({}^{3}\)) & 20\(\times\)20\(\times\)10 \\ \hline
space group & \(C2/m\) \\ \hline
\(a\) (Å) & 14.5770(3) \\ \hline
\(b\) (Å) & 3.63410(10) \\ \hline
\(c\) (Å) & 9.2961(2) \\ \hline
\(\beta\) (\({}^{\circ}\)) & 109.956(8) \\ \hline
\(V\) (Å\({}^{3}\)) & 462.88(3) \\ \hline
\(Z\) & 6 \\ \hline
\(F(000)\) & 870 \\ \hline
(sin\(\theta\)/\(\lambda\))\({}_{Max}\) (Å\({}^{-1}\)) & 1.2500 \\ \hline
\(N_{Total,obs}\) & 28124 \\ \hline
\(N_{Unique,obs}\) & 3608 \\ \hline
Average redundancy & 7.8 \\ \hline
Completeness & 0.882 \\ \hline\hline
\multicolumn{2}{|c|}{Structural analysis using anisotropic displacement parameters} \\ \hline\hline
\(R_{1}\) [\# of reflections] & 0.0453 [3504] \\ \hline
\(R_{1}\) (\(I>4\sigma\)) [\# of reflections] & 0.0345 [2842] \\ \hline
GOF [\# of reflections] & 0.986 [3504] \\ \hline\hline
\multicolumn{2}{|c|}{Structural analysis without using anisotropic displacement parameters\({}^{*}\)} \\ \hline\hline
split-site model & Used / Not used \\ \hline
\(R_{1}\) [\# of reflections] & 0.0459 [3504] / 0.0469 [3504] \\ \hline
\(R_{1}\) (\(I>4\sigma\)) [\# of reflections] & 0.0349 [2842] / 0.0359 [2842] \\ \hline
GOF [\# of reflections] & 0.993 [3504] / 1.017 [3504] \\ \hline
\end{tabular}
\({}^{*}\) In this analysis, isotropic ADP is used only for the Nb1 site, and anisotropic ADP is used for the remaining sites.
\end{table}
Table 12: Summary of crystallographic data of 1\(T\)-NbTe\({}_{2}\) at 100 K.
\begin{table}
\begin{tabular}{c c c c c c c}
\hline
site & \(U_{11}\) (Å\({}^{2}\)) & \(U_{22}\) (Å\({}^{2}\)) & \(U_{33}\) (Å\({}^{2}\)) & \(U_{12}\) (Å\({}^{2}\)) & \(U_{13}\) (Å\({}^{2}\)) & \(U_{23}\) (Å\({}^{2}\)) \\ \hline
Nb1 & 0.00440(9) & 0.00579(10) & 0.00332(9) & 0 & 0.00079(8) & 0 \\
Nb2 & 0.00426(7) & 0.00336(7) & 0.00319(7) & 0 & 0.00030(5) & 0 \\
Te1 & 0.00431(6) & 0.00301(5) & 0.00359(6) & 0 & 0.00025(4) & 0 \\
Te2 & 0.00465(6) & 0.00315(5) & 0.00350(5) & 0 & 0.00086(4) & 0 \\
Te3 & 0.00457(6) & 0.00290(5) & 0.00353(5) & 0 & 0.00069(4) & 0 \\
\hline
\end{tabular}
\end{table}
Table 13: Anisotropic atomic displacement parameters of 1\(T\)-NbTe\({}_{2}\) at 100 K without the split-site model.
\begin{table}
\begin{tabular}{c c c c c c c}
 & \multicolumn{6}{c}{atomic coordinates} \\
site & Wyck. & Occ. & \(x/a\) & \(y/b\) & \(z/c\) & \(U_{eq}\) (Å\({}^{2}\)) \\ \hline
Nb1 & \(4g\) & 0.5 & 1/2 & 0.5113(4) & 0 & 0.00405(6) \\
Nb2 & \(4i\) & 1 & 0.63854(2) & 0 & 0.29028(2) & 0.00384(4) \\
Te1 & \(4i\) & 1 & 0.79751(2) & 1/2 & 0.37881(2) & 0.00391(3) \\
Te2 & \(4i\) & 1 & 0.50338(2) & -1/2 & 0.30954(2) & 0.00390(3) \\
Te3 & \(4i\) & 1 & 0.64991(2) & 0 & 0.00943(2) & 0.00383(3) \\
\end{tabular}
\end{table}
Table 11: Structural parameters of 1\(T\)-NbTe\({}_{2}\) at 100 K with the split-site model.
\begin{table}
\begin{tabular}{|c|c|}
\hline
Temperature (K) & 100 \\ \hline
Wavelength (Å) & 0.311 \\ \hline
Crystal dimension (\(\mu\)m\({}^{3}\)) & 30\(\times\)30\(\times\)30 \\ \hline
space group & \(C2/m\) \\ \hline
\(a\) (Å) & 14.3110(2) \\ \hline
\(b\) (Å) & 3.59625(4) \\ \hline
\(c\) (Å) & 9.09130(10) \\ \hline
\(\beta\) (\({}^{\circ}\)) & 109.602(2) \\ \hline
\(V\) (Å\({}^{3}\)) & 440.77(9) \\ \hline
\(Z\) & 6 \\ \hline
\(F(000)\) & 762 \\ \hline
(sin\(\theta\)/\(\lambda\))\({}_{Max}\) (Å\({}^{-1}\)) & 1.7824 \\ \hline
\(N_{Total,obs}\) & 41843 \\ \hline
\(N_{Unique,obs}\) & 9196 \\ \hline
Average redundancy & 4.6 \\ \hline
Completeness & 0.831 \\ \hline\hline
\multicolumn{2}{|c|}{Structural analysis using anisotropic displacement parameters} \\ \hline\hline
\(R_{1}\) [\# of reflections] & 0.0708 [8327] \\ \hline
\(R_{1}\) (\(I>4\sigma\)) [\# of reflections] & 0.0607 [7414] \\ \hline
GOF [\# of reflections] & 1.323 [8327] \\ \hline\hline
\multicolumn{2}{|c|}{Structural analysis without using anisotropic displacement parameters\({}^{*}\)} \\ \hline\hline
split-site model & Used / Not used \\ \hline
\(R_{1}\) [\# of reflections] & 0.0711 [8327] / 0.0720 [8327] \\ \hline
\(R_{1}\) (\(I>4\sigma\)) [\# of reflections] & 0.0610 [7414] / 0.0620 [7414] \\ \hline
GOF [\# of reflections] & 1.319 [8327] / 1.316 [8327] \\ \hline
\end{tabular}
\({}^{*}\) In this analysis, isotropic ADP is used only for the V1 site, and anisotropic ADP is used for the remaining sites.
\end{table}
Table 10: Summary of crystallographic data of 1\(T\)-VTe\({}_{2}\) at 100 K.
\begin{table}
\begin{tabular}{c c c c c c c}
 & \multicolumn{6}{c}{atomic coordinates} \\
site & Wyck. & Occ. & \(x/a\) & \(y/b\) & \(z/c\) & \(U_{eq}\) (Å\({}^{2}\)) \\ \hline
V1 & \(2d\) & 1 & 1/2 & -1/2 & 0 & 0.00503(6) \\
V2 & \(4i\) & 1 & 0.64217(4) & 0 & 0.29701(6) & 0.00431(4) \\
Te1 & \(4i\) & 1 & 0.79577(2) & 1/2 & 0.37769(2) & 0.00396(2) \\
Te2 & \(4i\) & 1 & 0.50775(2) & -1/2 & 0.30650(2) & 0.00377(2) \\
Te3 & \(4i\) & 1 & 0.64568(2) & 0 & 0.01414(2) & 0.00378(2) \\
\end{tabular}
\end{table}
Table 11: Structural parameters of 1\(T\)-VTe\({}_{2}\) at 100 K without the split-site model.
\begin{table}
\begin{tabular}{c c c c c c c}
 & \multicolumn{6}{c}{atomic coordinates} \\
site & Wyck. & Occ. & \(x/a\) & \(y/b\) & \(z/c\) & \(U_{eq}\) (Å\({}^{2}\)) \\ \hline
V1 & \(4g\) & 0.5 & 1/2 & 0.5135(5) & 0 & 0.00428(7) \\
V2 & \(4i\) & 1 & 0.64217(4) & 0 & 0.29696(6) & 0.00431(4) \\
Te1 & \(4i\) & 1 & 0.79577(2) & 1/2 & 0.37769(2) & 0.00396(2) \\
Te2 & \(4i\) & 1 & 0.50775(2) & -1/2 & 0.30650(2) & 0.00377(2) \\
Te3 & \(4i\) & 1 & 0.64568(2) & 0 & 0.01414(2) & 0.00378(2) \\
\end{tabular}
\end{table}
Table 12: Structural parameters of 1\(T\)-VTe\({}_{2}\) at 100 K with the split-site model.
2310.09915 | Impact of higher-order dispersion on frequency-modulated combs | Frequency-modulated (FM) combs form spontaneously in free-running
semiconductor lasers and possess a vast potential for spectroscopic
applications. Despite recent progress in obtaining a conclusive theoretical
description, experimental FM combs often exhibit non-ideal traits, which
prevents their widespread use. Here we explain this by providing a clear
theoretical and experimental study of the impact of the higher-order dispersion
on FM combs. We reveal that spectrally-dependent dispersion is detrimental for
comb performance and leads to a decreased comb bandwidth and the appearance of
spectral holes. These undesirable traits can be mended by applying a
radio-frequency modulation of the laser bias. We show that electrical
injection-locking of the laser leads to a significant increase of the comb
bandwidth, uniform-like spectral amplitudes, and the rectification of the
instantaneous frequency to recover a nearly linear frequency chirp of FM combs. | Nikola Opačak, Barbara Schneider, Jérôme Faist, Benedikt Schwarz | 2023-10-15T18:52:36Z | http://arxiv.org/abs/2310.09915v1 | # Impact of higher-order dispersion on frequency-modulated combs
###### Abstract
Frequency-modulated (FM) combs form spontaneously in free-running semiconductor lasers and possess a vast potential for spectroscopic applications. Despite recent progress in obtaining a conclusive theoretical description, experimental FM combs often exhibit non-ideal traits, which prevents their widespread use. Here we explain this by providing a clear theoretical and experimental study of the impact of the higher-order dispersion on FM combs. We reveal that spectrally-dependent dispersion is detrimental for comb performance and leads to a decreased comb bandwidth and the appearance of spectral holes. These undesirable traits can be mended by applying a radio-frequency modulation of the laser bias. We show that electrical injection-locking of the laser leads to a significant increase of the comb bandwidth, uniform-like spectral amplitudes, and the rectification of the instantaneous frequency to recover a nearly linear frequency chirp of FM combs.
Perfectly periodic waveforms of light, known as optical frequency combs, stand as one of the pillars of modern optics, with applications ranging from fundamental science to precise frequency metrology [1]. Recent years have witnessed enormous efforts to decrease the footprint of comb solutions, inciting rapid progress of chip-scale integrated comb generators. Among these, semiconductor Fabry-Perot (FP) lasers are of particular importance, as they are compact and possess substantial broadband gain provided by electrical pumping. Research interest in semiconductor lasers has recently peaked, largely owing to their capacity to emit so-called frequency-modulated (FM) combs. These combs constitute an exciting novel alternative for generating equidistant comb spectra, reported to date in various semiconductor laser types such as the quantum cascade laser (QCL) [2; 3; 4], the interband cascade laser [5], the quantum dot laser [6], the VCSEL [7], and the quantum well laser diode [8; 9; 10]. The traditional and well-established amplitude-modulated (AM) combs, comprising a train of short periodic light pulses emitted from mode-locked lasers [11], often rely on a series of external optical elements to form, potentially requiring tabletop-sized setups. In sharp contrast, FM combs form spontaneously in semiconductor FP lasers without the need for any additional optical elements, which makes them especially appealing for integrated applications. FM combs stand out all the more because they are not characterized by pulses, but rather by a constant intensity whose instantaneous frequency is modulated with a periodic linear chirp.
The physical origin of FM combs was the topic of exhaustive theoretical studies in recent years. Initial studies revealed the crucial roles of spatial hole burning (SHB) and four-wave mixing (FWM) in the formation of equidistant multimode spectra [12; 13; 14]. However, the particular modal phase arrangement of FM combs, which leads to their strikingly conspicuous linear frequency chirp, originates purely from a Kerr third-order optical nonlinearity or a group velocity dispersion (GVD) present in the laser system with finite facet reflectivities [15; 16; 17; 18; 19]. The latter was particularly shocking, as even a small GVD was believed to be detrimental for coherent laser operation, as in the case of AM combs. Despite this notion, a finite GVD was shown to not only be potentially necessary for coherent comb emission, but also to increase the optical bandwidth of the comb [20]. Despite the evident breakthroughs in theoretical understanding, experimental FM combs often continue to be plagued by poor performance, exhibiting nonuniform spectra and low optical bandwidth, thus limiting their use.
In this work, we present a combined experimental and theoretical study of the influence of higher-order dispersion on FM combs. Although the role of higher-order dispersion is well understood in the formation of other comb types, e.g., Kerr combs [21], where it is even utilized to create octave-spanning spectra [22], its impact on FM combs has remained unclear. We demonstrate that a spectrally-dependent dispersion has a severe impact on FM comb performance. Increasing the higher-order (third) contribution leads to poor experimentally-observed characteristics, such as nonuniform comb spectra, a nonlinear frequency chirp, and spectral narrowing, which are undesirable for applications. Lastly, we show that even in the case of high third-order dispersion, the negative impact on FM combs can be diminished and even reversed by applying a radio-frequency (RF) modulation of the laser bias. The resulting injection-locking of the laser yields the highly desired larger comb bandwidth and a uniform optical spectrum, and recovers the linear-like frequency chirp.
To theoretically study the behavior of FM combs, we employ spatio-temporal numerical simulations of the master equation, derived from the Maxwell-Bloch system [15; 16]. A typical theoretical FM comb, obtained with a second-order (constant) dispersion of \(1500\,\mathrm{fs}^{2}/\mathrm{mm}\) in a \(4\,\mathrm{mm}\) long FP cavity, is displayed in Fig. 1a). It exhibits an ideal uniform-like spectrum with the distinct
linear intermodal phases that cover the full spectral range of \(2\pi\), corresponding to a linear frequency chirp and a quasi-constant intensity output. In the absence of Kerr nonlinearity, GVD provides the only mechanism that shapes the linear chirp of FM combs in FP lasers with partially reflective facets. Fig. 1b) displays the autocorrelation value of simulated laser states, as the second-order dispersion and the laser pump are swept, where lasing occurs for pump values larger than 1. The autocorrelation is calculated between the emitted field over one roundtrip and the emitted field delayed by 500 roundtrips. It is evident that an FM comb, characterized by an autocorrelation of 1 (smaller values indicate an unlocked state), is obtained only for a nonzero value of the GVD. Furthermore, the minimum value of GVD required for stable comb formation increases with the laser pump (dashed-dotted line). The potential presence of a nonzero Kerr nonlinearity would shift the FM comb parameter space horizontally along the GVD-axis [15; 16; 18; 20]. Not all obtained comb states possess equally appealing characteristics, which is apparent from Fig. 1c), where we display the comb bandwidth. Aiming for the largest spectral width, the laser should be operated at a high bias point, and have a dispersion that is just large enough to obtain stable comb operation, but not larger. The influence of the FP cavity length is visible in Fig. 1d), where we have swept the laser pump for 4 different lengths while keeping the GVD constant. While strongly dependent on the laser bias, the comb bandwidth increases for shorter cavities, at a fixed pump. Identical qualitative behavior was obtained from the analytic theory [18; 19], although the predicted dependence on the cavity length was stronger, probably because the finite laser gainwidth is neglected in the analytic approach.
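The comb metric behind Fig. 1b) is straightforward to reproduce. The following minimal sketch (our illustration, not code from the paper; the array layout and the normalization are assumptions) computes the normalized overlap between the field emitted during one roundtrip and the field 500 roundtrips later:

```python
import numpy as np

def comb_autocorrelation(field_per_roundtrip, delay=500):
    """Normalized autocorrelation between the complex emitted field over one
    roundtrip and the same trace `delay` roundtrips later.

    field_per_roundtrip : complex array of shape (n_roundtrips, n_samples).
    Returns a value in [0, 1]; 1 indicates a perfectly periodic (locked) state.
    """
    a = field_per_roundtrip[-delay - 1]  # reference roundtrip
    b = field_per_roundtrip[-1]          # trace `delay` roundtrips later
    overlap = np.abs(np.vdot(a, b))      # |<a|b>|, conjugating a
    norm = np.sqrt(np.vdot(a, a).real * np.vdot(b, b).real)
    return overlap / norm
```

By the Cauchy-Schwarz inequality the returned value cannot exceed 1, so thresholding it close to 1 separates locked combs from the unlocked states in Fig. 1b).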
Unlike the ideal theoretical FM combs, experimentally-obtained results are often riddled with undesired traits such as nonuniform spectra with holes, as shown in the top of Fig. 2a), measured for a 4 mm long FP QCL. More details on the laser active region and cavity design can be found in [23]. Employing Shifted-Wave Interference Fourier Transform Spectroscopy (SWIFTS) [24], we can prove the coherence of the comb state and extract the intermodal phases
Figure 1: **Simulated FM frequency combs induced by constant GVD.****(a)** The intensity spectrum (top) and the intermodal phases (bottom) of a comb obtained for GVD=1500 fs\({}^{2}\)/mm. **(b)** Calculated autocorrelation as both the GVD and the laser pump are swept. The sign of the GVD only affects the direction of the frequency chirp. The pump is defined as \((J-J_{tr})/(J_{th}-J_{tr})\), where \(J\) is the laser current density, \(J_{th}\) is the lasing threshold, and \(J_{tr}\) is the transparency current. Lasing occurs for pump values larger than 1. FM combs are obtained in the indicated parameter space, separated by a dashed-dotted line from the unlocked states. Each pixel represents a state obtained after simulated 20 000 cavity roundtrips. **(c)** Obtained spectral width for the same parameter sweep. The optimal GVD, which yields the largest comb bandwidth, coincides with the minimum dispersion necessary for stable comb formation, indicated by the dashed-dotted black line. **(d)** Comb bandwidth at a fixed GVD value as the pump is swept for different FP cavity lengths.
Figure 2: **Experimentally-measured FM comb with a large higher-order dispersion.****(a)** Intensity spectrum exhibiting a nonuniform amplitude distribution and a spectral hole in the middle (top); SWIFTS spectrum (middle) matches the geometric average of neighboring intensity spectrum amplitudes, indicating comb operation; intermodal phases (bottom). **(b)** Piecewise-linear instantaneous frequency, indicated with the dashed lines. The discontinuity occurs due to the hole in the middle of the comb spectrum. **(c)** Measured subthreshold GVD contains linear and higher-order contributions. The measurement was taken at a bias of 0.59 A, just below the lasing threshold.
by recording interferograms at the comb roundtrip frequency. Due to the hole in the middle of the intensity spectrum, the intermodal phases arrange themselves in two distinct linear patterns, together covering the whole range of \(2\pi\). As a consequence, the instantaneous frequency is a piecewise linear function during one cavity roundtrip (Fig. 2b)). The experimental FM comb state bears little resemblance to the simulated ideal state in Fig. 1, prompting a search for the cause of the observed degradation. Fig. 2c) displays the measured subthreshold dispersion [25], showing non-constant values with a significant linear contribution around the lasing range, corresponding to a large third-order dispersion. This is in sharp contrast with the simulations in Fig. 1, where constant second-order dispersion values were used, already suggesting that the probable culprit behind nonuniform experimental spectra is higher-order dispersion.
To corroborate this hypothesis, we performed numerical simulations of the master equation incorporating the third-order dispersion \(k^{(3)}\), to account just for the linear dependence of the dispersion observed in Fig. 2c). This contribution is sufficient to explain the experimental behavior. The total dispersion is then \(\text{GVD}(\omega)=k^{(2)}+k^{(3)}(\omega-\omega_{0})\), where \(k^{(2)}\) is the second-order dispersion, and \(\omega\) is the optical angular frequency. Mathematically, the additional term in the master equation is introduced as a third-order temporal derivative of the light field [26]. Terms \(\mathcal{O}(k^{(4)})\) describe a nonlinear dispersion dependence \(\mathcal{O}(\omega^{2})\) and are omitted as they would introduce higher-order derivatives and issues with numerical stability. The origin of \(k^{(3)}\) is considered to be purely the cavity and material dispersion. An additional contribution is due to the gain spectral profile itself and is expected to increase in importance for narrower gainwidths or inhomogeneous gain broadening, e.g., in heterogeneous QCLs [27]. Column a) in Fig. 3 shows the gradual increase of the third-order dispersion that was inserted into the numerical simulations to study its impact on the comb state shown in Fig. 1a). Columns b), c), and d) in Fig. 3 display the resulting comb spectra, intermodal phases, and the instantaneous frequency, respectively. Although maintaining comb operation, the intensity spectra progressively develop an irregular amplitude distribution, resembling experimental comb spectra that are often found in the literature [2; 3; 4; 5; 6; 7; 8; 9; 10]. At the same time, the intermodal phases and the instantaneous frequency increasingly deviate from a linear pattern. For the highest value of third-order dispersion (\(k^{(3)}=400\,\text{fs}^{2}/\upmu m\)), the spectrum splits into two separated lobes, as in Fig. 2a). Further resemblance to the experimental state is observed from the intermodal phases and the instantaneous frequency, which approximately becomes a piecewise linear function during one roundtrip.
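How a frequency-dependent GVD of this form enters a propagation model can be illustrated with a generic split-step dispersion step. The sketch below is not the authors' master-equation solver; it merely applies, in the spectral domain, the phase accumulated over a step \(dz\) for \(\text{GVD}(\omega)=k^{(2)}+k^{(3)}(\omega-\omega_{0})\), i.e. \(\phi(\Delta\omega)=\left(\tfrac{1}{2}k^{(2)}\Delta\omega^{2}+\tfrac{1}{6}k^{(3)}\Delta\omega^{3}\right)dz\):

```python
import numpy as np

def dispersion_step(envelope, dt, dz, k2, k3):
    """Apply one split-step dispersion step for GVD(w) = k2 + k3*(w - w0).

    envelope : complex field envelope sampled every dt seconds
    dz : propagation step [m]; k2 and k3 in SI units (the SI equivalents
         of the fs-based values quoted in the text)
    """
    dw = 2.0 * np.pi * np.fft.fftfreq(envelope.size, d=dt)  # offset from w0
    phase = (0.5 * k2 * dw**2 + k3 * dw**3 / 6.0) * dz
    return np.fft.ifft(np.fft.fft(envelope) * np.exp(1j * phase))
```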
In order to reach the full potential of FM combs for widespread applications, it is of crucial importance to achieve reproducible high-quality comb behavior, as in the numerical simulations. First of all, having a broadband uniform comb spectrum is of high interest for spectroscopy. On the other hand, obtaining a linear frequency chirp allows the use of a pulse compressor via group delay compensation [28] to achieve femtosecond pulse emission [29]. In Fig. 4 we demonstrate, both experimentally and theoretically, how to recover these highly desired traits even in FM combs that possess a large higher-order dispersion
Figure 3: **Influence of the third-order dispersion on FM combs in numerical simulations.****(a)** Total spectrally-dependent \(\text{GVD}(\omega)\) with an increasing third-order (linear) dispersion \(k^{(3)}\). Fourth- and higher-order contributions are omitted for the sake of numerical stability. The evolution of the **(b)** intensity spectra, **(c)** intermodal phases, and the **(d)** instantaneous frequency as the third-order dispersion is increased.
by employing an RF modulation of the laser bias. Modulating the bias of a semiconductor laser at the roundtrip frequency enables coherent control of many of its characteristics; this has so far been used successfully to injection-lock the comb beatnote [5, 6, 30], eliminate higher-order transverse cavity modes [31], emit actively mode-locked pulses [32], and increase the spectral bandwidth of FM combs [20, 23]. The modulation should be done on a short end-section of the FP cavity to maximize the effect [33]. In the top of Fig. 4 we show free-running experimental and simulated FM comb states with large higher-order dispersion, taken from Fig. 2a) and the bottom of Fig. 3b), respectively. The impact of the RF injection is visible from the panels below, showing agreement between the experimental and simulation results. The comb spectra become more uniform and the spectral hole disappears. Apart from this, another striking effect is reflected in a significant broadening of the comb bandwidth of around \(100\,\%\)[20, 23]. In the theoretical model, the modulation is directly implemented in the laser bias of a short section covering \(10\,\%\) of the cavity as \(J(t)=J_{\mathrm{DC}}+J_{\mathrm{m}}\cos(\omega_{\mathrm{m}}t)\), where \(\omega_{\mathrm{m}}\) is the modulation frequency and we set \(J_{\mathrm{m}}=0.05J_{\mathrm{DC}}\)[32]. In the experiment, a 4 mm Fabry-Perot QCL with a microstrip-like line geometry [23] was used. Both a DC bias of 950 mA and a 25 dBm AC bias modulation at 11.090 GHz, which matches the free-running intermode beating, were supplied using a high-power coplanar RF-probe. While the device has no dedicated modulation section, the point of contact with the probe is very close to the front-facet, which has a similar effect to modulating a short end-section. The impact of bias modulation is visible from the intermodal phases as well, which splay over \(2\pi\) in a single nearly linear pattern. Slightly worse behavior in experiments could be explained by the existence of finite \(\mathcal{O}(k^{(4)})\) terms. The piecewise linear instantaneous frequency of free-running combs becomes significantly rectified, thus enabling the use of group delay compensation schemes to achieve pulse compression.
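For reference, the modulated pump used in the simulations can be encoded as a simple space- and time-dependent profile. This is a minimal sketch with our own (hypothetical) argument names, following \(J(t)=J_{\mathrm{DC}}+J_{\mathrm{m}}\cos(\omega_{\mathrm{m}}t)\) with \(J_{\mathrm{m}}=0.05J_{\mathrm{DC}}\) applied only to the final \(10\,\%\) of the cavity:

```python
import numpy as np

def pump_profile(z, t, J_dc, cavity_length, f_mod,
                 mod_depth=0.05, section_fraction=0.10):
    """Pump density J(z, t) with RF modulation on a short end-section.

    z : position(s) along the cavity [m]; t : time [s]
    f_mod : modulation frequency [Hz], near the cavity roundtrip frequency
    """
    in_section = z >= (1.0 - section_fraction) * cavity_length
    return J_dc + in_section * mod_depth * J_dc * np.cos(2.0 * np.pi * f_mod * t)
```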
In this work, we provided a study of the impact of higher-order dispersion on FM combs by utilizing both numerical simulations of a spatio-temporal theoretical model and experimental measurements. We reveal that the presence of finite higher-order dispersion \(\mathcal{O}(k^{(3)})\) negatively affects FM comb characteristics by lowering the spectral bandwidth, producing spectral holes in the spectrum, and resulting in a nonlinear frequency chirp. In order to boost FM combs for broadband spectroscopic applications, it is therefore crucial to mitigate higher-order dispersion if possible. One way of achieving this would be to tailor the spectral gain profile or change the laser cavity in an effort to make the GVD constant within the lasing range. Here we demonstrated a simple technique that mitigates the impact of higher-order dispersion without relying on any modifications of the existing laser system. By modulating the laser current around the roundtrip frequency we overcome the severe limitations imposed by the higher-order dispersion by doubling the comb bandwidth, flattening the intensity spectrum, and recovering a nearly linear frequency chirp. The further use of RF modulation could enhance the use of FM combs for dual-comb spectroscopy.
## Acknowledgements
N. O. and B. Schwarz have received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 853014). B. Schneider and J. F. have received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 820419) and from the Swiss Innovation Agency Innosuisse (Grant agreement No. 2155008433).
|
2304.10303 | Avoiding methane emission rate underestimates when using the divergence
method | Methane is a powerful greenhouse gas, and a primary target for mitigating
climate change in the short-term future due to its relatively short atmospheric
lifetime and greater ability to trap heat in Earth's atmosphere compared to
carbon dioxide. Top-down observations of atmospheric methane are possible via
drone and aircraft surveys as well as satellites such as the TROPOspheric
Monitoring Instrument (TROPOMI). Recent work has begun to apply the divergence
method to produce regional methane emission rate estimates. Here we show that
when the divergence method is applied to spatially incomplete observations of
methane, it can result in negatively biased time-averaged regional emission
rates. We show that this effect can be counteracted by adopting a procedure in
which daily advective fluxes of methane are time-averaged before the divergence
method is applied. Using such a procedure with TROPOMI methane observations, we
calculate yearly Permian emission rates of 3.1, 2.4 and 2.7 million tonnes per
year for the years 2019 through 2021. We also show that highly-resolved plumes
of methane can have negatively biased estimated emission rates by the
divergence method due to the presence of turbulent diffusion in the plume, but
this is unlikely to affect regional methane emission budgets constructed from
TROPOMI observations of methane. The results from this work are expected to
provide useful guidance for future implementations of the divergence method for
emission rate estimation from satellite data -- be it for methane or other
gaseous species in the atmosphere. | Clayton Roberts, Rutger IJzermans, David Randell, Matthew Jones, Philip Jonathan, Kaisey Mandel, Bill Hirst, Oliver Shorttle | 2023-04-20T13:33:06Z | http://arxiv.org/abs/2304.10303v3 | # Avoiding methane emission rate underestimates when using the divergence method
###### Abstract
Methane is a powerful greenhouse gas, and a primary target for mitigating climate change in the short-term future due to its relatively short atmospheric lifetime and greater ability to trap heat in Earth's atmosphere compared to carbon dioxide. Top-down observations of atmospheric methane are possible via drone and aircraft surveys as well as satellites such as the TROPOSpheric Monitoring Instrument (TROPOMI). Recent work has begun to apply the divergence method to produce regional methane emission rate estimates. Here we show that spatially incomplete observations of methane can produce negatively biased time-averaged regional emission rate estimates via the divergence method, but that this effect can be counteracted by adopting a procedure in which daily advective fluxes of methane are time-averaged before the divergence method is applied. Using such a procedure with TROPOMI methane observations, we calculate yearly Permian emission rates of 3.1, 2.4 and 2.7 million tonnes per year for the years 2019 through 2021. We also show that highly-resolved plumes of methane can have negatively biased estimated emission rates by the divergence method due to the presence of turbulent diffusion in the plume, but this is unlikely to affect regional methane emission budgets constructed from TROPOMI observations of methane. The results from this work are expected to provide useful guidance for future implementations of the divergence method for emission rate estimation from satellite data - be it for methane or other gaseous species in the atmosphere.
## 1 Introduction
Methane is a powerful greenhouse gas, with a far greater warming potential (84 times greater on a 20-year timescale) and shorter atmospheric lifetime (9 years instead of centuries) than carbon dioxide [1, 2]. These attributes make methane an attractive target for mitigating the short-term effects of climate change, and have been the focus of recent climate summits and global commitments towards emission reductions [3]. Nevertheless, in recent years, the rate of increase of atmospheric methane has itself increased [4, 5]. Roughly 30% of anthropogenic methane emissions are attributed to the fossil fuel industry [6, 7], making increased monitoring and accounting of emissions from this sector an important factor in meeting national commitments towards methane emission reductions.
Satellite observations are a powerful tool for monitoring atmospheric methane abundances [8], with remote sensing of methane from space providing opportunities for repeated and unscheduled monitoring of emissions. The era of greenhouse gas-observing satellites began with the SCanning Imaging Absorption SpectroMeter for Atmospheric CHartographY (SCIAMACHY) [9] in 2003, and subsequent generations of satellites have given rise to instruments with ever-increasing capabilities. The TROPOspheric Monitoring Instrument (TROPOMI) provides daily global coverage of methane observations with an updated 5.5x7 km\({}^{2}\) pixel resolution [10], whilst other instruments such as GHGSat provide intermittent, targeted methane observations down to 50x50 m\({}^{2}\) resolution [11]. Many greenhouse gas-observing satellites lack the spatial resolution to calculate asset-level emissions, and so aircraft and drone surveys are used to bridge this gap. These instruments can image and estimate facility-level methane emission rates [12, 13], and such facility-level measurements can be used for reporting under the Oil & Gas Methane Partnership 2.0 (OGMP
2.0) framework. This is a multi-stakeholder initiative launched by UNEP and the Climate and Clean Air Coalition, aimed at improving the accuracy and transparency of methane emissions reporting in the oil and gas sector [14]. Recently, "top-down" methane emission estimates calculated from aircraft observations have been found to be in disagreement with "bottom-up" emission estimates reported from industrial activity [15, 16], and more work is required to reconcile these standards of reporting methane emissions.
There are a variety of methods for constructing top-down emission estimates from satellite observations of methane. Some analyses use forward models of Gaussian plumes and Bayesian methods to estimate regional methane emission rates [17]. Bayesian methods require the specification of priors on spatial distributions of emission rates, which are sometimes constructed from bottom-up emission estimates. However, another method to estimate emissions using top-down observations is via the divergence method [18]. The divergence method for estimating the spatial distribution of methane emissions is attractive because it is entirely data-driven and does not rely on prior estimates of spatial emission distributions as extensively as Bayesian methods do. In the divergence method, the total sources and sinks of emission \(E\) are calculated via the emission equation
\[E=\nabla\cdot\vec{F}^{\mathrm{adv}}\quad[\mathrm{kg}\,\mathrm{m}^{-2}\mathrm{ s}^{-1}], \tag{1}\]
where \(\vec{F}^{\mathrm{adv}}\) is the advective flux of a quantity of interest (e.g., methane or other pollutants). Originally presented as a method for estimating the location and emission rates of sources of nitrogen dioxide, this methodology is now being used to estimate regional-level methane emission rates [19, 20].
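As a concrete illustration of Eq. 1, the divergence can be evaluated on a regular grid with finite differences. The sketch below is our own minimal version (second-order stencils via `numpy.gradient`; the higher-order stencils actually used in this work are discussed in Sec. 2.1.2), with the flux components formed as the column density times the wind vector:

```python
import numpy as np

def divergence(Fx, Fy, dx, dy):
    """E = dFx/dx + dFy/dy  [kg m^-2 s^-1], cf. Eq. 1.

    Fx, Fy : column-integrated advective flux components [kg m^-1 s^-1],
             e.g. Fx = C * u with C a column density [kg m^-2] and u the
             eastward wind speed [m s^-1].
    dx, dy : grid spacings [m]. NaNs (missing pixels) propagate through
             the derivative stencils.
    """
    dFx_dx = np.gradient(Fx, dx, axis=1)  # x varies along columns
    dFy_dy = np.gradient(Fy, dy, axis=0)  # y varies along rows
    return dFx_dx + dFy_dy
```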
It is important to make explicit some important simplifying assumptions that are currently intrinsic to this methodology. Firstly, plumes of any gas (including methane) propagate through the atmosphere not only by advection, but also molecular and turbulent diffusion [21, 22, 23]; in atmospheric transport, turbulent diffusion is usually the dominant effect over molecular diffusion in practice. The emission equation \(E=\nabla\cdot\vec{F}^{\mathrm{adv}}\) that is central to many regional-level methane budget estimates does not take the effect of turbulent diffusion into account. Correcting for turbulent diffusion requires the usage of a modified emission equation
\[E=\nabla\cdot\left(\vec{F}^{\mathrm{adv}}-\vec{F}^{\mathrm{dif}}\right)\quad[ \mathrm{kg}\,\mathrm{m}^{-2}\mathrm{s}^{-1}], \tag{2}\]
where \(\vec{F}^{\mathrm{dif}}\) is the turbulent diffusive flux of a quantity of interest. Secondly, regional methane emission rate estimates are often time-averaged. The linear property of the divergence operator means that time-averaged estimated emission rates could be calculated either by time-averaging daily estimated emission rates, or by taking the divergence of time-averaged daily fluxes. To the best of our knowledge, no work has yet been done to examine the consequence of this choice of order of operations when the divergence method is applied to methane observations.
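The analytical expressions for \(\vec{F}^{\mathrm{dif}}\) derived in this work (its Eqs. 7 and 8, not reproduced here) are specific to the Gaussian plume model; as a generic stand-in, the standard gradient-diffusion (Fickian) closure \(\vec{F}^{\mathrm{dif}}=-K\nabla C\) with a constant turbulent diffusivity \(K\) can be sketched as follows:

```python
import numpy as np

def diffusive_flux(C, K, dx, dy):
    """Gradient-diffusion closure F_dif = -K * grad(C)  [kg m^-1 s^-1].

    C : column density field [kg m^-2]; K : turbulent diffusivity [m^2 s^-1].
    """
    dC_dy, dC_dx = np.gradient(C, dy, dx)  # derivatives along (rows, columns)
    return -K * dC_dx, -K * dC_dy
```

Eq. 2 is then evaluated by passing \(F^{\mathrm{adv}}_{x}-F^{\mathrm{dif}}_{x}\) and \(F^{\mathrm{adv}}_{y}-F^{\mathrm{dif}}_{y}\) to a divergence routine such as the one sketched above.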
In this work, we derive analytical expressions for \(\vec{F}^{\mathrm{dif}}\), and generate synthetic simulated satellite observations of Gaussian plumes to examine under what physical scenarios it becomes important to include \(\vec{F}^{\mathrm{dif}}\) in the emission equation. We find that when a plume of methane is relatively diffusive, methane emission estimates via the divergence method can become inaccurate if \(\vec{F}^{\mathrm{dif}}\) is excluded from the divergence calculation (under certain conditions). In this case, the estimated emission rate of the source is underestimated, and the spatial distribution of emissions is incorrect. We also demonstrate that time-averaged emission estimates calculated from spatially incomplete observations may be inaccurate if care is not taken to use time-averaged fluxes in the emission equation (as opposed to taking the time-average of daily emission estimates via the divergence method). We compare the results of our synthetic study to a case study of the Permian basin, using TROPOMI observations of methane [24]. We find that it is unlikely that a regional methane emission budget calculation of the Permian via the divergence method would be negatively biased due to the effects of any turbulent diffusion. We do find, though, that the sparse nature of the TROPOMI methane data product results in negatively biased time-averaged methane emission rate estimates, in the case where the divergence method is used to calculate daily emission rate estimates which are then time-averaged to produce the time-averaged emission rate estimate. When using the divergence method in conjunction with spatially sparse observations, it is important to take the divergence of time-averaged daily fluxes to obtain a time-averaged emission estimate.
## 2 Results
### Synthetic case study
We generate simulated satellite observations of ideal steady-state Gaussian plumes [21] resulting from isolated point sources with known emission rates, and use them in an investigative synthetic case study. These plumes are characterised by a known
emission rate \(Q_{\rm true}\) [kg s\({}^{-1}\)], wind speed \(w\) [m s\({}^{-1}\)], wind angle \(\theta\) relative to the x-axis of the observation grid, and constant of turbulent diffusion \(K\) [m\({}^{2}\) s\({}^{-1}\)]. Simulated plume observations are generated via Eq. 3, which is derived in S1. Note that when we calculate grid cell values, we numerically spatially integrate Eq. 3 over the area of the grid cell, and so the results of our synthetic study are independent of grid cell resolution. This is important to bear in mind as real observational instruments will have pixel resolutions spanning from meter to kilometer scales. Fig. 1 shows some examples of how these parameters affect plume morphology. For our simulated observations, we use the divergence method to estimate the spatial emission field both with and without including the diffusive flux term in the emission equation. We find in our synthetic study that there are a variety of scenarios where the use of the divergence method results in negatively biased estimated emission rates, i.e., when turbulent diffusion is neglected in some cases, or when time-averaged estimated emission rates are calculated without time-averaging daily flux fields when daily observations are spatially incomplete.
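Eq. 3 itself is derived in supplement S1 and is not reproduced here. As a stand-in for readers who want to experiment, the sketch below implements the textbook slender-plume solution of the steady-state advection-diffusion equation for a point source with the wind along \(+x\); the rotation by \(\theta\) and the per-grid-cell spatial integration described above are omitted for brevity:

```python
import numpy as np

def plume_column_density(x, y, Q, w, K):
    """Column density [kg m^-2] of an ideal steady-state Gaussian plume.

    Q : source emission rate [kg s^-1]; w : wind speed [m s^-1] along +x;
    K : constant of turbulent diffusion [m^2 s^-1]. The crosswind profile
    is Gaussian with variance sigma^2 = 2*K*x/w, so that integrating w*C
    across any downwind transect recovers Q exactly.
    """
    x = np.maximum(x, 1.0)  # avoid the singularity at the source itself
    sigma2 = 2.0 * K * x / w
    return (Q / w) * np.exp(-y**2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)
```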
#### 2.1.1 Underestimating emission rates due to turbulent diffusion in the plume
Fig. 2 demonstrates the application of the divergence method to a simulated satellite observation of a plume. We find that when estimating the emission field without including the turbulent diffusion term in the emission equation, emissions are incorrectly spatially distributed in the estimated emission field. Rather than estimating a single point of positive emission, we instead find that "arrowhead" shapes of positive emission are estimated, in conjunction with negative emissions (or sinks) downwind within the plume shadow. To obtain a total estimated emission rate \(Q_{\rm est}\), we spatially integrate the estimated emission field within a circle of fixed radius centered on the source location. We find that the total estimated emission rate of the emission field is underestimated. When increasing the opening angle of a plume (which scales as a function of \(K/w\)), the total estimated emission rate decreases for a fixed radius of spatial integration over the estimated emission field. This demonstrates that the emission rates of "diffuse" plumes are poorly estimated when the divergence method relies solely on advective fluxes. To first order, the underestimation is proportional to \(K/w\). It is important to note that, in our synthetic study, we measure estimated emission rates as percentages of the true emission rate, and so results are independent of the mass emission rate of the point source. For a real instrument, higher emission rates mean a higher measured signal-to-noise ratio.
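The integration step just described amounts to a masked sum; a minimal sketch (our own helper with hypothetical argument names) is:

```python
import numpy as np

def integrate_emission(E, X, Y, x0, y0, r, dx, dy):
    """Q_est [kg s^-1]: integral of the estimated emission field E
    [kg m^-2 s^-1] over a disc of radius r [m] centred on (x0, y0).

    X, Y : coordinate grids matching E, e.g. from np.meshgrid.
    """
    mask = (X - x0)**2 + (Y - y0)**2 <= r**2
    return np.nansum(E[mask]) * dx * dy
```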
The total area for spatial integration of the estimated emission field also influences \(Q_{\rm est}\). In Fig. 3 we alter the radius \(r\) of the circular area of integration for a single simulated observation, and find that increasing the radius of integration increases \(Q_{\rm est}\), i.e., the total estimated emission rate is less negatively biased. Although emissions are incorrectly distributed in the estimated emission field, they are distributed such that integrating the estimated emission field over a larger area improves the total estimated emission rate. Thus, we determine that in our procedure for estimating the emission rate of a plume, \(r\) and \(K/w\) are two independent parameters which determine the extent to which \(Q_{\rm est}\) is underestimated. In Fig. 4 we vary \(r\) and \(K/w\) over physically realistic values and examine how greatly \(Q_{\rm est}\) is underestimated for an ideal Gaussian plume resulting from a point source of emission. We find that in the most extreme cases, \(Q_{\rm est}\) can be underestimated by more than 40%.
In practice, different plume measurement methodologies will correspond to different parameter locations on Fig. 4, and have characteristic biases associated with them. We indicate three example regions on the right-hand edge of Fig. 4 where certain instruments tend to lie. Global coverage satellites such as TROPOMI tend to have very large fields of view as well as lower pixel resolutions [9, 10, 25]. Regional emission budgets performed on the scale of tens of kilometers are unlikely to experience a high level of negatively biased emission estimates due to the lack of a diffusive term in the divergence method (e.g., see Sec. 2.2). More targeted satellites have higher pixel resolutions and are capable of imaging plumes on the scale of tens of meters [11]. Although such satellites have fields of view that can still exceed ten kilometers or more, it would still be possible (given the high pixel resolution) to spatially integrate estimated emission fields over small enough areas to experience the outlined negative bias of the uncorrected divergence method, an issue that could arise if attempting to spatially isolate one plume from another adjacent source. Lastly, plume imaging surveys based from aircraft or drones [12, 13] would likely have the smallest field of view and would consequently be most likely to experience negative biases in emission estimates if they were to use the divergence method for emission rate estimation.
The significance of diffusion for the accuracy of the divergence method can be seen most directly when a diffusive term is included in the emission equation, i.e., Eq. 6. Making this addition to the method, we show in Fig. 5 that, when the expression for the diffusive flux (as calculated by Eq. 7) is included in the emission equation, the estimated emission field is correctly constrained to a point source (allowing some slight deviation due to numerical derivative effects). \(Q_{\rm est}\) is then found to equal \(Q_{\rm true}\) precisely.
A caveat to the efficacy of including diffusive flux in the divergence method to restore accurate emission estimates is that it is in practice difficult to estimate the constant of turbulent diffusion \(K\). In our synthetic study, it is possible to know and
choose precisely the correct value of \(K\) to calculate \(\vec{F}^{\rm dif}\), but for real data this is much harder to estimate. Based on a number of real methane plumes measured by GHGSat [26], a typical value of \(K\) was determined to be highly variable and in the range between 10 and 400 m\({}^{2}\)s\({}^{-1}\) (see Sec. 4.4).
#### 2.1.2 Miscalculating emission rates due to missing data
Beirle et al. (2019) [18] was the first study to showcase the divergence method for estimating emission rates, and used TROPOMI observations of nitrogen dioxide to estimate emission rates from cities and power plants. They pointed out that, due to the linear properties of the divergence operator, it was sensible to time-average daily fluxes of nitrogen dioxide first and then take the divergence of the time-averaged flux to obtain a time-averaged estimate of nitrogen dioxide emissions. Beirle et al. (2019) outlined that this was a sensible procedure as the time-averaged nitrogen dioxide advective flux would have a smooth spatial distribution and thus allow for a more accurate calculation of spatial derivatives.
TROPOMI observations of methane differ significantly from those of nitrogen dioxide, in that the spatial coverage of the methane data product is much less complete than the nitrogen dioxide data product [27, 28]. Fig. 6 demonstrates the problem this poses as we look to apply the divergence method to TROPOMI observations of methane; when we randomly mask 10% of the observational data of a single simulated plume, the corresponding emission field estimated via the divergence method is missing a significantly larger amount of spatial coverage, and dramatically underestimates the total emission rate. This is because the numerical methods for calculating spatial derivatives [29, 30] (see Eqs. 9 and 10) require eight valid neighboring data points, and so any missing observational data is magnified into even more missing spatial coverage of the estimated emission field. Time-averaging observations prior to numerically calculating spatial derivatives, and choosing to describe average emission rates over time domains rather than at specific time points, offers a way to circumvent the problem of truncated spatial observations. We next investigate two different methodologies for calculating time-averaged emission estimates using the divergence method: in the first (which we denote by \(\overline{E}_{1}\)), we calculate daily estimates of emission fluxes and then temporally average them [19, 20], and in the second (which we denote by \(\overline{E}_{2}\)), we temporally average daily advective fluxes of column density \(C\) and then take the divergence of the temporally averaged flux to obtain a time-averaged estimated emission field [18]. In this section we do not correct the estimated emission fields of our simulated plumes for the effects of turbulent diffusion as we did in Sec. 2.1.1, and focus only on the differences between the \(\overline{E}_{1}\) and \(\overline{E}_{2}\) methodologies.
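The difference between the two orders of operations is easiest to see in code. The sketch below is our own illustration (reusing the `divergence` helper sketched in the introduction) and assumes the daily flux components are stacked along a leading time axis with missing pixels stored as NaN; NaNs then knock out entire derivative stencils in \(\overline{E}_{1}\), whereas they are largely averaged away before the single divergence in \(\overline{E}_{2}\):

```python
import numpy as np

def time_averaged_emission(daily_Fx, daily_Fy, dx, dy):
    """Return (E1, E2): the two time-averaged emission field estimates.

    daily_Fx, daily_Fy : arrays of shape (n_days, ny, nx); NaN = missing.
    """
    # E1: divergence of each day's flux, then time-average the daily fields.
    daily_E = np.stack([divergence(fx, fy, dx, dy)
                        for fx, fy in zip(daily_Fx, daily_Fy)])
    E1 = np.nanmean(daily_E, axis=0)

    # E2: time-average the daily fluxes first, then take a single divergence.
    E2 = divergence(np.nanmean(daily_Fx, axis=0),
                    np.nanmean(daily_Fy, axis=0), dx, dy)
    return E1, E2
```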
In Fig. 7 we simulate a time-averaged study of 30 steady-state Gaussian plumes. Plume parameters are left unvaried, and each of the 30 repeated simulated observations has a random 30% of its pixels masked. We then display the resulting estimated emission fields obtained via \(\overline{E}_{1}\) and \(\overline{E}_{2}\), and find that the emission field of \(\overline{E}_{2}\) is spatially complete, whereas \(\overline{E}_{1}\) is still missing some spatial coverage. The total integrated emission rate of \(\overline{E}_{1}\) is also severely underestimated, but the total integrated emission rate of \(\overline{E}_{2}\) retrieves the correct time-averaged emission rate (apart from the slight negative bias due to the presence of diffusion in the plume, which was discussed in the previous section). Figures showing the difference in resulting estimated emission fields under non-static conditions are shown in the supplement.
We investigate how the difference in performance of the two methods varies as a function of the amount of daily missing data, and when plume parameters are allowed to vary in time. These results are shown in Fig. 8. We find that as the amount of missing observational data increases, both methodologies underestimate the true average emission rate, but that method \(\overline{E}_{2}\) (i.e., averaging daily fluxes of \(C\) and taking the divergence once) allows for estimates that are far more robust against missing data. This holds true for both static and time-varying simulated plume observations, although the time-averaged estimated emission rates for the time-varying plumes have more variance than the static plumes. This simulation differs from realistic physical scenarios in that data is randomly masked in a physically uncorrelated manner (which would not be the case with cloud cover), but it nonetheless demonstrates that the way in which time-averaged emission estimates are calculated using the divergence method is not trivial. Additionally, Fig. 8 demonstrates that it is also possible to overestimate the source emission rate, though this typically only occurs at a critical "turn over" point where the fraction of daily missing data begins to dominate over the number of repeated observations. Past this point, we find that estimated emission rates will only be underestimated as a consequence of spatially incomplete data, but the exact location of this critical value is highly dependent on the number of repeated observations and the spatial distribution of missing data.
### Permian basin case study
The Permian basin is the largest oil and gas producing region in the United States, producing nearly 6 million barrels of oil a day as of January 2023 [33]. Due to its prominence and size, the Permian is frequently a target of ground-based, airborne, and space-based campaigns monitoring methane emissions [34, 35]. We grid three years of daily TROPOMI methane observations of the Permian basin (2019-2021) onto a 0.2\({}^{\circ}\) x 0.2\({}^{\circ}\) latitude-longitude grid using an area-weighted oversampling [36] and calculate yearly emission estimates. We use the TROPOMI Level 2 methane data product [24], and reduce the TROPOMI-observed column average mixing
\begin{table}
\begin{tabular}{l l l l l l} \hline Year & This work & Veefkind et al. 2023 & Schneising et al. 2020 & Liu et al. 2021 & Zhang et al. 2020 \\
2019 & 3.1 \(\pm\) 0.7 & 3.0 & 2.9 \(\pm\) 1.6 & 3.1 (2.8, 3.8) & 2.7 \(\pm\) 0.5 \\
2020 & 2.4 \(\pm\) 0.6 & 2.8 & 2.3 \(\pm\) 1.7 & - & - \\
2021 & 2.7 \(\pm\) 0.5 & - & - & - & - \\ \hline \end{tabular}
\end{table}
Table 1: Estimates of Permian methane emission rates [Tg/year]. We compare our yearly estimates via the \(\overline{E}_{2}\) methodology (where daily methane fluxes are averaged) to those in other literature and find good agreement. Uncertainties on our time-averaged emission estimates are calculated via the algebraic propagation of the daily variance of advective flux of methane at each grid cell. This methodology is described in supplementary section S4.
\begin{table}
\begin{tabular}{l l l l l} \hline Year & \(\overline{E}_{2}=\nabla\cdot\overline{\vec{F}^{\rm adv}}\) & \(\overline{E}_{1}=\overline{\nabla\cdot\vec{F}^{\rm adv}}\) & \(\overline{E}_{2}-\overline{E}_{1}\) (over \(\cap\)) & Avg. Daily Coverage \\
2019 & 3.06 \(\pm\) 0.66 & 1.45 \(\pm\) 0.31 & 0.31 \(\pm\) 0.57 & 29.37 \% \\
2020 & 2.39 \(\pm\) 0.55 & 1.25 \(\pm\) 0.30 & 1.16 \(\pm\) 0.62 & 37.68 \% \\
2021 & 2.67 \(\pm\) 0.54 & 1.75 \(\pm\) 0.26 & 0.92 \(\pm\) 0.60 & 41.15 \% \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of yearly methane emission rate estimates [Tg/year] for the Permian when using the two different time-averaging methodologies. In the first column we show yearly methane emission rate estimates for the Permian via \(\overline{E}_{2}\), when we average over advective fluxes for the year and take the divergence to yield a time-averaged emission estimate. In the next column we show the same yearly emission rate estimate calculated via \(\overline{E}_{1}\), where we calculate daily methane emission estimates and time-average them. In the penultimate column we show the difference of the total estimated emission rate between the two methodologies, when the estimated emission fields are only spatially integrated over the intersection of the two estimated emission fields. This is to examine whether the difference in the estimated emission rate is driven by differences in spatial coverage or not. Supplementary figures 4, 5, and 6 plot these estimated emission fields. Also shown in the last column is the average daily spatial coverage of the Permian basin by our regridded TROPOMI methane observations.
ratios of methane to above-background column densities [20]. The 2019-2021 average methane enhancement over the Permian basin is shown in Fig. 9. Using ERA5 wind data from the European Centre for Medium-Range Weather Forecasts [37], we calculate daily advective fluxes of methane, and then calculate yearly time-averaged methane emission maps of the Permian basin using both the \(\overline{E}_{1}\) and \(\overline{E}_{2}\) methodology. We state here again as a reminder: in the \(\overline{E}_{1}\) methodology, we use the divergence method to estimate daily emission fields, which are then time-averaged, and in the \(\overline{E}_{2}\) methodology, we first time-average daily advective fluxes of methane, and use the divergence method to estimate the emission field from the time-averaged fluxes. The yearly estimated emission fields produced via the two methodologies are shown in the supplement, and the estimated emission fields produced via the two methodologies for the entire time period 2019-2021 are shown in Fig. 10. Our yearly total estimated methane emission rates for the Permian are shown in Tables 1 and 2. We find good agreement between our time-averaging methodology \(\overline{E}_{2}\) and other Permian emission estimates from previous work, but find that the \(\overline{E}_{1}\) methodology (in which daily emission estimates for the Permian are time-averaged) significantly underestimates the time-averaged emission rates when compared to previous estimates in the literature. The difference in results between the two methodologies is likely due to the sparse daily spatial coverage of the TROPOMI methane data product over the Permian basin. For the years 2019-2021, the average daily coverage of our regridded TROPOMI observations of the Permian basin never exceeds 50% (Table 2). In Table 3, we examine whether the choice of order of central finite difference influences the results obtained when calculating time-averaged emission rate estimates for the Permian. Although the spatial coverage of the estimated emission field produced via \(\overline{E}_{1}\) improved by using the second order central finite difference to calculate derivatives instead of the fourth order central finite difference, we did not find any significant change in the total estimated emission rates.
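For concreteness, the two stencils compared in Table 3 can be written as follows. This is a hedged sketch of the standard second- and fourth-order central finite differences along one axis (the exact Eqs. 9 and 10 of this work are not reproduced); the fourth-order stencil needs two valid neighbours on each side, which is why it loses more coverage near data gaps:

```python
import numpy as np

def ddx(f, dx, order=4):
    """Central finite difference along axis=1; NaN wherever the stencil
    runs past the array edge or touches a missing (NaN) pixel."""
    d = np.full_like(f, np.nan, dtype=float)
    if order == 4:
        d[:, 2:-2] = (-f[:, 4:] + 8.0 * f[:, 3:-1]
                      - 8.0 * f[:, 1:-3] + f[:, :-4]) / (12.0 * dx)
    else:  # second-order
        d[:, 1:-1] = (f[:, 2:] - f[:, :-2]) / (2.0 * dx)
    return d
```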
We also investigate whether the difference in results between the \(\overline{E}_{1}\) and \(\overline{E}_{2}\) methodologies can be attributed purely to the difference in spatial coverage of their respective time-averaged estimated emission fields. In Table 2 we show the difference in total estimated emission rate between \(\overline{E}_{1}\) and \(\overline{E}_{2}\) when we only spatially integrate over the intersection of their spatial coverages. In this scenario, any difference in total estimated emission rate is due to the change in order of operations between the two different methodologies, rather than the difference in spatial coverage obtained. We find that for the year 2019, the difference in average total estimated emission rate between \(\overline{E}_{1}\) and \(\overline{E}_{2}\) can potentially be explained entirely by the difference in spatial coverage of the estimated emission fields. This is shown in Fig. S4. However, for the years 2020 and 2021, the spatial coverages of the estimated emission fields obtained via \(\overline{E}_{1}\) and \(\overline{E}_{2}\) are complete, and thus the difference in total estimated emission rates cannot be explained by a difference in spatial coverage. Increasing the spatial coverage of the region of interest by the methane data [31, 38] may close the gap in the results obtained between the two methods. We additionally calculate the percent change of our yearly estimated methane emission rates for the Permian when including the diffusive flux calculation of Eq. 8, and find in all cases that the total estimated emission rate is increased by less than a millionth of a percent when \(K=400\,\mathrm{m}^{2}\,\mathrm{s}^{-1}\), the maximum value for the constant of turbulent diffusion that we consider in this work. This is not unexpected given the results shown in Fig. 4, which suggests that integrating over large areas sufficiently corrects for any negative bias introduced by
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Year} & \multicolumn{3}{c}{4th order CFD} & \multicolumn{3}{c}{2nd order CFD} \\ \cline{2-7} & \(\overline{E}_{1}\) & \(\overline{E}_{1}\) \% coverage\({}^{*}\) & \(\overline{E}_{2}\) & \(\overline{E}_{1}\) & \(\overline{E}_{1}\) \% coverage\({}^{*}\) & \(\overline{E}_{2}\) \\ \hline
2019 & 1.45 \(\pm\) 0.31 & 73.29\% & 3.06 \(\pm\) 0.66 & 1.84 \(\pm\) 0.26 & 89.53\% & 3.10 \(\pm\) 0.49 \\
2020 & 1.25 \(\pm\) 0.30 & 98.19\% & 2.39 \(\pm\) 0.55 & 1.20 \(\pm\) 0.22 & 100\% & 2.37 \(\pm\) 0.41 \\
2021 & 1.75 \(\pm\) 0.26 & 98.92\% & 2.67 \(\pm\) 0.54 & 1.63 \(\pm\) 0.20 & 99.64\% & 2.61 \(\pm\) 0.41 \\ \hline \hline \end{tabular}
* This is the percentage coverage of the Permian basin of the time-averaged estimated emission field for this year, and not the average daily coverage of the Permian basin for the year by the methane data product.
\end{table}
Table 3: Yearly estimates of methane emission from the Permian basin [Tg/year]. We present yearly estimates (calculated using both the \(\overline{E}_{1}\) and \(\overline{E}_{2}\) time-averaging methodologies), and compare results obtained using the fourth-order and the second-order central finite differences to calculate numerical derivatives. The second-order central finite difference requires fewer valid neighbors to calculate derivatives, and so could potentially lessen the discrepancy between the results of the \(\overline{E}_{1}\) and \(\overline{E}_{2}\) methodologies. We find that for the year 2019 (which had the poorest average spatial coverage over the Permian by the TROPOMI methane data product), the difference between the yearly methane emission budgets estimated via \(\overline{E}_{1}\) and \(\overline{E}_{2}\) is slightly decreased, but the gap between the two is not bridged within error. Also shown in this table is the percentage area coverage of the Permian basin by the estimated emission field produced by the \(\overline{E}_{1}\) methodology. As expected, the percentage area coverage is improved by the use of the second-order central finite difference in calculating derivatives, with the greatest improvement seen in the year 2019. However, this increase in spatial coverage of the estimated emission field is not alone sufficient to bridge the discrepancy between the results produced by the \(\overline{E}_{1}\) and \(\overline{E}_{2}\) methodologies.
neglecting turbulent diffusion in the divergence method.
## 3 Discussion
In this work, we examine the conditions under which the divergence method for estimating emission rates may produce negatively biased results. Using a simulation study with synthetic satellite observations of ideal Gaussian plumes, we showed that highly resolved, diffuse plumes may have negatively biased emission rates when their estimated emission fields are spatially integrated over narrow fields of view. Our simulation study suggests that this effect would only be of concern for observations obtained by high-resolution, narrow-field-of-view methodologies, e.g., drones, or where high-resolution satellite data has clipped the area into which a plume extends. In contrast, our case study of the Permian basin with TROPOMI methane observations does not find that yearly estimated methane emission budgets are impacted by including even a high estimate of turbulent diffusion. In the future, as satellites become more capable of resolving individual plumes, it will become important to correct for turbulent diffusion when estimating the spatial distribution of emissions via the divergence method. Conditions where diffusion may dominate over advection may also be identified by screening for very low wind speeds [20].
We also examine two possible methodologies for calculating time-averaged emission estimates using the divergence method. Using simulated spatially incomplete plume observations, we find that time-averaging daily emission rate estimates produced via the divergence method will consistently underestimate the true average emission rate. We compare these results to an alternative methodology previously described for nitrogen dioxide observations [2]. In this method, daily advective fluxes are time-averaged, and the divergence is taken thereof in order to obtain a time-averaged emission estimate. We find in our synthetic study that this latter methodology yields robust emission estimates even in the face of spatially incomplete observations. We compare these two methodologies by constructing yearly methane emission budget estimates for the Permian basin for the years 2019-2021, using the TROPOMI Level 2 methane data product [24]. We find that these two methodologies do not produce congruent emission rate estimates, and that the latter methodology produces estimates in agreement with previous top-down estimates for methane emission in the Permian basin [17, 20]. Methods and datasets exist that can augment the spatial coverage of the TROPOMI methane data product [31, 38], which in turn would augment the spatial coverage of the advective flux field of methane prior to the application of the divergence method. Spatial smoothing and interpolation could also be used to try to make the spatial coverage of the advective methane flux field more complete. In this work, we do not explore these avenues further, preferring to examine the differences between the \(\overline{E}_{1}\) and \(\overline{E}_{2}\) methodologies.
When using the divergence method for methane emission rate estimation in the Permian, we find areas bordering the Delaware and Midland basins that are estimated to have negative emission rates. We do not expect this to truly be the case. Even in the "perfect" case in our synthetic study, when the point source of emission is correctly estimated (see Fig. 5, panel **c**), we still estimate some grid cells to have negative emission rates. In this case, this is an effect of the discretisation of the emission equations and the manner in which numerical derivatives are calculated over our grids. Therefore, in some areas, especially in the vicinity of large methane sources, the divergence method will return negative emissions. These are local artefacts; the area-integrated emissions remain positive both in our synthetic study and in our case study of the Permian basin (and in this latter case, our results are in good agreement with previously estimated methane emission rates). Whilst the TROPOMI methane observations over the Permian cannot be considered to be ideal plumes, it may be the case that the regions of negative estimated emission are analogous to those in the synthetic study, as we know that they border known regions of strong positive emission. Other work also demonstrates that some regions of negative emission estimated via the divergence method in the Permian can be related to changes in orography or surface albedo [20]. One could develop a model that prohibits the estimation of negative methane emissions in a Bayesian framework, though at this stage this would no longer purely be the "divergence method", which is driven entirely by the data and the principle of the conservation of mass.
We conclude that the divergence method for estimating methane emissions (as described in this work) would be best applied to regional analyses where the effects of turbulent diffusion are unlikely to dominate over advective methane fluxes. Whenever possible, spatially complete methane data products should be used. When using spatially incomplete datasets, it may be the case that taking the divergence of time-averaged advective fluxes of methane will produce more accurate methane emission rate estimates.
## 4 Methods and Data
### Generating simulated satellite observations of plumes
For our synthetic studies we generate simulated top-down observations of ideal Gaussian plumes [21] via the equation
\[C\left(x,y,\,\theta\right)=\frac{Q}{2\sqrt{\pi\,w\,K\,\left(x\mathrm{cos} \theta+y\,\mathrm{sin}\theta\right)}}\mathrm{exp}\left[-\frac{\left(y\mathrm{ cos}\theta-x\,\mathrm{sin}\theta\right)^{2}w}{4\,K\,\left(x\mathrm{cos} \theta+y\,\mathrm{sin}\theta\right)}\right]\quad\left[\mathrm{kg\,m^{-2}} \right], \tag{3}\]
where \(Q\) is the point source emission rate [kg s\({}^{-1}\)], \(w\) is the wind speed [m s\({}^{-1}\)], \(K\) is the constant of turbulent diffusion [m\({}^{2}\) s\({}^{-1}\)], and \(\theta\) is the wind angle relative to the \(x\)-axis (in the anti-clockwise direction). Eq. 3 is derived in S1. There are a variety of assumptions that are fundamental to the ideal Gaussian plume equation [21], but most important here is that advection is assumed to dominate diffusion, and thus diffusion only takes place perpendicular to the wind vector characterised by \(w\) and \(\theta\).
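To make Eq. 3 concrete, the following minimal Python sketch evaluates the ideal Gaussian plume on a regular grid, mimicking a regridded satellite scene; the function name and the grid and parameter values are our own illustrative choices, not those used to generate the figures in this work.

```python
import numpy as np

def gaussian_plume(x, y, Q, w, K, theta):
    """Column density C(x, y, theta) of Eq. 3 [kg m^-2] for an ideal plume.

    Q: source rate [kg s^-1], w: wind speed [m s^-1],
    K: turbulent diffusion constant [m^2 s^-1], theta: wind angle [rad].
    """
    s = x * np.cos(theta) + y * np.sin(theta)      # downwind distance
    n = y * np.cos(theta) - x * np.sin(theta)      # crosswind distance
    s = np.where(s > 0, s, np.nan)                 # the plume exists only downwind
    return Q / (2 * np.sqrt(np.pi * w * K * s)) * np.exp(-(n**2) * w / (4 * K * s))

# Evaluate on a regular grid, mimicking a regridded satellite scene.
xg, yg = np.meshgrid(np.arange(100, 5000, 50), np.arange(-2000, 2000, 50))
C = gaussian_plume(xg, yg, Q=10.0, w=5.0, K=100.0, theta=0.0)
```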
### Calculating emissions and flux terms
We calculate spatially varying estimated emission fields \(E\) using both simulated plume observations and the TROPOMI L2 methane data product. We calculate \(E\) via two different emission equations.
The first emission equation (commonly found in literature [18, 20]) is
\[E=\vec{\nabla}\cdot\vec{F}^{\mathrm{adv}}\quad\left[\mathrm{kg\,m^{-2}\,s^{- 1}}\right], \tag{4}\]
where \(\vec{F}^{\mathrm{adv}}\) [kg m\({}^{-1}\) s\({}^{-1}\)] is the advective flux of some column density \(C\). In the case of our synthetic plume observations, \(C\) is generated via Eq. 3. For TROPOMI observations of methane, we convert column-averaged mixing ratios to above-background column density enhancements [20]. \(\vec{F}^{\mathrm{adv}}\) is then given by
\[\vec{F}^{\mathrm{adv}}=C\,\vec{w}\quad[\mathrm{kg\,m^{-1}\,s^{-1}}], \tag{5}\]
where \(\vec{w}\) is a spatially varying wind vector with magnitude \(w\) and angle \(\theta\) relative to the \(x\)-axis of our grid. For our synthetic studies we specify \(w\) and \(\theta\) ourselves. For our work with TROPOMI observations of the Permian basin, we take \(\vec{w}\) to be the ERA5 wind data on multiple pressure levels, temporally averaged daily over a wind history at 1700, 1800 and 1900 hours, and then averaged vertically to an altitude of 500m to account for changes in wind vector through the boundary layer [20].
To examine the extent to which turbulent diffusion influences estimated emission rates via the divergence method, we also calculate \(E\) via a second emission equation
\[E=\vec{\nabla}\cdot\left(\vec{F}^{\mathrm{adv}}-\vec{F}^{\mathrm{diff}}\right) \quad\left[\mathrm{kg\,m^{-2}\,s^{-1}}\right]. \tag{6}\]
\(\vec{F}^{\mathrm{diff}}\) is the turbulent diffusive flux of some column density \(C\), and for an ideal Guassian plume is given by
\[\vec{F}^{\mathrm{diff}}=K\,\left(\frac{\partial\,C}{\partial\,x}\,\mathrm{sin} ^{2}\theta-\frac{\partial\,C}{\partial\,y}\,\mathrm{cos}\theta\,\mathrm{sin} \theta\right)\,\vec{e}_{x}+K\,\left(\frac{\partial\,C}{\partial\,y}\,\mathrm{ cos}^{2}\theta-\frac{\partial\,C}{\partial\,x}\,\mathrm{sin}\theta\,\mathrm{ cos}\theta\right)\,\vec{e}_{y}\quad[\mathrm{kg\,m^{-1}\,s^{-1}}], \tag{7}\]
where \(\theta\) is the wind angle relative to the \(x\)-axis of our grid. \(C\) is again either generated via Eq. 3 or calculated from TROPOMI satellite observations of methane. Eq. 7 is derived under the assumption that diffusion only takes place perpendicular to the wind vector \(\vec{w}\). If, however, we choose to ignore this assumption (but still assume that \(K\) is constant in space), then we can work directly in the \(\left(x,y\right)\) grid and state that
\[\vec{F}^{\mathrm{diff}}=K\,\frac{\partial\,C}{\partial\,x}\vec{e}_{x}+K\, \frac{\partial\,C}{\partial\,y}\vec{e}_{y} \tag{8}\]
Eqs. 4, 5, 6, 7, and 8 are derived in S2.
We need to calculate spatial derivatives over a cartesian grid to fully obtain \(E\) in Eqs. 4 and 6. For first derivatives, we use the fourth-order central finite difference [29]
\[\frac{\partial\,V}{\partial\,p}|_{p=i}=\frac{V|_{p=i-2}-8\,V|_{p=i-1}+8\,V|_{p=i +1}-V|_{p=i+2}}{12\,d} \tag{9}\]
where \(V\) is a spatially varying quantity and \(d\) is the grid spacing in coordinate \(p\). This numerical recipe is commonly used for emission estimates via the divergence method [18, 20]. For second derivatives, we use the fourth order discretization [30]
\[\frac{\partial^{2}\,V}{\partial\,p^{2}}|_{p=i}=\frac{-\frac{1}{12}V|_{p=i-2}+\frac{4}{3}V|_{p=i-1}-\frac{5}{2}V|_{p=i}+\frac{4}{3}V|_{p=i+1}-\frac{1}{12}V|_{p=i+2}}{d^{2}}. \tag{10}\]
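As an illustration of how Eqs. 4, 5 and 9 combine in practice, the sketch below implements the divergence method on a regular grid using the fourth-order stencil; `ddx4` and `divergence_method` are hypothetical names, and marking boundary cells that lack four valid neighbours as invalid (NaN) is one of several reasonable conventions.

```python
import numpy as np

def ddx4(V, d, axis):
    """Fourth-order central first derivative (Eq. 9) along `axis`;
    cells without four valid neighbours are marked NaN."""
    dV = np.full(V.shape, np.nan)

    def shifted(k):  # V shifted by k cells along `axis`, cropped to the interior
        idx = [slice(None)] * V.ndim
        idx[axis] = slice(2 + k, V.shape[axis] - 2 + k)
        return V[tuple(idx)]

    interior = [slice(None)] * V.ndim
    interior[axis] = slice(2, -2)
    dV[tuple(interior)] = (shifted(-2) - 8 * shifted(-1)
                           + 8 * shifted(1) - shifted(2)) / (12 * d)
    return dV

def divergence_method(C, u, v, d):
    """E = div(C w) of Eqs. 4-5 on a regular grid with spacing d [m];
    u and v are the wind components [m s^-1], axis 1 is x and axis 0 is y."""
    Fx, Fy = C * u, C * v                 # advective flux [kg m^-1 s^-1]
    return ddx4(Fx, d, axis=1) + ddx4(Fy, d, axis=0)
```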
### Calculating time-averaged emission rates
We calculate time-averaged estimated emission fields using two methodologies. In the first (denoted by \(\overline{E}_{1}\)), we calculate daily estimated emission fields and time-average them to obtain \(\overline{E}_{1}\). In the second methodology (denoted by \(\overline{E}_{2}\)), we time-average daily fluxes of \(C\), and take the divergence of the time-averaged flux to obtain \(\overline{E}_{2}\). With spatially complete observations of \(C\) over an entire time period, the two methods yield identical results, but for TROPOMI observations of methane, data are often spatially masked due to cloud cover and albedo effects. Detailed equations describing these two methodologies are given in S4.
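The order-of-operations difference between the two methodologies can be stated compactly in code. The sketch below (reusing `ddx4` from the earlier sketch via a small `div_flux` helper; the function names and NaN-masking convention are our own assumptions) contrasts averaging daily divergences (\(\overline{E}_{1}\)) with taking the divergence of time-averaged fluxes (\(\overline{E}_{2}\)).

```python
import numpy as np

def div_flux(Fx, Fy, d):
    # Divergence of a flux field, reusing ddx4 from the earlier sketch.
    return ddx4(Fx, d, axis=1) + ddx4(Fy, d, axis=0)

def time_averaged_emissions(Fx_days, Fy_days, d, method="E2"):
    """Time-averaged emission estimate from daily advective flux maps of
    shape (n_days, ny, nx); NaNs mark masked pixels (clouds, albedo, ...)."""
    if method == "E1":
        # E1-bar: take the divergence day by day, then time-average.
        E = np.stack([div_flux(Fx, Fy, d)
                      for Fx, Fy in zip(Fx_days, Fy_days)])
        return np.nanmean(E, axis=0)
    # E2-bar: time-average the fluxes first, then take the divergence.
    return div_flux(np.nanmean(Fx_days, axis=0),
                    np.nanmean(Fy_days, axis=0), d)
```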
### Estimating the constant of turbulent diffusion K
It is in practice difficult to estimate constants of turbulent diffusion. If \(K\) is assumed to be constant in space and time, then the standard deviation "width" of a Gaussian plume can be described via
\[\sigma^{2}=\frac{2\,K\,x}{u}\quad[\mathrm{m}^{2}], \tag{11}\]
where \(\sigma\) is the width of the plume \([\mathrm{m}]\), \(x\) is the downwind distance in the plume \([\mathrm{m}]\), \(u\) is the wind speed \([\mathrm{m}\,\mathrm{s}^{-1}]\) and \(K\) is the constant of turbulent diffusion \([\mathrm{m}^{2}\,\mathrm{s}^{-1}]\)[39, 21]. We take multiple GHGSat scenes of isolated methane plumes [26] and measure values of \(\sigma\) at multiple downwind locations within each plume. We then fit a linear function to \(\sigma^{2}\) against \(x\) for each plume using the method of least squares. The slope of the fitted function yields \(2\,K/u\), and thus \(K\) can be determined, as \(u\) is known for each scene. Using these plumes, we determine that \(K\) can vary between 10 and 400 \(\mathrm{m}^{2}\,\mathrm{s}^{-1}\).
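A minimal sketch of this fitting procedure is given below, with hypothetical width measurements standing in for the GHGSat-derived values; the slope of the least-squares fit of \(\sigma^{2}\) against \(x\) yields \(2K/u\), from which \(K\) follows.

```python
import numpy as np

# Hypothetical plume-width measurements for one scene: sigma [m] at several
# downwind distances x [m], with a known wind speed u [m s^-1].
x = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])
sigma = np.array([63.0, 90.0, 110.0, 126.0, 141.0])
u = 5.0

# Fit sigma^2 = (2 K / u) x  (Eq. 11); the slope of the line is 2K/u.
slope, intercept = np.polyfit(x, sigma**2, 1)
K = slope * u / 2.0
print(f"K = {K:.0f} m^2 s^-1")
```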
## Acknowledgements
C R acknowledges financial support from Shell Research Ltd through the Cambridge Centre for Doctoral Training in Data Intensive Science grant number ST/P006787/1. For the purpose of open access, C R has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission. C R thanks Dr. J P Veefkind at TU Delft for insightful discussion and advice.
## Author contributions
C R retrieved all data, wrote all code and conducted the analysis of this work. O S and D R supervised the project. R I guided the analysis. M J, P J, K M, and B H contributed discussions of the data, satellites and meteorological context. All authors reviewed the manuscript.
## Conflict of interest
We report no competing interests.
|
2305.07556 | Theory of Periodically Time-Variant Linear Systems | In this work we provide a mathematical framework to describe the periodically
time variant (PTV) linear systems. We study their frequency-domain features to
estimate the output bandwidth, a necessary value to obtain a suitable digital
representation of such systems. In addition, we derive several interesting
properties enabling useful equivalences to represent, simulate and compensate
PTVs. | Juan I. Bonetti, Agustín C. Galletto, Mario R. Hueda | 2023-05-12T15:33:58Z | http://arxiv.org/abs/2305.07556v1 | # Theory of Periodically Time-Variant Linear Systems
###### Abstract
In this work we provide a mathematical framework to describe the periodically time variant (PTV) linear systems. We study their frequency-domain features to estimate the output bandwidth, a necessary value to obtain a suitable digital representation of such systems. In addition, we derive several interesting properties enabling useful equivalences to represent, simulate and compensate PTVs.
## I Definition
A time-variant (TV) linear system is defined by an impulse response that depends on time. In general, a TV linear system of \(N\) inputs and \(M\) outputs can be written as [1, 2]
\[y_{i}(t)=\sum_{j=1}^{N}\int h_{ij}(t,\tau)x_{j}(t-\tau)\,d\tau\quad\forall i\in \{1,2,...,M\}, \tag{1}\]
where \(x_{j}(t)\) and \(y_{i}(t)\) are the continuous-time system inputs and outputs, respectively, and \(h_{ij}(t,\tau)\) are the impulse responses. A _periodically time-variant_ (PTV) linear system is a TV system whose impulse responses present a periodic behavior in the time variable, _i.e.,_
\[h_{ij}(t+T_{h},\tau)=h_{ij}(t,\tau). \tag{2}\]
where \(T_{h}\) is the PTV period. Figure 1 shows the schematic representation of the PTV \(h\), described by Eqs. 1 and 2. We introduce the variable _temporal phase_ \(z_{h}\), defined as \(z_{h}(t)=\mathrm{mod}(t,T_{h})\), where mod denotes the modulo operation. The temporal phase allows for the definition of the PTV from simplified impulse responses,
\[y_{i}(t)=\sum_{j=1}^{N}\int h_{ij}(z_{h}(t),\tau)x_{j}(t-\tau)\,d\tau\quad \forall i\in\{1,2,...,M\}, \tag{3}\]
as the first argument of \(h_{ij}\) is restricted to values between \(0\) and \(T_{h}\).
A clear example of a PTV system is the ideal \(N:1\) cyclical multiplexer, shown in Fig. 2(a): a device that periodically alternates its single output between its \(N\) inputs. As shown in Fig. 2(b), it can be modeled as a single-output PTV of period \(T_{h}\), given by
\[y(t)=\sum_{j=1}^{N}\int h_{j}(z_{h}(t),\tau)x_{j}(t-\tau)\,d\tau, \tag{4}\]
where the impulse responses are
\[h_{j}(z,\tau)=\left\{\begin{array}{ll}\delta(\tau),&(j-1)T_{h}/N\leq z<jT_{ h}/N\\ 0,&\mathrm{otherwise},\end{array}\right. \tag{5}\]
being \(\delta(.)\) the Dirac delta function. Another common example is the multiplier with a local oscillator input, displayed in Fig. 2(c). Although the output can be easily written as \(y(t)=x(t){\sin}(\omega_{0}t)\), we can take it to the PTV form, as shown in Fig. 2(d), by writing
\[y(t)=\int h(z_{h}(t),\tau)x(t-\tau)\,d\tau, \tag{6}\]
with
\[h(z,\tau)=\delta(\tau){\sin}(\omega_{0}z) \tag{7}\]
and \(T_{h}=2\pi/\omega_{0}\).
## II Combination of PTVs
In this section we study the combination of PTVs with each other and with time-invariant linear systems.
Fig. 1: Representation of a generic PTV linear system, relating \(N\) continuous-time inputs \(x_{j}(t)\) with \(M\) continuous-time outputs \(y_{i}(t)\). While \(h\) is the name of the PTV, \(T_{h}\) stands for its period.
Fig. 2: Examples of common PTVs: a) cyclical multiplexer \(N:1\). b) equivalent \(N:1\) PTV model. c) local oscillator multiplier. d) equivalent PTV model.
### _Parallel PTVs_
We consider the two PTV systems \(h\) and \(g\), shown in Fig. 3(a), described by the set of equations
\[y_{i}^{(h)}(t)=\sum_{j=1}^{N_{h}}\int h_{ij}(z_{h}(t),\tau)x_{j}^{(h)}(t-\tau)\,d\tau, \tag{8}\]
with \(i\in\{1,2,...,M_{h}\}\), and
\[y_{i}^{(g)}(t)=\sum_{j=1}^{N_{g}}\int g_{ij}(z_{g}(t),\tau)x_{j}^{(g)}(t-\tau) \,d\tau, \tag{9}\]
with \(i\in\{1,2,...,M_{g}\}\), respectively. We define a new time-variant system, \(s\), whose inputs/outputs are given by
\[x_{j}(t)=\left\{\begin{array}{ll}x_{j}^{(h)}(t),&j\in\{1,2,..,N_{h}\}\\ x_{j-N_{h}}^{(g)}(t),&j\in\{N_{h}+1,N_{h}+2,..,N_{h}+N_{g}\},\end{array}\right. \tag{10}\]
and
\[y_{i}(t)=\left\{\begin{array}{ll}y_{i}^{(h)}(t),&i\in\{1,2,..,M_{h}\}\\ y_{i-M_{h}}^{(g)}(t),&i\in\{M_{h}+1,M_{h}+2,..,M_{h}+M_{g}\}.\end{array}\right. \tag{11}\]
The system \(s\) is then described by the linear relationship
\[y_{i}(t)=\sum_{j=1}^{N_{s}}\int s_{ij}(t,\tau)x_{j}(t-\tau)\,d\tau\quad\forall i \in\{1,2,...,M_{s}\}, \tag{12}\]
where \(N_{s}=N_{h}+N_{g}\), \(M_{s}=M_{h}+M_{g}\), and
\[s_{ij}(t,\tau)=\left\{\begin{array}{ll}h_{ij}(z_{h}(t),\tau),\\ i\in\{1,2,...,M_{h}\}\wedge j\in\{1,2,...,N_{h}\}\\ g_{(i-M_{h})(j-N_{h})}(z_{g}(t),\tau),\\ i\in\{M_{h}+1,M_{h}+2,...,M_{s}\}\wedge\\ j\in\{N_{h}+1,N_{h}+2,...,N_{s}\}\\ 0,&\mathrm{otherwise}.\end{array}\right. \tag{13}\]
If two integers, \(k_{h}\) and \(k_{g}\), can be found to satisfy \(k_{h}T_{h}=k_{g}T_{g}\), the system \(s\) is also a PTV. The period of \(s\) is given by
\[T_{s}=k_{h}T_{h}=k_{g}T_{g}, \tag{14}\]
where \(k_{h}\) and \(k_{g}\) are the minimum integers satisfying the equality. In other words, if \(T_{s}\) exists, it can be obtained as the least common multiple (lcm) of the periods \(T_{h}\) and \(T_{g}\). Note that \(T_{s}\) exists only if \(T_{h}/T_{g}\) is a rational number. The PTV behavior of \(s\) can be easily proven with Eq. 13, obtaining
\[s_{ij}(t+T_{s},\tau)=s_{ij}(t,\tau)=s_{ij}(z_{s}(t),\tau). \tag{15}\]
Figure 3(b) shows the equivalent PTV system \(s\) resulting from the parallel topology of \(h\) and \(g\).
### _Series PTVs_
In the series configuration of the PTVs \(h\) and \(g\), shown in Fig. 4(a), the \(L\) outputs of system \(h\) are the inputs of the system \(g\). By combining the input-output equations of both systems,
\[r_{l}(t)=\sum_{j=1}^{N}\int h_{lj}(z_{h}(t),\tau)x_{j}(t-\tau)\,d\tau\quad \forall l\in\{1,2,...,L\} \tag{16}\]
and
\[y_{i}(t)=\sum_{l=1}^{L}\int g_{il}(z_{g}(t),\tau)r_{l}(t-\tau)\,d\tau\quad \forall i\in\{1,2,...,M\}, \tag{17}\]
we obtain an equivalent TV linear system \(s\), given by
\[y_{i}(t)=\sum_{j=1}^{N}\int s_{ij}(t,\tau)x_{j}(t-\tau)\,d\tau\quad\forall i \in\{1,2,...,M\}, \tag{18}\]
where
\[s_{ij}(t,\tau)=\\ \sum_{l=1}^{L}\int g_{il}(\mathrm{mod}(t,T_{g}),\mu)h_{lj}( \mathrm{mod}(t-\mu,T_{h}),\tau-\mu)\,d\mu. \tag{19}\]
As in the case of the parallel configuration, if the lcm of both periods can be found, \(s\) is proven to be a PTV system satisfying Eqs. 14 and 15. Figure 4(b) displays the equivalent PTV system of the series PTVs.
### _Combination with time-invariant linear systems_
A time-invariant linear system can be expressed as a PTV with an arbitrary period. For instance, the linear system \(L\), described by
\[y_{i}(t)=\sum_{j=1}^{N}\int L_{ij}(\tau)x_{j}(t-\tau)\,d\tau\quad\forall i\in \{1,2,...,M\}, \tag{20}\]
Fig. 4: Series configuration of PTVs: a) the systems \(h\) and \(g\) are PTVs of periods \(T_{h}\) and \(T_{g}\), respectively. b) equivalent PTV model \(s\) of period \(T_{s}\), the least common multiple of \(T_{h}\) and \(T_{g}\).
Fig. 3: Parallel configuration of PTVs: a) the systems \(h\) and \(g\) are PTVs of periods \(T_{h}\) and \(T_{g}\), respectively. b) equivalent PTV model \(s\) of period \(T_{s}\), the least common multiple of \(T_{h}\) and \(T_{g}\).
can also be defined as the PTV system \(h\), given by Eq. 3, where
\[h_{ij}(z_{h}(t),\tau)=L_{ij}(\tau). \tag{21}\]
As \(h\) does not depend on \(z_{h}(t)\), the period \(T_{h}\) can be set arbitrarily. Consequently, combinations of time-invariant linear systems with PTVs can be reduced to a single PTV system by following the rules for the parallel and series configurations introduced above.
As a simple example, we study the system shown in Fig. 5(a): a linear combination of two local-oscillator multiplier lines. The blocks \(A\) and \(B\) represent time-invariant linear systems. In Fig. 5(b) we show the representation of all the circuit components as PTV systems. The local-oscillator multipliers are converted to the systems \(h\) and \(g\) by following Eqs. 6 and 7, and their periods are defined as \(T_{h}=2\pi/\omega_{0}\) and \(T_{g}=4\pi/(3\omega_{0})\), respectively. Systems \(A\) and \(B\) are regarded as the PTV systems \(a\) and \(b\) by using Eq. 21. Their periods are conveniently set to \(T_{h}\) and \(T_{g}\), respectively. Also, the split and sum points are regarded as 1:2 and 2:1 PTVs, respectively, both with period \(T_{h}\). In the next step, shown in Fig. 5(c), we reduce the series PTVs \(h\)-\(a\) and \(g\)-\(b\) to the single PTVs \(\hat{h}\) and \(\hat{g}\). Then, as shown in Fig. 5(d), the parallel configuration of \(\hat{h}\) and \(\hat{g}\) is reduced to the PTV \(\hat{s}\). The period \(T_{s}\) can be easily calculated by expressing the period ratio as a fraction:
\[\frac{T_{h}}{T_{g}}=\frac{3}{2}\quad\Leftrightarrow\quad 2T_{h}=3T_{g}. \tag{22}\]
By comparing Eq. 22 with Eq. 14, we obtain \(T_{s}=4\pi/\omega_{0}\). Finally, in Fig. 5(e), we reduce the series \(c\)-\(\hat{s}\)-\(d\) to the 1:1 PTV \(s\), whose period can be easily proven to be \(T_{s}\).
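The rule of Eq. 14 is straightforward to implement: expressing \(T_{h}/T_{g}\) as a reduced fraction \(p/q\) gives \(k_{h}=q\) and \(k_{g}=p\). A minimal Python sketch (the function name is ours; floating-point ratios are rationalised with a tolerance) reproduces \(T_{s}=4\pi/\omega_{0}\) for this example:

```python
import math
from fractions import Fraction

def combined_period(T_h, T_g, max_den=10**6):
    """Least common period T_s = k_h*T_h = k_g*T_g (Eq. 14), or None if
    T_h/T_g is not (numerically) rational."""
    r = Fraction(T_h / T_g).limit_denominator(max_den)
    if abs(T_h / T_g - float(r)) > 1e-12:
        return None
    # T_h/T_g = p/q in lowest terms  =>  k_h = q and k_g = p are minimal.
    return r.denominator * T_h

# The circuit of Fig. 5: T_h = 2*pi/w0 and T_g = 4*pi/(3*w0), so T_h/T_g = 3/2.
w0 = 1.0
print(combined_period(2 * math.pi / w0, 4 * math.pi / (3 * w0)))  # 4*pi/w0 = 12.566...
```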
## III Output Bandwidth
Unlike for time-invariant linear systems, the output bandwidth of a PTV is not necessarily equal to the input bandwidth. A clear example is provided by the case shown in Fig. 2(c), where the local-oscillator multiplier increases the signal bandwidth due to the frequency-translation process. In this section we derive a simple formula to calculate the output bandwidth.
First, we note that the frequency-domain representation of the PTV described by the impulse responses \(h_{ij}(z,\tau)\) is given by the two-dimensional functions
\[\tilde{h}_{ij}(k,f)=\int_{0}^{T_{h}}\int_{-\infty}^{\infty}h_{ij}(z,\tau)e^{- \mathrm{j}2\pi(kz/T_{h}+f\tau)}\,d\tau\,dz, \tag{23}\]
where \(k\in\mathbb{Z}\) and \(f\in\mathbb{R}\). This definition represents a hybrid transformation combining the Fourier transform on \(\tau\) with the Fourier series on \(z\), due to the periodic behavior of \(h_{ij}\) in the latter variable. The inverse of Eq. 23 leads to the definition of two bandwidths for the PTV \(h\): the _variation bandwidth_ \(A_{h}\), corresponding to the discrete variable \(k\), and the _linear bandwidth_ \(B_{h}\), corresponding to the continuous variable \(f\), as the minimum values satisfying
\[h_{ij}(z,\tau)=\sum_{k=-A_{h}}^{A_{h}}\int_{-B_{h}}^{B_{h}}\tilde{h}_{ij}(k,f)e ^{\mathrm{j}2\pi(kz/T_{h}+f\tau)}\,df \tag{24}\]
\(\forall i,j\). While the linear bandwidth has a simple interpretation as the bandwidth of time-invariant linear systems, the variation bandwidth is a particular property of PTVs, associated with the maximum speed of variation of the impulse response with respect to the temporal variable.
By using the definition of Eq. 23 and the inverse Fourier transform of \(x_{i}\),
\[x_{i}(t)=\int_{-B_{x}}^{B_{x}}\tilde{x}_{i}(f)e^{\mathrm{j}2\pi ft}\,df, \tag{25}\]
where \(\tilde{x}_{i}\) and \(B_{x}\) are the Fourier transform and the bandwidth of \(x_{i}\), respectively, in Eq. 1 we obtain
\[y_{i}(t)=\sum_{j=1}^{N}\sum_{k=-A_{h}}^{A_{h}}\int_{-B_{x}}^{B_{ x}}\int_{-B_{h}}^{B_{h}}\int\\ \tilde{h}_{ij}(k,f)\tilde{x}_{j}(f^{\prime})e^{\mathrm{j}2\pi\left(kt/T_{ h}+f\tau+f^{\prime}(t-\tau)\right)}\,d\tau\,df\,df^{\prime}. \tag{26}\]
By making the change of variable \(f^{\prime}=\mu-k/T_{h}\) we have
\[y_{i}(t)=\sum_{k=-A_{h}}^{A_{h}}\int_{-B_{x}+k/T_{h}}^{B_{x}+k/T_{h}}\tilde{y }_{i}(k,\mu)e^{\mathrm{j}2\pi\mu t}\,d\mu, \tag{27}\]
where
\[\tilde{y}_{i}(k,\mu)=\\ \sum_{j=1}^{N}\int_{-B_{h}}^{B_{h}}\int\tilde{h}_{ij}(k,f)\tilde{ x}_{j}(\mu-k/T_{h})e^{\mathrm{j}2\pi(f-\mu+k/T_{h})\tau}\,d\tau\,df. \tag{28}\]
Although Eq. 27 is not a usual inverse Fourier transform, like Eq. 25, it allows for the calculation of the output bandwidth \(B_{y}\), as the maximum-frequency component of \(y_{i}(t)\) is clearly
\[B_{y}=B_{x}+\frac{A_{h}}{T_{h}}. \tag{29}\]
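Eq. 29 can be checked numerically for the local-oscillator multiplier of Eqs. 6 and 7, for which \(A_{h}=1\) and \(A_{h}/T_{h}=f_{0}\). In the sketch below (parameter values are illustrative; the input tones are chosen bin-aligned so the spectrum is leakage-free), the highest output component lands at \(f_{0}\) plus the highest input tone, within the bound \(B_{y}=B_{x}+f_{0}\):

```python
import numpy as np

fs, f0, Bx = 1000.0, 50.0, 10.0        # sample rate, LO frequency, input bandwidth [Hz]
t = np.arange(0, 2.0, 1 / fs)
tones = [2.5, 4.0, 7.5, 9.5]           # input tones below Bx, bin-aligned for a 2 s record
x = sum(np.cos(2 * np.pi * ft * t) for ft in tones)
y = x * np.sin(2 * np.pi * f0 * t)     # the PTV of Eqs. 6-7, for which A_h = 1

Y = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
B_y = freqs[Y > 1e-6 * Y.max()].max()  # highest significant output frequency
print(B_y, "<=", Bx + f0)              # 59.5 <= 60.0, consistent with Eq. 29
```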
Fig. 5: Example of PTV reduction: a) circuit combining PTVs (local oscillator multipliers) with time-invariant linear systems (\(A\) and \(B\)). b) each component of the circuit is expressed in its PTV form. c) the series PTV \(h\)-\(a\) and \(g\)-\(b\) are reduced to the single PTV form. d) parallel reduction of the system \(\hat{h}\|\hat{g}\). e) final equivalent PTV system of the circuit.
## IV Discrete-time representation of PTVs
By knowing the output bandwidth of a PTV, we can obtain a discrete-time representation of the system. We must choose a sampling period \(T_{\mathrm{s}}\) satisfying the Nyquist condition, _i.e._
\[T_{\mathrm{s}}\leq\frac{1}{2B_{y}}, \tag{30}\]
and then to define the discrete-time signals as
\[a[n]=a(nT_{\mathrm{s}}), \tag{31}\]
where \(n\in\mathbb{Z}\) and \(a\) stands for any input-output signal. A useful operation is the inverse of the sampling process of Eq. 31, given by
\[a(t)=\sum_{n=-\infty}^{\infty}a[n]\mathrm{sinc}\left(\frac{t}{T_{\mathrm{s}}} -n\right), \tag{32}\]
where \(\mathrm{sinc}(x)=\sin(\pi x)/(\pi x)\ \forall x\neq 0\) and \(\mathrm{sinc}(0)=1\).
By using Eqs. 31 and 32 in the definition of PTV (Eq. 3), we obtain
\[y_{i}[n]=\sum_{j=1}^{N}\sum_{m=-\infty}^{\infty}H_{ij}[n,m]x_{j}[n-m], \tag{33}\]
where
\[H_{ij}[n,m]=\int h_{ij}(\mathrm{mod}(nT_{\mathrm{s}},T_{h}),\tau)\mathrm{sinc }\left(m-\frac{\tau}{T_{\mathrm{s}}}\right)\ d\tau. \tag{34}\]
Equation 33 is the definition of a discrete-time TV system, as the impulse responses \(H_{ij}\) depend not only on the input sampling index \(m\) but also on the output sampling index \(n\). In addition, if the sampling period is set to be a divisor of the PTV period, _i.e._
\[T_{h}=K_{H}T_{\mathrm{s}}\quad K_{H}\in\mathbb{Z}, \tag{35}\]
Eq. 33 becomes the definition of a _discrete-time periodically time-variant_ (DTPTV) linear system, which reads
\[y_{i}[n]=\sum_{j=1}^{N}\sum_{m=-\infty}^{\infty}H_{ij}[z_{H}[n],m]x_{j}[n-m], \tag{36}\]
with \(z_{H}[n]=\mathrm{mod}(n,K_{H})\), where \(K_{H}\) is the discrete period of the system \(H\). Figure 6 shows the schematic representation of a DTPTV system. Equation 36 allows for the numerical simulation of PTV systems and enables the demonstration of two interesting properties, as shown in the next section.
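Eq. 36 translates directly into a simulation loop. The sketch below (a hypothetical helper, written for clarity rather than speed, and assuming causal finite impulse responses) applies it to the ideal 2:1 cyclical multiplexer of Eq. 5 with \(K_{H}=4\):

```python
import numpy as np

def dtptv(H, x):
    """Single-output DTPTV of Eq. 36 (clarity over speed; causal FIR taps).

    H: (N_inputs, K_H, M) array, H[j, z, m] = impulse response of input j
       at temporal phase z; x: (N_inputs, n_samples) array of inputs."""
    N, K_H, M = H.shape
    y = np.zeros(x.shape[1])
    for n in range(x.shape[1]):
        z = n % K_H                       # temporal phase z_H[n]
        for j in range(N):
            for m in range(min(M, n + 1)):
                y[n] += H[j, z, m] * x[j, n - m]
    return y

# Ideal 2:1 cyclical multiplexer (Eq. 5) with discrete period K_H = 4:
# input 0 is routed during phases 0-1, input 1 during phases 2-3.
H = np.zeros((2, 4, 1))
H[0, 0:2, 0] = 1.0
H[1, 2:4, 0] = 1.0
x = np.stack([np.ones(8), -np.ones(8)])
print(dtptv(H, x))   # [ 1.  1. -1. -1.  1.  1. -1. -1.]
```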
## V Inverse of PTVs
We use the discrete-time representation to prove that the inverse of a PTV linear system, if it exists, is another PTV of the same dimension.
### _Siso Ptv_
The _single-input single-output PTV_ (SISO PTV), shown in Fig. 7(a), can be expressed in its discrete-time form as
\[y[n]=\sum_{m=-\infty}^{\infty}H[z_{H}[n],m]x[n-m]=\\ \sum_{m=-\infty}^{\infty}H[z_{H}[n],n-m]x[m]. \tag{37}\]
A useful alternative representation of this system is given by expressing the input/output signals as vector signals of dimension \(K_{H}\),
\[\left\{\begin{array}{l}x_{j}[r]=x[rK_{H}+j]\\ y_{i}[k]=y[kK_{H}+i],\end{array}\right. \tag{38}\]
where \(i,j\in\{0,1,...,K_{H}-1\}\). By using the vector-signal representation of the input in Eq. 37 we obtain
\[y[n]=\sum_{j=0}^{K_{H}-1}\sum_{r=-\infty}^{\infty}H[z_{H}[n],n-rK_{H}-j]x_{j} [r]. \tag{39}\]
Finally, by using the vector-signal representation of the output in Eq. 39 we have
\[y_{i}[k]=\sum_{j=0}^{K_{H}-1}\sum_{r=-\infty}^{\infty}\bar{H}_{i,j}[k-r]x_{j} [r], \tag{40}\]
where \(\bar{H}\) is a \(K_{H}\times K_{H}\) matrix given by
\[\bar{H}_{i,j}[n]=H[i,nK_{H}+i-j]. \tag{41}\]
Equation 40 denotes an interesting equivalence between a SISO PTV and a time-invariant _multiple-input multiple-output_ (MIMO) linear system, shown in Fig. 7(b).
Inversely, any MIMO linear system written in the form of Eq. 40 can be represented as a SISO PTV, by defining the periodic impulse-responses as
\[H[h,m]=\bar{H}_{h,\mathrm{mod}(m,K_{H})}\left[\frac{m-\mathrm{mod}(m,K_{H})}{ K_{H}}\right], \tag{42}\]
and the higher-rate input-output signals as
\[\left\{\begin{array}{l}x[n]=x_{\mathrm{mod}(n,K_{H})}\left[(n-\mathrm{mod}( n,K_{H}))/K_{H}\right]\\ y[n]=y_{\mathrm{mod}(n,K_{H})}\left[(n-\mathrm{mod}(n,K_{H}))/K_{H}\right]. \end{array}\right. \tag{43}\]
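The equivalence of Eqs. 37-41 can be verified numerically: the sketch below simulates a random causal SISO DTPTV directly via Eq. 37 and again through its blocked MIMO form, using \(\bar{H}_{i,j}[n]=H[i,nK_{H}+i-j]\); the signal length is assumed to be a multiple of \(K_{H}\) for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)
K_H, M = 3, 5                               # discrete period, FIR length
H = rng.standard_normal((K_H, M))           # H[z, m]: causal SISO DTPTV taps
x = rng.standard_normal(60)                 # length assumed a multiple of K_H

# Direct simulation of Eq. 37.
y = np.array([sum(H[n % K_H, m] * x[n - m] for m in range(min(M, n + 1)))
              for n in range(len(x))])

# Blocked (MIMO) simulation via Eqs. 38, 40 and 41.
R = len(x) // K_H
xb = x.reshape(R, K_H).T                    # x_j[r] = x[r*K_H + j]
yb = np.zeros((K_H, R))
for i in range(K_H):
    for k in range(R):
        for j in range(K_H):
            for r in range(R):
                m = (k - r) * K_H + i - j   # Hbar_{i,j}[k-r] = H[i, m]
                if 0 <= m < M:
                    yb[i, k] += H[i, m] * xb[j, r]

assert np.allclose(y, yb.T.reshape(-1))     # identical sample by sample
```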
Fig. 6: Representation of a generic DTPTV linear system, relating \(N\) discrete-time inputs \(x_{j}[n]\) with \(M\) discrete-time outputs \(y_{i}[n]\). While \(H\) is the name of the DTPTV, \(K_{H}\) stands for its discrete period.
The equivalence shown in Fig. 7 allows the simple calculation of the DTPTV inverse \(H^{-1}\), as the inverse of a time-invariant MIMO \(K_{H}\times K_{H}\) is another MIMO of the same dimension. Basically, the matrix representation of that inverse must satisfy
\[\sum_{j=0}^{K_{H}-1}\sum_{m=-\infty}^{\infty}\bar{H}_{ij}^{(-1)}[m]\bar{H}_{jk} [n-m]=\delta_{ik}\delta_{n0}, \tag{44}\]
where \(\delta\) stands for the Kronecker delta. In addition, by using Eqs. 42 and 43, we can write \(\bar{H}^{(-1)}\) as a discrete-time SISO PTV. In conclusion, the inverse of a SISO PTV is another SISO PTV of the same period. This conclusion is also valid for continuous-time PTV systems.
### _Square PTV_
The _square_ PTV, shown in Fig. 8(a), is defined as a PTV system whose numbers of inputs and outputs are equal (\(M=N\) in the definition of Eq. 3). The discrete-time representation of such a system is given by
\[y_{i}[n]=\sum_{j=1}^{N}\sum_{m=-\infty}^{\infty}H_{i,j}[\mathrm{mod}(n,K_{H}), m]x_{j}[n-m], \tag{45}\]
where \(i\in\{1,2,..,N\}\). We define a higher-rate signal to serialize the output of the system, which reads
\[y[k]=y_{i}[n],\quad\left\{\begin{array}{l}i=\mathrm{mod}(k,N)+1\\ n=\frac{k-\mathrm{mod}(k,N)}{N}.\end{array}\right. \tag{46}\]
In a similar way, we define the serialized input
\[x[k-r]=x_{j}[n-m],\quad\left\{\begin{array}{l}j=\mathrm{mod}(k-r,N)+1\\ m=\frac{r-\mathrm{mod}(r,N)}{N}.\end{array}\right. \tag{47}\]
By replacing Eqs. 46 and 47 into Eq. 45, we obtain the TV linear system
\[y[k]=\sum_{r=-\infty}^{\infty}\hat{H}[k,r]x[k-r], \tag{48}\]
where
\[\hat{H}[k,r]=H_{\mathrm{mod}(k,N)+1,\mathrm{mod}(k-r,N)+1}\left[\mathrm{mod} \left(\frac{k-\mathrm{mod}(k,N)}{N},K_{H}\right),\frac{r-\mathrm{mod}(r,N)}{N }\right]. \tag{49}\]
From the definition of Eq. 49, it is easy to prove that this system is a DTPTV, since
\[\hat{H}[k+NK_{H},r]=\hat{H}[k,r]. \tag{50}\]
Consequently, we can rewrite Eq. 48 as
\[y[k]=\sum_{r=-\infty}^{\infty}\hat{H}[z_{\hat{H}}[k],r]x[k-r], \tag{51}\]
where \(z_{\hat{H}}[k]=\mathrm{mod}(k,K_{\hat{H}})\), with \(K_{\hat{H}}=NK_{H}\).
This result means that any square DTPTV can be modeled as a higher-rate SISO DTPTV, as shown in Fig. 8(b), with a discrete period \(N\) times larger. Inversely, we can prove that any SISO DTPTV \(\hat{H}\), of period \(K_{\hat{H}}\), can be represented as a lower-rate square DTPTV \(H\) of period \(K_{H}=K_{\hat{H}}/N\) by defining the parallel inputs/outputs
\[\left\{\begin{array}{l}x_{i}[n]=x[nN+i-1]\\ y_{i}[n]=y[nN+i-1]\end{array}\right. \tag{52}\]
and the periodic impulse-responses
\[H_{i,j}[n,m]=\hat{H}[nN+i-1,mN+j-1]. \tag{53}\]
Thus, by using the equivalence of Fig. 8, we can prove that the inverse of a square DTPTV is another square DTPTV, analogously to the inverse of a SISO DTPTV. Again, this conclusion is valid for continuous-time systems.
## VI Conclusions
Starting from a mathematical definition of the periodically time-variant linear systems, we derived simple rules to reduce a circuit, combining different PTVs with time-invariant linear systems, to a single PTV system. By using a frequency-domain analysis of that definition, we obtained a simple formula for the output bandwidth of a PTV, enabling a suitable discrete-time representation of such systems. In addition, we also found interesting equivalences for the DTPTV systems, allowing for the derivation of a meaningful conclusion: the inverse of a square PTV is another square PTV of the same dimension.
|
2308.08841 | Machine Learning-Assisted Discovery of Flow Reactor Designs | Additive manufacturing has enabled the fabrication of advanced reactor
geometries, permitting larger, more complex design spaces. Identifying
promising configurations within such spaces presents a significant challenge
for current approaches. Furthermore, existing parameterisations of reactor
geometries are low-dimensional with expensive optimisation limiting more
complex solutions. To address this challenge, we establish a machine
learning-assisted approach for the design of the next-generation of chemical
reactors, combining the application of high-dimensional parameterisations,
computational fluid dynamics, and multi-fidelity Bayesian optimisation. We
associate the development of mixing-enhancing vortical flow structures in novel
coiled reactors with performance, and use our approach to identify key
characteristics of optimal designs. By appealing to the principles of flow
dynamics, we rationalise the selection of novel design features that lead to
experimental plug flow performance improvements of 60% over conventional
designs. Our results demonstrate that coupling advanced manufacturing
techniques with `augmented-intelligence' approaches can lead to superior design
performance and, consequently, emissions-reduction and sustainability. | Tom Savage, Nausheen Basha, Jonathan McDonough, James Krassowski, Omar K Matar, Ehecatl Antonio del Rio Chanona | 2023-08-17T08:00:20Z | http://arxiv.org/abs/2308.08841v3 | # Machine Learning-Assisted Discovery
###### Abstract
Additive manufacturing has enabled the fabrication of advanced reactor geometries, permitting larger, more complex design spaces. Identifying promising configurations within such spaces presents a significant challenge for current approaches. Furthermore, existing parameterisations of reactor geometries are low-dimensional with expensive optimisation limiting more complex solutions. To address this challenge, we establish a machine learning-assisted approach for the design of the next-generation of chemical reactors, combining the application of high-dimensional parameterisations, computational fluid dynamics, and multi-fidelity Bayesian optimisation. We associate the development of mixing-enhancing vortical flow structures in novel coiled reactors with performance, and use our approach to identify key characteristics of optimal designs. By appealing to fluid mechanical principles, we rationalise the selection of novel design features that lead to experimental performance improvements of \(\sim 60\%\) over conventional designs. Our results demonstrate that coupling advanced manufacturing techniques with augmented-intelligence' approaches can lead to superior design performance and, consequently, emissions-reduction and sustainability.
## 1 Introduction
Advances in additive manufacturing have enabled the fabrication of a wide range of complex and potentially counter-intuitive reactor designs. Previously infeasible or highly impractical designs can now be manufactured and investigated, resulting in substantially larger design spaces. Coupled with data-driven design tools, such as multi-fidelity Bayesian approaches[1, 2, 3, 4, 5, 6, 7], which have enabled the optimisation of large-scale simulation-based problems, a pathway has emerged towards the identification of optimal reactors from larger design spaces with the potential for improved performance. By exploiting lower-fidelity simulations throughout optimisation, high-quality solutions can be generated in a significantly reduced time; this is particularly the case in scenarios where gradients are unavailable or where a more global solution is desired than gradient-based approaches can provide[1, 5]. The purpose of this article is to leverage advances in data-driven optimisation, machine learning, computational fluid dynamics, and additive manufacturing to develop a versatile 'augmented-intelligence' framework which leads to superior reactors that surpass the performance of conventional designs; the coiled tube reactor was chosen as an illustrative exemplar.
Coiled tube reactors have received attention across chemical engineering due to their desirable mixing and heat transfer characteristics[8, 9, 10, 11, 12, 13]. Their applications range from flow chemistry[14] and bioprocesses[15], to chemical kinetic experiments[11]. At the mesoscale, coiled tube reactors have been shown to combine the heat and mass transfer properties of microreactors with the economic benefits of high-throughput larger scale reactors[16]. Previous work has revealed improvement in the performance of coiled tube reactors at relatively low flow rates (characterised by Reynolds numbers \(Re\leq 50\)) by super-imposing pulsed-flow operating conditions, which induce Dean vortices[17, 18, 9] that enhance radial mixing; in the absence of pulsed flow, such vortices develop at much higher flow rates (and \(Re>300\))[19, 20]. It is desirable to induce the formation of Dean vortices at low flow rates, over large proportions of the reactor interior, under steady-flow conditions, and without the added overhead of pulsed-flow forcing.
We seek to enhance the plug-flow performance of coiled-tube reactors by identifying two novel parameterisations for the reactor geometry in the radial and axial directions. We solve the simulation-based optimisation problem for all parameterisations using multi-fidelity Bayesian optimisation, considering steady-flow conditions at low flow rates (with \(Re=50\) for which pulsed-flow would have been necessary to drive vortex formation). The composite objective we maximise consists of plug-flow performance, which we approximate from computational residence-time distributions using a tanks-in-series model, and a non-ideality term that penalises bimodal or unsymmetrical distributions. Optimal solutions are investigated, where, to the best of our knowledge, we identify the presence of fully-developed Dean vortices for the first time at low Reynolds numbers under steady-flow conditions. The key driving factors that result in improved performance are identified from which we design and present two reactors. We 3D-print and experimentally validate these designs, confirming their improved performance over a conventional coiled tube reactor.
Our work establishes a framework for the design of next-generation reactors that can significantly improve the performance, sustainability, and economic viability of various manufacturing processes. Ultimately, this work aims to shift the paradigm of reactor design to take advantage of the suite of
modern computational methodologies in design and optimisation, demonstrating new opportunities to support the discovery, innovation, and advancement across chemical engineering.
## 2 Results
Each parameterisation was optimised under steady-flow conditions, with a Reynolds number of 50. Details of both parameterisations can be found within the Methods. To ensure tractability of the problem, the joint parameterisation employs a sequential strategy whereby the parameters of the optimal coil path design were fixed before the introduction of parameters to manipulate the cross-section. **Figure 1a** shows the optimal geometries for each parameterisation. We denote the control reactor as design 'i', and the designs resulting from optimal cross-section, coil path, and combined parameters as 'ii', 'iii', and 'iv', respectively, corresponding to the subplots within **Fig. 1a**.
## 3 Effect of Design Features on Flow Structures
We first consider design 'ii', where the shape of the tube cross-section throughout the length of the reactor is allowed to vary. The first key feature of design 'ii' is that the cross-section undergoes periodic expansions and contractions approximately every half-turn. Secondly, the design comprises a pinch constricting the flow when the cross-sectional area is greatest, during the expansion phase.
Next, we consider the optimal solution resulting from the parameterisation of the coil path corresponding to design 'iii' where the path deviates from a nominal configuration via the interpolation of cylindrical coordinates. The coil radius of curvature of design 'iii' begins relatively large, and subsequently reduces along the length of the reactor, resulting in a tighter design than that in 'ii'. The pitch of the coil in design 'iii' begins small before rising approximately halfway along the length of the reactor then decreasing near the reactor outlet to the extent that the coil path points downwards. Within design 'iv', the path is fixed as in 'iii' and the cross-section is allowed to vary; inspection of design 'iv' reveals that it possesses features that are present in both designs 'ii' and 'iii'.
To further investigate why these solutions are deemed optimal,
Figure 1: **Overview of optimal reactor characteristics.** **a**, The conventional coiled-tube reactor (i) alongside optimal coiled-tube reactors generated by parameterising the cross-section (ii), the coil path (iii), and a joint parameterisation (iv). **b**, Velocity streamlines coloured with velocity magnitude within a standard coil (i) and within the optimal joint parameterisation (ii). **c**, Secondary flow streamlines at various cross-sections of the coil. **d**, Cross-sectional plane across the coil demonstrating streamwise velocities. **e**, The presence of induced Dean vortices within the optimal joint-parameterisation coil (ii) compared with a standard coil (i).
we demonstrate different flow characteristics. **Figure 1b** depicts changes in fluid velocity within design 'iv' compared to a conventional coil (design 'i'). The expansion and contraction features in the cross-section alter the velocity distribution along the coil's length in design 'iv', resulting in higher and lower velocities during contraction and expansion, respectively. Conversely, the conventional coil exhibits a relatively uniform velocity distribution along the coil length.
The velocity changes observed in design 'iv', due to acceleration and deceleration, induce stronger pressure gradients, leading to the formation of Dean vortices that significantly enhance radial mixing. **Figure 1c** demonstrates the formation of Dean vortices in both a standard coil and design 'iv', as shown by sub-plots i and ii in **Figure 1c**, respectively. While Dean vortices are also formed in a standard coil, they are only partially established close to the tube outlet. The earlier formation of fully-developed Dean vortices in design 'iv' demonstrates the impact of the reactor geometry resulting from the application of our framework, suggesting the potential for enhanced plug-flow performance within more compact reactors at lower \(Re\). Moreover, the inclusion of the characteristic pinch feature in design 'iv' plays a key role in redistributing velocity across the coil cross-section by altering the radial position of peak velocities along the coil length. **Figure 1d** demonstrates the velocity distribution throughout the length of a standard coil and design 'iv', as shown by sub-plots i and ii in **Figure 1d**, respectively. The radial distribution of the streamwise velocity associated with design 'i' exhibits a consistent radial position of peak velocity closer to the outer walls throughout the length, leading to increased axial dispersion of the tracer. In contrast, inspection of the radial streamwise velocity distribution in design 'iv' demonstrates its dynamic adjustment along the reactor path, characterised by acceleration of slow-moving fluid and deceleration of faster-moving fluid, effectively limiting axial dispersion, promoting radial mixing, and enhancing plug-flow performance.
To further illustrate the superior characteristics of the optimised designs over conventional coil tube reactors, **Figure 1e** depicts streamlines representing fluid flow within the initial length of design 'ii'. Blue streamlines originate from the coil centre, while black streamlines start near the coil wall. As the cross-section of the reactor expands throughout the initial curve, the slow-moving fluid radially furthest from the centre of the coil moves initially outwards. Subsequently, as the cross-sectional area is greatest, the fluid within the central region moves towards the outer walls of the coiled tube. The fluid closest to the outer walls is then acted upon by the change in cross-section of the tube in the form of a pinch forcing fluid towards the centre of the tube via a swirling motion. The cross-section then contracts again, enabling a repetition of the mechanism for radial mixing under steady-flow conditions.
### Convergence Analysis
In this section, we focus on the convergence of the designs from an optimisation perspective in order to highlight the potential optimality of the solutions provided. Gaussian processes (GPs) are used to model the simulation cost and objective throughout the design space, and are iteratively updated based on simulations selected through a multi-fidelity acquisition function. Full details of the optimisation can be found within the Methods. Given the high-dimensional design space, we apply t-Distributed Stochastic Neighbour Embedding (t-SNE) to identify emerging trends throughout optimisation, investigate design convergence, and highlight the behaviour of the GP hyperparameters (see Methods). This is a probabilistic dimensionality-reduction technique, enabling high-dimensional data to be plotted whilst preserving local structure. We focus on the convergence of the cross-section design problem, with the trends associated with the other parameterisations relegated to the Supplementary Information. We find that the cross-section optimisation problem has better convergence properties than the other two design problems, motivating our methodology of identifying optimal driving characteristics as opposed to designs themselves. **Figure 2a** shows the composite objective function against iteration (i) and wall-clock time (ii), illustrating how lower-cost simulations are applied to provide less-expensive exploration.
The initial design of experiments used within optimisation is represented as occurring before the first iteration (so-called 'negative' iterations), and before \(t=0\). This sample contains reactor simulations with a variety of computational costs, as simulations across the spectrum of fidelities are performed. After optimisation begins, low-cost, low-fidelity simulations are selected during an initial phase of exploration. Simulations are selected that are progressively less expensive until approximately the 20th iteration, as the updated GPs within the framework gain a better representation of simulation cost and are able to select more efficient simulations. The framework then undergoes a period in which it largely applies either the highest- or the lowest-fidelity simulations.
To investigate the properties of the GP that models the objective throughout optimisation, we observe the kernel-function hyper-parameters at a number of iterations. Figure 2**bi** shows the lengthscale of each dimension of the kernel function of the objective GP as optimisation progresses. Initially, the majority of lengthscales are relatively large, indicating a uniform function space capturing broad trends, as expected in a low-data regime in high dimensions. As optimisation progresses, and the GP is trained with more data, the lengthscales broadly decrease, indicating an improved representation of the design space. Figure 2**bii** shows the distribution of lengthscale hyper-parameters as optimisation progresses. The distribution of hyper-parameters tends towards lower values, and the GP becomes more nonlinear as data become available and correlations are more accurately captured.
As different simulation fidelities bias the objective function and the parameterisations are high-dimensional, we apply t-SNE to the convergence data of the cross-section parameterisation, observing data in the space of design parameters but not fidelities, and colouring data points based on a number of properties. Figure 2**c**
shows the two-dimensional t-SNE embeddings of data points \(\in\mathcal{X}\), labelled by axial and radial fidelities, objective value, and iteration. Feature **i** shows the initial data set generated before optimisation begins. From here, the optimisation can be seen to progress along the path denoted by **ii**, demonstrating systematic changes in parameters indicative of convergent behaviour. Along this path, a number of different fidelities are evaluated (feature **iii**), demonstrating how lower-fidelity simulations are applied.
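A minimal sketch of this embedding step, assuming the design vectors are collected row-wise in a matrix (the placeholder data and hyper-parameter values below are illustrative only), is:

```python
import numpy as np
from sklearn.manifold import TSNE

# X: one row per evaluated design, one column per design parameter
# (fidelity parameters excluded); placeholder data shown here.
X = np.random.default_rng(3).standard_normal((200, 36))
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(X)
# embedding[:, 0] and embedding[:, 1] can then be scattered and coloured
# by fidelity, objective value, or iteration, as in Fig. 2c.
```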
We present a sample of the inducing cross-sections from various clusters of data within the analysis, alongside the optimisation objective. The cross-sections in the original data set (feature **i**), as expected, contain no distinct characteristics. As the optimisation progresses, the cross-sections defined by inducing points become more distinct, with certain cross-sections gaining symmetrical forms. Finally, at the end of optimisation, the cross-sections are most distinct, consisting of alternating small and large cross-sectional areas, with pinches throughout.
Lastly, we define parameter variability as the normalised inverse of each GP lengthscale, each corresponding to a specific parameter. We calculate this property at the end of optimisation and present the value for each inducing parameter (the collection of which defines the form of the design space), indicating the variability in objective for which each specific parameter is responsible. As can be seen from Figure **2d**, which demonstrates parameter variability throughout the reactor, the variability resulting from parameters in the tube cross-section is greater where pinches occur than for those associated with the bottom and top of the tubular cross-section. These trends indicate that the tube cross-section, and specifically the pinch, has a significant effect on plug-flow performance, confirming our observations.
### Optimal Designs & Comparison with 3D-Printed Reactors
In the foregoing, we have shown that the design concepts identified through the use of data-driven optimisation and machine learning are as follows:
* Expansion and subsequent contraction of the reactor cross-section every half turn;
* Distinct pinch in cross-section throughout the reactor;
* Changes in the direction of flow, including constriction of the coil radius.

Figure 3: The nominal coiled tube (**left**) alongside extrapolated steady-flow coil designs containing aspects from the optimal cross-section (**centre**) and both cross-section and coil path (**right**).

Figure 2: Analysis of residence time distributions and optimisation convergence. **a**, The optimisation objective against iteration (i) and wall-clock time (ii). The objective has been normalised. The initial data set generated via design-of-experiments is denoted as negative iterations and wall-clock time. Hence, optimisation begins at the \(0^{\text{th}}\) iteration and at \(t=0\). **b**, Gaussian process dimension lengthscales throughout Bayesian optimisation iterations (i) alongside histograms demonstrating the distribution of GP lengthscales changing throughout optimisation (ii). **c**, t-SNE analysis of the data generated throughout optimisation in design parameter space (\(\mathcal{X}\)), reducing the dimensionality of design parameters to two dimensions, and labelled with different respective quantities. **d**, Parameter variability, defined as a function of the GP lengthscale corresponding to each parameter, plotted for each inducing parameter on a nominal coil.
Previous work has also demonstrated the existence of a linear relationship between the number of coils and plug-flow performance [21]. We now apply the design concepts to two reactors corresponding to longer coils than those studied above; for one, we include changes to the tube cross-section, and for the other, variations to both cross-section and coil path. **Figure 3** demonstrates the final coiled-tube reactor designs, containing features identified as being responsible for the optimality of both the cross-section and coil path parameterisations.
The designs shown in Figure 3 were 3D-printed, and residence-time distribution experiments (see Methods) were performed for each configuration using steady flow at \(Re=50\). Figure 4 shows the aggregated experimental data alongside tanks-in-series models for each reactor configuration. The standard coil produces a skewed distribution with a mean equivalent tanks-in-series value across all experiments of 39.27. The two distributions resulting from the proposed reactors with variable cross-section (R2) and cross-section with coil path (R3) are symmetrical. The mean equivalent tanks-in-series values for these two reactors are higher than that of the standard coil, at 61 (R2) and 63.45 (R3), representing 55% and 62% increases, respectively. The RTD resulting from R1 is to be expected, as typically a high \(Re\) is required for Dean vortices to form. The improvements in R2 and R3 demonstrate that we can avoid the need for oscillatory conditions to induce Dean vortices and promote mixing at low \(Re\), through the inclusion of the identified design concepts. Other factors such as multiphase flows and chemistry may be explored to further exploit the discovery of designs through machine learning. An uncertainty analysis of the experimental data can be found in the Supplementary Information. We note that the power consumption of the experimental syringe pumps was not measured; however, the proposed designs almost certainly have a larger pressure drop than the standard coil, due to the undulating changes in cross-section. Future work may propose to treat this, or a similar case, as a multi-objective black-box optimisation problem, trading off pressure drop and plug-flow performance.
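As a sketch of how such an equivalent tanks-in-series value can be extracted, the code below fits the standard tanks-in-series RTD, \(E(\theta)=N(N\theta)^{N-1}e^{-N\theta}/(N-1)!\) (with \((N-1)!\) generalised via the gamma function to allow non-integer \(N\)), to hypothetical measurements by least squares; the data here are synthetic, not our experimental values.

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize_scalar

def tanks_in_series(theta, N):
    """Dimensionless tanks-in-series RTD, E(theta), for (possibly non-integer) N."""
    return np.exp(N * np.log(N) + (N - 1) * np.log(theta) - N * theta - gammaln(N))

# Hypothetical measured RTD (dimensionless time vs. concentration), standing in
# for experimental tracer data.
rng = np.random.default_rng(2)
theta = np.linspace(0.5, 1.6, 30)
E_meas = tanks_in_series(theta, 60.0) * (1 + 0.02 * rng.standard_normal(30))

# Least-squares fit of the equivalent number of tanks.
sse = lambda N: np.sum((tanks_in_series(theta, N) - E_meas) ** 2)
res = minimize_scalar(sse, bounds=(1.0, 200.0), method="bounded")
print(f"equivalent tanks-in-series: {res.x:.1f}")
```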
Throughout optimisation, we cannot guarantee that the global optimum of the parameterisations has been identified. However, the global methodology we apply provides more varied solutions than a gradient-based local approach such as the adjoint method, and we identify the key features that drive the behaviour of fluid within these reactors, including the presence of induced Dean vortices. Subsequently, the reactors that we experimentally validate can be interpreted as 'augmented intelligence'-assisted designs, with features based on the optimal solutions. We propose this workflow for the design of highly parameterised reactors as the interpretability of solutions is maintained. To support the design of algorithms for the optimisation of high-dimensional expensive black-box problems, we release all parameterisations as benchmark problems1. The parameterisations we optimise contain feasible reactor geometries by design, resulting in an unconstrained optimisation problem. Highlighting the importance of parameterisation specification, the emergent behaviour in designs 'iii' and 'iv' contains a coil path that unintentionally pitches down near the reactor outlet.
Footnote 1: Found at [https://github.com/trsav/reactor_benchmark](https://github.com/trsav/reactor_benchmark)
We anticipate that the workflow for machine-learning-assisted discovery we present here can be used across a large number of expensive simulation-based design and optimisation problems involving, for instance, reactive and multi-phase flows. Alongside developments in additive manufacturing, we hope that chemical reactors discovered via machine learning become increasingly prevalent to support future sustainable processes.
Figure 4: Data generated across three experiments for each reactor configuration, designed to maximise equivalent tanks-in-series and minimise non-symmetric RTDs. A tanks-in-series model is used to estimate the performance of each reactor across the experiments performed. Dimensionless concentration at the outlet of the reactor (\(E(\theta)\)) is plotted against dimensionless time (\(\theta\)).
## Methods
### Parameterisations
In this section, we focus on the design of efficient parameterisations for the discovery of novel coiled-tube reactors, ensuring tractability of the design problem. We present two distinct parameterisations: one where the geometry of the cross-section varies along the length of the reactor, and another where the reactor path itself is allowed to vary. To manage the flexibility/viability trade-off, both parameterisations themselves are defined by hyper-parameters determining their complexity. To ensure smooth transitions in these parameterisations, we employ interpolating points in their formulation, both to define the reactor path, and the cross-section throughout the reactor length. We first introduce Gaussian processes in polar coordinates as a means to generate variable tube cross-sections from interpolating points.
#### Polar Gaussian Processes
A Gaussian process is an infinite-dimensional generalisation of a multivariate Gaussian distribution [22]. The mean vector and covariance matrix are replaced by a mean function and a kernel function, respectively. A Gaussian process can be described as
\[f(\mathbf{x})\sim\mathcal{GP}(m(\mathbf{x}),k(\mathbf{x},\mathbf{x}^{{}^{\prime}})).\]
The kernel function \(k\) dictates the behaviour of functions from this distribution, and can be parameterised by hyper-parameters including the length scale. By conditioning a Gaussian process on previously evaluated inputs \(\mathbf{X}\) with function values \(\mathbf{y}\), a posterior distribution of functions can be obtained. At test inputs \(\mathbf{X}_{*}\), the posterior predictive mean and covariance become
\[\mu_{f}(\mathbf{X}_{*}) =K(\mathbf{X}_{*},\mathbf{X})K(\mathbf{X},\mathbf{X})^{-1}\mathbf{y}\] \[\sigma_{f}(\mathbf{X}_{*}) =K(\mathbf{X}_{*},\mathbf{X}_{*})-K(\mathbf{X}_{*},\mathbf{X})K(\mathbf{X},\mathbf{X})^{-1}K(\mathbf{X},\mathbf{X}_{*})\]
where \(K\) is a covariance matrix derived from kernel function \(k\).
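To make these expressions concrete, the following minimal NumPy sketch (our own illustration, not the implementation used in this work) computes the posterior predictive mean and covariance of a noiseless GP; the squared-exponential kernel, toy data, and jitter term are illustrative assumptions.

```python
import numpy as np

def sq_exp_kernel(x1, x2, length_scale=1.0):
    # Squared-exponential kernel between two sets of 1-D inputs.
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(X, y, X_star, kernel=sq_exp_kernel, jitter=1e-8):
    # Posterior predictive mean and covariance of a noiseless GP,
    # following the expressions above; jitter stabilises the solve.
    K = kernel(X, X) + jitter * np.eye(len(X))
    K_s = kernel(X_star, X)        # K(X_*, X)
    K_ss = kernel(X_star, X_star)  # K(X_*, X_*)
    mu = K_s @ np.linalg.solve(K, y)
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mu, cov

# Condition on three observations and predict on a grid.
X = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 0.5])
mu, cov = gp_posterior(X, y, np.linspace(0.0, 2.0, 50))
```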
The squared-exponential kernel function assigns decreasing correlation between locations in input space with increasing Euclidean distance, providing an intuitive interpretation for the majority of regression settings. Other kernel functions have been proposed to provide valid covariance matrices, resulting in Gaussian process prior distributions with different properties. The polar squared-exponential kernel enables valid covariance matrices to be constructed in polar coordinates and was outlined by [23]. A standard kernel function applied naively to data in polar coordinates will determine that observations at \(\theta_{1}=0\) and \(\theta_{2}=2\pi\) are highly uncorrelated. This is untrue: in the presence of noiseless observations these two data points should have perfect correlation (\(k(\theta_{1},\theta_{2})=1\)). Polar kernel functions enable smooth interpolation in polar coordinates by allowing proper distances, and respective covariances, to be calculated between any two angles [23, 24]. The polar covariance function is written as
\[k(\theta,\theta^{{}^{\prime}})=\left|\left(1+\tau\frac{d(\theta,\theta^{{}^{ \prime}})}{\pi}\right)\left(1-\frac{d(\theta,\theta^{{}^{\prime}})}{\pi} \right)^{\tau}\right|\quad\tau\geq 4. \tag{1}\]
The angular distance metric \(d\) is given as
\[d(\theta,\theta^{{}^{\prime}})=|(\theta-\theta^{{}^{\prime}}+\pi)\mod 2\pi- \pi|,\]
where \(\tau\) is a hyper-parameter analogous to a length scale that controls how smooth the prior distribution of functions is. Figure 5 demonstrates samples from a Gaussian process prior with the polar kernel, as well as the posterior distribution of a Gaussian process with the polar kernel conditioned on data.
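A minimal sketch of Equation (1) and the angular distance metric follows; the function names are ours, and the final lines illustrate the perfect correlation between angles identical modulo \(2\pi\) and a draw from the resulting prior.

```python
import numpy as np

def angular_distance(t1, t2):
    # d(theta, theta') = |(theta - theta' + pi) mod 2*pi - pi|
    return np.abs((t1[:, None] - t2[None, :] + np.pi) % (2 * np.pi) - np.pi)

def polar_kernel(t1, t2, tau=4.0):
    # Polar covariance function of Equation (1); valid for tau >= 4.
    d = angular_distance(t1, t2)
    return np.abs((1 + tau * d / np.pi) * (1 - d / np.pi) ** tau)

# Angles identical modulo 2*pi are perfectly correlated:
assert np.isclose(polar_kernel(np.array([0.0]), np.array([2 * np.pi]))[0, 0], 1.0)

# Sample from the corresponding GP prior on a grid of angles.
theta = np.linspace(0.0, 2 * np.pi, 100)
K = polar_kernel(theta, theta) + 1e-8 * np.eye(len(theta))
sample = np.random.multivariate_normal(np.zeros(len(theta)), K)
```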
#### Coil Cross-Section Parameterisation
Initially, we define the number of interpolating points for a given tubular cross-section, denoted \(n_{c}\) and indexed by \(i\). We then distribute these points equally along the angle \(\theta\) in polar coordinates. The radius coordinate \(r_{i}\), \(i=1,\ldots,n_{c}\), of each inducing point serves as a decision variable, or parameter. Given a set of inducing points corresponding to a specific cross-section, a polar Gaussian process is used to interpolate between them, resulting in a valid, continuous curve in polar coordinates defining an individual cross-section. Next, we establish the number of interpolating cross-sections, denoted \(n_{l}\) and indexed by \(j\), equally spaced along the length of the coil. Both \(n_{c}\) and \(n_{l}\) are hyper-parameters; increasing either one raises the dimensionality of the resulting design problem, whilst resulting in a more flexible parameterisation. Figure 6 demonstrates the polar GP interpolation of each specific cross-section along the length of the coil, with \(n_{c}=6\) and \(n_{l}=6\). To ensure compatibility with existing fixtures and fittings, two additional cross-sections with constant radius are defined at the beginning and end of the coil.
Subsequently, we place each cross-section along the defined reactor path in 3D space and rotate each cross-section to face perpendicular to the direction of flow. Figure 6(a) demonstrates how these cross-sections are orientated. The resulting interpolated cross-sections are defined by a quadratic interpolation between each polar GP posterior, in cylindrical coordinates. Finally, an inlet and an outlet are added to the reactor by extending the cross-sections at the beginning and end of the reactor.
Figure 5: _Left: samples from a Gaussian process prior with a polar kernel. Right: the posterior distribution after the polar prior has been conditioned on data. In this demonstration we assume noiseless observations._
Figure 6(b) shows the final coil information generated by the parameterisation.
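The sketch below illustrates how a single cross-section could be generated from \(n_{c}\) inducing radii using the posterior mean of a polar GP; centring the GP on the mean radius is our own simplifying assumption rather than a detail of the actual implementation.

```python
import numpy as np

def polar_kernel(t1, t2, tau=4.0):
    d = np.abs((t1[:, None] - t2[None, :] + np.pi) % (2 * np.pi) - np.pi)
    return np.abs((1 + tau * d / np.pi) * (1 - d / np.pi) ** tau)

def cross_section(radii, n_eval=200, jitter=1e-8):
    # Interpolate n_c inducing radii, equally spaced in angle, with the
    # posterior mean of a polar GP centred on the mean radius.
    n_c = len(radii)
    th_ind = np.linspace(0.0, 2 * np.pi, n_c, endpoint=False)
    th_eval = np.linspace(0.0, 2 * np.pi, n_eval)
    K = polar_kernel(th_ind, th_ind) + jitter * np.eye(n_c)
    K_s = polar_kernel(th_eval, th_ind)
    m = radii.mean()
    r_eval = m + K_s @ np.linalg.solve(K, radii - m)
    return th_eval, r_eval

# One cross-section with n_c = 6 radii inside the 2 mm / 4 mm bounds.
theta, r = cross_section(np.array([2.0, 3.5, 2.5, 4.0, 3.0, 2.0]))
x, y = r * np.cos(theta), r * np.sin(theta)  # outline of the tube wall
```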
The data generated from the parameterisation is provided to code that meshes tubular reactors2. Importantly, the meshing procedure enables control over the number of cells in the axial direction of the reactor (the direction of fluid flow), as well as in the radial direction. These are defined by axial and radial fidelity parameters, respectively; both values dictate the factor by which blocks are subdivided during mesh creation. By maintaining the ability to adapt the fidelity of the mesh in two independent directions, we provide greater scope for identifying efficient simulations (regarding information gained per unit of computational expense). Previous work has demonstrated how the output of coiled-tube reactor simulations varies approximately smoothly with changes in fidelities [1, 25]. In this article we assume the fidelities to be continuous throughout modelling and optimisation, and round them to the nearest integer when stored or evaluated. We direct the reader to previous work outlining the effect of axial and radial fidelity on the final coiled-tube reactor mesh in more detail.
Footnote 2: Found at [https://github.com/OptiMaL-PSE-Lab/pulsed-reactor-optimisation](https://github.com/OptiMaL-PSE-Lab/pulsed-reactor-optimisation)
#### Coil Path Parameterisation

To achieve a parameterisation of the coil path, we start by defining a baseline path in cylindrical coordinates, \(\left(\rho_{0},\phi_{0},z_{0}\right)\). This path is based on a standard coil configuration and serves as a reference for introducing induced deviations. Let \(\Delta\rho_{j}\) and \(\Delta z_{j}\) be the deviations in the radial and height directions, respectively, for each inducing point \(j\). In optimising for inducing point deviations, we ensure that we maintain the coil's non-intersection constraint by design.
We introduce the decision variables as mismatch terms to the baseline path, resulting in a parameterised path given by \(\left(\rho_{j},\phi_{j},z_{j}\right)=\left(\rho_{0}+\Delta\rho_{j},\phi_{0},z_{0}+\Delta z_{j}\right)\). After defining the path, we place circles defining the constant cross-section in 3D space parallel to the direction of flow. Let \(n_{p}\) be the number of inducing points along the coil path, indexed by \(j\). This parameterisation is similar to the one presented by [26]. To illustrate the parameterisation process, consider Figure 8, where the baseline path and inducing points with deviations are shown. We define 6 inducing points for both the axial and radial directions, i.e. \(n_{p}=6\), but none in the rotational coordinate \(\phi\). Deviations in \(\phi\) may be included; however, we find that this often results in self-intersections when the coil cross-section is defined from the path.
From the resulting coil path, the coil is defined and created similarly to the procedure for the preceding parameterisation. Figure 9 shows a render of a tubular reactor produced from this parameterisation.
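A sketch of the coil-path parameterisation is given below; for brevity, linear interpolation of the inducing-point deviations stands in for the smooth interpolation used in practice, and all names and values other than the nominal dimensions are illustrative.

```python
import numpy as np

def coil_path(d_rho, d_z, rho_n=12.5, pitch=10.4, n_rot=2, n_eval=400):
    # Baseline helix (rho_0, phi_0, z_0) in cylindrical coordinates, plus
    # deviations interpolated from n_p inducing points along the path.
    phi = np.linspace(0.0, 2 * np.pi * n_rot, n_eval)
    s = phi / phi[-1]                      # normalised position along the path
    s_ind = np.linspace(0.0, 1.0, len(d_rho))
    rho = rho_n + np.interp(s, s_ind, d_rho)          # |d_rho| <= 3.5 mm
    z = pitch * n_rot * s + np.interp(s, s_ind, d_z)  # |d_z| <= 1 mm
    return np.stack([rho * np.cos(phi), rho * np.sin(phi), z], axis=1)

# n_p = 6 inducing points; no deviations in the rotational coordinate phi.
path = coil_path(d_rho=np.array([0.0, 2.0, -1.5, 3.0, -2.0, 0.0]),
                 d_z=np.array([0.0, 0.5, -0.5, 1.0, -1.0, 0.0]))
```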
Further details regarding the implementation of all parameterisations can be found in the Supplementary Information.
#### Nominal Reactor and Parameter Bounds

We first specify a nominal reactor, the path of which is used across both parameterisations. Based on previous work, the standard coiled-tube
Figure 8: Baseline coil path and inducing points with deviations in cylindrical coordinates. **Top**: the cylindrical coordinates of the nominal coil, final coil, and inducing points, presented alongside their values as a function of coil length. **Bottom**: the path of the nominal and final coil from three alternative perspectives in Cartesian coordinates.
Figure 6: Examples of sets of inducing points at given locations along the length of a coil. A polar Gaussian process is used to interpolate between points, where each data point’s radial value is a parameter.
Figure 7: Demonstration of how inducing points are used to fit polar Gaussian processes defining the cross-section at fixed locations, which contribute towards the overall geometry of a coil.
reactor performs optimally with a low pitch and large coil radius. We specify the nominal pitch \(p_{n}\) as 10.4 mm and the nominal coil radius \(C_{r,n}\) as 12.5 mm. Previous work has demonstrated that as the number of coils increases, the number of equivalent tanks-in-series rises approximately linearly. We therefore specify two coil rotations, balancing overall simulation time and reactor size, providing an overall reactor length of \(4\pi C_{r,n}\). For cross-sectional interpolating points we apply lower and upper bounds of 2 mm and 4 mm respectively, indicating minimum and maximum radial values. To maintain feasible coil paths we bound the height deviations \(\Delta z\) by \(\pm\)1 mm and the radial deviations \(\Delta\rho\) by \(\pm\)3.5 mm, ensuring no self-intersections.
## Simulation
We apply an established methodology for the evaluation of reactor designs under specific operating conditions, similar to the approach used in our previous work [1]. Utilising OpenFOAM, a simulation is performed in which an impulse tracer is injected into the water medium, and the resulting concentration is tracked by solving the relevant transport equations. Key aspects of the approach include the application of the transient pimpleFoam solver for the unsteady momentum equations, and scalarTransportFoam to handle convection-diffusion effects with the diffusion coefficient held constant at a low value. The integral of the concentration over the outlet is recorded at each timestep and returned from the simulation. The evaluation process additionally involves monitoring the tracer concentration at the outlet to terminate the solution at a specific threshold, ensuring minimal solution times. This method has been adapted to the current study, enabling the evaluation of highly-parameterised reactors; more information can be found in [1] and [17].
## Optimisation
The optimisation of highly-parameterised reactor simulations is high-dimensional and involves computationally expensive function evaluations. To solve this problem we apply multi-fidelity Bayesian optimisation. The specific approach we apply, DARTS, was presented by [1] for the optimisation of simulated reactors and tubes. The method enables multiple continuous simulation fidelities to be considered simultaneously. In addition, the approach explicitly models the cost of a simulation as a function of simulation fidelity as well as of inputs (such as geometry parameters).
#### Multi-Fidelity Bayesian Optimisation

Expensive derivative-free optimisation problems, such as the optimisation of CFD simulations, the selection of appropriate neural network hyper-parameters, or the optimal design of experiments, exist throughout a number of domains. Bayesian optimisation has been proposed to address these challenges, with the aim of determining optimal solutions within a tolerable computational or time budget. The majority of computationally expensive functions share a common trait: their complexity can be reduced at the expense of accuracy, resulting in the ability to perform function evaluations with the same set of decision variables with varying degrees of confidence, dictated by one or more fidelities. Fidelities control both the bias and the noise of function evaluations; for example, reducing the number of discrete cells used to evaluate a flow field results in a less accurate output. Leveraging lower-fidelity function evaluations for the optimisation of expensive systems, where a time or computational budget constrains the search for an optimal solution, can improve tractability or enable the solution of otherwise intractable problems.
Multi-fidelity Bayesian optimisation integrates function evaluations across a number of different fidelities. By applying a cost-adjusted acquisition function, both the subsequent set of decision variables and the simulation fidelities are selected, accounting for the information/cost trade-off. Incorporating lower-fidelity evaluations reduces the time to generate optimal solutions, as fewer high-fidelity simulations have to be performed. Multi-fidelity approaches contribute to environmental savings in settings where evaluations necessitate substantial computational resources, such as large CFD simulations, reducing the life-cycle emissions of overall experimentation, design, and optimisation procedures. Multi-fidelity Bayesian optimisation has been applied to the design of reactor and tube simulations [1], battery design [4], hyper-parameter optimisation [2], and the design of ice-sheet simulations [27].
#### DARTS

The DARTS framework for the design and analysis of reactor and tube simulations was proposed by [1]. The approach takes advantage of multi-fidelity simulations and integrates the optimisation process with OpenFOAM within a single Python framework. At the core of the framework, simulations consisting of a set of decision variables \(\mathbf{x}\in\mathcal{X}\) and fidelities \(\mathbf{z}\in\mathcal{Z}\) are chosen based on a cost-adjusted acquisition function to solve the following equation
\[\mathbf{x}^{*}=\arg\max_{\mathbf{x}\in\mathcal{X}}f(\mathbf{x},\mathbf{z}_{\bullet}) \tag{2}\]
where \(\mathbf{z}_{\bullet}\) is the element-wise vector of highest fidelity parameters3. Equation 3 presents the cost-adjusted acquisition function,
Footnote 3: In the case that the dimensionality of \(\mathbf{z}\) is greater than 2 and the set is non-ordered, this is specified as the element-wise maximum of fidelity values.
\[\mathbf{x}_{t+1},\mathbf{z}_{t+1}=\underset{(\mathbf{x},\mathbf{z})\in\mathcal{X}\times\mathcal{Z}}{\text{argmax}}\ \ \frac{\mu_{f_{t}}(\mathbf{x},\mathbf{z}_{\bullet})+\beta^{1/2}\sigma_{f_{t}}\big{(}\mathbf{x},\mathbf{z}_{\bullet}\big{)}}{\mu_{\lambda_{t}}(\mathbf{x},\mathbf{z})\sqrt{1-k((\mathbf{x},\mathbf{z}),(\mathbf{x},\mathbf{z}_{\bullet}))^{2}}}. \tag{3}\]
Figure 9: The tubular reactor resulting from interpolation of the path in cylindrical coordinates, following the definition of the coil path above.
The criterion balances the exploration, exploitation, and cost of computational experiments, and allows multiple continuous fidelity parameters to be considered. The framework also guarantees that an evaluated high-fidelity solution is returned. Further information regarding the DARTS framework can be found in [1].
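To make Equation 3 concrete, the sketch below evaluates the cost-adjusted criterion for a single candidate \((\mathbf{x},\mathbf{z})\); it assumes that posterior mean and standard-deviation functions for the objective GP, a posterior mean for the cost GP, and the normalised fidelity correlation are available, and the toy stand-ins at the end are purely illustrative.

```python
import numpy as np

def cost_adjusted_acquisition(mu_f, sigma_f, mu_cost, k_corr, x, z, z_max, beta=1.5):
    # Numerator: upper confidence bound of the objective at the highest fidelity.
    ucb = mu_f(x, z_max) + np.sqrt(beta) * sigma_f(x, z_max)
    # Denominator: expected cost, discounted by how informative (x, z) is
    # about (x, z_max) through the GP correlation k_corr in [0, 1).
    penalty = mu_cost(x, z) * np.sqrt(1.0 - k_corr(x, z, z_max) ** 2)
    return ucb / penalty

# Toy stand-ins: higher fidelity is more informative but more expensive.
mu_f = lambda x, z: -np.sum((x - 0.5) ** 2)
sigma_f = lambda x, z: 0.1
mu_cost = lambda x, z: 1.0 + 10.0 * float(np.mean(z))
k_corr = lambda x, z, zm: 0.5 + 0.45 * float(np.mean(z)) / float(np.mean(zm))
a = cost_adjusted_acquisition(mu_f, sigma_f, mu_cost, k_corr,
                              x=np.array([0.4]), z=np.array([0.3]),
                              z_max=np.array([1.0]))
```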
We define a maximum time budget depending approximately on the number of parameters in each parameterisation optimised, ranging between 72 and 168 hours in total. The DARTS framework involves three hyper-parameters to be decided: \(\beta\), defining the exploration level; \(\gamma\), defining how much the cost of a simulation is weighted when selecting the next simulation for evaluation; and \(p_{c}\), which dictates how conservative the stopping criterion is. We select \(\beta=1.5\) and \(p_{c}=2\) based on values that were deemed to be robust in previous work concerning a similar problem.
#### Optimisation Objective

Previous work has demonstrated the optimisation and analysis of coiled-tube reactors. The resulting residence time distribution (RTD) from a simulation is approximated using a tanks-in-series model, with a larger number of theoretical tanks representing stronger plug-flow performance. In these studies, the observed RTDs were unimodal and relatively symmetrical. Due to the extensive design space applied within this article, we found that simulations can return non-ideal distributions that contain particularly long tails, or are bimodal. As more complex fluid flows are induced by the reactor designs, the resulting distributions become less ideal, and the tanks-in-series model provides a poor approximation.
Therefore, in this work we propose a composite objective function that serves not only to maximise plug-flow performance, but also to encourage symmetric and unimodal RTDs, accounting for designs that return non-ideal concentration distributions. Equation 6 presents the quantity, denoted \(f\), that is minimised during optimisation.
\[\hat{E}_{i}(N) =\frac{N(N\theta_{i})^{N-1}}{(N-1)!}e^{-N\theta_{i}}\quad i=1,\ldots,d \tag{4}\] \[N^{*} =\arg\min_{N}\sum_{i=1}^{d}\left(E_{i}-\hat{E}_{i}(N)\right)^{2} \tag{5}\] \[f =\underbrace{-N^{*}}_{\text{equivalent tanks}}+\underbrace{\alpha\sum_{i=1}^{d}\left(E_{i}-\hat{E}_{i}(N^{*})\right)^{2}}_{\text{error}} \tag{6}\]
where \(d\) is the number of data points contained within an RTD returned from a simulation, \(\hat{E}\) and \(E\) are the predicted and returned dimensionless concentration values respectively, \(\theta\) represents dimensionless time, and \(N\) represents the number of tanks-in-series. Based on initial testing, a value of 100 is assigned to \(\alpha\), ensuring that the tanks-in-series term and the non-ideality error are weighted approximately equally.
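The sketch below shows one way to evaluate this composite objective with SciPy, generalising \((N-1)!\) via the gamma function so that non-integer \(N\) is handled; the bounds on \(N\), the toy RTD, and the sign convention (following Equation 6 as written above) are our assumptions.

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize_scalar

def tanks_in_series(theta, N):
    # E_hat(theta; N) of Equation 4, with (N-1)! generalised via gammaln.
    return np.exp(np.log(N) + (N - 1) * np.log(N * theta) - gammaln(N) - N * theta)

def composite_objective(theta, E, alpha=100.0):
    # Fit N* by least squares (Equation 5), then combine the equivalent
    # tanks-in-series and the non-ideality error (Equation 6).
    sse = lambda N: float(np.sum((E - tanks_in_series(theta, N)) ** 2))
    res = minimize_scalar(sse, bounds=(1.0, 500.0), method="bounded")
    return -res.x + alpha * res.fun

theta = np.linspace(0.2, 2.5, 80)                            # dimensionless time
E = tanks_in_series(theta, 40.0) + 0.01 * np.sin(5 * theta)  # toy RTD
f = composite_objective(theta, E)                            # to be minimised
```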
All code was evaluated using 64 CPUs, a single RTX6000 GPU to aid with training and evaluating Gaussian processes, and 64 GB of RAM. All code used within this article can be found within the associated repository [https://github.com/OptiMaL-PSE-Lab/pulsed-reactor-optimisation](https://github.com/OptiMaL-PSE-Lab/pulsed-reactor-optimisation). Code containing only the respective parameterisations and function evaluations for benchmarking purposes can be found within the associated repository [https://github.com/trsav/reactor_benchmark](https://github.com/trsav/reactor_benchmark).
## Experimental Validation
The selected designs were exported from their mesh representation into the Standard Tessellation Language (.STL) file format and modified into 3D-printable models. This was achieved by incorporating a bounding cylinder to encompass each reactor. The linear sections at the inlet and outlet were specifically configured with 8 mm and 10 mm outside diameter (OD) tube fittings, a requirement for interfacing with the experimental apparatus. Finally, the bounding cylinder was reduced, with the internal volume between it and the coil removed, to minimise the resin required for printing. The designs were printed using a FormLabs Form3+ 3D printer with Clear V4 resin, following the manufacturer's default settings. A post-processing phase was performed, comprising a washing stage in isopropyl alcohol (IPA) for 20 minutes, a drying period extending 24 hours, and a concluding post-cure treatment in a FormCure chamber, maintained at a temperature of 60\({}^{\circ}\)C for a 30-minute interval.
The RTD method was the same as that reported by McDonough et al. [9] and applied by Savage et al. [1]. A 0.1 M KCl aqueous tracer solution was injected at the inlet of each reactor, and the conductivity at the outlet was measured over time. The net flow of deionized water and the tracer injection were controlled using three separate OEM syringe pumps (C3000, TriContinent) that were hydraulically linked to the reactor via PTFE tubing routed through a custom Swagelok piece.
|
2302.03797 | Men Can't Always be Transformed into Mice: Decision Algorithms and
Complexity for Sorting by Symmetric Reversals | Sorting a permutation by reversals is a famous problem in genome
rearrangements. Since 1997, considerable biological evidence has been found that in
many genomes the reversed regions are usually flanked by a pair of inverted
repeats. These reversals are called symmetric reversals, which,
unfortunately, were largely ignored until recently. In this paper, we
investigate the problem of sorting by symmetric reversals, which requires a
series of symmetric reversals to transform one chromosome $A$ into another
chromosome $B$. The decision problem of sorting by symmetric reversals is
referred to as {\em SSR} (when the input chromosomes $A$ and $B$ are given, we
use {\em SSR(A,B)}) and the corresponding optimization version (i.e., when the
answer for {\em SSR(A,B)} is yes, using the minimum number of symmetric
reversals to convert $A$ to $B$), is referred to as {\em SMSR(A,B)}. The main
results of this paper are summarized as follows, where the input is a pair of
chromosomes $A$ and $B$ with $n$ repeats. (1) We present an $O(n^2)$ time
algorithm to solve the decision problem {\em SSR(A,B)}, i.e., determine whether
a chromosome $A$ can be transformed into $B$ by a series of symmetric
reversals. (2) We design an $O(n^2)$ time algorithm for a special 2-balanced
case of {\em SMSR(A,B)}, where chromosomes $A$ and $B$ both have duplication
number 2 and every repeat appears twice in different orientations in $A$ and
$B$. (3) We show that SMSR is NP-hard even if the duplication number of the
input chromosomes is at most 2, hence showing that the above positive
optimization result is the best possible. As a by-product, we show that the
\emph{minimum Steiner tree} problem on \emph{circle graphs} is NP-hard,
settling the complexity status of a 38-year old open problem. | Xin Tong, Yixiao Yu, Ziyi Fang, Haitao Jiang, Lusheng Wang, Binhai Zhu, Daming Zhu | 2023-02-07T23:29:59Z | http://arxiv.org/abs/2302.03797v1 | # Men Can't Always be Transformed into Mice:
###### Abstract
Sorting a permutation by reversals is a famous problem in genome rearrangements, and has been well studied over the past thirty years. However, the involvement of repeated segments is inevitable during genome evolution, especially in reversal events. Since 1997, considerable biological evidence has been found that in many genomes the reversed regions are usually flanked by a pair of inverted repeats. For example, a reversal will transform \(+a+x-y-z-a\) into \(+a+z+y-x-a\), where \(+a\) and \(-a\) form a pair of inverted repeats.
Such reversals are called symmetric reversals, which, unfortunately, were largely ignored until recently. In this paper, we investigate the problem of sorting by symmetric reversals, which requires a series of symmetric reversals to transform one chromosome \(A\) into another chromosome \(B\). The decision problem of sorting by symmetric reversals is referred to as _SSR_ (when the input chromosomes \(A\) and \(B\) are given, we use _SSR(A,B)_; similarly for the following optimization version), and the corresponding optimization version (i.e., when the answer for _SSR(A,B)_ is yes, using the minimum number of symmetric reversals to convert \(A\) to \(B\)) is referred to as _SMSR(A,B)_. The main results of this paper are summarized as follows, where the input is a pair of chromosomes \(A\) and \(B\) with \(n\) repeats.
1. We present an \(O(n^{2})\) time algorithm to solve the decision problem _SSR(A,B)_, i.e., determine whether a chromosome \(A\) can be transformed into \(B\) by a series of symmetric reversals. This result is achieved by converting the problem to one on a circle graph, which is augmented significantly beyond the traditional circle graph, and a list of combinatorial properties must be proved to successfully answer the decision question.
2. We design an \(O(n^{2})\) time algorithm for a special 2-balanced case of _SMSR(A,B)_, where chromosomes \(A\) and \(B\) both have duplication number 2 and every repeat appears twice in different orientations in \(A\) and \(B\).
3. We show that SMSR is NP-hard even if the duplication number of the input chromosomes is at most 2, hence showing that the above positive optimization result is the best possible. As a by-product, we show that the _minimum Steiner tree_ problem on _circle graphs_ is NP-hard, settling the complexity status of a 38-year-old open problem.
## 1 Introduction
In the 1980s, considerable evidence was found that some species have essentially the same set of genes, but their gene order differs [11, 12]. Since then, sorting permutations with rearrangement operations has gained a lot of interest in the area of computational biology over the last thirty years. Sankoff _et al._ formally defined genome rearrangement events with some basic operations on genomes, e.g., reversals, transpositions and translocations [10], of which the reversal operation is adopted the most frequently [13, 14, 15].
The complexity of the problem of sorting permutations by reversals is closely related to whether the genes are signed or not. Watterson _et al._ pioneered the research on sorting an unsigned permutation by reversals [15]. In 1997, Caprara established the NP-hardness of this problem [1]. Soon after, Berman _et al._ showed it to be APX-hard [1]. Kececioglu and Sankoff presented the first polynomial-time approximation for this problem with a factor of 2 [13]. The approximation ratio was improved to 1.5 by Christie [14]. As far as we know, the best approximation ratio for the problem of sorting an unsigned permutation by reversals is 1.375 [1]. As for the more realistic problem of sorting signed permutations by reversals, Hannenhalli and Pevzner proposed an \(O(n^{4})\) time exact algorithm, where \(n\) is the number of genes in the given permutation (genome) [11]. The time complexity was later improved to \(O(n^{2})\) by Kaplan _et al._ [12]. The current best running time is \(O(n^{1.5}\sqrt{\log n})\) by Tannier _et al._ [15].
On the other hand, some evidence has been found that the breakpoints where reversals occur could have some special property in the genomes [10, 11]. As early as 1997, some studies showed that the breakpoints are often associated with repetitive elements in mammalian and Drosophila genomes [16, 1, 17]. In fact, the well-known "site-specific recombination",
which has an important application in "gene knock out" [11, 2, 12], also fulfills this rule. However, it was still not clear why and how repetitive elements play important roles in genome rearrangement. Recently, Wang _et al._ conducted a systematic study comparing different strains of various bacteria such as Pseudomonas aeruginosa, Escherichia coli, Mycobacterium tuberculosis and Shewanella [13, 14]. Their study further illustrated that repeats are associated with the ends of rearrangement segments for various rearrangement events such as reversal, transposition, inverted block interchange, etc., so that the left and right neighborhoods of those repeats remain unchanged after the rearrangement events. Focusing on reversal events, the reversed regions are usually flanked by a pair of inverted repeats [2]. A real example from Pseudomonas aeruginosa strains in [15] is shown in Figure 1. Such a phenomenon can also better explain why the famous "breakpoint reuse" (which was an interesting finding, discussed in detail when comparing human with mouse) happens [16].
In this paper, we propose a new model called _sorting by symmetric reversals_, which requires each inverted region on the chromosomes to be flanked by a pair of mutually inverted repeats. We investigate the decision problem of sorting by symmetric reversals (SSR for short), which asks whether a chromosome can be transformed into the other by a series of symmetric reversals. We devise an \(O(n^{2})\) time algorithm to solve this decision problem. We also study the optimization version (referred to as SMSR) that uses a minimum number of symmetric reversals to transform one chromosome into the other. We design an \(O(n^{2})\) time algorithm for a special 2-balanced case of SMSR, where the chromosomes have duplication number 2 and every repeat appears twice in different orientations in each chromosome. We finally show that the optimization problem SMSR is NP-hard even if each repeat has at most 2 duplications in each of the input chromosomes.
In the NP-hardness proof, we set up the relationship between our problem and the _minimum Steiner tree_ problem on _circle graphs_. The minimum Steiner tree problem on circle graphs has been considered to be in \(P\) as indicated by Johnson in 1985 [12]. Recently, Figueiredo _et al._ revisited Johnson's table and still marked the problem as in \(P\)[17], while leaving the reference as "ongoing". Here we clarify that the _minimum Steiner tree_ problem on _circle
Figure 1: Three symmetric reversals use the repeat ‘\(+B\)’ three times.
graphs_ is in fact NP-hard, settling this 38-year-old open problem.
This paper is organized as follows. In Section 2, we give some definitions. We then present an algorithm to solve SSR under a special case, where the duplication number of the input chromosomes is 2 in Section 3. In Section 4, we present a polynomial algorithm for SMSR for the special 2-balanced case. In Section 5, we present an algorithm to solve SSR for the general case. In Section 6, we show that SMSR is NP-hard for the case that chromosomes have duplication number 2, with the help of the new NP-hardness result on the minimum Steiner tree problem on circle graphs. Finally, conclusions are given in Section 7.
## 2 Preliminaries
In the literature of genome rearrangement, we always have a set of integers \(\Sigma_{1}=\{1,\cdots,g\}\), where each integer stands for a long DNA sequence (a syntenic block or a gene). For simplicity, we use "gene" hereafter. Since we will study symmetric reversals, we define \(\Sigma_{2}=\{r_{0},r_{1},r_{2},\cdots,r_{t}\}\) to be a set of symbols, each of which is referred to as a _repeat_ and represents a relatively short DNA sequence compared with genes. We then set \(\Sigma=\Sigma_{1}\cup\Sigma_{2}\) to be the alphabet for the whole chromosome.
Since reversal operations work on a chromosome internally, a genome can be considered as a chromosome for our purpose, i.e., each genome is a singleton and contains only one chromosome. Here we assume that each gene appears exactly once on a chromosome; a repeat, on the other hand, could by name appear multiple times. A gene/repeat \(x\) on a chromosome may appear in two different orientations, i.e., either as \(+x\) or \(-x\). Thus, each chromosome of interest is presented by a sequence of signed integers/symbols.
The number of occurrences of a gene/repeat \(x\) in both orientations is called the _duplication number_ of \(x\) on the chromosome \(\pi\), denoted by \(dp[x,\pi]\). The duplication number of a chromosome \(\pi\), denoted by \(dp[\pi]\), is the maximum duplication number of the repeats on it. For example, chromosome \(\pi=[+r_{0},+1,-r,+2,\)\(+r,-r_{0}]\), \(dp[1,\pi]=dp[2,\pi]=1\), \(dp[r_{0},\pi]=dp[r,\pi]=2\), and \(dp[\pi]=2\). Two chromosomes \(\pi_{1}\) and \(\pi_{2}\) are _related_ if their duplication numbers for all genes and repeats are identical. Let \(|x|\in\Sigma\) be an integer or symbol, and \(+|x|\) and \(-|x|\) be two occurrences of \(|x|\), where the orientations of \(+|x|\) and \(-|x|\) are different. A chromosome of \(n\) genes/repeats is denoted as \(\pi=[x_{1},x_{2},\ldots,x_{n-1},x_{n}]\). A linear chromosome has two ends, and it can be read from either end to the other, so the chromosome \(\pi=[x_{1},x_{2},\ldots,x_{n-1},x_{n}]\) can also be described as \([-x_{n},-x_{n-1},\ldots,-x_{2},-x_{1}]\), which is called the _reversed and negated_ form of \(\pi\).
A _reversal_ is an operation that reverses a contiguous segment of integers (or symbols) on the chromosome. A _symmetric reversal_ is a reversal where the reversed segment is flanked by a pair of identical repeats with different orientations, i.e., either \((+r,\cdots,-r)\) or \((-r,\cdots,+r)\) for some \(r\in\Sigma_{2}\). In other words, let \(\pi=[x_{1},x_{2},\ldots,x_{n}]\) be a chromosome. The reversal \(\rho(i,j)\) (\(1\leq i<j\leq n\)) reverses the segment \([x_{i},x_{i+1},\ldots,x_{j-1},x_{j}]\), and yields \(\pi^{\prime}=[x_{1},x_{2},\ldots,x_{i-1},-x_{j},-x_{j-1},\ldots,-x_{i+1},-x_{i},x_{j+1},\ldots,x_{n}]\). If \(x_{i}=-x_{j}\), we say that \(\rho(i,j)\) is a symmetric reversal on \(|x_{i}|\). Reversing a whole chromosome will not change the relative order of the integers but only their signs, so we assume that each chromosome is flanked by \(+r_{0}\) and \(-r_{0}\); a chromosome then turns into its reversed and negated form by performing a symmetric reversal between \(+r_{0}\) and \(-r_{0}\).
Again, as a simple example, let \(\pi=[+r_{0},+1,-r_{1},+2,+r_{2},+r_{1},+r_{2},-r_{0}]\), then a symmetric reversal on \(r_{1}\) yields \(\pi^{\prime}=[+r_{0},+1,-r_{1},-r_{2},-2,+r_{1},+r_{2},-r_{0}]\).
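As an illustrative sketch (our own integer encoding, not taken from the paper), a chromosome can be stored as a list of signed integers, with each repeat mapped to a distinct integer; a symmetric reversal then reverses and negates the flanked segment:

```python
def symmetric_reversal(chrom, i, j):
    # Apply rho(i, j): reverse and negate chrom[i..j].
    # The segment must be flanked by inverted occurrences of a repeat.
    assert chrom[i] == -chrom[j], "not a symmetric reversal"
    return chrom[:i] + [-x for x in reversed(chrom[i:j + 1])] + chrom[j + 1:]

# pi = [+r0, +1, -r1, +2, +r2, +r1, +r2, -r0], with r0, r1, r2 -> 100, 101, 102.
pi = [100, 1, -101, 2, 102, 101, 102, -100]
# The symmetric reversal on r1 (positions 2..5) yields
# [+r0, +1, -r1, -r2, -2, +r1, +r2, -r0], matching the example above.
assert symmetric_reversal(pi, 2, 5) == [100, 1, -101, -102, -2, 101, 102, -100]
```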
Now, we formally define the problems to be investigated in this paper.
**Definition 2.1**: _Sorting by Symmetric Reversals, **SSR** for short._
_Instance: Two related chromosomes \(\pi\) and \(\tau\), such that \(dp[\pi]=dp[\tau]\geq 2\)._
_Question: Is there a sequence of symmetric reversals that transforms \(\pi\) into \(\tau\)?_
**Definition 2.2**: _Sorting by the Minimum Symmetric Reversals, **SMSR** for short._
_Instance: Two related chromosomes \(\pi\) and \(\tau\) with \(dp[\pi]=dp[\tau]\geq 2\), and an integer \(m\)._
_Question: Is there a sequence of symmetric reversals \(\rho_{1},\rho_{2},\ldots,\rho_{m}\) that transforms \(\pi\) into \(\tau\), such that \(m\) is minimized?_
There is a standard way to make a signed gene/repeat unsigned. Let \(\pi=[x_{0},x_{1},\ldots,x_{n+1}]\) be a chromosome; each occurrence of a gene/repeat of \(\pi\), say \(x_{i}\) (\(0\leq i\leq n+1\)), is represented by a pair of ordered nodes, \(l(x_{i})\) and \(r(x_{i})\). If the sign of \(x_{i}\) is \(+\), then \(l(x_{i})=|x_{i}|^{h}\) and \(r(x_{i})=|x_{i}|^{t}\); otherwise, \(l(x_{i})=|x_{i}|^{t}\) and \(r(x_{i})=|x_{i}|^{h}\). Note that, if \(x_{i}\) and \(x_{j}\) (\(i\neq j\)) are different occurrences of the same repeat, i.e., \(|x_{i}|=|x_{j}|\), then \(l(x_{i})\), \(l(x_{j})\), \(r(x_{i})\) and \(r(x_{j})\) correspond to two nodes \(|x_{i}|^{h}\) and \(|x_{i}|^{t}\) only. Consequently, \(\pi\) will also be described as \([l(x_{0}),r(x_{0}),l(x_{1}),r(x_{1}),\ldots,l(x_{n+1}),r(x_{n+1})]\). We say that \(r(x_{i})\) and \(l(x_{i+1})\), for \(0\leq i\leq n\), form an _adjacency_, denoted by \(\langle r(x_{i}),l(x_{i+1})\rangle\). (Note that in the signed representation of a chromosome \(\pi\), we simply say that \(\langle x_{i},x_{i+1}\rangle\) forms an adjacency; moreover, \(\langle x_{i},x_{i+1}\rangle=\langle-x_{i+1},-x_{i}\rangle\).) Also, we say that the adjacency \(\langle r(x_{i}),l(x_{i+1})\rangle\) is associated with \(x_{i}\) and \(x_{i+1}\). Let \(\mathcal{A}[\pi]\) represent the multi-set of adjacencies of \(\pi\). We take the chromosome \(\pi=[+r_{0},+1,-r_{1},+2,+r_{1},-r_{0}]\) as an example to explain the above notations. The multi-set of adjacencies is \(\mathcal{A}[\pi]=\{\langle r_{0}^{t},1^{h}\rangle\), \(\langle 1^{t},r_{1}^{t}\rangle\), \(\langle r_{1}^{h},2^{h}\rangle\), \(\langle 2^{t},r_{1}^{h}\rangle\), \(\langle r_{1}^{t},r_{0}^{t}\rangle\}\); \(\pi\) can also be viewed as \([r_{0}^{h},r_{0}^{t},1^{h},1^{t},r_{1}^{t},r_{1}^{h},2^{h},2^{t},r_{1}^{h},r_{1}^{t},r_{0}^{t},r_{0}^{h}]\).
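A sketch of computing the multi-set of adjacencies under this node representation is given below; adjacencies are stored as unordered pairs so that \(\langle x_{i},x_{i+1}\rangle\) and \(\langle-x_{i+1},-x_{i}\rangle\) coincide, and the integer encoding of repeats is ours.

```python
from collections import Counter

def nodes(x):
    # Ordered node pair (l(x), r(x)): +x gives (x^h, x^t), -x gives (x^t, x^h).
    h, t = f"{abs(x)}^h", f"{abs(x)}^t"
    return (h, t) if x > 0 else (t, h)

def adjacency_multiset(chrom):
    # A[pi]: multiset of adjacencies <r(x_i), l(x_{i+1})>, stored as
    # unordered pairs so a chromosome and its reversed, negated form agree.
    return Counter(tuple(sorted((nodes(chrom[i])[1], nodes(chrom[i + 1])[0])))
                   for i in range(len(chrom) - 1))

# pi = [+r0, +1, -r1, +2, +r1, -r0], with r0 -> 100 and r1 -> 101:
pi = [100, 1, -101, 2, 101, -100]
print(adjacency_multiset(pi))
```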
**Lemma 2.1**: _Let \(\pi\) be a chromosome and \(\pi^{\prime}\) is obtained from \(\pi\) by performing a symmetric reversal. Then \(\mathcal{A}[\pi]=\mathcal{A}[\pi^{\prime}]\)._
_Proof._ It is apparent that performing the symmetric reversal between \(+r_{0}\) and \(-r_{0}\) will not change \(\mathcal{A}[\pi]\). Assume that the symmetric reversal \(\rho(i,j)\) is performed on the chromosome \(\pi=[x_{0},x_{1},\ldots,x_{n+1}]\), such that \(x_{i}=-x_{j}\), where \(1\leq i<j\leq n\), and yields \(\pi^{\prime}=\pi\bullet\rho(i,j)=[x_{0},x_{1},\ldots,x_{i-1},-x_{j},-x_{j-1},\ldots\), \(-x_{i+1},-x_{i},x_{j+1},\ldots,x_{n+1}]\). Then \(\rho(i,j)\) breaks two adjacencies \(\langle r(x_{i-1}),l(x_{i})\rangle\) and \(\langle r(x_{j}),l(x_{j+1})\rangle\), and creates two new adjacencies \(\langle r(x_{i-1}),l(-x_{j})\rangle\) and \(\langle r(-x_{i}),l(x_{j+1})\rangle\). Since \(x_{i}=-x_{j}\), \(l(x_{i})=l(-x_{j})\) and \(r(x_{j})=r(-x_{i})\), thus
\(\langle r(x_{i-1}),l(x_{i})\rangle=\langle r(x_{i-1}),l(-x_{j})\rangle\) and \(\langle r(x_{j}),l(x_{j+1})\rangle=\langle r(-x_{i}),l(x_{j+1})\rangle\). Consequently, \({\cal A}[\pi]={\cal A}[\pi^{\prime}]\). \(\sqcap\)\(\sqcup\)
Actually, Lemma 2.1 implies a necessary condition for answering the decision question of **SSR**.
**Theorem 2.1**: _Chromosome \(\pi\) cannot be transformed into \(\tau\) by a series of symmetric reversals if \({\cal A}[\pi]\neq{\cal A}[\tau]\)._
A simple negative example would be \(\pi=[+r_{0},\,+r_{1},\,-2,\,+r_{1},\,-1,\,-r_{0}]\) and \(\tau=[+r_{0},\)\(-r_{1},\)\(+2,\)\(-r_{1},\,+1,\)\(-r_{0}]\). One can easily check that \({\cal A}[\pi]\neq{\cal A}[\tau]\), which means that there is no way to convert \(\pi\) to \(\tau\) using symmetric reversals. In the next section, as a warm-up, we first solve the case when each repeat appears at most twice in \(\pi\) and \(\tau\). Even though the method is not extremely hard, we hope the presentation and some of the concepts can help readers understand the details for the general case in Section 5 better.
## 3 An \(O(n^{2})\) Algorithm for SSR with Duplication Number 2
In this section, we consider a special case, where the duplication numbers for the two related chromosomes \(\pi\) and \(\tau\) are both 2. That is, \({\cal A}[\pi]={\cal A}[\tau]\) and \(dp[\pi]=dp[\tau]=2\). We will design an algorithm with running time \(O(n^{2})\) to determine if there is a sequence of symmetric reversals that transform \(\pi\) into \(\tau\).
Note that \({\cal A}[\pi]\) is a multi-set, where an adjacency may appear more than once. When the duplication number of each repeat in the chromosome is at most 2, the same adjacency can appear at most twice in \({\cal A}[\pi]\).
Let \(\pi=[x_{0},x_{1},\ldots,x_{n+1}]\) be a chromosome. Let \(x_{i}\) and \(x_{j}\) be the two occurrences of a repeat \(x\), and \(x_{i+1}\) and \(x_{j+1}\) the two occurrences of the other repeat \(x^{\prime}\) in \(\pi\). We say that \(|x_{i}|\) and \(|x_{i+1}|\) are _redundant_ if \(r(x_{i})=r(x_{j})\) and \(l(x_{i+1})=l(x_{j+1})\) (or \(r(x_{i})=l(x_{j})\) and \(l(x_{i+1})=r(x_{j-1})\)). In this case, the adjacency \(\langle r(x_{i}),l(x_{i+1})\rangle\) appears twice; in fact, this is the only way an adjacency can appear twice. An example is as follows: \(\pi=[+r_{0},+r_{1},-r_{2},+1,+r_{2},-r_{1},-r_{0}]\), where the adjacency \(\langle+r_{1},-r_{2}\rangle\) appears twice (the second time negatively), hence \(r_{1}\) and \(r_{2}\) are redundant. The following lemma tells us that if \(|x_{i}|\) and \(|x_{i+1}|\) are redundant, we only need to use one of them to do reversals, and the other can be deleted from the chromosome so that each adjacency appears only once.
**Lemma 3.1**: _Given two chromosomes \(\pi=[x_{0},x_{1},\ldots,x_{n+1}]\) and \(\tau\), such that \({\cal A}[\pi]={\cal A}[\tau]\). Let \(|x_{i}|\) and \(|x_{i+1}|\) be two repeats in \(\pi\) that are redundant. Let \(\pi^{\prime}\) and \(\tau^{\prime}\) be the chromosomes after deleting the two occurrences of \(|x_{i+1}|\) from both \(\pi\) and \(\tau\), respectively. Then \(\pi\) can be transformed into \(\tau\) by a series of symmetric reversals if and only if \(\pi^{\prime}\) can be transformed into \(\tau^{\prime}\) by a series of symmetric reversals._
_Proof._ Without loss of generality, we assume that \(r(x_{i})=r(x_{j})=|x_{i}|^{a}\) and \(l(x_{i+1})=l(x_{j+1})=|x_{i+1}|^{b}\), where \(a,b\in\{h,t\}\). The proof of the other case is similar.
(\(\Rightarrow\)) Assume that there is a series of symmetric reversals, \(\rho_{1},\rho_{2},\ldots,\rho_{m}\), that transforms \(\pi\) into \(\tau\); to be specific, \(\pi_{0}=\pi\), \(\pi_{k}=\pi_{k-1}\bullet\rho_{k}\) for each \(1\leq k\leq m\), and \(\pi_{m}=\tau\). Suppose that there exists a symmetric reversal \(\rho_{k}\) (\(1\leq k\leq m\)) which is on \(|x_{i+1}|\). Lemma 2.1 guarantees that \(\langle|x_{i}|^{a},|x_{i+1}|^{b}\rangle\) still appears twice in \(\mathcal{A}[\pi_{k-1}]\). Because the two occurrences of \(|x_{i+1}|\) have distinct signs in \(\pi_{k-1}\), the two \(|x_{i}|^{a}\)s are located on different sides of the two \(|x_{i+1}|^{b}\)s respectively, which implies that one \(|x_{i}|^{a}\) is the left node of one occurrence of \(|x_{i}|\) and the other \(|x_{i}|^{a}\) is the right node of the other occurrence of \(|x_{i}|\). Therefore the signs of the two occurrences of \(|x_{i}|\) are also distinct in \(\pi_{k-1}\). It is apparent that performing the symmetric reversal on \(|x_{i}|\) would also transform \(\pi_{k-1}\) into \(\pi_{k}\).
(\(\Leftarrow\)) Assume that there is a series of symmetric reversals, \(\rho^{\prime}_{1},\rho^{\prime}_{2},\ldots,\rho^{\prime}_{m^{\prime}}\), which transforms \(\pi^{\prime}\) into \(\tau^{\prime}\). Let the corresponding chromosomes be \(\pi^{\prime}_{0}=\pi^{\prime}\), \(\pi^{\prime}_{1}\),..., \(\pi^{\prime}_{m^{\prime}}=\tau^{\prime}\), where \(\pi^{\prime}_{k^{\prime}}=\pi^{\prime}_{k^{\prime}-1}\bullet\rho^{\prime}_{k^{\prime}}\) for each \(1\leq k^{\prime}\leq m^{\prime}\). We can obtain \(\overline{\pi^{\prime}}_{k^{\prime}}\) by substituting \(x_{i}\) with \([x_{i},x_{i+1}]\) and \(-x_{i}\) with \([-x_{i+1},-x_{i}]\) in \(\pi^{\prime}_{k^{\prime}}\). Clearly, \(\overline{\pi^{\prime}}_{0}=\pi\) and \(\overline{\pi^{\prime}}_{m^{\prime}}=\tau\). For each \(1\leq k^{\prime}\leq m^{\prime}\), \(\rho^{\prime}_{k^{\prime}}\) is applicable to \(\overline{\pi^{\prime}}_{k^{\prime}-1}\), since all the elements of \(\overline{\pi^{\prime}}_{k^{\prime}-1}\) have the same signs as those of \(\pi^{\prime}_{k^{\prime}-1}\); also, \(\overline{\pi^{\prime}}_{k^{\prime}}=\overline{\pi^{\prime}}_{k^{\prime}-1}\bullet\rho^{\prime}_{k^{\prime}}\), since the two adjacencies of the form \(\langle|x_{i}|^{a},|x_{i+1}|^{b}\rangle\) cannot be changed by \(\rho^{\prime}_{k^{\prime}}\). \(\sqcap\)\(\sqcup\)
Regarding the previous example, \(\pi=[+r_{0},+r_{1},-r_{2},+1,+r_{2},-r_{1},-r_{0}]\), where \(r_{1}\) and \(r_{2}\) are redundant, following the above lemma, one can obtain \(\pi^{\prime}=[+r_{0},+r_{1},+1,-r_{1},-r_{0}]\). This is in fact equivalent to replacing the adjacency \(\langle+r_{1},-r_{2}\rangle\) by \(r_{1}\), and \(\langle+r_{2},-r_{1}\rangle\) by \(-r_{1}\).
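In code, the reduction of Lemma 3.1 amounts to deleting both occurrences of one repeat of a redundant pair (a sketch with the same integer encoding as above):

```python
def delete_repeat(chrom, r):
    # Lemma 3.1 reduction: remove both occurrences of repeat r.
    return [x for x in chrom if abs(x) != r]

# pi = [+r0, +r1, -r2, +1, +r2, -r1, -r0]; r1 and r2 are redundant, so r2
# (encoded 102) can be dropped, giving [+r0, +r1, +1, -r1, -r0].
pi = [100, 101, -102, 1, 102, -101, -100]
assert delete_repeat(pi, 102) == [100, 101, 1, -101, -100]
```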
A chromosome \(\pi\) is _simple_ if every adjacency in \({\cal A}[\pi]\) appears only once. Based on Lemma 3.1, we can remove the two occurrences of a redundant repeat from the chromosomes. Thus, if \(dp[\pi]=dp[\tau]=2\), we can always assume that both \(\pi\) and \(\tau\) are simple. Consequently, there is a unique bijection between two corresponding adjacency sets \({\cal A}[\pi]\) and \({\cal A}[\tau]\). We say that any pair of identical adjacencies are matched to each other.
For each repeat \(x\) with \(dp[\pi,x]=dp[\tau,x]=2\), let \(x_{i}\), \(x_{j}\) be the two occurrences of \(x\) in \(\pi\), and \(y_{i^{\prime}}\), \(y_{j^{\prime}}\) be the two occurrences of \(x\) in \(\tau\). There are four adjacencies associated with \(x_{i}\) and \(x_{j}\) in \(\pi\): \(\langle r(x_{i-1}),l(x_{i})\rangle\), \(\langle r(x_{i}),l(x_{i+1})\rangle\), \(\langle r(x_{j-1}),l(x_{j})\rangle\), \(\langle r(x_{j}),l(x_{j+1})\rangle\). Similarly, there are four adjacencies associated with \(y_{i^{\prime}}\) and \(y_{j^{\prime}}\) in \(\tau\). We say that \(x\) is a _neighbor-consistent_ repeat if \(\langle r(x_{i-1}),l(x_{i})\rangle\) and \(\langle r(x_{i}),l(x_{i+1})\rangle\) are matched to two adjacencies both associated with \(y_{i^{\prime}}\) or both associated with \(y_{j^{\prime}}\). That is, the left and right neighbors of \(x_{i}\) are identical in both chromosomes. Note that \(\mathcal{A}[\pi]=\mathcal{A}[\tau]\) also implies that the left and right neighbors of the other occurrence \(x_{j}\) are identical in the two chromosomes if \(x\) is neighbor-consistent. If \(\langle r(x_{i-1}),l(x_{i})\rangle\) and \(\langle r(x_{i}),l(x_{i+1})\rangle\) are matched to two adjacencies, one of which is associated with \(y_{i^{\prime}}\) and the other with \(y_{j^{\prime}}\), then \(x\) is a _neighbor-inconsistent_ repeat. The genes and the repeats which appear once in \(\pi\) are also defined to be neighbor-consistent. (See Figure 2 for an example.) By definition and the fact that \(\mathcal{A}[\pi]=\mathcal{A}[\tau]\), we have
**Proposition 3.1**: _Performing a symmetric reversal on a repeat will turn the repeat from neighbor-consistent to neighbor-inconsistent or vice versa. (See Figure 2.)_
**Theorem 3.1**: _Given two simple related chromosomes \(\pi^{*}\) and \(\tau\) with \(dp[\pi^{*}]=dp[\tau]=2\), \(\pi^{*}=\tau\) if and only if \(\mathcal{A}[\pi^{*}]=\mathcal{A}[\tau]\) and every repeat is neighbor-consistent._
_Proof._ Assume that \(\pi^{*}=[x_{0},x_{1},\ldots,x_{n},x_{n+1}]\) and \(\tau=[y_{0},y_{1},\ldots,y_{n},y_{n+1}]\), where \(x_{0}=y_{0}=+r_{0}\) and \(x_{n+1}=y_{n+1}=-r_{0}\).
The sufficiency part is surely true, since we have \(x_{i}=y_{i}\) for \(0\leq i\leq n+1\).
Now we prove the necessity inductively. Our inductive hypothesis is that \(x_{i}=y_{i}\) for \(0\leq i\leq n+1\). Initially, we have \(x_{0}=y_{0}=+r_{0}\) and \(x_{1}=y_{1}\) since \(\langle r_{0}^{t},l(x_{1})\rangle=\langle r_{0}^{t},l(y_{1})\rangle\). For the inductive step, consider \(x_{i+1}\). Because \(\mathcal{A}[\pi^{*}]=\mathcal{A}[\tau]\), the adjacency \(\langle r(x_{i}),l(x_{i+1})\rangle\) appears in both \(\mathcal{A}[\pi^{*}]\) and \(\mathcal{A}[\tau]\). Since \(|x_{i}|\) is neighbor-consistent, the two adjacencies \(\langle r(x_{i-1}),l(x_{i})\rangle\) and \(\langle r(x_{i}),l(x_{i+1})\rangle\) must be matched to two adjacencies which are associated with a single occurrence of \(|x_{i}|\) in \(\tau\). From the inductive hypothesis, \(x_{i-1}=y_{i-1}\) and \(x_{i}=y_{i}\), so \(\langle r(x_{i-1}),l(x_{i})\rangle\) has been matched to \(\langle r(y_{i-1}),l(y_{i})\rangle\). Thus, \(\langle r(x_{i}),l(x_{i+1})\rangle\) must be matched to \(\langle r(y_{i}),l(y_{i+1})\rangle\); together with \(x_{i}=y_{i}\), we have \(l(x_{i+1})=l(y_{i+1})\) and \(x_{i+1}=y_{i+1}\). \(\sqcap\)\(\sqcup\)
Based on Proposition 3.1 and Theorem 3.1, to transform \(\pi\) into \(\tau\), it is sufficient to perform an odd number (at least 1) of symmetric reversals on each neighbor-inconsistent repeat, and an even number (possibly 0) of symmetric reversals on each neighbor-consistent repeat. Hereafter, we also refer to a neighbor-consistent (resp. neighbor-inconsistent) repeat as an _even_ (resp. _odd_) repeat.
Figure 2: \(x_{i}\), \(x_{j}\) are the two occurrences of \(x\) in \(\pi\) with \(x_{i}=-x_{j}\), and \(y_{i^{\prime}}\), \(y_{j^{\prime}}\) are the two occurrences of \(x\) in \(\tau\). Case \((a)\): \(x\) is neighbor-consistent, and will turn neighbor-inconsistent by a reversal on itself. Case \((b)\): \(x\) is neighbor-inconsistent, and will turn neighbor-consistent by a reversal on itself.
The main difficulty in finding a sequence of symmetric reversals between \(\pi\) and \(\tau\) is to choose a "correct" symmetric reversal at each step. Note that, for a pair of occurrences \((x_{i},x_{j})\) of a repeat \(x\), the orientations may be the same at present, while after some reversals the orientations of \(x_{i}\) and \(x_{j}\) may differ. We can only perform a reversal on a pair of occurrences of a repeat with different orientations. Thus, it is crucial to choose a "correct" symmetric reversal at the right time. In the following, we will use the "intersection" graph to handle this.
Suppose that we are given two simple related chromosomes \(\pi\) and \(\tau\) with \(dp[\pi]=dp[\tau]=2\) and \(\mathcal{A}[\pi]=\mathcal{A}[\tau]\). In this case, each repeat in the chromosomes represents an interval indicated by the two occurrences of the repeat. Thus, we can construct an _intersection graph_ \(IG(\pi,\tau)=(V[\pi],E[\pi])\). For each repeat \(x\) with \(dp[\pi,x]=2\), construct a vertex \(x\in V[\pi]\), and set its weight \(\omega(x)=2\) if \(x\) is even, and \(\omega(x)=1\) if \(x\) is odd; set the color of \(x\) black if the signs of the two occurrences of \(x\) in \(\pi\) are different, and white otherwise. Construct an edge between two vertices \(x\) and \(y\) if and only if the occurrences of \(x\) and \(y\) appear alternately in \(\pi\), i.e., letting \(x_{i}\) and \(x_{j}\) (\(i<j\)) be the two occurrences of \(x\), and \(x_{k}\) and \(x_{l}\) (\(k<l\)) be the two occurrences of \(y\) in \(\pi\), there is an edge between the vertices \(x\) and \(y\) if and only if \(i<k<j<l\) or \(k<i<l<j\). There are three types of vertices in \(V[\pi]\): black vertices (denoted \(V_{b}[\pi]\)), white vertices of weight 1 (denoted \(V_{w}^{1}[\pi]\)) and white vertices of weight 2 (denoted \(V_{w}^{2}[\pi]\)). Thus, \(V[\pi]=V_{b}[\pi]\cup V_{w}^{1}[\pi]\cup V_{w}^{2}[\pi]\). In fact, the intersection graph is a circle graph when the weights and colors of the vertices are ignored.
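A sketch of this construction follows; since determining the parity (even/odd) of each repeat requires matching adjacencies between \(\pi\) and \(\tau\), the set of odd repeats is taken here as a given input, and the encoding is ours.

```python
from itertools import combinations

def intersection_graph(pi, odd_repeats):
    # Vertices: repeats occurring twice in pi; weight 2 if even, 1 if odd;
    # black iff the two occurrences have different signs.
    pos = {}
    for idx, x in enumerate(pi):
        pos.setdefault(abs(x), []).append(idx)
    V = {r: {"weight": 1 if r in odd_repeats else 2,
             "black": pi[p[0]] == -pi[p[1]]}
         for r, p in pos.items() if len(p) == 2}
    # Edges: repeats whose occurrence intervals interleave.
    E = {frozenset((r, s)) for r, s in combinations(V, 2)
         if pos[r][0] < pos[s][0] < pos[r][1] < pos[s][1]
         or pos[s][0] < pos[r][0] < pos[s][1] < pos[r][1]}
    return V, E

# Toy chromosome [+r0, +r1, -r2, +r1, +r2, -r0], with r0, r1, r2 -> 100, 101, 102;
# the occurrences of 101 and 102 interleave, so E == {frozenset({101, 102})}.
V, E = intersection_graph([100, 101, -102, 101, 102, -100], odd_repeats={101})
```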
**Lemma 3.2**: _A single white vertex of weight 1 cannot be a connected component in \(IG(\pi,\tau)\)._
_Proof._ We prove it by contradiction. Assume that \(x\) is a white vertex of weight 1 which by itself forms a connected component of \(IG(\pi,\tau)\). Let the two occurrences of \(x\) be \(x_{i}\) and \(x_{j}\) (\(i<j\)) in \(\pi\), and \(y_{k}\) and \(y_{l}\) in \(\tau\). Since \(x\) is odd, w.l.o.g., assume that \(\langle r(x_{i}),l(x_{i+1})\rangle\) and \(\langle r(x_{j-1}),l(x_{j})\rangle\) are matched to two adjacencies both associated with \(y_{k}\). Because \(x\) is an isolated vertex in \(IG(\pi,\tau)\), all the other occurrences of \(|x_{i+1}|,\ldots,|x_{j-1}|\) must also be located in between \(x_{i}\) and \(x_{j}\) in \(\pi\).
Note that each adjacency is unique in \(\mathcal{A}[\pi]\), as well as in \(\mathcal{A}[\tau]\). In the case that \(y_{k+1}\) is an occurrence of \(|x_{i+1}|\), the adjacency \(\langle r(y_{k+1}),l(y_{k+2})\rangle\) must be matched to an adjacency located in between \(x_{i}\) and \(x_{j}\) in \(\pi\); thus, an occurrence of \(|y_{k+2}|\) is also located in between \(x_{i}\) and \(x_{j}\) in \(\pi\), and so is the other occurrence of \(|y_{k+2}|\) (if it exists). Hence, all the adjacencies associated with the occurrences of \(|y_{k+2}|\) are located in between \(x_{i}\) and \(x_{j}\) in \(\pi\), which implies that \(y_{k+3}\) exists. The recursion cannot terminate until there is some \(y_{k+t}\) which is an occurrence of \(|x_{j}|\), with the adjacency \(\langle r(y_{k+t}),l(y_{k+t+1})\rangle=\langle r(x_{j-1}),l(x_{j})\rangle\). This is a contradiction since \(\langle r(y_{k-1}),l(y_{k})\rangle=\langle r(x_{i-1}),l(x_{i})\rangle\).
The argument when \(y_{k+1}\) is an occurrence of \(|x_{j-1}|\) is similar. \(\sqcap\)\(\sqcup\)
For each vertex \(x\) in \(IG(\pi,\tau)\), let \(N(x)\) denote the set of vertices adjacent to \(x\). For a black vertex, say \(x\), in \(IG(\pi,\tau)\), performing a symmetric reversal of \(x\) in \(\pi\) yields \(\pi^{\prime}\), where the intersection graph \(IG(\pi^{\prime},\tau)=(V[\pi^{\prime}],E[\pi^{\prime}])\) can be derived from \(IG(\pi,\tau)\) following three rules (a sketch of this update appears after the rules):
* rule-I: for each vertex \(v\in N(x)\) in \(IG(\pi,\tau)\), change its color from black to white, and vice versa.
* rule-II: for each pair of vertices \(u,v\in N(x)\) of \(IG(\pi,\tau)\), if \((u,v)\in E[\pi]\), then \(E[\pi^{\prime}]=E[\pi]-\{(u,v)\}\); and if \((u,v)\notin E[\pi]\), then \(E[\pi^{\prime}]=E[\pi]\cup\{(u,v)\}\).
* rule-III: decrease the weight of \(x\) by one; if the new \(\omega(x)>0\), then \(V[\pi^{\prime}]=V[\pi]\); and if \(\omega(x)=0\), then \(V[\pi^{\prime}]=V[\pi]-\{x\}\).
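Under the graph encoding of the previous sketch (vertices as a dict with weight and color, edges as a set of frozensets), the three rules can be applied as follows; this is a minimal sketch rather than the authors' implementation:

```python
from itertools import combinations

def apply_symmetric_reversal(V, E, x):
    # Update IG(pi, tau) after performing a symmetric reversal on the
    # black vertex x (rules I-III above).
    assert V[x]["black"], "only a black vertex admits a symmetric reversal"
    Nx = {v for e in E if x in e for v in e if v != x}
    for v in Nx:                              # rule I: flip neighbour colors
        V[v]["black"] = not V[v]["black"]
    for u, v in combinations(sorted(Nx), 2):  # rule II: toggle edges in N(x)
        E ^= {frozenset((u, v))}
    V[x]["weight"] -= 1                       # rule III: decrement the weight;
    if V[x]["weight"] == 0:                   # delete x once it reaches zero
        del V[x]
        E -= {e for e in E if x in e}
    return V, E

# On the earlier toy graph: 101 turns black, and 102 keeps weight 1.
V = {101: {"weight": 1, "black": False}, 102: {"weight": 2, "black": True}}
E = {frozenset((101, 102))}
V, E = apply_symmetric_reversal(V, E, 102)
```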
If \(x\) is a black vertex in \(IG(\pi,\tau)\) and \(\omega(x)=1\), then performing the symmetric reversal of \(x\) in \(\pi\) yields \(\pi^{\prime}\). Let \(C_{1},C_{2},\ldots,C_{m}\) be the connected components introduced by the deletion of \(x\) in \(IG(\pi^{\prime},\tau)\). We now go through some properties of performing this symmetric reversal.
**Lemma 3.3**: _In each \(C_{i}\) (\(1\leq i\leq m\)), there is at least one vertex \(z_{i}\) such that \(z_{i}\in N(x)\) in \(IG(\pi,\tau)\)._
_Proof._ Consider the scenario of \(IG(\pi^{\prime},\tau)\) just prior to deleting \(x\): all the connected components \(C_{1},C_{2},\ldots,C_{m}\) lie in a single connected component, which also contains \(x\). Immediately after deleting \(x\), they become \(m\) separate connected components, so each \(C_{i}\) must contain a former neighbor of \(x\). \(\sqcap\)\(\sqcup\)
**Lemma 3.4**: _Let \(x^{\prime}\) be a black vertex with \(\omega(x)=\omega(x^{\prime})=1\) and \(x^{\prime}\in N(x)\) in \(IG(\pi,\tau)\). After performing the symmetric reversal of \(x\) in \(\pi\), let \(y\) be a vertex in the connected component \(C_{i}\), and let \(x^{\prime}\) be in the connected component \(C_{j}\), \(i\neq j\). Let \(\pi^{\prime\prime}\) be the resulting chromosome after performing the symmetric reversal of \(x^{\prime}\) in \(\pi\); then the color of \(y\) is the same in \(IG(\pi^{\prime},\tau)\) and \(IG(\pi^{\prime\prime},\tau)\)._
_Proof._ We conduct the proof by considering the following two cases: (1) \(y\in N(x)\), (2) \(y\notin N(x)\) in \(IG(\pi,\tau)\). In case (1), since \(y\) and \(x^{\prime}\) lie in distinct connected components after the deletion of \(x\), we must have \((y,x^{\prime})\in E[\pi]\); then the color of \(y\) is changed by performing the symmetric reversal on either \(x\) or \(x^{\prime}\) in \(\pi\). In case (2), by the same reasoning, \((y,x^{\prime})\notin E[\pi]\); then the color of \(y\) is changed by performing the symmetric reversal on neither \(x\) nor \(x^{\prime}\) in \(\pi\). \(\sqcap\)\(\sqcup\)
**Lemma 3.5**: _Let \(x^{\prime}\) be a black vertex with \(\omega(x)=\omega(x^{\prime})=1\) and \(x^{\prime}\in N(x)\) in \(IG(\pi,\tau)\). After performing the symmetric reversal of \(x\) in \(\pi\), let \(y,z\) be two vertices in the connected component \(C_{i}\), and let \(x^{\prime}\) be in the connected component \(C_{j}\), \(i\neq j\). Let \(\pi^{\prime\prime}\) be the resulting chromosome after performing the symmetric reversal of \(x^{\prime}\) in \(\pi\). If \((y,z)\in E[\pi^{\prime}]\), then \((y,z)\in E[\pi^{\prime\prime}]\)._
_Proof._ We conduct the proof by considering the following three cases: (1) both \(y,z\in N(x)\), (2) only one, say \(y\), is in \(N(x)\), and (3) neither of them belongs to \(N(x)\) in \(IG(\pi,\tau)\). We illustrate the three cases in Figure 4.
(1) Both \(y,z\in N(x)\). Then \((y,z)\notin E[\pi]\); since \(y\) and \(x^{\prime}\) are in distinct connected components, \((y,x^{\prime})\in E[\pi]\) and \((z,x^{\prime})\in E[\pi]\); thus, \((y,z)\in E[\pi^{\prime\prime}]\).

(2) \(y\in N(x)\), \(z\notin N(x)\). Then \((y,z)\in E[\pi]\); since \(y\) and \(x^{\prime}\) are in distinct connected components, \((y,x^{\prime})\in E[\pi]\) and \((z,x^{\prime})\notin E[\pi]\); thus, \((y,z)\in E[\pi^{\prime\prime}]\).

(3) \(y\notin N(x)\), \(z\notin N(x)\). Then \((y,z)\in E[\pi]\); since \(y\) and \(x^{\prime}\) are in distinct connected components, \((y,x^{\prime})\notin E[\pi]\) and \((z,x^{\prime})\notin E[\pi]\); thus, \((y,z)\in E[\pi^{\prime\prime}]\).
**Theorem 3.2**: _If a connected component of \(IG(\pi,\tau)\) contains at least one black vertex, then there exists a symmetric reversal such that, after performing it, any newly created connected component containing a white vertex of weight 1 also contains a black vertex._
_Proof._ It is apparent that performing a symmetric reversal of a black vertex \(x\) of weight 2 will not introduce any new connected component, and the vertex \(x\) remains black. Assume that there is no black vertex of weight 2 in the connected component. For each black vertex \(x\), let \(\Delta(x)\) be the number of white vertices lying in those connected components introduced by performing the symmetric reversal of \(x\) that contain a white vertex of weight 1 but no black vertex. Next, we show that there must be some vertex \(x\) such that \(\Delta(x)=0\).
Suppose to the contrary that every black vertex has a positive \(\Delta\)-value, and let \(x\) be a black vertex with minimum \(\Delta(x)>0\). Let \(C_{1},C_{2},\ldots,C_{m}\) be the introduced connected components after performing the symmetric reversal of \(x\). There cannot be a connected component composed solely of white vertices of weight 2, since otherwise, following Lemma 3.3, the neighbor of \(x\) in this connected component would have been black prior to performing the
Figure 4: Three cases in the proof of Lemma 3.5.
symmetric reversal of \(x\), contradicting our assumption. W.l.o.g, assume that each of \(C_{1},C_{2},\ldots,C_{i}\) (\(1\leq i\leq m\)) contains a white vertex of weight 1 but no black vertex, and that each of \(C_{i+1},C_{i+2},\ldots,C_{m}\) contains at least one black vertex.
Let \(x^{\prime}\) be the neighbor of \(x\) in \(C_{1}\). Then \(x^{\prime}\) is a black vertex of weight 1 prior to performing the symmetric reversal of \(x\). It suffices to show that \(\Delta(x^{\prime})<\Delta(x)\), contradicting the minimality of \(\Delta(x)\). \(\Delta(x)\) only counts the vertices in \(C_{1},C_{2},\ldots,C_{i}\). From Lemma 3.4 and Lemma 3.5, the colors of all the vertices in \(C_{i+1},C_{i+2},\ldots,C_{m}\) are preserved, and all the edges in \(C_{i+1},C_{i+2},\ldots,C_{m}\) are preserved after performing the symmetric reversal of \(x^{\prime}\). Therefore, \(\Delta(x^{\prime})\) will also not count any vertex in \(C_{i+1},C_{i+2},\ldots,C_{m}\), which implies \(\Delta(x^{\prime})\leq\Delta(x)\).
From Lemma 3.2, \(x^{\prime}\) must have a neighbor in \(C_{1}\); let \(x^{\prime\prime}\) be such a neighbor. In \(IG(\pi,\tau)\), if \((x,x^{\prime\prime})\in E[\pi]\), then \((x^{\prime},x^{\prime\prime})\notin E[\pi]\), and \(x^{\prime\prime}\) is a black vertex both prior to and after performing the symmetric reversal of \(x^{\prime}\). If \((x,x^{\prime\prime})\notin E[\pi]\), then \((x^{\prime},x^{\prime\prime})\in E[\pi]\), and \(x^{\prime\prime}\) is a white vertex both prior to and after performing the symmetric reversal of \(x\), but becomes a black vertex after performing the symmetric reversal of \(x^{\prime}\). In either case, \(\Delta(x^{\prime})\) will not count \(x^{\prime\prime}\); thus, \(\Delta(x^{\prime})<\Delta(x)\). We illustrate the proof in Figure 5. \(\sqcap\)\(\sqcup\)
The main contribution of this section is the following theorem.
**Theorem 3.3**: _A chromosome \(\pi\) can be transformed into the other chromosome \(\tau\) if and only if (I) \({\cal A}[\pi]={\cal A}[\tau]\), and (II) each white vertex of weight 1 belongs to a connected component of \(IG(\pi,\tau)\) containing a black vertex._
_Proof._ (\(\Rightarrow\)) We prove the contrapositive. Suppose there exists a connected component of \(IG(\pi,\tau)\), say \(C\), which is composed of white vertices, including a white vertex \(x\) of weight 1; then none of the vertices in \(C\) admits a symmetric reversal. Moreover, \(x\) is odd in \(\pi\). Thus, it is impossible to find a series of symmetric reversals that makes \(x\) black and then makes \(x\) even. According to Theorem 3.1, \(\pi\) cannot be transformed into \(\tau\).
(\(\Leftarrow\)) Each odd repeat in \(\pi\) corresponds to a vertex of weight 1, and each even repeat in \(\pi\) corresponds to a vertex of weight 2. Theorem 3.2 guarantees that each vertex of weight 1 will be reversed once, and each vertex of weight 2 will either not be reversed or be reversed twice. Finally, all the repeats become even; by Theorem 3.1, \(\pi\) has been transformed into \(\tau\). \(\sqcap\)\(\sqcup\)
Figure 5: Whether \((x,x^{\prime\prime})\in E_{\pi}\) or not, \(x^{\prime\prime}\) will always be black after performing the symmetric reversal of \(x^{\prime}\), thus \(\Delta(x^{\prime})<\Delta(x)\).
The above theorem implies that a breadth-first search of \(IG(\pi,\tau)\) will determine whether \(\pi\) can be transformed into \(\tau\), which takes \(O(n^{2})\) time, because \(IG(\pi,\tau)\) contains at most \(n\) vertices and \(n^{2}\) edges. We will show the details of the algorithm in Section 5, since it also serves as a decision algorithm for the general case.
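As an illustration, a minimal sketch of this decision test, reusing the `adj`/`vertices` representation assumed above; condition (I), \(\mathcal{A}[\pi]=\mathcal{A}[\tau]\), is assumed to have been verified separately.

```python
from collections import deque

def transformable(adj, vertices):
    seen = set()
    for s in adj:
        if s in seen:
            continue
        # Breadth-first search of the component containing s.
        seen.add(s)
        queue, comp = deque([s]), []
        while queue:
            u = queue.popleft()
            comp.append(u)
            for v in adj[u] - seen:
                seen.add(v)
                queue.append(v)
        # Condition (II) of Theorem 3.3 for this component.
        has_black = any(vertices[u]['color'] == 'black' for u in comp)
        has_white_weight1 = any(vertices[u]['color'] == 'white'
                                and vertices[u]['weight'] == 1 for u in comp)
        if has_white_weight1 and not has_black:
            return False
    return True
```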
In the next section, we show that the optimization version is also surprisingly polynomially solvable when the input genomes have some constraints.
## 4 An Algorithm for the 2-Balanced Case of SMSR
In this section, we consider a special case of SMSR, which we call _2-Balanced_, where the duplication numbers of the two simple related chromosomes \(\pi\) and \(\tau\) are both 2, and \(\tau\) contains both \(+r\) and \(-r\) for each repeat \(r\in\Sigma_{2}\). The algorithm presented here will give a shortest sequence of symmetric reversals that transform \(\pi\) into \(\tau\).
As \(\mathcal{A}[\pi]=\mathcal{A}[\tau]\) for two related chromosomes \(\pi\) and \(\tau\), there is a bijection between identical adjacencies of \(\mathcal{A}[\pi]\) and \(\mathcal{A}[\tau]\). For each pair of identical adjacencies \(\langle r(x_{i}),l(x_{i+1})\rangle\in\mathcal{A}[\pi]\) and \(\langle r(y_{j}),l(y_{j+1})\rangle\in\mathcal{A}[\tau]\), define the direction of \(\langle r(x_{i}),l(x_{i+1})\rangle\) to be _positive_ if \(r(x_{i})=r(y_{j})\neq l(x_{i+1})=l(y_{j+1})\), and _negative_ if \(r(x_{i})=l(y_{j+1})\neq l(x_{i+1})=r(y_{j})\). We can always assume that \(\langle r(x_{0}),l(x_{1})\rangle\) and \(\langle r(x_{n}),l(x_{n+1})\rangle\) are positive, since otherwise we can reverse \(\pi\) as a whole. Note that if \(r(x_{i})=l(x_{i+1})=r(y_{j})=l(y_{j+1})\), then \(\langle r(x_{i}),l(x_{i+1})\rangle\) can be either positive or negative; we call it an _entangled_ adjacency.
A segment \(I\) of \(\pi\) is referred to as _positive_ (resp. _negative_) if all the adjacencies in it have positive (resp. negative) directions, and \(I\) is _maximal_ if it is not a subsegment of any positive (resp. negative) segment of \(\pi\) other than \(I\). A maximal positive (resp. negative) segment is abbreviated as an _MPS_ (resp. _MNS_). (See Figure 6 for an example.)
Now we assign proper directions to the entangled adjacencies such that the number of _MNS_ of \(\pi\) is minimized. Let \(\langle r(x_{i}),l(x_{i+1})\rangle\) be an entangled adjacency, then \(r(x_{i})=l(x_{i+1})=r(y_{j})=l(y_{j+1})\), accordingly \(l(x_{i})=r(x_{i+1})=l(y_{j})=r(y_{j+1})\). Since \(\pi\) is simple, \(r(x_{i-1})\neq l(x_{i+2})\); hence, if
Figure 6: An example of _MPS_ and _MNS_. Positive adjacencies and _MPS_ are marked with straight lines, while negative adjacencies and _MNS_ are marked with curved lines, the boundaries are circled, and the entangled adjacency \(\langle r_{4},-r_{4}\rangle\) is colored green.
\(r(x_{i-1})=r(y_{j-1})\) and \(l(x_{i+2})=l(y_{j+2})\), then both \(\langle r(x_{i-1}),l(x_{i})\rangle\) and \(\langle r(x_{i+1}),l(x_{i+2})\rangle\) are positive; if \(r(x_{i-1})=l(y_{j+2})\) and \(l(x_{i+2})=r(y_{j-1})\), then both \(\langle r(x_{i-1}),l(x_{i})\rangle\) and \(\langle r(x_{i+1}),l(x_{i+2})\rangle\) are negative. Thus, we have:
**Proposition 4.1**: _The two neighbors of any entangled adjacency always have the same direction. The number of MNS is minimized provided that the direction of each entangled adjacency is the same as its neighbors._
Once the directions of all the adjacencies of \(\pi\) are fixed, the positive segments and negative segments appear alternately on \(\pi\); in particular, both the leftmost and rightmost segments are positive. Let \(N_{\mbox{\it\scriptsize MPS}}[\pi]\) and \(N_{\mbox{\it\scriptsize MNS}}[\pi]\) be the number of _MPS_ and _MNS_ respectively on \(\pi\); then \(N_{\mbox{\it\scriptsize MPS}}[\pi]-N_{\mbox{\it\scriptsize MNS}}[\pi]=1\). Accordingly,
**Theorem 4.1**: \(\pi=\tau\) _if and only if \(N_{\mbox{\scriptsize MNS}}[\pi]=0\)._
_Proof._ The necessity part is trivially true.
We prove the sufficiency part inductively. Our inductive hypothesis is that \(x_{i}=y_{i}\) for \(0\leq i\leq n+1\). Initially, we have \(x_{0}=y_{0}=+0\). For the inductive step, let \(y_{j}\) be the other occurrence of \(|y_{i}|\); since \(y_{i}\) and \(y_{j}\) have distinct signs in \(\tau\), \(r(y_{i})=l(y_{j})\), and together with the inductive hypothesis that \(x_{i}=y_{i}\), we have \(r(x_{i})=r(y_{i})=l(y_{j})\). As the adjacency \(\langle r(x_{i}),l(x_{i+1})\rangle\) is positive, there must be some \(y_{k}\) of \(\tau\) such that \(r(y_{k})=r(x_{i})\) and \(l(y_{k+1})=l(x_{i+1})\). Since \(y_{i}\) is the unique occurrence satisfying \(r(y_{i})=r(x_{i})\), we have \(k=i\), and consequently \(l(y_{i+1})=l(x_{i+1})\) and \(x_{i+1}=y_{i+1}\). This completes the induction. \(\sqcap\)\(\sqcup\)
An occurrence \(x_{i}\) (\(1\leq i\leq n\)) is called a _boundary_ if the directions of its left adjacency \(\langle r(x_{i-1}),l(x_{i})\rangle\) and right adjacency \(\langle r(x_{i}),l(x_{i+1})\rangle\) are distinct. Thus, any boundary is shared by an _MPS_ and an _MNS_. (See Figure 6 for an example.) Let \(x_{i}\) and \(x_{j}\) (\(i<j\)) be a pair of occurrences of \(|x_{i}|\) in \(\pi\) with distinct signs; performing a symmetric reversal \(\rho(i,j)\) on \(|x_{i}|\) yields \(\pi^{\prime}\). We have,
**Lemma 4.1**: \(N_{\mbox{\scriptsize MNS}}[\pi]-N_{\mbox{\scriptsize MNS}}[\pi^{\prime}]\leq 1\)_._
_Proof._ The proof is conducted by enumerating all possible cases in which \(x_{i}\) and \(x_{j}\) are boundaries or lie in the interior of an _MPS_ or an _MNS_. For a decrease of 1, the unique case is that both \(x_{i}\) and \(x_{j}\) are boundaries and the substring \([x_{i},\ldots,x_{j}]\) either starts and ends with an _MNS_ or starts and ends with an _MPS_. \(\sqcap\)\(\sqcup\)
Lemma 4.1 implies that a scenario is optimal provided that each symmetric reversal decreases the number of _MNS_ by one. Luckily, we can always find such symmetric reversals until \(\pi\) has been transformed into \(\tau\).
**Lemma 4.2**: _Let \(x_{i}\) and \(x_{j}\) (\(i<j\)) be the two occurrences of the repeat \(x\) in \(\pi\). (I) Either they are both boundaries or neither of them is a boundary. (II) If \(x_{i}\) and \(x_{j}\) are both boundaries, the adjacencies \(\langle r(x_{i}),l(x_{i+1})\rangle\) and \(\langle r(x_{j-1}),l(x_{j})\rangle\) have the same direction, and the adjacencies \(\langle r(x_{i-1}),l(x_{i})\rangle\) and \(\langle r(x_{j}),l(x_{j+1})\rangle\) have the same direction. (III) Moreover, if \(x_{i}=x_{j}\), then \(\langle r(x_{i}),l(x_{i+1})\rangle\) and
\(\langle r(x_{j-1}),l(x_{j})\rangle\) are matched to two consecutive adjacencies in \(\tau\), and \(\langle r(x_{i-1}),l(x_{i})\rangle\) and \(\langle r(x_{j}),l(x_{j+1})\rangle\) are matched to two consecutive adjacencies in \(\tau\)._
_Proof._ (I) Without loss of generality, assume that \(x_{i}\) is a boundary. Let \(y_{k}\) and \(y_{l}\) be the occurrences of \(|x_{i}|\) in \(\tau\) with distinct signs, thus \(l(y_{k})=r(y_{l})\neq r(y_{k})=l(y_{l})\). We partition the adjacencies associated with \(y_{k}\) and \(y_{l}\) into two groups: \(\langle r(y_{k-1}),l(y_{k})\rangle\) and \(\langle r(y_{l-1}),l(y_{l})\rangle\) form Group-(I), and \(\langle r(y_{k}),l(y_{k+1})\rangle\) and \(\langle r(y_{l}),l(y_{l+1})\rangle\) form Group-(II). The group partition guarantees that \(l(y_{k})\) and \(r(y_{l})\) are in different groups, and \(r(y_{k})\) and \(l(y_{l})\) are also in different groups. Since the two adjacencies \(\langle r(x_{i-1}),l(x_{i})\rangle\) and \(\langle r(x_{i}),l(x_{i+1})\rangle\) have distinct directions, they are either matched to Group-(I) or Group-(II); accordingly, the two adjacencies \(\langle r(x_{j-1}),l(x_{j})\rangle\) and \(\langle r(x_{j}),l(x_{j+1})\rangle\) have to be matched to the other group. In either case, \(\langle r(x_{j-1}),l(x_{j})\rangle\) and \(\langle r(x_{j}),l(x_{j+1})\rangle\) have distinct directions. Thus, \(x_{j}\) is a boundary.
(II) If \(\langle r(x_{i-1}),l(x_{i})\rangle\) and \(\langle r(x_{i}),l(x_{i+1})\rangle\) are matched to Group-(I), then \(\langle r(x_{i}),l(x_{i+1})\rangle\) is negative, and \(l(x_{j})\) must be matched to either \(r(y_{k})\) or \(r(y_{l})\), which implies that \(\langle r(x_{j-1}),l(x_{j})\rangle\) is also negative. If \(\langle r(x_{i-1}),l(x_{i})\rangle\) and \(\langle r(x_{i}),l(x_{i+1})\rangle\) are matched to Group-(II), then \(\langle r(x_{i}),l(x_{i+1})\rangle\) is positive, and \(l(x_{j})\) must be matched to either \(l(y_{k})\) or \(l(y_{l})\), which implies that \(\langle r(x_{j-1}),l(x_{j})\rangle\) is also positive. Since the direction of \(\langle r(x_{i-1}),l(x_{i})\rangle\) differs from that of \(\langle r(x_{i}),l(x_{i+1})\rangle\), and the direction of \(\langle r(x_{j}),l(x_{j+1})\rangle\) differs from that of \(\langle r(x_{j-1}),l(x_{j})\rangle\), \(\langle r(x_{i-1}),l(x_{i})\rangle\) and \(\langle r(x_{j}),l(x_{j+1})\rangle\) have the same direction.
(III) When \(x_{i}=x_{j}\), the positive pair of adjacencies must be matched to the two adjacencies associated with the occurrence with sign "+", and the negative pair of adjacencies must be matched to the two adjacencies associated with the occurrence with sign "-", so they are consecutive in \(\tau\). \(\sqcap\)\(\sqcup\)
**Lemma 4.3**: _Every MPS in \(\pi\) has an identical segment in \(\tau\), and every MNS in \(\pi\) has a reversed and negated segment in \(\tau\)._
_Proof._ Let \(I=[x_{i},x_{i+1},\ldots,x_{i+k}]\) be a _MPS_ in \(\pi\). There must be an adjacency \(\langle r(y_{j}),l(y_{j+1})\rangle\) such that \(r(x_{i})=r(y_{j})\) and \(l(x_{i+1})=l(y_{j+1})\). We conduct an inductive proof. Our inductive hypothesis is that \(x_{i+t}=y_{j+t}\), for all \(0\leq t\leq k\). Initially, we have \(x_{i}=y_{j}\). For the inductive step, let \(y_{j^{\prime}}\) be the other occurrence
Figure 7: An illustration of Lemma 4.2. Assume that \(x_{i}=y_{k}=-y_{l}\), \(\langle r(x_{j-1}),l(x_{j})\rangle\) and \(\langle r(x_{j}),l(x_{j+1})\rangle\) are matched to Group-(I). In case \((a)\), the signs of \(x_{i}\) and \(x_{j}\) are distinct; and in case \((b)\), the signs of \(x_{i}\) and \(x_{j}\) are identical.
of \(|y_{j+t}|\) in \(\tau\). Since \(y_{j^{\prime}}\) and \(y_{j+t}\) have distinct signs, \(r(y_{j+t})=l(y_{j^{\prime}})\); together with the inductive hypothesis, we have \(r(x_{i+t})=r(y_{j+t})=l(y_{j^{\prime}})\), which implies that \(\langle r(x_{i+t}),l(x_{i+t+1})\rangle\) must be matched to \(\langle r(y_{j+t}),l(y_{j+t+1})\rangle\). Consequently, \(l(x_{i+t+1})=l(y_{j+t+1})\) and \(x_{i+t+1}=y_{j+t+1}\). This completes the induction.
The proof for an _MNS_ in \(\pi\) is similar hence omitted. \(\sqcap\)\(\sqcup\)
**Theorem 4.2**: _If there exists an MNS in \(\pi\), then there exists a pair of boundaries, which are occurrences of the same repeat with different orientations._
_Proof._ Assume to the contrary that each pair of boundaries which are duplications of the same repeat have the same sign. From Lemma 4.2 and 4.3, each pair of boundaries that have equal absolute values lie on two distinct _MPS_s, whose corresponding segments are adjacent in \(\tau\). As each _MPS_ is associated with two boundaries, there is a path along all the _MPS_s and boundaries in \(\pi\), and the corresponding segments of all the _MPS_s form a substring of \(\tau\). Note that \(\langle r(x_{0}),l(x_{1})\rangle\) and \(\langle r(x_{n}),l(x_{n+1})\rangle\) are both positive, so the substring contains both \(y_{0}\) and \(y_{n+1}\), and it becomes the whole of \(\tau\). That is a contradiction, since \(\tau\) also contains the corresponding segments of the _MNS_s. \(\sqcap\)\(\sqcup\)
Now, we formally present the algorithm as Algorithm 1 for computing the least number of symmetric reversals to transform \(\pi\) into \(\tau\).
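To complement the formal presentation, the following Python sketch conveys the strategy of Algorithm 1 (cancel one _MNS_ per reversal, per Theorem 4.3) without reproducing it verbatim; the helper `adjacency_direction(chrom, tau, i)`, returning the direction (\(+1\)/\(-1\)) of the adjacency between positions \(i\) and \(i+1\), is hypothetical and stands in for the direction assignment described above.

```python
def balanced_2_smsr(chrom, tau, adjacency_direction):
    """Sketch of the 2-balanced algorithm: genes are signed integers."""
    reversals = []
    while True:
        # Boundaries: positions whose left and right adjacencies differ.
        bnd = [i for i in range(1, len(chrom) - 1)
               if adjacency_direction(chrom, tau, i - 1)
               != adjacency_direction(chrom, tau, i)]
        # Two boundaries that are occurrences of the same repeat with
        # different orientations (they exist by Theorem 4.2).
        pair = next(((i, j) for i in bnd for j in bnd
                     if i < j and chrom[i] == -chrom[j]), None)
        if pair is None:            # no MNS remains, so chrom == tau
            return reversals
        i, j = pair                 # symmetric reversal: reverse and negate
        chrom[i:j + 1] = [-g for g in reversed(chrom[i:j + 1])]
        reversals.append((i, j))
```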
**Theorem 4.3**: _Algorithm 1 gives a sequence of minimum number of symmetric reversals that transforms \(\pi\) into \(\tau\), and runs in \(O(n^{2})\) time._
_Proof._ Theorem 4.2 guarantees the existence of two boundaries with \(x_{i}=-x_{j}\), whenever there is an _MNS_. From Lemma 4.2-(II), performing the symmetric
reversal on \(|x_{i}|\) will decrease the number of _MNS_ by one. Thus, the number of symmetric reversals performed by the algorithm _Balanced-2 SMSR_ equals the number of _MNS_ in \(\pi\), which is optimal according to Lemma 4.1 and Theorem 4.1.
As for the time complexity, it takes \(O(n^{2})\) time to build the bijection between \(\mathcal{A}[\pi]\) and \(\mathcal{A}[\tau]\). Obviously, steps 2-4 take linear time. The **while**-loop runs at most \(n/2\) rounds, and in each round, it takes linear time to find a pair of proper boundaries. In total, the time complexity is \(O(n^{2})\). \(\sqcap\)\(\sqcup\)
We comment that the result in this section is perhaps the best that we can hope for, since we will show in Section 6 that, when the duplication number is 2 but \(\tau\) is not required to contain both \(+|x|\) and \(-|x|\) for each repeat \(|x|\in\Sigma_{2}\), the _SMSR_ problem becomes NP-hard. More generally, when the duplication number of \(\pi\) and \(\tau\) is unlimited, the scenario is quite different. Nevertheless, in the next section, we proceed to handle this general case.
## 5 An \(O(n^{2})\) Decision Algorithm for the General Case
For the general case, i.e., when the duplication number of the two related input genomes is arbitrary, the extension of the algorithm in Section 3 is non-trivial, as it is impossible to make the genomes simple. Our overall idea is to fix any bijection \(f\) between the (identical) adjacencies of the input genomes and build the corresponding alternative-cycle graph. This alternative-cycle graph changes with the corresponding symmetric reversals, and we show that the target \(\tau\) is reached exactly when the graph contains only 1-cycles. Due to the changing nature of the alternative-cycle graph, we construct a blue edge intersection graph to capture these changes. However, this is not enough, as the blue edge intersection graph built from the alternative-cycle graph could be disconnected, and we need to make it connected by adding additional vertices such that the resulting sequence of symmetric reversals is consistent with the original input genomes and can be found in the new intersection graph (called IG, which is based on the input genomes \(\pi\) and \(\tau\) as well as \(f\)). We describe the details in the following.
Suppose that we are given two related chromosomes \(\pi=[x_{0},x_{1},\ldots,x_{n+1}]\) and \(\tau=[y_{0},y_{1},\ldots,y_{n+1}]\), such that \(x_{0}=y_{0}=+r_{0}\) and \(x_{n+1}=y_{n+1}=-r_{0}\). Theorem 2.1 shows that \(\mathcal{A}[\pi]=\mathcal{A}[\tau]\) is a necessary condition, thus there is a bijection \(f\) between identical adjacencies in \(\mathcal{A}[\pi]\) and \(\mathcal{A}[\tau]\), as shown in Figure 8. Based on the bijection \(f\), we construct the alternative-cycle graph \(ACG(\pi,\tau,f)\) as follows. For each \(x_{i}\) in \(\pi\), construct an ordered pair of nodes, denoted by \(l(x_{i})\) and \(r(x_{i})\), which are connected by a red edge. For each \(y_{k}\) in \(\tau\), assume that \(\langle r(y_{k-1}),l(y_{k})\rangle\) is matched to \(\langle r(x_{i-1}),l(x_{i})\rangle\), and \(\langle r(y_{k}),l(y_{k+1})\rangle\) is matched to \(\langle r(x_{j-1}),l(x_{j})\rangle\), in the bijection \(f\). There are four cases:
1. \(l(y_{k})=l(x_{i})\) and \(r(y_{k})=r(x_{j-1})\), then connect \(l(x_{i})\) and \(r(x_{j-1})\) with a blue edge,
2. \(l(y_{k})=r(x_{i-1})\) and \(r(y_{k})=r(x_{j-1})\), then connect \(r(x_{i-1})\) and \(r(x_{j-1})\) with a blue edge,
3. \(l(y_{k})=l(x_{i})\) and \(r(y_{k})=l(x_{j})\), then connect \(l(x_{i})\) and \(l(x_{j})\) with a blue edge,
4. \(l(y_{k})=r(x_{i-1})\) and \(r(y_{k})=l(x_{j})\), then connect \(r(x_{i-1})\) and \(l(x_{j})\) with a blue edge.
Actually, two nodes connected by a red edge come from the same occurrence of some repeat/gene in \(\pi\), so each occurrence of some repeat/gene in \(\pi\) corresponds to a red edge; similarly, two nodes connected by a blue edge come from the same occurrence of some repeat/gene in \(\tau\), thus each occurrence of some repeat/gene in \(\tau\) corresponds to a blue edge. Note that each node is associated with one red edge and one blue edge, so \(ACG(\pi,\tau,f)\) is composed of edge-disjoint cycles, on which the red edges and blue edges appear alternately. A cycle composed of \(c\) blue edges as well as \(c\) red edges is called a \(c\)-cycle; it is called a long cycle when \(c\geq 2\).
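A sketch of this construction in Python; the encoding of the bijection \(f\) as two arrays `f_left` and `f_right` (with `f_left[k] = i` when \(\langle r(y_{k-1}),l(y_{k})\rangle\) is matched to \(\langle r(x_{i-1}),l(x_{i})\rangle\), and `f_right[k] = j` analogously for the right adjacency of \(y_{k}\)) is our assumption. Genes are signed integers and nodes are labeled `('l', i)` or `('r', i)`.

```python
def ends(g):
    # Extremities of a signed gene: +x reads (x^h, x^t), -x reads (x^t, x^h).
    h, t = (abs(g), 'h'), (abs(g), 't')
    return (h, t) if g > 0 else (t, h)

def build_acg(pi, tau, f_left, f_right):
    red = [(('l', i), ('r', i)) for i in range(len(pi))]
    # The framing occurrences y_0 and y_{n+1} coincide with x_0 and x_{n+1}.
    blue = [(('l', 0), ('r', 0)), (('l', len(pi) - 1), ('r', len(pi) - 1))]
    for k in range(1, len(tau) - 1):
        i, j = f_left[k], f_right[k]
        # The four cases collapse to picking, on each side, whichever of the
        # two candidate nodes carries the same extremity label as y_k.
        left = ('l', i) if ends(tau[k])[0] == ends(pi[i])[0] else ('r', i - 1)
        right = ('l', j) if ends(tau[k])[1] == ends(pi[j])[0] else ('r', j - 1)
        blue.append((left, right))
    return red, blue
```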
**Theorem 5.1**: _Given two chromosomes \(\pi^{*}\) and \(\tau\), \(\pi^{*}=\tau\) if and only if \(\mathcal{A}[\pi^{*}]=\mathcal{A}[\tau]\), and there is a bijection \(f\) between the identical adjacencies in \(\mathcal{A}[\pi^{*}]\) and \(\mathcal{A}[\tau]\), such that all the cycles in the resulting alternative-cycle graph \(ACG(\pi^{*},\tau,f)\) are 1-cycles._
_Proof._ Assume that \(\pi^{*}=[z_{0},z_{1},\ldots,z_{n},z_{n+1}]\) and \(\tau=[y_{0},y_{1},\ldots,y_{n},y_{n+1}]\), where \(z_{0}=y_{0}=+r_{0}\) and \(z_{n+1}=y_{n+1}=-r_{0}\).
The necessity part is trivially true: if \(\pi^{*}=\tau\), then \(z_{i}=y_{i}\), and the bijection \(f\) matches \(\langle r(z_{i-1}),l(z_{i})\rangle\) with \(\langle r(y_{i-1}),l(y_{i})\rangle\), so every cycle is a 1-cycle.
Now we prove the sufficiency part inductively. Our inductive hypothesis is that \(z_{i}=y_{i}\) for \(0\leq i\leq n+1\). Initially, we have \(l(z_{0})=l(y_{0})=r_{0}^{h}\) and \(r(z_{0})=r(y_{0})=r_{0}^{t}\); as \(\mathcal{A}[\pi^{*}]=\mathcal{A}[\tau]\), we may always assume that \(z_{1}=y_{1}\), since otherwise we can reverse the whole chromosome \(\tau\). For the inductive step, as \(l(z_{i})\) and \(r(z_{i})\) form a 1-cycle, the adjacency \(\langle r(z_{i}),l(z_{i+1})\rangle\) is matched to \(\langle r(y_{i}),l(y_{i+1})\rangle\), and from the inductive hypothesis that \(r(z_{i})=r(y_{i})\), we have \(l(z_{i+1})=l(y_{i+1})\), and thus \(z_{i+1}=y_{i+1}\). \(\sqcap\)\(\sqcup\)
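Using the `red`/`blue` edge lists of the earlier sketch, this terminating condition is a one-line check: a 1-cycle is exactly a blue edge spanning the same node pair as a red edge.

```python
def all_one_cycles(red, blue):
    # pi* == tau exactly when every blue edge coincides with a red edge.
    red_pairs = {frozenset(e) for e in red}
    return all(frozenset(e) in red_pairs for e in blue)
```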
Figure 8: The bijection between identical adjacencies in \(\mathcal{A}[\pi]\) and \(\mathcal{A}[\tau]\), and the corresponding alternative-cycle graph.
The above theorem gives us a terminating condition for our algorithm: let \(\pi\) and \(\tau\) be the input chromosomes; our algorithm keeps updating the alternative-cycle graph until all cycles in it become 1-cycles. Unfortunately, in the following we observe that some cycles do not directly admit symmetric reversals, so we consider cycles intersecting each other as a connected component. But this is still not enough, since there could also be some connected components that do not admit any symmetric reversal; we handle this case by joining all the cycles of the same repeat into a single connected component.
**Lemma 5.1**: _In an alternative-cycle graph, each cycle corresponds to a unique repeat and every edge (both red and blue) in the cycle corresponds to an occurrence of the unique repeat._
_Proof._ W.l.o.g, assume that \(l(x_{i})\) and \(l(x_{j})\) are connected by a blue edge. From the construction of the alternative-cycle graph, there must be an occurrence in \(\tau\), say \(y_{k}\), such that \(\{l(x_{i}),l(x_{j})\}=\{l(y_{k}),r(y_{k})\}\); thus, \(|x_{i}|=|x_{j}|=|y_{k}|\), and the blue edge \((l(x_{i}),l(x_{j}))\) corresponds to the occurrence \(y_{k}\) of the repeat \(|y_{k}|\). \(\sqcap\)\(\sqcup\)
Since each gene appears once in \(\pi\), Lemma 5.1 implies that each gene has a 1-cycle in \(ACG(\pi,\tau,f)\); these 1-cycles remain untouched throughout our algorithm.
**Lemma 5.2**: _In an alternative-cycle graph, if we add a green edge connecting each pair of nodes \(r(x_{i})\) and \(l(x_{i+1})\) (for all \(0\leq i\leq n\)), then all the blue edges and green edges together form a (blue and green alternating) path. (See Figure 8.)_
_Proof._ Actually, the green edge connecting \(r(x_{i})\) and \(l(x_{i+1})\) (\(0\leq i\leq n\)) is the adjacency \(\langle r(x_{i}),l(x_{i+1})\rangle\) of \(\mathcal{A}[\pi]\), which is identical to some adjacency \(\langle r(y_{j}),l(y_{j+1})\rangle\) of \(\mathcal{A}[\tau]\) according to the bijection between identical adjacencies of \(\mathcal{A}[\pi]\) and \(\mathcal{A}[\tau]\). Therefore, \(y_{j}\) and \(y_{j+1}\) appear consecutively in \(\tau\), and following the construction of \(ACG(\pi,\tau,f)\) and Lemma 5.1, they correspond to two blue edges, one of which is associated with \(r(x_{i})\) and the other with \(l(x_{i+1})\) in \(ACG(\pi,\tau,f)\); thus, the two blue edges are connected through the green edge \((r(x_{i}),l(x_{i+1}))\). The above argument holds for every green edge; therefore, all the blue edges and green edges constitute a path. We show an example in Figure 8. \(\sqcap\)\(\sqcup\)
Let \(x\in\Sigma\) be a repeat. Let \(x_{i}\) and \(x_{j}\) be two occurrences of \(x\) in \(\pi\), where \(i\neq j\). A blue edge is _opposite_ if it connects \(l(x_{i})\) and \(l(x_{j})\) or \(r(x_{i})\) and \(r(x_{j})\). A blue edge is _non-opposite_ if it connects \(l(x_{i})\) and \(r(x_{j})\) or \(r(x_{i})\) and \(l(x_{j})\).
**Lemma 5.3**: _Let \(x_{i}\) and \(x_{j}\) be two occurrences of a repeat \(x\) that are connected by a blue edge in \(ACG(\pi,\tau,f)\). If the blue edge is opposite, then \(x_{i}\) and \(x_{j}\) have different orientations; if the blue edge is non-opposite, then \(x_{i}\) and \(x_{j}\) have the same orientation._

_Proof._ We conduct the proof according to the construction of the alternative-cycle graph \(ACG(\pi,\tau,f)\).
If \(l(x_{i})\) and \(l(x_{j})\) are connected with an opposite edge, there must be an occurrence of \(x\) in \(\tau\), say \(y_{k}\), such that \(\{l(x_{i}),l(x_{j})\}=\{l(y_{k}),r(y_{k})\}\). If \(l(x_{i})=l(y_{k})\) and \(l(x_{j})=r(y_{k})\), then the orientations of \(x_{i}\) and \(y_{k}\) are the same, and the orientations of \(x_{j}\) and \(y_{k}\) are different, thus \(x_{i}\) and \(x_{j}\) have different orientations. If \(l(x_{i})=r(y_{k})\) and \(l(x_{j})=l(y_{k})\), then the orientations of \(x_{i}\) and \(y_{k}\) are different, and the orientations of \(x_{j}\) and \(y_{k}\) are the same, also \(x_{i}\) and \(x_{j}\) have different orientations. It will be similar when \(r(x_{i})\) and \(r(x_{j})\) are connected with an opposite edge.
If \(l(x_{i})\) and \(r(x_{j})\) are connected with a non-opposite edge, there must be an occurrence of \(x\) in \(\tau\), say \(y_{k}\), such that \(\{l(x_{i}),r(x_{j})\}=\{l(y_{k}),r(y_{k})\}\). If \(l(x_{i})=l(y_{k})\) and \(r(x_{j})=r(y_{k})\), then both \(x_{i}\) and \(x_{j}\) have the same orientations as \(y_{k}\), thus \(x_{i}\) and \(x_{j}\) have the same orientations. If \(l(x_{i})=r(y_{k})\) and \(r(x_{j})=l(y_{k})\), then both \(x_{i}\) and \(x_{j}\) have different orientations with \(y_{k}\), thus \(x_{i}\) and \(x_{j}\) have the same orientations. It will be similar when \(r(x_{i})\) and \(l(x_{j})\) are connected with a non-opposite edge. \(\sqcap\)\(\sqcup\)
**Proposition 5.1**: _Given a \(k\)-cycle \(C\) of \(x\), performing a symmetric reversal on two occurrences of \(x\) that are connected by an opposite blue edge, will break \(C\) into a \((k-1)\)-cycle as well as a 1-cycle. Given a \(k_{1}\)-cycle \(C_{1}\) and a \(k_{2}\)-cycle \(C_{2}\) of \(x\), performing a symmetric reversal on the two occurrences of \(x_{i}\in C_{1}\) and \(x_{j}\in C_{2}\), will join \(C_{1}\) and \(C_{2}\) into a \((k_{1}+k_{2})\)-cycle._
Now, we construct the _blue edge intersection graph_ \(BG(\pi,\tau,f)=(BV_{\pi},BE_{\pi})\) according to \(ACG(\pi,\tau,f)\), viewing each blue edge as an interval between the two nodes it connects. For each interval, construct an original vertex in \(BV_{\pi}\), set its weight to be 1, and set its color to be black if the blue edge is opposite, and white otherwise. An edge in \(BE_{\pi}\) connects two vertices if and only if their corresponding intervals intersect but neither contains the other. An example of the blue edge intersection graph is shown in Figure 9-\((b)\).
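A sketch of this construction, assuming each blue edge has already been reduced to a numeric interval `(lo, hi)` of node positions on \(\pi\) and a precomputed `is_opposite` flag:

```python
def properly_overlap(a, b):
    # Intervals intersect but neither contains the other.
    return a[0] < b[0] < a[1] < b[1] or b[0] < a[0] < b[1] < a[1]

def build_bg(blue_intervals, is_opposite):
    n = len(blue_intervals)
    color = ['black' if is_opposite[e] else 'white' for e in range(n)]
    weight = [1] * n                  # every original vertex has weight 1
    adj = {e: set() for e in range(n)}
    for e1 in range(n):
        for e2 in range(e1 + 1, n):
            if properly_overlap(blue_intervals[e1], blue_intervals[e2]):
                adj[e1].add(e2)
                adj[e2].add(e1)
    return adj, color, weight
```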
Note that each connected component of \(BG(\pi,\tau,f)\) forms an interval on \(\pi\). For each connected component \(P\) in \(BG(\pi,\tau,f)\), we use \(\overline{P}\) to denote its corresponding interval on \(\pi\).
**Lemma 5.4**: _Let \(P\) be some connected component of \(BG(\pi,\tau,f)\), the leftmost endpoint of \(\overline{P}\) must be a left node of some \(x_{i}\), i.e., \(l(x_{i})\), and the rightmost endpoint of \(\overline{P}\) must be a right node of some \(x_{j}\), i.e., \(r(x_{j})\), where \(i<j\)._
Proof: Since each blue edge connects two nodes of the interval \(\overline{P}\), the number of nodes in \(\overline{P}\) must be even. Hence the boundary nodes of an interval can neither be both left nodes nor be both right nodes.
Assume to the contrary that the leftmost endpoint of \(\overline{P}\) is \(r(x_{i})\) and the rightmost endpoint of \(\overline{P}\) is \(l(x_{j})\). There are a total of \(2(j-i)\) nodes appearing in \(\overline{P}\). If we connect each pair of nodes \(r(x_{k})\) and \(l(x_{k+1})\) (for all \(i\leq k\leq j-1\)) with a green edge, then there are \(j-i\) green edges as well as \(j-i\) blue edges between the \(2(j-i)\) nodes of \(\overline{P}\). Since each node is associated with one green
edge and one blue edge, these green edges and blue edges form cycles, which is a contradiction to Lemma 5.2. \(\sqcap\)\(\sqcup\)
**Lemma 5.5**: _All the vertices in \(BG(\pi,\tau,f)\) corresponding to the blue edges on the same long cycle in \(ACG(\pi,\tau,f)\) are in the same connected component of \(BG(\pi,\tau,f)\)._
_Proof._ Assume to the contrary that there exist some connected components each containing only a part of the blue edges of some cycle. Let \(P\) be such a connected component whose interval \(\overline{P}\) does not overlap the interval of any other such connected component. Let \(e=(x_{i}^{t},x_{j}^{h})\) and \(e^{\prime}=(x_{i^{\prime}}^{t},x_{j^{\prime}}^{h})\) be two blue edges on a cycle \(C\) such that their corresponding vertices satisfy \(v\in P\) but \(v^{\prime}\notin P\). From the way we chose \(P\), the two nodes \(x_{i^{\prime}}^{t}\) and \(x_{j^{\prime}}^{h}\) connected by \(e^{\prime}\) in \(C\) cannot both be inside the interval of \(P\), so they must both be outside the interval of \(P\). Also, the two nodes \(x_{i}^{t}\) and \(x_{j}^{h}\) connected by \(e\) in \(C\) cannot both be on the boundary of the interval of \(P\), since otherwise \(e\) would not intersect any other blue edge corresponding to a vertex of \(P\). W.l.o.g, assume that \(x_{i}^{t}\) appears inside the interval of \(P\). Since \(e\) and \(e^{\prime}\) are on the same cycle \(C\), besides \(e\), there must be an alternative path from \(x_{i}^{t}\) to \(x_{j^{\prime}}^{h}\) on \(C\). From Lemma 5.4, it is impossible for an alternative path to leave the scope of an interval via a red edge, so the alternative path from \(x_{i}^{t}\) to \(x_{j^{\prime}}^{h}\) must contain a blue edge connecting a node inside \(\overline{P}\) with a node outside \(\overline{P}\), which contradicts the fact that \(P\) is a connected component. \(\sqcap\)\(\sqcup\)
As the two blue edges of a non-opposite 2-cycle do not intersect each other, we have,
**Corollary 5.1**: _A non-opposite 2-cycle cannot form a connected component of \(BG(\pi,\tau,f)\)._
For each repeat \(x\), assume that it constitutes \(k\) cycles in \(ACG(\pi,\tau,f)\). Let \(x_{i_{1}}\), \(x_{i_{2}}\),..., \(x_{i_{k}}\) be the \(k\) occurrences of \(x\) that are in distinct cycles in \(ACG(\pi,\tau,f)\), where \(1\leq i_{1}<i_{2}<\cdots<i_{k}\leq n\). We add \(k-1\) _additional vertices_ corresponding to the intervals \([r(x_{i_{j}})-\epsilon,l(x_{i_{j+1}})+\epsilon]\) (\(1\leq j\leq k-1\)) to \(BV_{\pi}\); for each such vertex, set its weight to be 1, and set its color to be black if the signs of \(x_{i_{j}}\) and \(x_{i_{j+1}}\) are distinct, and white otherwise. See the vertex marked with 10 in Figure 9-\((c)\) for an example. Also, there is an edge between two vertices of \(BV_{\pi}\) if and only if their corresponding intervals intersect but neither contains the other. The resulting graph is called the _intersection graph_ of \(\pi\), denoted as \(IG(\pi,\tau,f)=(V[\pi],E[\pi])\). An example is shown in Figure 9-\((c)\). Let \(V_{\pi}^{w}\subseteq V[\pi]\) be the subset of vertices that correspond to non-opposite blue edges on long cycles in \(ACG(\pi,\tau,f)\).
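A sketch of adding the \(k-1\) additional vertices for one repeat; the parameter names `r_pos[j]`, `l_pos[j]`, and `signs[j]` (the \(r\)-node position, \(l\)-node position, and sign of the occurrence \(x_{i_{j+1}}\)) and the value of `eps` are our assumptions.

```python
def additional_vertices(r_pos, l_pos, signs, eps=0.5):
    extra = []
    for j in range(len(signs) - 1):
        # Interval [r(x_{i_j}) - eps, l(x_{i_{j+1}}) + eps], weight 1.
        interval = (r_pos[j] - eps, l_pos[j + 1] + eps)
        color = 'black' if signs[j] != signs[j + 1] else 'white'
        extra.append((interval, color, 1))
    return extra
```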
From Lemma 5.4 and the construction of the intersection graph of \(\pi\), all the vertices corresponding to the blue edges of the same repeat are in the same connected component. Note that the intersection graph of \(\pi\) may differ when the bijection between identical adjacencies of \(\mathcal{A}[\pi]\) and \(\mathcal{A}[\tau]\) differs. Nevertheless, we have,
**Lemma 5.6**: _Let \(\pi\) and \(\tau\) be two related chromosomes with \({\cal A}[\pi]={\cal A}[\tau]\). Let \(x_{i}\) and \(x_{j}\) (\(i<j\)) be two occurrences of \(x\) in \(\pi\), and \(x_{i^{\prime}}\) and \(x_{j^{\prime}}\) (\(i^{\prime}<j^{\prime}\)) be two occurrences of \(x^{\prime}\) in \(\pi\), if either \(i<i^{\prime}<j<j^{\prime}\) or \(i^{\prime}<i<j^{\prime}<j\) is satisfied, then, based on any bijection \(f\) between \({\cal A}[\pi]={\cal A}[\tau]\), in the intersection graph \(IG(\pi,\tau,f)\), the vertices corresponding to all the intervals of \(x\) and \(x^{\prime}\) are in the same connected component._
_Proof._ Assume that \(i<i^{\prime}<j<j^{\prime}\), since the other case is similar. From Lemma 5.4 and the construction of the intersection graph, in any intersection graph of \(\pi\), the vertices corresponding to the intervals associated with \(x_{i}\) and \(x_{j}\) are in the same connected component \(P\); also, the vertices corresponding to the intervals associated with \(x_{i^{\prime}}\) and \(x_{j^{\prime}}\) are in the same connected component \(P^{\prime}\). We show that \(\overline{P}\) intersects \(\overline{P^{\prime}}\), and thus \(P=P^{\prime}\).
It is impossible that \(\overline{P}\) and \(\overline{P^{\prime}}\) are disjoint, because the interval \([l(x_{i}),\ldots,r(x_{j})]\) is a part of \(\overline{P}\), and the interval \([l(x_{i^{\prime}}),\ldots,r(x_{j^{\prime}})]\) is a part of \(\overline{P^{\prime}}\), but \([l(x_{i}),\ldots,r(x_{j})]\) intersects with \([l(x_{i^{\prime}}),\ldots,r(x_{j^{\prime}})]\).
If \(\overline{P}\) overlaps \(\overline{P^{\prime}}\), in the intersection graph \(IG(\pi,\tau,f)\), there exists a path from some interval associated with \(x_{j}\) to some interval associated with \(x_{i}\); among the intervals corresponding to the vertices on this path, there must be one that intersects with \(\overline{P^{\prime}}\).
If \(\overline{P^{\prime}}\) overlaps \(\overline{P}\), in the intersection graph \(IG(\pi,\tau,f)\), there exists a path from some interval associated with \(x_{i^{\prime}}\) to some interval associated with \(x_{j^{\prime}}\); among the intervals corresponding to the vertices on this path, there must be one that intersects with \(\overline{P}\). \(\sqcap\)\(\sqcup\)
Actually, the connected components of the intersection graph partition the repeats on \(\pi\) into groups. From Lemma 5.4 and Lemma 5.6, the group partition of the repeats is independent of the bijection between identical adjacencies of \({\cal A}[\pi]\) and \({\cal A}[\tau]\). In other words, the group partition will be fixed once \(\pi\) and \(\tau\) are given.
Similar to the intersection graph of chromosomes with a duplication number of \(2\), the intersection graph of chromosomes with an unrestricted duplication
Figure 9: \((a)\) The alternative-cycle graph \(ACG(\pi,\tau,f)\), where each blue edge is marked with a number. \((b)\) The blue edge intersection graph \(BG(\pi,\tau,f)\). \((c)\) The intersection graph \(IG(\pi,\tau,f)\) with additional vertices, where each number represents an interval.
number also admits rule-I, rule-II, and rule-III of Section 3 when performing a symmetric reversal on \(\pi\).
**Theorem 5.2**: _If a connected component of \(IG(\pi,\tau,f)\) contains a black vertex, then there exists a symmetric reversal such that, after performing it to obtain \(\pi^{\prime}\), any newly created connected component that contains a white vertex corresponding to a blue edge on a non-opposite long cycle in \(ACG(\pi^{\prime},\tau,f)\) also contains a black vertex._
_Proof._ Each vertex in \(IG(\pi,\tau,f)=(V[\pi],E[\pi])\) corresponds to an interval that is flanked by two occurrences of the same repeat. A vertex is black when the two occurrences have different signs, and thus admits a symmetric reversal. For each black vertex \(x\), let \(C_{1},C_{2},\ldots,C_{m}\) be the newly created connected components after performing the symmetric reversal on \(x\), where each of \(C_{1},C_{2},\ldots,C_{i}\) contains a white vertex corresponding to a blue edge on a non-opposite long cycle in \(ACG(\pi^{\prime},\tau,f)\) but no black vertex, each of \(C_{i+1},C_{i+2},\ldots,C_{j}\) contains only additional vertices and vertices corresponding to blue edges on 1-cycles in \(ACG(\pi^{\prime},\tau,f)\) but no black vertex, and each of \(C_{j+1},C_{j+2},\ldots,C_{m}\) contains a black vertex, \(0\leq i\leq j\leq m\). Let \(\Delta(x)\) be the number of white vertices in \(C_{1},C_{2},\ldots,C_{j}\).
We show next that if there exists a connected component, which contains a white vertex corresponding to a blue edge on a non-opposite long cycle in \(ACG(\pi^{\prime},\tau,f)\), but does not contain a black vertex, then there would be a black vertex \(x^{\prime}\) such that \(\Delta(x^{\prime})<\Delta(x)\).
Assume to the contrary that \(x\) is the black vertex with the minimum \(\Delta(x)>0\). Let \(x^{\prime}\) be the neighbor of \(x\) in \(C_{1}\). Then, \(x^{\prime}\) is a black vertex prior to performing the symmetric reversal of \(x\). It is sufficient to show that \(\Delta(x^{\prime})<\Delta(x)\), contradicting the minimality of \(\Delta(x)\). From Lemma 3.4 and Lemma 3.5, the colors of all the vertices in \(C_{j+1},C_{j+2},\ldots,C_{m}\) are preserved, and all the edges in \(C_{j+1},C_{j+2},\ldots,C_{m}\) are preserved after performing the symmetric reversal of \(x^{\prime}\). Therefore, \(\Delta(x^{\prime})\) will also not count any vertex in \(C_{j+1},C_{j+2},\ldots,C_{m}\); consequently, \(\Delta(x^{\prime})\leq\Delta(x)\).
Since \(C_{1}\) contains a white vertex corresponding to a blue edge on a non-opposite long cycle, following Lemma 5.5 and Corollary 5.1, \(C_{1}\) contains at least 3 vertices. Let \(N_{1}(x)\) be the neighbor set of \(x\) in \(C_{1}\), with \(x^{\prime}\in N_{1}(x)\). There must exist a neighbor of \(x^{\prime}\) in \(C_{1}\), say \(x^{\prime\prime}\). If \(x^{\prime\prime}\in N_{1}(x)\), then \((x^{\prime},x^{\prime\prime})\notin E[\pi]\), and \(x^{\prime\prime}\) is black prior to performing the symmetric reversal on \(x\); thus, performing a symmetric reversal on \(x^{\prime}\) instead of \(x\) keeps \(x^{\prime\prime}\) black, which implies \(\Delta(x^{\prime})<\Delta(x)\). On the other side, if \(x^{\prime\prime}\notin N_{1}(x)\), then \((x^{\prime},x^{\prime\prime})\in E[\pi]\), and \(x^{\prime\prime}\) is white prior to performing the symmetric reversal on \(x\); hence performing a symmetric reversal on \(x^{\prime}\) instead of \(x\) makes \(x^{\prime\prime}\) black, which also implies \(\Delta(x^{\prime})<\Delta(x)\). Both contradict the assumption that \(\Delta(x)\) is the minimum. \(\sqcap\)\(\sqcup\)
**Theorem 5.3**: _A chromosome \(\pi\) can be transformed into the other chromosome \(\tau\) by symmetric reversals if and only if (I) \(\mathcal{A}[\pi]=\mathcal{A}[\tau]\), and (II) each white vertex in \(V_{\pi}^{w}\) belongs to a connected component of \(IG(\pi,\tau,f)\) containing a black vertex._
_Proof._ \((\Rightarrow)\) If there exists a connected component of \(IG(\pi,\tau,f)\), say \(C\), which contains a white vertex in \(V_{\pi}^{w}\) but does not contain a black vertex, then all the vertices of \(C\) are white and do not admit any symmetric reversal. Moreover, there must be a white vertex, say \(x\), which corresponds to a blue edge of a non-opposite long cycle in \(ACG(\pi,\tau,f)\). Thus, it is impossible to find a series of symmetric reversals that transforms this long cycle into 1-cycles. According to Theorem 5.1, \(\pi\) cannot be transformed into \(\tau\).
\((\Leftarrow)\) Theorem 5.2 guarantees that each original vertex either admits a symmetric reversal at some point or its corresponding blue edge already lies on a 1-cycle. Finally, all the blue edges that correspond to the original vertices lie on 1-cycles; following Theorem 5.1, \(\pi\) has been transformed into \(\tau\). \(\sqcap\)\(\sqcup\)
Now, we are ready to formally present the decision algorithm, based on Theorem 5.3, for both the general case and the case where the duplication number is 2, as Algorithm 2. We directly test conditions (I) and (II) of Theorem 5.3. Note that each connected component in \(IG(\pi,\tau,f)\) may contain more than one black vertex. By setting \(Q=V_{\pi}^{b}\) in line 11, we guarantee that each connected component in \(IG(\pi,\tau,f)\) is explored only once during the breadth-first search, so that the \(O(n^{2})\) running time is kept.
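The test of condition (II) can be sketched by seeding a single breadth-first search from \(Q=V_{\pi}^{b}\), the set of black vertices, so that every component is explored at most once; `adj` maps each vertex of \(IG(\pi,\tau,f)\) to its neighbor set, `color` maps vertices to their colors, and `v_w` is the set \(V_{\pi}^{w}\).

```python
from collections import deque

def condition_two(adj, color, v_w):
    # A white vertex of V_pi^w lies in a component with a black vertex
    # exactly when it is reachable from some black vertex.
    reached = {v for v in adj if color[v] == 'black'}
    queue = deque(reached)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in reached:
                reached.add(v)
                queue.append(v)
    return all(v in reached for v in v_w)
```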
**Running time of Algorithm 2:** Let us analyze the time complexity of Algorithm 2. Verifying whether \(\mathcal{A}[\pi]=\mathcal{A}[\tau]\) can be done in \(O(n^{2})\) time. It takes \(O(n^{2})\) time to build a bijection between \(\mathcal{A}[\pi]\) and \(\mathcal{A}[\tau]\) and to construct the cycle graph \(ACG(\pi,\tau,f)\) as well as the corresponding intersection graph \(IG(\pi,\tau,f)\). It remains to analyze the size of \(IG(\pi,\tau,f)\). For each repeat, say \(x\), there are \(dp[x,\pi]\) original vertices and \(c[x]-1\) additional vertices in \(IG(\pi,\tau,f)\), where \(c[x]\) is the number of cycles of \(x\) in \(ACG(\pi,\tau,f)\). Note that \(c[x]\leq dp[x,\pi]\) and \(\sum_{x\in\Sigma}dp[x,\pi]=n\). Thus, the total number of vertices in \(IG(\pi,\tau,f)\) is bounded by \(\sum_{x\in\Sigma}(dp[x,\pi]+c[x]-1)\leq 2\sum_{x\in\Sigma}dp[x,\pi]-1=2n-1\), and the number of edges in \(IG(\pi,\tau,f)\) is at most \(4n^{2}\). The whole breadth-first search process takes \(O(n^{2})\) time, since there are at most \(2n-1\) vertices and at most \(4n^{2}\) edges in \(IG(\pi,\tau,f)\). Therefore, Algorithm 2 runs in \(O(n^{2})\) time.
## 6 Hardness Result for SMSR
In this section, we show that the optimization problem _SMSR_ is NP-hard. When we initially investigated SMSR in the special case where the input genomes have duplication number 2, we found that it is closely related to finding a Steiner set on the intersection graph, which is a type of circle graph with its vertices colored and weighted. We review the relevant definitions of circle graphs as follows.
**Definition 6.1**: _A graph is a circle graph if it is the intersection graph of chords in a circle._
**Definition 6.2**: _A graph is an overlap graph, if each vertex corresponds to an interval on the real line, and there is an edge between two vertices if and only if their corresponding intervals intersect but one does not contain the other._
The graph class of overlap graphs is equivalent to circle graphs if we join the two endpoints of the real line into a circle. Circle graphs can be recognized in polynomial time [10].
**Definition 6.3**: _Minimum Steiner Tree Problem:_
_Instance: A connected graph \(G\), a non-empty subset \(X\subseteq V(G)\), called terminal set._
_Task: Find a minimum size subset \(S\subseteq V(G)\backslash X\) such that the induced subgraph \(G[S\cup X]\) is connected._
More than thirty-five years ago, Johnson stated that the _Minimum Steiner Tree_ problem on _Circle Graphs_ was in \(P\) [14] and referred to a personal communication as a reference. However, to date, there is no published polynomial algorithm for the problem. Figueiredo et al. revisited Johnson's table recently, and although they still marked the problem as in \(P\), the reference remains "ongoing" [FMS22]. Here, as a by-product, we show that the _Minimum Steiner Tree_ problem on _Circle Graphs_ is NP-hard. The reduction is from _MAX-(3,B2)-SAT_, in which each clause is of size exactly 3 and each variable occurs exactly twice in positive and twice in negative form.
**Theorem 6.1**: MAX-(3,B2)-SAT _is NP-hard_[BKS03]_._
Next, we conduct a reduction from _MAX-(3,B2)-SAT_ to the _Minimum Steiner Tree_ problem on _Circle Graphs_.
**Theorem 6.2**: _The Minimum Steiner Tree problem on Circle Graphs is NP-hard._
_Proof._ Given an instance of _MAX-(3,B2)-SAT_ with \(n\) variables \(\{x_{1},x_{2},...,x_{n}\}\) and \(m\) clauses \(\{c_{1},c_{2},...,c_{m}\}\), we construct a _circle graph_ denoted by \(OLG(3,B2)\) as follows.
* _Variable Intervals._ For each variable \(x_{i}\), let \(q_{i}=300(i-1)\). Construct four groups of ladder intervals, where \(B[i]_{a}^{b}=[q_{i}+50(a-1)+4(b-1)+1,q_{i}+50(a-1)+4(b-1)+7]\), for \(a=1,2,3,4\) and \(b=1,2,3,4,5,6\). Let \(b[i]=\{B[i]_{a}^{b}|a=1,2,3,4,\) and \(b=2,5\}\), and \(B[i]=\{B[i]_{a}^{b}|a=1,2,3,4,\) and \(b=1,3,4,6\}\). Then construct four intervals: \(P[i]_{1}=[q_{i}+12,q_{i}+125]\), which intersects with both \(B[i]_{1}^{3}\) and \(B[i]_{3}^{6}\); \(P[i]_{2}=[q_{i}+62,q_{i}+175]\), which intersects with both \(B[i]_{2}^{3}\) and \(B[i]_{4}^{6}\); \(N[i]_{1}=[q_{i}+3,q_{i}+53]\), which intersects with both \(B[i]_{1}^{1}\) and \(B[i]_{2}^{1}\); and \(N[i]_{2}=[q_{i}+116,q_{i}+166]\), which intersects with both \(B[i]_{3}^{4}\) and \(B[i]_{4}^{4}\). Let \(P[i]=\{P[i]_{1},P[i]_{2}\}\), and \(N[i]=\{N[i]_{1},N[i]_{2}\}\).
* _Clause Intervals._ For each clause \(c_{a}\) (\(1\leq a\leq m\)), construct an interval \(c_{a}=[300n+3an,300n+3an+2n+1]\). We still use \(\mathbb{C}=\{c_{1},c_{2},\ldots,c_{m}\}\) to denote the set of intervals constructed by the clauses.
* _Positive Literal Intervals._ If \(x_{i}\) appears in \(c_{j}\) and \(c_{k}\) (\(j<k\)) as positive literals, construct two intervals \(f[i]^{j}=[q_{i}+200,q_{i}+210]\) and \(f[i]^{k}=[q_{i}+220,q_{i}+230]\), which are independent of all the previous intervals; and construct \(G[i]^{j}=[q_{i}+209,300n+3jn+i]\), which intersects with \(f[i]^{j}\) and \(c_{j}\), and \(G[i]^{k}=[q_{i}+229,300n+3kn+i]\), which intersects with \(f[i]^{k}\) and \(c_{k}\). Construct \(D[i]^{j}=[q_{i}+25,q_{i}+201]\), which intersects with \(f[i]^{j}\) and \(B[i]_{1}^{6}\), and \(D[i]^{k}=[q_{i}+75,q_{i}+221]\), which intersects with \(f[i]^{k}\) and \(B[i]_{2}^{6}\).
* _Negative Literal Intervals._ If \(x_{i}\) appears in \(c_{j^{\prime}}\) and \(c_{k^{\prime}}\) (\(j^{\prime}<k^{\prime}\)) as negative literals, construct two intervals \(f[i]^{j^{\prime}}=[q_{i}+240,q_{i}+250]\) and \(f[i]^{k^{\prime}}=[q_{i}+260,q_{i}+270]\), which are independent of all the previous intervals; and construct \(G[i]^{j^{\prime}}=[q_{i}+249,300n+3j^{\prime}n+i]\), which intersects with \(f[i]^{j^{\prime}}\) and \(c_{j^{\prime}}\), and \(G[i]^{k^{\prime}}=[q_{i}+269,300n+3k^{\prime}n+i]\), which intersects with \(f[i]^{k^{\prime}}\) and \(c_{k^{\prime}}\). Construct \(D[i]^{j^{\prime}}=[q_{i}+103,q_{i}+241]\), which intersects with \(f[i]^{j^{\prime}}\) and \(B[i]_{1}^{6}\), and \(D[i]^{k^{\prime}}=[q_{i}+153,q_{i}+261]\), which intersects with \(f[i]^{k^{\prime}}\) and \(B[i]_{2}^{6}\). Let \(f[i]=\{f[i]^{j},f[i]^{k},f[i]^{j^{\prime}},f[i]^{k^{\prime}}\}\) and let \(D[i]=\{D[i]^{j},D[i]^{k},D[i]^{j^{\prime}},D[i]^{k^{\prime}}\}\).
* _Subtree Intervals._ For each variable \(x_{i}\), construct four intervals \(t[i]=\{t[i]_{1},t[i]_{2},t[i]_{3},t[i]_{4}\}\), where \(t[i]_{a}=[-(4i+a)+4,q_{i}+50a-1]\) for \(a=1,2,3,4\). Finally, construct two intervals \(r=\{r_{1}=[-4n-1,301n],r_{2}=[-4n-2,0]\}\).
Let \(b=\cup_{1}^{n}b[i]\), \(f=\cup_{1}^{n}f[i]\), \(t=\cup_{1}^{n}t[i]\), \(\mathbb{B}=\cup_{1}^{n}B[i]\), \(\mathbb{P}=\cup_{1}^{n}P[i]\), \(\mathbb{N}=\cup_{1}^{n}N[i]\), \(\mathbb{D}=\cup_{1}^{n}D[i]\), and \(\mathbb{G}=\cup_{1}^{n}G[i]\). Let \(OLG(3,B2)\) be the corresponding circle graph of all the constructed intervals; define the input terminal set of the _Steiner Tree Problem_ as the vertices corresponding to intervals in \(b\cup f\cup\mathbb{C}\cup t\cup r\), and the vertices corresponding to the intervals in \(\mathbb{B}\cup\mathbb{P}\cup\mathbb{N}\cup\mathbb{D}\cup\mathbb{G}\) are candidate Steiner vertices.
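The variable gadget can be generated mechanically; a sketch, returning a dictionary of named intervals for variable \(x_{i}\) (1-indexed), following the formulas above:

```python
def variable_gadget(i):
    q = 300 * (i - 1)
    iv = {}
    for a in range(1, 5):
        for b in range(1, 7):
            lo = q + 50 * (a - 1) + 4 * (b - 1) + 1
            iv[f'B[{i}]_{a}^{b}'] = (lo, lo + 6)      # ladder intervals
    iv[f'P[{i}]_1'] = (q + 12, q + 125)
    iv[f'P[{i}]_2'] = (q + 62, q + 175)
    iv[f'N[{i}]_1'] = (q + 3, q + 53)
    iv[f'N[{i}]_2'] = (q + 116, q + 166)
    return iv
```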
It is not hard to observe that the vertices corresponding to the intervals in \(t\cup r\) already form a subtree, and all the candidate Steiner vertices are connected to this subtree; thus it does not matter whether they are connected among themselves or not. The terminal vertices corresponding to the intervals in \(b\cup f\cup\mathbb{C}\) are mutually independent, so they must be connected to the subtree through Steiner vertices.
We show that the _MAX-(3,B2)-SAT_ instance is satisfiable if and only if the _minimum Steiner tree_ problem on \(OLG(3,B2)\) has an optimum solution of \(14n\) vertices.
For each variable \(x_{i}\), assume that \(c_{j}\) and \(c_{k}\) are the two clauses where \(x_{i}\) appears as positive literals, and \(c_{j^{\prime}}\) and \(c_{k^{\prime}}\) are the two clauses where \(x_{i}\) appears as negative literals.
(\(\Rightarrow\)) Assume that there is a truth assignment to the _MAX-(3,B2)-SAT_ instance, we can obtain a solution of \(14n\) intervals for the _Minimum Steiner Tree_ problem as follows.
If \(x_{i}\) is assigned true, then by selecting the 6 vertices corresponding to the intervals \(\{P[i]_{1}\), \(P[i]_{2}\), \(D[i]^{j^{\prime}}\), \(D[i]^{k^{\prime}}\), \(G[i]^{j}\), \(G[i]^{k}\}\) and the 8 vertices corresponding to the ladder intervals \(\{B[i]_{1}^{3}\), \(B[i]_{2}^{3}\), \(B[i]_{3}^{6}\), \(B[i]_{4}^{6}\), \(B[i]_{1}^{4}\), \(B[i]_{2}^{4}\), \(B[i]_{3}^{1}\), \(B[i]_{4}^{1}\}\), all the 12 terminal vertices corresponding to intervals in \(b[i]\cup f[i]\), as well as the terminals corresponding to \(c_{j}\) and \(c_{k}\), will be connected to the subtree.
If \(x_{i}\) is assigned false, then by selecting the 6 vertices corresponding to the intervals \(\{N[i]_{1}\), \(N[i]_{2}\), \(D[i]^{j}\), \(D[i]^{k}\), \(G[i]^{j^{\prime}}\), \(G[i]^{k^{\prime}}\}\) and the 8 vertices corresponding to the ladder intervals \(\{B[i]_{1}^{1}\), \(B[i]_{2}^{1}\), \(B[i]_{3}^{4}\), \(B[i]_{4}^{4}\), \(B[i]_{1}^{6}\), \(B[i]_{2}^{6}\), \(B[i]_{3}^{3}\), \(B[i]_{4}^{3}\}\), all the 12 terminal vertices corresponding to intervals in \(b[i]\cup f[i]\), as well as the terminals corresponding to \(c_{j^{\prime}}\) and \(c_{k^{\prime}}\), will be connected to the subtree. Therefore, we obtain a Steiner set of size \(14n\).
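The forward direction is constructive, so it can be sketched directly; the lookup `clauses_of[i] = (j, k, j2, k2)`, giving the clauses in which \(x_{i}\) occurs positively (\(j,k\)) and negatively (\(j2,k2\)), is our assumption.

```python
def steiner_set(assignment, clauses_of):
    S = []
    for i, value in assignment.items():
        j, k, j2, k2 = clauses_of[i]
        if value:    # x_i true: route through P[i], block the negative side
            S += [f'P[{i}]_1', f'P[{i}]_2', f'D[{i}]^{j2}', f'D[{i}]^{k2}',
                  f'G[{i}]^{j}', f'G[{i}]^{k}',
                  f'B[{i}]_1^3', f'B[{i}]_2^3', f'B[{i}]_3^6', f'B[{i}]_4^6',
                  f'B[{i}]_1^4', f'B[{i}]_2^4', f'B[{i}]_3^1', f'B[{i}]_4^1']
        else:        # x_i false: route through N[i], block the positive side
            S += [f'N[{i}]_1', f'N[{i}]_2', f'D[{i}]^{j}', f'D[{i}]^{k}',
                  f'G[{i}]^{j2}', f'G[{i}]^{k2}',
                  f'B[{i}]_1^1', f'B[{i}]_2^1', f'B[{i}]_3^4', f'B[{i}]_4^4',
                  f'B[{i}]_1^6', f'B[{i}]_2^6', f'B[{i}]_3^3', f'B[{i}]_4^3']
    return S         # 14 vertices per variable
```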
(\(\Leftarrow\)) Assume that the _Minimum Steiner Tree_ problem on \(OLG(3,B2)\) has a Steiner set of \(14n\) vertices, we show that there is a truth assignment to the _MAX-(3,B2)-SAT_ instance.
Firstly, for each variable \(x_{i}\), we have constraint (I): in order to connect the 8 vertices corresponding to intervals in \(b[i]\) to the subtree, the Steiner set must include at least 8 vertices corresponding to intervals in \(B[i]\); and constraint (II): in order to connect the 4 vertices corresponding to intervals in \(f[i]\) to the subtree, the Steiner set must include at least 4 vertices corresponding to intervals in \(D[i]\cup G[i]\). But any 12 vertices satisfying the above constraints
are not enough, because the four terminals corresponding to the intervals in \(\{B[i]_{1}^{2},B[i]_{2}^{2},B[i]_{3}^{5},B[i]_{4}^{5}\}\) are still not connected to the subtree. For each of them, say \(B[i]_{1}^{2}\) for example, it must be connected to the subtree through either \(B[i]_{1}^{3}\) and \(P[i]_{1}\), or \(B[i]_{1}^{1}\) and \(N[i]_{1}\), or \(B[i]_{1}^{3}\) and \(B[i]_{1}^{4}\). If it chooses \(B[i]_{1}^{3}\) and \(B[i]_{1}^{4}\), then the terminal corresponding to \(B[i]_{1}^{5}\) has to connect to the subtree through \(B[i]_{1}^{6}\) and \(D[i]^{j}\); this implies that both neighbors of the vertex corresponding to \(B[i]_{1}^{5}\) are selected in the Steiner set. A similar argument holds for each of \(\{B[i]_{1}^{2},B[i]_{2}^{2},B[i]_{3}^{5},B[i]_{4}^{5}\}\). Thus, to connect the four terminals corresponding to the intervals in \(\{B[i]_{1}^{2},B[i]_{2}^{2},B[i]_{3}^{5},B[i]_{4}^{5}\}\) to the subtree, besides a neighbor of each vertex, each of them also needs an additional vertex, though two of them may share an additional vertex. Hence the Steiner set must include at least another two vertices, which should either be \(P[i]_{1}\) and \(P[i]_{2}\), or \(N[i]_{1}\) and \(N[i]_{2}\). Therefore, since the Steiner set is of size \(14n\), for each variable \(x_{i}\) (\(1\leq i\leq n\)), it includes exactly 14 vertices corresponding to intervals in \(B[i]\cup P[i]\cup N[i]\cup D[i]\cup G[i]\); moreover, either the vertices corresponding to intervals of \(P[i]\) or the vertices corresponding to intervals of \(N[i]\) are selected.
If the Steiner set includes \(P[i]_{1}\) and \(P[i]_{2}\), to connect the two vertices corresponding to \(B[i]_{3}^{2}\) and \(B[i]_{4}^{2}\) to the subtree, it also includes \(D[i]^{j^{\prime}}\) and \(D[i]^{k^{\prime}}\); then to connect the two vertices corresponding to \(f[i]^{j}\) and \(f[i]^{k}\), the Steiner set could contain one of \(D[i]^{j}\) and \(G[i]^{j}\), and one of \(D[i]^{k}\) and \(G[i]^{k}\). We can revise the Steiner set such that it includes \(G[i]^{j}\) and \(G[i]^{k}\); subsequently, the vertices corresponding to the intervals \(c_{j}\) and \(c_{k}\) are connected to the subtree. In this case, we assign \(x_{i}\) true to satisfy \(c_{j}\) and \(c_{k}\).
If the Steiner set contains \(N[i]_{1}\) and \(N[i]_{2}\), to connect the two vertices corresponding to \(B[i]_{1}^{5}\) and \(B[i]_{2}^{5}\) to the subtree, it also contains \(D[i]^{j}\) and \(D[i]^{k}\); then to connect the two vertices corresponding to \(f[i]^{j^{\prime}}\) and \(f[i]^{k^{\prime}}\), the Steiner set could contain one of \(D[i]^{j^{\prime}}\) and \(G[i]^{j^{\prime}}\), and one of \(D[i]^{k^{\prime}}\) and \(G[i]^{k^{\prime}}\). Hence we can revise the Steiner set such that it contains \(G[i]^{j^{\prime}}\) and \(G[i]^{k^{\prime}}\); subsequently, the vertices corresponding to the intervals \(c_{j^{\prime}}\) and \(c_{k^{\prime}}\) are connected to the subtree. In this case, we assign \(x_{i}\) false to satisfy \(c_{j^{\prime}}\) and \(c_{k^{\prime}}\).
We take an instance of _MAX-(3,B2)-SAT_ as an example, where \(\{x_{1},x_{2},x_{3}\}\) is the set of variables and \(\{c_{1}=\{x_{1},x_{2},x_{3}\},c_{2}=\{x_{1},\bar{x}_{2},\bar{x}_{3}\},c_{3}=\{ \bar{x}_{1},\bar{x}_{2},x_{3}\}\), \(c_{4}=\{\bar{x}_{1},x_{2},\bar{x}_{3}\}\}\) is the set of clauses. The constructed instance and the corresponding circle graph are shown in Figure 10 and Figure 11, respectively. \(\sqcap\)\(\sqcup\)
Finally, we conduct a reduction from the _Minimum Steiner Tree_ problem on circle graphs to the SMSR problem, even when the duplication number of the constructed chromosomes is 2.
**Lemma 6.1**: _Given two related simple chromosomes \(\pi\) and \(\tau\), both with duplication number 2, let \(f\) be the unique bijection between \(\mathcal{A}[\pi]\) and \(\mathcal{A}[\tau]\). Then \(ACG(\pi,\tau,f)\) is composed of 1-cycles and 2-cycles; each appearance of an even repeat corresponds to a 1-cycle, and the two appearances of every odd repeat correspond to a 2-cycle._
_Proof._ Since the duplication number of \(\pi\) and \(\tau\) is 2, from Lemma 5.1, each cycle of \(ACG(\pi,\tau,f)\) contains at most 2 red edges.
Let \(x\) be an even repeat and \(x_{i}\), \(x_{j}\) be its two occurrences in \(\pi\). Since \(x\) is neighbor-consistent, there must be two occurrences of \(x\), say \(y_{k}\) and \(y_{l}\) in \(\tau\), such that \(\langle r(x_{i-1}),l(x_{i})\rangle\) and \(\langle r(x_{i}),l(x_{i+1})\rangle\) are matched to \(\langle r(y_{k-1}),l(y_{k})\rangle\) and \(\langle r(y_{k}),l(y_{k+1})\rangle\), and \(\langle r(x_{j-1}),l(x_{j})\rangle\) and \(\langle r(x_{j}),l(x_{j+1})\rangle\) are matched to
Figure 11: The corresponding circle graph according to the intervals in Figure 10.
Figure 10: The intervals base on \(x_{1}\). Intervals corresponding to terminal vertices are red, and intervals corresponding to Steiner vertices are colored black.
\(\langle r(y_{l-1}),l(y_{l})\rangle\) and \(\langle r(y_{l}),l(y_{l+1})\rangle\) in the bijection \(f\). From the construction of \(ACG(\pi,\tau,f)\), \(l(x_{i})\) and \(r(x_{i})\) are connected by a blue edge, and \(l(x_{j})\) and \(r(x_{j})\) are connected by a blue edge, which implies that \(x_{i}\) and \(x_{j}\) each correspond to a 1-cycle.
Let \(x\) be an odd repeat and \(x_{i}\), \(x_{j}\) be its two appearances in \(\pi\). Since \(x\) is neighbor-inconsistent, there must be two occurrences of \(x\), say \(y_{k}\) and \(y_{l}\) in \(\tau\) such that \(\langle r(x_{i-1}),l(x_{i})\rangle\) is matched to an adjacency involving \(y_{k}\) while \(\langle r(x_{i}),l(x_{i+1})\rangle\) is matched to an adjacency involving \(y_{l}\), thus \(l(x_{i})\) and \(r(x_{i})\) are not connected by a blue edge, which implies that \(x_{i}\) and \(x_{j}\) are in a 2-cycle. \(\sqcap\)\(\sqcup\)
**Theorem 6.3**: _The SMSR problem is NP-hard even if the input genomes have duplication number 2._
_Proof._ Given a circle graph \(G=(V,E)\), where \(X\subseteq V\) is the set of terminal vertices, let \([l(v),r(v)]\) be the interval on the real line corresponding to each vertex \(v\in V\). Assume that \(l(v_{min})\) is the minimum and \(r(v_{max})\) is the maximum among all interval endpoints.
We first construct two 1-cycles: \((l(v_{min})-1-\delta,l(v_{min})-1+\delta)\), which corresponds to "+0"; and \((r(v_{max})+1-\delta,r(v_{max})+1+\delta)\), which corresponds to "-0".
For each terminal vertex \(t\in X\), we construct two intersecting non-opposite 2-cycles. One is \((l(t)-\delta,l(t)+\delta,r(t)-\delta,r(t)+\delta,l(t)-\delta)\), where \((l(t)-\delta\), \(l(t)+\delta)\) and \((r(t)-\delta\), \(r(t)+\delta)\) are red edges and \((l(t)-\delta\), \(r(t)+\delta)\) and \((l(t)+\delta\), \(r(t)-\delta)\) are blue edges. Let \(t_{1}\) be the repeat corresponding to this cycle; its two occurrences both have a "+" sign in \(\pi\). The other is \((r(t)-3\delta,r(t)-2\delta,r(t)+2\delta,r(t)+3\delta,r(t)-3\delta)\), where \((r(t)-3\delta\), \(r(t)-2\delta)\) and \((r(t)+2\delta\), \(r(t)+3\delta)\) are red edges and \((r(t)-3\delta\), \(r(t)+3\delta)\) and \((r(t)-2\delta\), \(r(t)+2\delta)\) are blue edges. Let \(t_{2}\) be the repeat corresponding to this cycle; its two occurrences both have a "+" sign in \(\pi\). Then, we denote \(l(t)-\delta\) and \(r(t)-\delta\) by \(t_{1}^{h}\), \(l(t)+\delta\) and \(r(t)+\delta\) by \(t_{1}^{t}\), \(r(t)-3\delta\) and \(r(t)+2\delta\) by \(t_{2}^{h}\), and \(r(t)-2\delta\) and \(r(t)+3\delta\) by \(t_{2}^{t}\).
Choose an arbitrary terminal vertex \(x\in X\); we construct an opposite 2-cycle \((r(x)-5\delta,r(x)-4\delta,r(x)+5\delta,r(x)+4\delta,r(x)-5\delta)\), where \((r(x)-5\delta\), \(r(x)-4\delta)\) and \((r(x)+4\delta\), \(r(x)+5\delta)\) are red edges and \((r(x)-5\delta\), \(r(x)+4\delta)\) and \((r(x)-4\delta\), \(r(x)+5\delta)\) are blue edges. Let \(x_{3}\) be the repeat corresponding to this cycle; the occurrence corresponding to the red edge \((r(x)-5\delta,r(x)-4\delta)\) has a "+" sign, and the occurrence corresponding to the red edge \((r(x)+4\delta,r(x)+5\delta)\) has a "-" sign in \(\pi\). Then, we denote \(r(x)-5\delta\) and \(r(x)+5\delta\) by \(x_{3}^{h}\), and \(r(x)-4\delta\) and \(r(x)+4\delta\) by \(x_{3}^{t}\).
For each candidate Steiner vertex \(s\), we construct two 1-cycles \((l(s)-\delta,l(s)+\delta)\) and \((r(s)-\delta,r(s)+\delta)\). Let \(s_{1}\) be the repeat corresponding to these cycles; its two occurrences both have a "+" sign in \(\pi\). Then, we denote \(l(s)-\delta\) and \(r(s)-\delta\) by \(s_{1}^{h}\), and \(l(s)+\delta\) and \(r(s)+\delta\) by \(s_{1}^{t}\).
Let the graph constructed above be denoted by \(ACG(G,X)\). Next, we show that \(ACG(G,X)\) is a well-defined alternative-cycle graph, i.e., there exist two related simple chromosomes \(\pi\) and \(\tau\) with \({\cal A}[\pi]={\cal A}[\tau]\) (and hence a bijection \(f\)) such that \(ACG(\pi,\tau,f)=ACG(G,X)\).
In \(ACG(G,X)\), each red edge corresponds to an occurrence of a repeat; thus the chromosome \(\pi\) is clearly a sequence of these occurrences. Then we can view the blanks between red edges as adjacencies. From the above construction, each blue edge in the cycles corresponding to \(v_{i}\) connects node \(v_{i}^{h}\) with node \(v_{i}^{t}\); thus a blue edge corresponds to an occurrence of \(v_{i}\) in \(\tau\). Therefore, from Lemma 5.2, it is sufficient to show that all the blanks and blue edges form a path.
Sort the vertices of \(V\) in increasing order of \(l(v)\); then \(ACG(G,X)\) is obtained by adding a 1-cycle, two intersecting non-opposite 2-cycles, and an opposite 2-cycle iteratively. We prove inductively that the blanks and blue edges form a path. Initially, the two blue edges of the cycles \((0^{h},0^{t})\) and \((0^{t},0^{h})\) surely form a path. Assume that we have a path \(\mathbb{P}\) of blanks and blue edges, \(\mathbb{P}=(0^{h},n_{1},\ldots,n_{k},0^{h})\).
(1) In case a 1-cycle \((s_{1}^{h},s_{1}^{t})\) is added into the blank between \(n_{i}\) and \(n_{j}\): if \(j=i+1\), then \(\mathbb{P}^{\prime}=(0^{h},\ldots,n_{i},s_{1}^{h},s_{1}^{t},n_{j},\ldots,n_{k},0^{h})\) is a path; if \(j=i-1\), then \(\mathbb{P}^{\prime}=(0^{h},\ldots,n_{j},s_{1}^{t},s_{1}^{h},n_{i},\ldots,n_{k},0^{h})\) is also a path.
(2) In case two intersecting non-opposite 2-cycles \((t_{1}^{h},t_{1}^{t},t_{1}^{h},t_{1}^{t},t_{1}^{h})\) and \((t_{2}^{h},t_{2}^{t},t_{2}^{h},t_{2}^{t},t_{2}^{h})\) are added into the two blanks between \(n_{i}\), \(n_{j}\) and between \(n_{i^{\prime}}\), \(n_{j^{\prime}}\): we only show the case \(i+1=j<i^{\prime}=j^{\prime}-1\), since all other cases are similar. \(\mathbb{P}^{\prime}=(0^{h},\ldots,\)\(n_{i}\), \(t_{1}^{h}\), \(t_{1}^{t}\), \(t_{2}^{h}\), \(t_{2}^{t}\), \(t_{1}^{h}\), \(n_{j}\),..., \(n_{i^{\prime}}\), \(t_{2}^{h}\), \(t_{2}^{t}\), \(n_{j^{\prime}}\),..., \(n_{k},0^{h})\) is also a path. In particular, if the two intersecting non-opposite 2-cycles are added into one blank, the new path can be obtained by deleting the segment \((n_{j}\),..., \(n_{i^{\prime}})\) from \(\mathbb{P}^{\prime}\).
(3) In case an opposite 2-cycle \((x_{3}^{h},x_{3}^{t},x_{3}^{h},x_{3}^{t},x_{3}^{h})\) is added into the two blanks between \(n_{i}\), \(n_{j}\) and between \(n_{i^{\prime}}\), \(n_{j^{\prime}}\): we only show the case \(i+1=j<i^{\prime}=j^{\prime}-1\), since all other cases are similar. \(\mathbb{P}^{\prime}=(0^{h},\ldots,n_{i},x_{3}^{h},x_{3}^{t},n_{i^{\prime}}\),..., \(n_{j},x_{3}^{t},x_{3}^{h},n_{j^{\prime}},\ldots,n_{k},0^{h})\) is also a path.
Therefore, in \(ACG(G,X)\), all the blanks and blue edges form a path. We can obtain \(\tau\) along this path: if the path goes through a blue edge from \(v_{a}^{h}\) to \(v_{a}^{t}\), then the corresponding occurrence has a "+" sign in \(\tau\); and if the path goes through a blue edge from \(v_{a}^{t}\) to \(v_{a}^{h}\), then it has a "-" sign in \(\tau\). Since the blanks represent adjacencies in both \(\pi\) and \(\tau\), we have \(\mathcal{A}[\pi]=\mathcal{A}[\tau]\). To complete \(\pi\) and \(\tau\), we insert distinct genes between consecutive adjacencies.
Finally, we complete the proof by showing that \(G\) has a Steiner set of size \(k\) if and only if \(\pi\) can be transformed into \(\tau\) by \(2|X|+2k+1\) symmetric reversals.
(\(\Rightarrow\)) Assume that \(G\) has a Steiner set \(S\) of size \(k\), which implies that \(G(X\cup S)\) forms a tree. Thus, in the intersection graph \(IG(\pi,\tau)\), the repeats corresponding to \(X\cup S\) are in a single connected component, which contains a black vertex \(x_{3}\). From our construction, there are also two odd vertices corresponding to each vertex of \(X\), and one even vertex corresponding to each vertex of \(S\); thus \(\pi\) can be transformed into \(\tau\) by \(2|X|+2k+1\) symmetric reversals.
(\(\Leftarrow\)) Assume that \(\pi\) can be transformed into \(\tau\) by \(2|X|+2k+1\) symmetric reversals. Since there are \(2|X|+1\) odd vertices in the intersection graph \(IG(\pi,\tau)\), but only one black vertex, the solution must perform at least \(2|X|+1\) symmetric reversals on these odd vertices, and at most \(2k\) symmetric reversals on \(k\) even vertices. A symmetric reversal on a vertex is applicable if and only if the vertex is in a connected component that contains a black vertex. Thus, the \(2|X|+1\) odd vertices and \(k\) even vertices are in one connected component, which implies that \(G\) has a Steiner set of size \(k\).
We give an example of the above reduction in Figure 12.
## 7 Concluding Remarks
This paper investigates a new model of genome rearrangements named sorting by symmetric reversals, which is based on recent findings from genome comparison. The decision problem, which asks whether a chromosome can be transformed into another by symmetric reversals, is polynomially solvable, but the optimization problem, which seeks the minimum number of symmetric reversals in the transformation between two chromosomes, is NP-hard. It would be interesting to design approximation algorithms for the optimization problem, as well as polynomial-time algorithms for more realistic special cases.
## Acknowledgments
This research is supported by the NSF of China under grants 61872427 and 61732009.
|
2307.06031 | On the Design of Nonlinear MPC and LPVMPC for Obstacle Avoidance in
Autonomous Driving | In this study, we are concerned with autonomous driving missions when a
static obstacle blocks a given reference trajectory. To provide a realistic
control design, we employ a model predictive control (MPC) utilizing nonlinear
state-space dynamic models of a car with linear tire forces, allowing for
optimal path planning and tracking to overtake the obstacle. We provide
solutions with two different methodologies. Firstly, we solve a nonlinear MPC
(NMPC) problem with a nonlinear optimization framework, capable of considering
the nonlinear constraints. Secondly, by introducing scheduling signals, we
embed the nonlinear dynamics in a linear parameter varying (LPV) representation
with adaptive linear constraints for realizing the nonlinear constraints
associated with the obstacle. Consequently, an LPVMPC optimization problem can
be solved efficiently as a quadratic programming (QP) that constitutes the main
novelty of this work. We test the two methods for a challenging obstacle
avoidance task and provide qualitative comparisons. The LPVMPC shows a
significant reduction in terms of the computational burden at the expense of a
slight loss of performance. | Maryam Nezami, Dimitrios S. Karachalios, Georg Schildbach, Hossam S. Abbas | 2023-07-12T09:24:23Z | http://arxiv.org/abs/2307.06031v1 | # On the Design of Nonlinear MPC and LPVMPC for Obstacle Avoidance in Autonomous Driving*
###### Abstract
In this study, we are concerned with autonomous driving missions when a static obstacle blocks a given reference trajectory. To provide a realistic control design, we employ a model predictive control (MPC) utilizing nonlinear state-space dynamic models of a car with linear tire forces, allowing for optimal path planning and tracking to overtake the obstacle. We provide solutions with two different methodologies. Firstly, we solve a nonlinear MPC (NMPC) problem with a nonlinear optimization framework, capable of considering the nonlinear constraints. Secondly, by introducing scheduling signals, we embed the nonlinear dynamics in a linear parameter varying (LPV) representation with adaptive linear constraints for realizing the nonlinear constraints associated with the obstacle. Consequently, an LPVMPC optimization problem can be solved efficiently as a quadratic programming (QP) that constitutes the main novelty of this work. We test the two methods for a challenging obstacle avoidance task and provide qualitative comparisons. The LPVMPC shows a significant reduction in terms of the computational burden at the expense of a slight loss of performance.
## I Introduction
In recent years, there has been a growing interest in developing autonomous driving vehicles. One of the key challenges in autonomous driving is navigating through complex environments and avoiding collisions with obstacles safely. Model predictive control (MPC) is a powerful control algorithm that has been widely used in the area of autonomous driving. MPC is particularly effective in controlling a vehicle because it can incorporate prior knowledge of the system dynamics, environmental information, as well as state and input constraints when computing a control input. Considering these factors, MPC can generate optimized control inputs that satisfy the constraints, resulting in high system performance and safety.
MPC has been widely applied for obstacle avoidance in autonomous vehicles, see, e.g., [1, 2, 3]. It can be utilized to generate optimal trajectories that steer the vehicle away from obstacles in its path while respecting safety constraints. Given that vehicles are safety-critical systems, the use of nonlinear MPC (NMPC) is gaining popularity due to its ability to utilize high-fidelity nonlinear models of vehicle dynamics, thereby enabling more accurate and precise control actions. The work presented in [4] has objectives similar to the current study, but it assumed constant longitudinal speed to solve the NMPC problem with sequential quadratic programs (SQP). In [5], an NMPC algorithm for path tracking has been proposed. This approach incorporates braking control before steering at high speeds. Investigating the effect of obstacle constraints on the algorithm's performance is interesting. This paper aims to keep the vehicle's operation stable while hard constraints are enforced in the optimization problem. In [6], a method for generating safe and efficient driving trajectories for autonomous vehicles using NMPC has been introduced. The numerical solution of the NMPC was obtained using a genetic algorithm strategy, which does not offer a guarantee of convergence.
The computational burden is a significant challenge for applying NMPC, particularly for systems with many states and constraints. Given the computational challenges, there has been increasing attention to linear parameter varying (LPV) modeling methods to embed nonlinear dynamics in a linear setting [7, 8]. Although the application of LPVMPC in autonomous driving has not yet received much attention in the literature, there are promising results reported in recent studies. In [9], a control architecture for lane-keeping has been suggested where a tube-based LPVMPC showed robust performance in lane-keeping. In [10], an online planning solution based on LPVMPC for autonomous racing has been proposed to improve the computational time while preserving the system's performance.
_Contributions_: This paper proposes a novel LPV embedding for the nonlinear vehicle dynamics as a first step toward LPVMPC implementation. Such an LPV embedding could be of interest for convergence and feasibility analysis based on convex optimization tools. As a second step, the computation of the kinematic trajectories from a fixed map is presented first. Then, we propose a linear formulation of the nonlinear constraints associated with the obstacle, which allows a tractable LPVMPC optimization problem using quadratic programming (QP). The proposed LPVMPC scheme integrates path planning and control into one optimization problem, deciding when to initiate the overtaking maneuver while ensuring the vehicle stays within the road boundaries. Finally, to verify the effectiveness of the proposed methods, simulation results are compared to the full nonlinear implementation, and further discussions are given.
_Contents_: Section II presents the nonlinear vehicle model and the nonlinear obstacle avoidance constraint, followed by the introduction of the linear representation of the obstacle constraint and the linear parameter varying modeling. In Section III, the models and constraints from Section II are used to set up the NMPC and the LPVMPC for obstacle avoidance. Section IV shows the implementation of the methods and compares their performance in the proposed obstacle avoidance scenarios. Finally, a few concluding remarks are presented in Section V.
_Notations and definitions:_ The notation \(Q\succ 0\) represents the positive definiteness of a matrix \(Q\). The weighted norm \(\|x\|_{Q}\) is defined as \(\|x\|_{Q}^{2}=x^{\top}Qx\). The function \(\text{diag}(\mathbf{x})\) constructs a diagonal matrix from a vector \(\mathbf{x}\). A halfspace is defined as \(\{x\in\mathbb{R}^{n}|a^{\top}x\leq b\}\). The set of positive integers, including zero, is denoted by \(\mathbb{Z}_{+}\cup\{0\}\).
## II Vehicle Model and Constraints
Consider the following discrete-time nonlinear system
\[z_{k+1}=f(z_{k},u_{k}),\qquad\forall k\in\mathbb{Z}_{+}\cup\{0\}, \tag{1}\]
where \(z_{k}\in\mathbb{R}^{n}\) and \(u_{k}\in\mathbb{R}^{m}\) are the state and input vectors, respectively, at the instant \(k\). The initial condition is \(z_{0}\). The system is subject to the following state and input constraints:
\[z_{k}\in\mathcal{Z}_{k}\quad\text{and}\quad u_{k}\in\mathcal{U}=\{u_{k}\in \mathbb{R}^{m}|G^{u}u_{k}\leq h^{u}\}. \tag{2}\]
Here, \(\mathcal{Z}_{k}\) represents a time-varying set, and its formulation will be discussed in the subsequent section. Within the input constraint, we have \(G^{u}\in\mathbb{R}^{q_{u}\times m}\) and \(h^{u}\in\mathbb{R}^{q_{u}}\). The number of rows, \(q_{u}\), depends on the number of inputs that have upper bounds, lower bounds, both upper and lower bounds, or no bounds at all. In this section, the representation of the vehicle dynamics in the form (1) using a dynamic bicycle model [11, p. 27] is given, as well as the constraints formulation to handle the obstacle avoidance.
### _Dynamic Bicycle Model_
Based on [11, p. 27], the differential equations describing the motion at time \(t\geq 0\) of a vehicle are presented as follows
\[\dot{X}(t) =\upsilon(t)\cos\psi(t)-\nu(t)\sin\psi(t), \tag{3a}\] \[\dot{Y}(t) =\upsilon(t)\sin\psi(t)+\nu(t)\cos\psi(t),\] (3b) \[\dot{\upsilon}(t) =\omega(t)\nu(t)+a(t),\] (3c) \[\dot{\nu}(t) =-\omega(t)\upsilon(t)+\frac{2}{m}(F_{\text{yf}}(t)\cos\delta(t)+F_{\text{yr}}(t)),\] (3d) \[\dot{\psi}(t) =\omega(t),\] (3e) \[\dot{\omega}(t) =\frac{2}{I_{\text{z}}}(l_{\text{f}}F_{\text{yf}}(t)-l_{\text{r}}F_{\text{yr}}(t)), \tag{3f}\]
where \(X\), \(Y\), \(\upsilon\), \(\nu\), \(\psi\) and \(\omega\) denote the global \(X\)-axis coordinate of the center of gravity (CoG), the global \(Y\)-axis coordinate of the CoG, the longitudinal speed in body frame, the lateral speed in body frame, the vehicle yaw angle and the yaw angle rate, respectively. The control inputs of the system are the longitudinal acceleration \(a\) and the steering angle \(\delta\). The vehicle moment of inertia and mass are denoted by \(I_{\text{z}}\) and \(m\), respectively. The lateral forces acting on the front and rear tires are denoted as \(F_{\text{yf}}\) and \(F_{\text{yr}}\), respectively, and calculated as \(F_{\text{yf}}=C_{\alpha\text{f}}\alpha_{\text{f}}\), \(F_{\text{yr}}=C_{\alpha\text{r}}\alpha_{\text{r}}\). The parameters \(C_{\alpha\text{f}}\) and \(C_{\alpha\text{r}}\) represent the cornering stiffness of the front and rear tire, respectively. The slip angle of the front tire is \(\alpha_{\text{f}}\) and is calculated as \(\alpha_{\text{f}}=\delta-(\nu+l_{\text{f}}\omega)/\upsilon\). The rear tire slip angle is \(\alpha_{\text{r}}\) and is calculated as \(\alpha_{\text{r}}=(l_{\text{r}}\omega-\nu)/\upsilon\). The parameters and variables are illustrated in Fig. 1 and in Table I.
To utilize the model in Eq. (3) in an MPC framework, it is necessary to discretize the model. One of the commonly used methods for obtaining the corresponding discrete-time system is the forward Euler method1. Therefore, the vehicle dynamics in Eq. (3) can be written as in Eq. (1), where \(z_{k}=\begin{bmatrix}X_{k}&Y_{k}&\upsilon_{k}&\nu_{k}&\psi_{k}&\omega_{k}\end{bmatrix}^ {\top}\), \(u_{k}=\begin{bmatrix}\delta_{k}&a_{k}\end{bmatrix}^{\top}\) with the sampling time given in Table II.
Footnote 1: Forward Euler: \(\dot{z}(t_{k})\approx\frac{z(t_{k}+t_{s})-z(t_{k})}{t_{s}}\), for \(t_{k}=t_{s}k,\ k=0,1,\dots\)
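For concreteness, a minimal Python sketch of one forward-Euler step of the model in Eq. (3), with the parameter values of Table I hard-coded (the function name and interface are illustrative, not taken from the paper):

```python
import math

C_AF, C_AR = 156e3, 193e3   # cornering stiffness front/rear [N/rad]
L_F, L_R = 1.04, 1.4        # CoG to front/rear axle [m]
I_Z, M = 2937.0, 1919.0     # yaw inertia [kg m^2], mass [kg]

def euler_step(z, u, ts=0.05):
    """One forward-Euler step of the dynamic bicycle model, Eq. (3)."""
    X, Y, vx, vy, psi, omega = z
    delta, a = u
    alpha_f = delta - (vy + L_F * omega) / vx     # front slip angle
    alpha_r = (L_R * omega - vy) / vx             # rear slip angle
    f_yf, f_yr = C_AF * alpha_f, C_AR * alpha_r   # linear tire forces
    dz = (
        vx * math.cos(psi) - vy * math.sin(psi),                  # (3a)
        vx * math.sin(psi) + vy * math.cos(psi),                  # (3b)
        omega * vy + a,                                           # (3c)
        -omega * vx + 2.0 / M * (f_yf * math.cos(delta) + f_yr),  # (3d)
        omega,                                                    # (3e)
        2.0 / I_Z * (L_F * f_yf - L_R * f_yr),                    # (3f)
    )
    return [zi + ts * dzi for zi, dzi in zip(z, dz)]
```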
### _Constraints_
To ensure the vehicle stays within the boundaries of the road, constraints are enforced on the \((X_{k},Y_{k})\) coordinates of the vehicle. One approach [12] involves computing the lateral error of the vehicle's center of gravity, \(e_{k}^{\text{lat}}\), as follows
\[e_{k}^{\text{lat}}=-\sin{(\psi_{k}^{\text{ref}})(X_{k}-X_{k}^{\text{ref}})}+ \cos{(\psi_{k}^{\text{ref}})(Y_{k}-Y_{k}^{\text{ref}})}, \tag{4}\]
where \(X_{k}^{\text{ref}}\), \(Y_{k}^{\text{ref}}\) and \(\psi_{k}^{\text{ref}}\) are the longitudinal position, the lateral position, and the orientation, respectively, of a point on a given reference trajectory at step \(k\). Then, the following constraint ensures that the vehicle's CoG always stays within the boundaries of the road
\[-R_{1,k}\leq e_{k}^{\text{lat}}\leq R_{2,k}, \tag{5}\]
where \(R_{1,k}\) and \(R_{2,k}\) are the road widths on the right and left sides of the reference trajectory at step \(k\).

| **Symbol** | **Variable** | **Unit** |
| --- | --- | --- |
| \(X\) | Global X-axis coordinate of the vehicle's CoG | m |
| \(Y\) | Global Y-axis coordinate of the vehicle's CoG | m |
| \(\upsilon\) | Longitudinal velocity of the vehicle | m/s |
| \(\nu\) | Lateral velocity of the vehicle | m/s |
| \(\psi\) | Yaw angle of the vehicle | rad |
| \(\omega\) | Yaw rate of the vehicle | rad/s |
| \(\delta\) | Steering angle of the front tire | rad |
| \(a\) | Longitudinal acceleration of the vehicle | m/s\(^{2}\) |
| \(\alpha_{\text{f}}\) | Front tire slip angle | rad |
| \(\alpha_{\text{r}}\) | Rear tire slip angle | rad |

| **Symbol** | **Parameter** | **Value/Unit** |
| --- | --- | --- |
| \(C_{\alpha\text{f}}\) | Cornering stiffness front tire | \(156\) kN/rad |
| \(C_{\alpha\text{r}}\) | Cornering stiffness rear tire | \(193\) kN/rad |
| \(l_{\text{f}}\) | Distance CoG to front axle | \(1.04\) m |
| \(l_{\text{r}}\) | Distance CoG to rear axle | \(1.4\) m |
| \(I_{\text{z}}\) | Vehicle yaw inertia | \(2937\) kg m\(^{2}\) |
| \(m\) | Vehicle mass | \(1919\) kg |

TABLE I: Vehicle parameters

Fig. 1: Vehicle dynamics representation

The constraints associated with the road boundaries in Eq. (5) can be computationally expensive due to their nonlinear nature. To address this challenge, an alternative linear constraint is proposed as follows
\[\begin{bmatrix}a_{1,k}&b_{1,k}\\ -a_{2,k}&-b_{2,k}\end{bmatrix}\begin{bmatrix}X_{k}\\ Y_{k}\end{bmatrix}\leq\begin{bmatrix}c_{1,k}\\ -c_{2,k}\end{bmatrix}, \tag{6}\]
where \(-a_{1,k}X_{k}-b_{1,k}Y_{k}\geq-c_{1,k}\) and \(a_{2,k}X_{k}+b_{2,k}Y_{k}\geq c_{2,k}\) are half-spaces defined by the tangent to the road boundary at step \(k\), which ensure the \((X_{k},Y_{k})\) coordinates at step \(k\) to remain within the two half-spaces. Imposing such linear constraints allows more efficient computations in the MPC optimization problem.
An obstacle can be efficiently represented as an ellipse; for simplicity, we consider here circular obstacles. To impose the obstacle constraints in the NMPC, one possible approach is to calculate the Euclidean distance between the \((X_{k},Y_{k})\) coordinates and the center of the obstacle and to ensure that the vehicle's \((X_{k},Y_{k})\) coordinates always remain outside the obstacle, as described below
\[(X_{\text{obs}}-X_{k})^{2}+(Y_{\text{obs}}-Y_{k})^{2}\geq r^{2}, \tag{7}\]
where \((X_{\text{obs}},Y_{\text{obs}})\) indicates the center of the obstacle and \(r\) represents its radius. However, it is usually desired to formulate linear constraints for the obstacle in order to reduce the computational complexity. For this purpose, we propose to replace the nonlinear constraint in Eq. (7) with a linear inequality constraint, see Eq. (8) below, which varies over the MPC prediction horizon according to the tangent to the circular obstacle boundary. Every linear inequality constraint represents a half-space, which defines a safe region to avoid collision with the obstacle. An illustration is depicted in Fig. 2. If a reference point falls inside the obstacle within the MPC horizon, a tangent defining a linear inequality constraint, such as \(h_{1}\) or \(h_{2}\), is calculated at the intersection point (the red dots in Fig. 2). The corresponding half-space includes the safe region to avoid the obstacle. The \((X_{k},Y_{k})\) coordinate of the vehicle should then be on that side, which can be defined by the linear inequality
\[h_{1}:a_{3,k}X_{k}+b_{3,k}Y_{k}\geq c_{3,k}, \tag{8}\]
where \(a_{3,k},b_{3,k}\) and \(c_{3,k}\) are the parameters of the tangent half-space to the obstacle.
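A minimal sketch of how such a tangent half-space can be computed. As a simplifying assumption, the tangent point is taken as the radial projection of the offending reference point onto the obstacle boundary, rather than the exact intersection of the reference path with the circle:

```python
import math

def tangent_halfspace(center, r, p_ref):
    """Parameters (a, b, c) of the safe half-space a*X + b*Y >= c of
    Eq. (8), tangent to a circular obstacle of radius r."""
    cx, cy = center
    dx, dy = p_ref[0] - cx, p_ref[1] - cy
    dist = math.hypot(dx, dy) or 1e-9    # guard the degenerate case
    a, b = dx / dist, dy / dist          # outward unit normal
    qx, qy = cx + r * a, cy + r * b      # tangent point on the circle
    return a, b, a * qx + b * qy         # the obstacle center violates this

# activate the constraint only when a reference point of the horizon
# falls inside the obstacle, as described in the text
a3, b3, c3 = tangent_halfspace((29.48, 17.48), 1.0, (29.0, 17.2))
```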
## III Controller Design
### _The Kinematic Trajectories From a Fixed Map_
To consider a realistic setup for the problem, we assume that only the \((X_{k}^{\text{ref}},Y_{k}^{\text{ref}})\) values of the reference trajectory are available. However, we should compute the corresponding reference values for the remaining four states, \(\upsilon_{k}^{\text{ref}}\), \(\nu_{k}^{\text{ref}}\), \(\psi_{k}^{\text{ref}}\) and \(\omega_{k}^{\text{ref}}\), to track the reference trajectory effectively. For the computation of \(\psi_{k}^{\text{ref}}\), the global \((X_{k}^{\text{ref}},Y_{k}^{\text{ref}})\) can be directly used as follows
\[\psi_{k}^{\text{ref}}=\arctan\left(\frac{Y_{k-1}^{\text{ref}}-Y_{k}^{\text{ ref}}}{X_{k-1}^{\text{ref}}-X_{k}^{\text{ref}}}\right). \tag{9}\]
Next, \(\omega_{k}^{\text{ref}}\) can be calculated as \(\omega_{k}^{\text{ref}}=(\psi_{k}^{\text{ref}}-\psi_{k-1}^{\text{ref}})/t_{s}\), where \(\psi_{k-1}^{\text{ref}}\) is the reference yaw angle which was computed in the previous step by using Eq. (9). To calculate \(\upsilon_{k}^{\text{ref}}\) and \(\nu_{k}^{\text{ref}}\), which represent the longitudinal and lateral speeds in the body frame, the reference points in the body frame are determined as follows
\[\begin{bmatrix}x_{k}^{\text{ref}}\\ y_{k}^{\text{ref}}\end{bmatrix}\!=\!\begin{bmatrix}\cos\left(\psi_{k}^{\text{ ref}}\right)&\sin\left(\psi_{k}^{\text{ref}}\right)\\ -\sin\left(\psi_{k}^{\text{ref}}\right)&\cos\left(\psi_{k}^{\text{ref}}\right) \end{bmatrix}\begin{bmatrix}X_{k}^{\text{ref}}-X_{k-1}^{\text{ref}}\\ Y_{k}^{\text{ref}}-Y_{k-1}^{\text{ref}}\end{bmatrix}. \tag{10}\]
Then, the reference speeds can be readily computed as \(\upsilon_{k}^{\text{ref}}=(x_{k}^{\text{ref}}-x_{k-1}^{\text{ref}})/t_{s}\) and \(\nu_{k}^{\text{ref}}=(y_{k}^{\text{ref}}-y_{k-1}^{\text{ref}})/t_{s}\), where \(x_{k}^{\text{ref}}\) and \(y_{k}^{\text{ref}}\) are computed in Eq. (10).
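The computations of Eqs. (9)-(10) can be sketched as follows; `atan2` replaces `arctan` for quadrant-correct headings, and the rotated increments of Eq. (10) are read directly as body-frame displacements over one sampling interval (both are implementation choices, one consistent reading of the text):

```python
import math

def reference_states(Xs, Ys, ts=0.05):
    """Derive (X, Y, v, nu, psi, omega) references from (X, Y) waypoints."""
    refs, psi_prev = [], None
    for k in range(1, len(Xs)):
        dX, dY = Xs[k] - Xs[k - 1], Ys[k] - Ys[k - 1]
        psi = math.atan2(dY, dX)                       # Eq. (9)
        omega = 0.0 if psi_prev is None else (psi - psi_prev) / ts
        # rotate the global increment into the body frame, Eq. (10)
        x_b = math.cos(psi) * dX + math.sin(psi) * dY
        y_b = -math.sin(psi) * dX + math.cos(psi) * dY
        refs.append((Xs[k], Ys[k], x_b / ts, y_b / ts, psi, omega))
        psi_prev = psi
    return refs
```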
### _Nonlinear MPC_
The constrained nonlinear optimal control for reference tracking w.r.t the decision variable \(U=\{u_{0|k},u_{1|k},\ldots,u_{N-1|k}\}\) is formulated as follows.
**Problem 1** (Nonlinear optimization problem): \[\min_{U} \|z_{N|k}-z_{N|k}^{\text{ref}}\|_{P}^{2}\!+\!\sum_{i=0}^{N-1}\|z_ {i|k}-z_{i|k}^{\text{ref}}\|_{Q}^{2}+\|u_{i|k}\|_{R}^{2}\] (11a) s.t. \[z_{i+1|k}\!=\!z_{i|k}\!+\!t_{s}f(z_{i|k},u_{i|k}),\forall i\!=\! 0,\ldots,N\!-\!1,\] (11b) \[z_{0|k}=z_{k},\] (11c) \[z_{i|k}\in\mathcal{Z}_{i|k},\quad\forall i=0,1,\ldots,N,\] (11d) \[u_{i|k}\in\mathcal{U},\qquad\forall i=0,1,\ldots,N-1,\] (11e)
where \(z_{i|k}^{\text{ref}}=\begin{bmatrix}X_{i|k}^{\text{ref}}&Y_{i|k}^{\text{ref}}& \upsilon_{i|k}^{\text{ref}}&\nu_{i|k}^{\text{ref}}&\psi_{i|k}^{\text{ref}}& \omega_{i|k}^{\text{ref}}\end{bmatrix}^{\top}\) is the reference value for the states at each step, which is computed by Eqs. (9) and (10). The tuning matrices are \(Q\succeq 0\in\mathbb{R}^{6\times 6}\), \(R\succ 0\in\mathbb{R}^{2\times 2}\) and \(P\succeq 0\in\mathbb{R}^{6\times 6}\). The MPC prediction horizon is denoted with \(N\). In the above optimization problem, \(f\) is the nonlinear dynamic bicycle model in Eq. (3), and \(z_{0|k}\) is the system's initial condition. Here, the state constraint \(\mathcal{Z}_{i|k}\) includes the bounds on each state, the road boundary constraint (6), and the obstacle avoidance constraint (7). The input constraint \(\mathcal{U}\) is defined in Eq. (2).
### _Linear Parameter Varying MPC_
By introducing the scheduling signals \(\upsilon(t)\), \(\nu(t)\), \(\delta(t)\), and \(\psi(t)\), that form the scheduling variable vector
Fig. 2: Linear obstacle constraint computation. The center of the obstacle \((X_{\text{obs}},Y_{\text{obs}})\) is marked by the blue dot, and the reference trajectory is represented by the solid line with black dots.
\((\upsilon(t),\ \nu(t),\ \delta(t),\ \psi(t))\); the continuous-time nonlinear dynamics in Eq. (3) can be written equivalently in the LPV representation as
\[\begin{cases}\dot{z}(t)=A_{c}(p(t))z(t)+B_{c}(p(t))u(t),\\ p(t)=(\upsilon(t),\nu(t),\delta(t),\psi(t)),\ t\geq 0.\end{cases} \tag{12}\]
The state vector \(z(t)\) of dimension \(6\) can be defined as \(z(t):=\left[\begin{array}{cc}X(t)&Y(t)&\upsilon(t)&\nu(t)&\psi(t)&\omega(t )\end{array}\right]^{\top}\) with initial conditions \(z_{0}\) and the continuous-time system matrices \(A_{c}\in\mathbb{R}^{6\times 6},\ B_{c}\in\mathbb{R}^{2\times 2}\) as
\[A_{c}(p(t)):=\left[\begin{array}{cccccc}0&0&\cos(\psi(t))&-\sin(\psi(t))&0&0\\ 0&0&\sin(\psi(t))&\cos(\psi(t))&0&0\\ 0&0&0&0&0&\nu(t)\\ 0&0&0&a_{44}(t)&0&a_{46}(t)\\ 0&0&0&0&0&1\\ 0&0&0&a_{64}(t)&0&a_{66}(t)\end{array}\right],\] with \[\beta_{\text{f}}:=\frac{2C_{\alpha\text{f}}}{m},\quad\beta_{\text{r}}:=\frac{2C_{\alpha\text{r}}}{m},\quad\gamma_{\text{f}}:=\frac{2C_{\alpha\text{f}}\ell_{\text{f}}}{I_{\text{z}}},\quad\gamma_{\text{r}}:=\frac{2C_{\alpha\text{r}}\ell_{\text{r}}}{I_{\text{z}}},\] \[a_{44}(t):=-\beta_{\text{f}}\cos(\delta(t))\frac{1}{\upsilon(t)}-\beta_{\text{r}}\frac{1}{\upsilon(t)},\] \[a_{46}(t):=-\upsilon(t)-\beta_{\text{f}}\cos(\delta(t))\frac{\ell_{\text{f}}}{\upsilon(t)}+\beta_{\text{r}}\frac{\ell_{\text{r}}}{\upsilon(t)},\] \[a_{64}(t):=\frac{1}{\upsilon(t)}(\gamma_{\text{r}}-\gamma_{\text{f}}),\quad a_{66}(t):=-\frac{1}{\upsilon(t)}(\gamma_{\text{f}}\ell_{\text{f}}+\gamma_{\text{r}}\ell_{\text{r}}),\]
and
\[B_{c}(p(t)):=\begin{bmatrix}0&0&0&\beta_{f}\cos(\delta(t))&0&\gamma_{f}\\ 0&0&1&0&0&0\end{bmatrix}^{\top}.\]
The discretization with the Euler method and a sampling time \(t_{s}\) results in the discrete-time LPV representation of Eq. (12) as
\[\begin{cases}z_{k+1}=A(p_{k})z_{k}+B(p_{k})u_{k},\\ p_{k}=(v_{k},\nu_{k},\delta_{k},\psi_{k}),\ k\in\mathbb{Z}_{+}\cup\{0\}\end{cases} \tag{13}\]
where \(A(p_{k})=I+t_{s}A_{c}(p_{k}),\ B(p_{k})=t_{s}B_{c}(p_{k})\) are the discrete-time LPV system matrices, and \(I\in\mathbb{R}^{6\times 6}\) is the identity matrix.
**Problem 2**: _QP optimization as \(\texttt{QP}(p_{i|k},z_{k},z_{i|k}^{\text{ref}})\)_
\[\underset{U}{\text{min}} \|z_{N|k}\!-\!z_{N|k}^{\text{ref}}\|_{P}^{2}\!+\!\sum_{i=0}^{N-1} \|z_{i|k}\!-\!z_{i|k}^{\text{ref}}\|_{Q}^{2}+\|u_{i|k}\|_{R}^{2} \tag{14a}\] \[\text{s.t.}\ z_{i+1|k}\!=\!A(p_{i|k})z_{i|k}\!+\!B(p_{i|k})u_{i|k},\ i=\!0,\!\ldots\!,\!N\!-\!1\] (14b) \[\quad z_{0|k}=z_{k},\] (14c) \[\quad z_{i|k}\in\bar{\mathcal{Z}}_{i|k},\quad\forall i=0,1,\ldots,N,\] (14d) \[\quad u_{i|k}\in\mathcal{U},\qquad\forall i=0,1,\ldots,N-1, \tag{14e}\]
_where the reference trajectory \(z_{i|k}^{\text{ref}}\), the tuning matrices \(P\), \(Q\) and \(R\), as well as the decision variable \(U\) are as defined for Problem 1. The initial condition is \(z_{0|k}\), and \(N\) denotes the MPC prediction horizon. The LPV model in Eq. (14b) is defined in Eq. (13). The input constraint \(\mathcal{U}\) is given in Eq. (2). The state constraint \(\bar{\mathcal{Z}}_{i|k}\) in Eq. (14d) includes the bounds on the states, the road boundary constraint in Eq. (6) and the linear obstacle avoidance constraint in Eq. (8). Therefore, the state constraint can be represented in the polytopic form \(\bar{\mathcal{Z}}_{k}=\{z_{k}\in\mathbb{R}^{6}|G_{k}^{z}z_{k}\leq h_{k}^{z}\}\). The steps implementing the above LPVMPC are given in Algorithm 1. A similar algorithm has been proposed for the quasi-LPV case in [13].
```
0: Initial conditions \(z_{k}\), and the road reference \((X_{k},Y_{k}),\ k\in\mathbb{Z}_{+}\).
0: The control input \(u_{k},\ k=1,\ldots\), that drives the nonlinear system to the reference while avoiding obstacles.
1: Initialize for \(k=0\) the scheduling vector \(\hat{p}_{i|0}\) as \[\hat{p}_{i|0}:=(\upsilon_{0},\ \nu_{0},\ \delta_{0},\ \psi_{0})\,,\ i=0,\ldots,N-1\]
2:while\(k=0,1,\ldots\)do
3: Update the state \(z_{i|k}^{\text{ref}}\) as explained in Section III-A.
4: Solve the QP in Problem 2 \[\left[z_{i+1|k},u_{i|k}\right]\leftarrow\texttt{QP}(\hat{p}_{i|k},z_{k},z_{i|k}^ {\text{ref}}),\ i=0,\ldots,N-1\]
5: Update \(\hat{p}_{i|k}:=\left(\hat{\upsilon}_{i|k},\ \hat{\nu}_{i|k},\ \delta_{i|k},\ \hat{\psi}_{i|k}\right),\ i=0,\ldots,N\)
6: Apply \(u_{k}=u_{0|k}\) to the system
7: Measure \(z_{k+1}\)
8: Update \(\hat{p}_{i|k+1}=\hat{p}_{i+1|k},\ i=0,\ldots,N-1\)
9:\(k\gets k+1\)
10:endwhile
```
**Algorithm 1** The QP-based LPVMPC algorithm
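The receding-horizon loop of Algorithm 1 can be sketched as follows; `solve_lpvmpc_qp` (the QP of Problem 2, assembled, e.g., with quadprog or cvxpy) and `plant_step` (the simulated nonlinear vehicle) are hypothetical placeholders:

```python
def run_lpvmpc(z0, refs, N=8, ts=0.05, steps=200):
    """Receding-horizon loop of Algorithm 1 (sketch)."""
    z = list(z0)
    # step 1: freeze the scheduling sequence at the initial state
    # (delta_0 = 0 is assumed here)
    p_hat = [(z[2], z[3], 0.0, z[4])] * N
    for k in range(steps):
        z_ref = refs[k:k + N + 1]                          # step 3
        z_pred, u_pred = solve_lpvmpc_qp(p_hat, z, z_ref)  # step 4
        # step 5: rebuild the scheduling sequence from the prediction
        p_hat = [(zp[2], zp[3], up[0], zp[4])
                 for zp, up in zip(z_pred, u_pred)]
        z = plant_step(z, u_pred[0], ts)                   # steps 6-7
        p_hat = p_hat[1:] + p_hat[-1:]                     # step 8
    return z
```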
## IV Results and Discussions
This section implements and compares the performance of the two MPCs, the NMPC and the LPVMPC. The simulations are performed on a Dell Latitude 5590 laptop with an Intel(R) Core(TM) i7-8650U CPU and 16 GB of RAM. The scenarios are implemented in Matlab [14], utilizing the YALMIP toolbox [15], with an optimality tolerance of \(10^{-4}\). To solve the nonlinear optimization problem, we employ the _IPOPT_ solver [16]. The Matlab _quadprog_ solver is used for solving the quadratic optimization problem.
The simulation scenario is to drive the vehicle in the middle of the right-hand side of a road to follow a reference trajectory using one of the controllers proposed in the previous sections. Then, an obstacle appears on the road, and the vehicle is controlled to overtake this obstacle safely and to return to the reference trajectory in the middle of the right-hand side of the road. Table II presents the parameters utilized in the MPCs, along with the upper and lower limits of the states and input variables.
| **Parameter** | **Value** | **Parameter** | **Value** |
| --- | --- | --- | --- |
| Lower bound on \(X_{k}\) | \(-1\) m | Upper bound on \(X_{k}\) | \(150\) m |
| Lower bound on \(Y_{k}\) | \(-1\) m | Upper bound on \(Y_{k}\) | \(120\) m |
| Lower bound on \(\upsilon_{k}\) | \(1\) m/s | Upper bound on \(\upsilon_{k}\) | \(100\) m/s |
| Upper bound on \(\nu_{k}\) | \(10\) m/s | Upper bound on \(\|\psi_{k}\|\) | \(\pi/2\) rad |
| Upper bound on \(\|\omega_{k}\|\) | \(\pi/4\) rad/s | Sampling time \(t_{s}\) | \(0.05\) s |
| Upper bound on \(\|\delta_{k}\|\) | \(45\pi/180\) rad | Sampling frequency \(f_{s}\) | \(20\) Hz |
| Lower bound on \(a_{k}\) | \(-6\) m/s\(^{2}\) | Upper bound on \(a_{k}\) | \(2\) m/s\(^{2}\) |

TABLE II: MPC parameters

The reference trajectory is picked as a sine wave to mimic the road, and the \((X_{k}^{\text{ref}},Y_{k}^{\text{ref}})\) points on the reference trajectory are intentionally selected to be non-equidistant in space. As a result, the vehicle's speed shall be adjusted based on the distance between successive \((X_{k}^{\text{ref}},Y_{k}^{\text{ref}})\) points. The initial condition of the vehicle is \(z_{0}=\begin{bmatrix}0&0&10&0&0&0\end{bmatrix}^{\top}\). Furthermore, the road on the left-hand side of the vehicle's reference trajectory is assumed to be always \(4\) m wide, while on the right-hand side, it is always only \(1\) m wide, i.e., \(R_{1,k}=1\) m and \(R_{2,k}=4\) m, \(\forall k=0,1,\dots\) in Eq. (5). Also, for both MPCs, \(N=8\), \(R=\text{diag}\big{(}[0.1\quad 0.1]\big{)}\), \(Q=\text{diag}\big{(}[10\quad 10\quad 1\quad 1\quad 1\quad 1]\big{)}\) and \(P=Q\). To keep the vehicle movements smoother, we constrain the rate of change of \(\delta_{k}\) and \(a_{k}\), i.e., \(|\delta_{k}-\delta_{k-1}|\leq 40\pi/180\) rad, \(|a_{k}-a_{k-1}|\leq 1.5\)\(\mathrm{m}/\mathrm{s}^{2}\).
To evaluate the effectiveness of our approach, we compare the performance of the NMPC and the LPVMPC in two problem setups: a reference tracking (RT) problem and an obstacle avoidance problem.
### _Reference Tracking (RT)_
In Fig. 3, a comparison of the application of the NMPC and the LPVMPC for reference tracking is demonstrated. In this figure, the blue line represents the position of the vehicle when controlled by the NMPC, and the red line is the position of the vehicle controlled by the LPVMPC. As illustrated in Fig. 3, the performance of the LPVMPC and the NMPC regarding tracking error is almost identical. By comparing the inputs generated by the controllers in Figs. 4 and 5, it is clear that the steering angles produced by the NMPC and LPVMPC are nearly identical. Similarly, the accelerations are quite similar, although the NMPC acceleration appears to be smoother. Table III displays the computation time for solving an optimization problem to generate the inputs for the NMPC and the LPVMPC at each time instant \(k\). The results confirm the reduction in the computation time by using the LPVMPC.
### _Obstacle Avoidance_
In this subsection, the results of the comparison between the NMPC and the LPVMPC in an obstacle avoidance scenario are presented. The obstacle is assumed to be circular with a radius of \(1\) m, centered at \(X_{\text{obs}}=29.4819\) m, \(Y_{\text{obs}}=17.4753\) m. This means that the obstacle blocks one side of the road in the studied scenario.
The result of applying each of these MPCs to the nonlinear vehicle dynamics Eq. (3) is illustrated in Fig. 6. In this figure, the blue line represents the vehicle's trajectory when controlled by the NMPC, while the red line indicates its trajectory when controlled by the LPVMPC. As Fig. 6 indicates, both the NMPC and LPVMPC are capable of controlling the vehicle to follow the desired reference trajectory and initiate the overtaking maneuver at an appropriate moment. However, the NMPC performs a smoother maneuver. In Figs. 7 and 8, the steering angles and the accelerations generated by each controller are presented. Based on these figures, the NMPC generates smoother control inputs, which explains the smoother movement of the vehicle in Fig. 6. The computation times of both the NMPC and the LPVMPC are presented in Table IV. As the results confirm, using the LPVMPC reduces the computation time significantly.
## V Conclusion

In this paper, the design of an NMPC and an LPVMPC for obstacle avoidance in autonomous driving was considered. We introduced a linear formulation for obstacle avoidance constraints, enabling the proposed LPVMPC scheme to integrate both path planning and control into a single optimization problem. The LPVMPC is comparable to the NMPC in terms of performance with a more efficient computational burden. Finally, better tuning, model generalizations with the LPV embedding, dynamic and more challenging obstacle avoidance scenarios, as well as theoretical analysis such as stability and recursive feasibility guarantees are left for future research, as such analysis can be carried out efficiently with the LPV formulation within the well-defined framework of linear systems.
|
2305.06890 | Abelian and non-abelian quantum two-block codes | We discuss quantum two-block codes, a large class of CSS codes constructed
from two commuting square matrices. Interesting families of such codes are
generalized-bicycle (GB) codes and two-block group-algebra (2BGA) codes, where
a cyclic group is replaced with an arbitrary finite group, generally
non-abelian. We present code construction and give several expressions for code
dimension, applicable depending on whether the constituent group is cyclic,
abelian, or non-abelian. This gives a simple criterion for an essentially
non-abelian 2BGA code guaranteed not to be permutation-equivalent to such a
code based on an abelian group. We also give a lower bound on the distance
which, in particular, applies to the case when a 2BGA code reduces to a
hypergraph-product code constructed from a pair of classical group codes. | Renyu Wang, Hsiang-Ku Lin, Leonid P. Pryadko | 2023-05-11T15:28:02Z | http://arxiv.org/abs/2305.06890v2 | # Abelian and non-abelian quantum two-block codes
###### Abstract
We discuss quantum two-block codes, a large class of CSS codes constructed from two commuting square matrices. Interesting families of such codes are generalized-bicycle (GB) codes and two-block group-algebra (2BGA) codes, where a cyclic group is replaced with an arbitrary finite group, generally non-abelian. We present code construction and give several expressions for code dimension, applicable depending on whether the constituent group is cyclic, abelian, or non-abelian. This gives a simple criterion for an essentially non-abelian 2BGA code guaranteed not to be permutation-equivalent to such a code based on an abelian group. We also give a lower bound on the distance which, in particular, applies to the case when a 2BGA code reduces to a hypergraph-product code constructed from a pair of classical group codes.
CSS codes, QECC, quantum LDPC codes, group algebra codes, group codes, two-block codes, 2BGA codes, GB codes, generalized bicycle codes
## I Introduction
Generally, any family of quantum low-density parity-check (LDPC) codes with stabilizer generators of bounded weight and distance scaling logarithmically or faster with the block length has a finite fault-tolerant threshold to scalable error correction [1, 2, 3, 4]. Recently, there has been significant progress in constructing such codes [5, 6, 7, 8, 9, 10, 11]. Unfortunately, many of the proposed "product" constructions, e.g., in Refs. [8, 9, 10, 11, 12, 13, 14, 15], tend to give rather long codes, and the existing lower bound on the generator weight required for asymptotically good quantum LDPC codes with finite rates and linear distance scaling is also very large [11].
In comparison, much shorter quantum codes, including quantum LDPC codes with bounded generator weights, can be constructed with a two-block ansatz [16], a construction based on a pair of square commuting matrices. It gives a family of Calderbank-Shor-Steane (CSS) codes [17, 18] with relatively small block lengths, twice the size of the original matrices. The commutativity can be achieved, e.g., by taking a pair of circulant matrices, which gives generalized bicycle (GB) codes [5, 16, 19, 20], or using an arbitrary finite abelian group instead of the cyclic group [21]. An important advantage of two-block quantum LDPC codes is an overcomplete set of minimum-weight stabilizer generators, which may improve their performance in the fault-tolerant setting. Finally, GB and more general two-block codes include certain families of hypergraph-product (HP) codes [12] as a subclass, which guarantees the existence of finite-rate codes with \(\mathcal{O}(\sqrt{n})\) distance scaling in this family, but they also include codes with linear distances [20]. In comparison, the distance of an HP code with the block length \(n\) cannot exceed \(\sqrt{n}\).
In this paper we discuss general quantum two-block codes. We introduce a family of _two-block group algebra_ (2BGA) codes based on an arbitrary finite group, abelian or non-abelian. Just like GB codes can be seen as CSS codes constructed from a pair of index-two quasicyclic codes, 2BGA codes are the smallest lifted-product (LP) codes [9, 11].
We give a formal expression based on idempotent matrices for the dimension of general two-block codes. The dimension is necessarily even for such codes based on an abelian group algebra [21] (which includes GB codes), as well as for more general quantum two-block codes identified by certain additional commutativity conditions. We show that this constraint is automatically satisfied for 2BGA codes based on a semi-simple group algebra; the dimension of such codes is necessarily even. This gives a simple sufficient criterion for an essentially non-abelian 2BGA code which cannot be reduced to such a code based on an abelian group. We also discuss the distance of 2BGA codes and, for a family of such codes, give a lower bound in terms of distances of classical group algebra codes. In particular, this bound applies in the case where a group algebra code reduces to an HP code.
The structure of the rest of the paper is as follows. We introduce necessary notations in Section II. Our main results are given in Sec. III, followed by conclusions in Sec. IV.
## II Preliminaries
A classical code \(\mathcal{C}\) linear over a finite field \(F\equiv\mathbb{F}_{q}\), where \(q>1\) is a power of a prime \(p\), the characteristic of the field, with parameters \([n,k,d]_{q}\), is a \(k\) dimensional vector space in \(F^{n}\), the set of all length-\(n\) strings using elements of \(F\) as characters. Such a code can be specified in terms of a generator matrix \(G\) whose rows are vectors from \(\mathcal{C}\) forming a complete basis, \(\operatorname{rank}G=k\), or its parity-check matrix \(H\) whose rows are orthogonal to any vector in \(\mathcal{C}\), with \(\operatorname{rank}H=n-k\),
\[\mathcal{C}=\mathcal{C}_{G}\equiv\mathcal{C}_{H}^{\perp},\quad GH^{T}=0. \tag{1}\]
The codes \(\mathcal{C}_{G}\) and \(\mathcal{C}_{H}\) generated by rows of \(G\) and \(H\), respectively, are called mutually dual. The _support_ of a vector \(\boldsymbol{x}\equiv(x_{1},x_{2},\ldots,x_{n})\in F^{n}\) is the set of indices \(i\) corresponding to non-zero components \(x_{i}\neq 0\), and its Hamming weight is the size of the support. The distance \(d\) of a linear code \(\mathcal{C}\) is the smallest Hamming weight of a non-zero vector in \(\mathcal{C}\); by convention, \(d=\infty\) for a trivial code with \(k=0\).
A very important class of codes are _cyclic_ linear codes [22], invariant under the group \(C_{n}\) of cyclic permutations. A generalization to an arbitrary group are _group codes_, or group algebra codes [23, 24, 25, 26].
Given a finite field \(F\) and a finite group \(G\), the group algebra \(F[G]\) is the linear space of all formal sums
\[x\equiv\sum_{g\in G}x_{g}g,\quad x_{g}\in F, \tag{2}\]
where group elements \(g\in G\) serve as basis vectors, equipped with the product naturally associated with the group operation,
\[ab=\sum_{g\in G}\biggl{(}\sum_{h\in G}a_{h}b_{h^{-1}g}\biggr{)}g,\quad a,b\in F [G]. \tag{3}\]
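As an illustration, the product (3) can be implemented directly on sparse group-algebra elements; a minimal Python sketch for \(\mathbb{F}_{p}[G]\), with the group supplied as a multiplication function on hashable element labels:

```python
def ga_mul(a, b, mul, p=2):
    """Product in F_p[G], Eq. (3): algebra elements are dicts mapping
    group elements to nonzero coefficients in F_p."""
    out = {}
    for g1, c1 in a.items():
        for g2, c2 in b.items():
            g = mul(g1, g2)                        # sums over g1*g2 = g,
            out[g] = (out.get(g, 0) + c1 * c2) % p # i.e., over h, h^-1 g
    return {g: c for g, c in out.items() if c}
```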
Similar to cyclic codes, a left (right) group algebra code is isomorphic to a left (right) _ideal_\(J\) in \(F[G]\), defined as a linear space such that for any \(x\in J\) and any \(r\in F[G]\), \(rx\in J\) for the left ideal (\(xr\in J\) for the right ideal).
The structure of ideals in \(F[G]\) is particularly simple when characteristic of the field and the group size are mutually prime, \(\gcd(p,|G|)=1\). In this case, according to Maschke's theorem, the group algebra is semisimple, and any ideal is a principal ideal generated by an idempotent element, e.g., \(J=e_{J}\cdot F[G]\) for a right ideal \(J=J_{R}\), with an idempotent \(e_{J}^{2}=e_{J}\in J\) (see, e.g., Corollary 2.2.5 in Ref. [27]).
A quantum Calderbank-Shor-Steane (CSS) code [17, 18]\(\mathcal{Q}\equiv\mathrm{CSS}(H_{X},H_{Z})\) can be defined as a direct sum of two quotient spaces, \(\mathcal{Q}\cong\mathcal{Q}_{X}\oplus\mathcal{Q}_{Z}\),
\[\mathcal{Q}_{X}=\mathcal{C}_{H_{Z}}^{\perp}/\mathcal{C}_{H_{X}},\quad\mathcal{ Q}_{Z}=\mathcal{C}_{H_{X}}^{\perp}/\mathcal{C}_{H_{Z}}. \tag{4}\]
For example, elements of \(\mathcal{Q}_{X}\) are equivalence classes of vectors in \(\mathcal{C}_{H_{Z}}^{\perp}\), where two vectors are equivalent, \(x\simeq y\), if they differ by an element of \(\mathcal{C}_{H_{X}}\), \(x-y\in\mathcal{C}_{H_{X}}\). Such a pair of equivalent vectors are called mutually _degenerate_, while any vector in the equivalence class of the zero vector is called _trivial_. The CSS _generator matrices_\(H_{X}\) and \(H_{Z}\) have equal number of columns, \(n\), and orthogonal rows, \(H_{X}H_{Z}^{T}=0\). The parameters of the code (4) are denoted \([[n,k,d]]_{q}\), where
\[k=n-\mathrm{rank}\,H_{X}-\mathrm{rank}\,H_{Z} \tag{5}\]
is the common dimension of the quotient spaces \(\mathcal{Q}_{X}\) and \(\mathcal{Q}_{Z}\), and \(d\equiv\min(d_{X},d_{Z})\) is the minimum weight of any non-trivial vector in \(\mathcal{Q}\), e.g.,
\[d_{Z}\equiv d(\mathcal{Q}_{Z})=\min_{\mathbf{u}\in C_{H_{X}}^{\perp}\backslash C _{H_{Z}}}\mathrm{wgt}(\mathbf{u}). \tag{6}\]
Physically, a quantum code \(\mathrm{CSS}(H_{X},H_{Z})\) operates in a Hilbert space \(\mathcal{H}_{q}^{\otimes n}\) associated with \(n\) quantum-mechanical systems of dimension \(q\) each, Galois-qudits [28], and a well defined basis of \(X\) and \(Z\) operators acting in \(\mathcal{H}_{q}^{\otimes n}\)[29]. Vectors of the codes \(\mathcal{C}_{H_{X}}\) and \(\mathcal{C}_{H_{Z}}\) correspond to \(X\)- and \(Z\)- operators in the stabilizer group whose generators must be measured frequently during the operation of the code; generating matrices \(H_{X}\) and \(H_{Z}\) with smaller row weights result in codes which are easier to implement in practice. Orthogonality condition \(H_{X}H_{Z}^{T}=0\) ensures that the stabilizer group is abelian. Non-trivial vectors in \(\mathcal{Q}_{Z}\) and \(\mathcal{Q}_{X}\) correspond to \(Z\) and \(X\) logical operators, respectively. Codes with larger distances have logical operators which involve more qudits; such codes typically give better protection against errors.
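For later reference, the dimension formula (5) is straightforward to evaluate over \(\mathbb{F}_{2}\); a minimal sketch with parity-check rows stored as integer bitmasks:

```python
def gf2_rank(rows):
    """Rank of a binary matrix over GF(2); rows are int bitmasks."""
    rank, rows = 0, list(rows)
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            lsb = pivot & -pivot   # lowest set bit of the pivot
            rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

def css_dimension(h_x, h_z, n):
    """k = n - rank(H_X) - rank(H_Z), Eq. (5), for q = 2."""
    return n - gf2_rank(h_x) - gf2_rank(h_z)
```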
## III Two-block codes
In this work we discuss two-block CSS codes with generator matrices in the form [16]
\[H_{X}=(A,B),\quad H_{Z}^{T}=\begin{pmatrix}B\\ -A\end{pmatrix}, \tag{7}\]
where \(A\) and \(B\) are square commuting \(\ell\times\ell\) matrices with elements in \(F\). The commutativity guarantees the CSS orthogonality condition, \(H_{X}H_{Z}^{T}=0\).
**Code dimension**: Given a square size-\(\ell\) matrix \(A\) with elements in a finite field \(F\), consider square idempotent matrices \(E_{A}\) and \(F_{A}\) of the same size and rank such that
\[E_{A}^{2}=E_{A},\quad F_{A}^{2}=F_{A},\quad E_{A}A=AF_{A}=A. \tag{8}\]
While these matrices are not unique, they can always be constructed from the Smith normal form decomposition \(A=U_{A}D_{A}V_{A}\), where \(U_{A}\) and \(V_{A}\) are square invertible matrices, and \(D_{A}=\mathrm{diag}(1,\ldots,1,0,\ldots,0)\) has exactly \(\mathrm{rank}\,A\) non-zero elements along the diagonal. Namely, we may choose
\[E_{A}\equiv U_{A}D_{A}U_{A}^{-1},\quad F_{A}\equiv V_{A}^{-1}D_{A}V_{A}. \tag{9}\]
With idempotent matrices (8), it is easy to express the ranks of block matrices (7). Indeed, row and column transformations give (this is a simplified version of more general expressions in Refs. [30, 31])
\[\mathrm{rank}\,H_{X} = \mathrm{rank}\left(\begin{array}{cc}A&E_{A}B\\ 0&(I-E_{A})B\end{array}\right) \tag{10}\] \[= \mathrm{rank}(A)+\mathrm{rank}(I-E_{A})B,\]
and a similar result for the rank of the other matrix,
\[\mathrm{rank}\,H_{Z} = \mathrm{rank}\,A+\mathrm{rank}\,B(I-F_{A}). \tag{11}\]
In general, \(\mathrm{rank}\,H_{Z}\neq\mathrm{rank}\,H_{X}\). However, the equality can be achieved with some additional commutativity conditions. For example, if both \(E_{A}\) and \(F_{A}\) commute with \(B\), the second terms in the r.h.s. of Eqs. (10) and (11) are both equal \(\mathrm{rank}\,B-\mathrm{rank}\,AB\). This gives
**Statement 1**: _Suppose that idempotents \(E_{A}\) and \(F_{A}\) in Eq. (8) commute with \(B\) in Eq. (7). Then,_
\[\mathrm{rank}\,H_{X}=\mathrm{rank}\,H_{Z},\quad\text{and}\quad k=2(\ell- \mathrm{rank}\,H_{X}). \tag{12}\]
Evidently, Eq. (12) also remains true after interchanging the blocks, e.g., if idempotents \(E_{B}\) and \(F_{B}\) commute with \(A\).
In particular, the conditions of Statement 1 are satisfied if \(A\) has a square-free minimal polynomial. Indeed, in such a case \(A\) can be diagonalized, \(A=S^{-1}\Lambda S\), where square matrix \(S\) over \(F\) is invertible, and the idempotents can be constructed as \(E_{A}=F_{A}=S^{-1}DS\), with \(D\) a diagonal matrix with elements equal to zero or one according to whether the corresponding element of \(\Lambda\) is zero or not. It is easy to check that thus constructed \(E_{A}=F_{A}\) necessarily commute with \(B\) if \(A\) does.
**Construction from classical group algebra codes**: To get a pair of commuting matrices, we use an ansatz introduced by Panteleev and Kalachev [9, 11]. Namely, given an element \(x\in F[G]\) of the group algebra with the group size \(\ell\equiv|G|\),
the \(\ell\times\ell\) square matrices \(\mathrm{L}(x)\) and \(\mathrm{R}(x)\), respectively, are defined by the left and right action on group elements,
\[[\mathrm{L}(x)]_{\alpha,\beta}\equiv\sum_{g\in G}x_{g}\delta_{\alpha,g\beta}, \quad[\mathrm{R}(x)]_{\alpha,\beta}\equiv\sum_{g\in G}x_{g}\delta_{\alpha,\beta g}, \tag{13}\]
where group elements \(\alpha,\beta\in G\) are used to index rows and columns, cf. Eq. (2), and \(\delta_{\alpha,\beta}=1\) if \(\alpha=\beta\) and \(0\) otherwise is the Kronecker delta. Row and column weights of \(\mathrm{L}(x)\) and \(\mathrm{R}(x)\) are equal to \(\mathrm{wgt}(x)\), the Hamming weight of the vector in \(F^{\ell}\) with components \(x_{\alpha}\), \(\alpha\in G\), which makes it easy to construct sparse matrices. Furthermore, for any \(a,b\in F[G]\), \(\mathrm{L}(a)\,\mathrm{L}(b)=\mathrm{L}(ab)\), \(\mathrm{R}(a)\,\mathrm{R}(b)=\mathrm{R}(ba)\), while a left and a right matrices always commute,
\[\mathrm{L}(a)\,\mathrm{R}(b)=\mathrm{R}(b)\,\mathrm{L}(a). \tag{14}\]
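A minimal sketch of the regular-representation matrices (13) together with a check of the commutativity (14); the group is again supplied as a multiplication function over an ordered list of its elements:

```python
import numpy as np

def regular_reps(elems, mul, x):
    """L(x) and R(x) of Eq. (13) for x in F[G] given as a dict
    {group element: coefficient}; rows/columns are indexed by `elems`."""
    idx = {g: i for i, g in enumerate(elems)}
    n = len(elems)
    L, R = np.zeros((n, n), dtype=int), np.zeros((n, n), dtype=int)
    for g, c in x.items():
        for beta in elems:
            L[idx[mul(g, beta)], idx[beta]] += c   # alpha = g * beta
            R[idx[mul(beta, g)], idx[beta]] += c   # alpha = beta * g
    return L, R

# Eq. (14): for any a, b in F_p[G], L(a) R(b) == R(b) L(a), e.g.
#   La, _ = regular_reps(elems, mul, a); _, Rb = regular_reps(elems, mul, b)
#   assert ((La @ Rb - Rb @ La) % p == 0).all()
```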
With a group algebra element entirely supported on a subgroup, \(x_{g}\neq 0\) only if \(g\in K<G\), one can also form smaller matrices, e.g., \([\mathrm{L}_{K}(x)]_{\alpha,\beta}\) of size \(|K|\times|K|\), with indices restricted to the same subgroup, \(\alpha,\beta\in K\). If we introduce the _support group_[32]
\[G_{x}\equiv\langle\{g\in G:x_{g}\neq 0\}\rangle \tag{15}\]
generated by elements of \(G\) in the support of \(x\), it is evident that matrices \(\mathrm{L}(x)\) and \(\mathrm{R}(x)\) are block-diagonal (up to a permutation), with square blocks of equal size \(|G_{x}|\), corresponding to, respectively, right and left cosets of the support group \(G_{x}\) in \(G\).
With these definitions, the _two-block group algebra_ (2BGA) codes, the CSS codes (7) with \(A\equiv\mathrm{L}(a)\) and \(B\equiv\mathrm{R}(b)\) given by Eq. (13), are the smallest lifted-product codes [11]\(\mathrm{LP}[a,b]\), where group algebra elements \(a,b\) are treated as \(1\times 1\) matrices over \(F[G]\). Previously considered special cases are GB codes [5, 16, 20], with \(G\) a cyclic group, and _abelian_ 2BGA codes [21], with \(G\) an abelian group.
The structure of matrices \(A\) and \(B\) is such that the row labeled by a group element \(x\in G\) is associated, respectively, with the block supported in the right coset \(G_{a}x\) and that in the left coset \(xG_{b}\). When the product of the two support groups (the double coset associated with the group identity element \(1\in G\)) does not contain all group elements, \(G_{a}G_{b}\subsetneq G\), the code \(\mathrm{LP}[a,b]\) is decomposed into smaller mutually disconnected subcodes associated with different double cosets in \(G_{a}\backslash G/G_{b}\). The individual _double-coset_ subcodes are not necessarily equivalent to each other; it is well known that even the sizes of double cosets may differ.
The case of GB codes [5, 16, 20] is recovered when \(G\) is a cyclic group,
\[C_{\ell}\equiv\langle x\rangle\equiv\{1,x,x^{2},\ldots,x^{\ell-1}\},\quad x^{ \ell}=1.\]
There is an obvious one-to-one map between the group algebra \(F[C_{\ell}]\) and the ring of modular polynomials \(F[x]/(x^{\ell}-1)\). Then, a 2BGA code \(\mathrm{LP}[a,b]\) is also a generalized-bicycle code \(\mathrm{GB}[a(x),b(x)]\) specified by a pair of polynomials \(a(x)\), \(b(x)\in F[x]/(x^{\ell}-1)\), and the square blocks in Eq. (7) are just the circulant matrices \(A=a(P)\) and \(B=b(P)\), where
\[P=\begin{pmatrix}0&\ldots&0&1\\ 1&&0\\ &\ddots&&\vdots\\ &&1&0\end{pmatrix} \tag{16}\]
is an \(\ell\times\ell\) cyclic permutation matrix. A simple expression for the dimension of a code \(\mathrm{GB}[a,b]\) was given in Ref. [5]. In this case \(\mathrm{rank}\,H_{X}=\mathrm{rank}\,H_{Z}=\ell-\deg h(x)\), and
\[k=2\deg h(x),\quad h(x)\equiv\gcd\left(a(x),b(x),x^{\ell}-1\right). \tag{17}\]
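A minimal sketch of Eq. (17) for binary GB codes, with \(\mathbb{F}_{2}[x]\) polynomials stored as integer bitmasks (bit \(i\) holds the coefficient of \(x^{i}\)):

```python
def gf2poly_mod(a, b):
    """a mod b in GF(2)[x]; polynomials are int bitmasks, b != 0."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def gf2poly_gcd(a, b):
    while b:
        a, b = b, gf2poly_mod(a, b)
    return a

def gb_dimension(a, b, ell):
    """k = 2 deg gcd(a(x), b(x), x^ell - 1) over GF(2), Eq. (17)."""
    h = gf2poly_gcd(gf2poly_gcd(a, b), (1 << ell) | 1)   # x^ell + 1
    return 2 * (h.bit_length() - 1)

# example: a(x) = 1 + x, b(x) = 1 + x^2, ell = 4  ->  h(x) = 1 + x, k = 2
print(gb_dimension(0b11, 0b101, 4))
```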
Evidently, Eq. (12) is satisfied, as it also is when the group \(G\) is abelian [21], or when one of the subgroups, \(G_{a}\) or \(G_{b}\) [see Eq. (15)], is cyclic. In the latter case the GB code is equivalent to a quasi-cyclic LP code [9].
More generally, consider _semi-abelian_ 2BGA codes satisfying the conditions of Statement 1. Namely, take a code \(\mathrm{LP}[a,b]\) where, e.g., \(a\in F[G]\) is such that the corresponding right \(a\cdot F[G]\) and left \(F[G]\cdot a\) ideals are generated by idempotents \(e_{a}\) and \(f_{a}\), \(e_{a}a=af_{a}=a\), and choose \(E_{A}=\mathrm{L}(e_{a})\) and \(F_{A}=\mathrm{L}(f_{a})\) to guarantee their commutativity with \(B=\mathrm{R}(b)\). In particular, a semi-abelian 2BGA code is always obtained if the group algebra \(F[G]\) is semisimple. Alternatively, we can select \(a\) so that the corresponding subgroup \(G_{a}\) in Eq. (15) has the order mutually prime with the field characteristic \(p\), \(\gcd(p,|G_{a}|)=1\), so that only the subalgebra \(F[G_{a}]\) be semi-simple. Then, the idempotents \(e_{a}\in F[G_{a}]\) and \(f_{a}\in F[G_{a}]\) also generate the right and left ideals of \(a\) in \(F[G]\), and, again, we can choose \(E_{A}=\mathrm{L}(e_{a})\) and \(F_{A}=\mathrm{L}(f_{a})\), so that the conditions of Statement 1 be satisfied.
To summarize, any abelian 2BGA code (including any GB code) or any semi-abelian 2BGA code, e.g., based on a semisimple group algebra, has an even dimension, see Eq. (12). Thus, any 2BGA code with an odd dimension \(k\) is _essentially non-abelian_, i.e., it is not permutation-equivalent to an abelian or a semi-abelian 2BGA code.
**Example 2**.: _Consider the alternating group \(A_{4}\), also known as the rotation group of a regular tetrahedron,_
\[T=\langle x,y|x^{3}=(yx)^{3}=y^{2}=1\rangle,\quad|T|=12,\]
_and the binary algebra \(\mathbb{F}_{2}[T]\). Select \(a=1+x+y+x^{-1}yx\) and \(b=1+x+y+yx\) to get an essentially non-abelian 2BGA code \(\mathrm{LP}[a,b]\) with parameters \([[24,5,3]]_{2}\)._
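The stated parameters can be checked directly from the definitions: build \(A=\mathrm{L}(a)\) and \(B=\mathrm{R}(b)\) over \(\mathbb{F}_{2}\), assemble \(H_{X}=(A|B)\) and \(H_{Z}=(B^{T}|A^{T})\), and use \(k=n-\operatorname{rank}H_{X}-\operatorname{rank}H_{Z}\). The sketch below is our own illustration, not code from the paper; it realizes the generators \(x,y\) as permutations of four points (a different realization of the presentation differs by a group automorphism, which leaves \(k\) unchanged) and should return \(k=5\):

```python
# Sketch: dimension of the 2BGA code LP[a,b] of Example 2 (expect k = 5).
# G = A4, realized as even permutations of {0,1,2,3}.
def mul(p, q):                                   # composition: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(4))

def inv(p):
    r = [0] * 4
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

e, x, y = (0, 1, 2, 3), (1, 2, 0, 3), (1, 0, 3, 2)   # x^3 = (yx)^3 = y^2 = 1

G = {e}                                          # close <x, y> under products
while True:
    new = {mul(g, h) for g in G for h in (x, y)} - G
    if not new:
        break
    G |= new
G = sorted(G)                                    # |G| = 12
idx = {g: i for i, g in enumerate(G)}

a = [e, x, y, mul(mul(inv(x), y), x)]            # support of a = 1+x+y+x^{-1}yx
b = [e, x, y, mul(y, x)]                         # support of b = 1+x+y+yx

def rep(supp, side):                             # L(.) or R(.) over F_2
    M = [[0] * 12 for _ in range(12)]
    for s in supp:
        for j, g in enumerate(G):
            t = mul(s, g) if side == "L" else mul(g, s)
            M[idx[t]][j] ^= 1
    return M

A, B = rep(a, "L"), rep(b, "R")
HX = [A[i] + B[i] for i in range(12)]            # H_X = (A | B)
BT = [list(c) for c in zip(*B)]
AT = [list(c) for c in zip(*A)]
HZ = [BT[i] + AT[i] for i in range(12)]          # H_Z = (B^T | A^T)

def rank_gf2(rows):                              # Gaussian elimination over GF(2)
    basis = {}                                   # leading bit -> reduced row
    for r in rows:
        v = sum(bit << j for j, bit in enumerate(r))
        while v:
            lead = v.bit_length() - 1
            if lead in basis:
                v ^= basis[lead]
            else:
                basis[lead] = v
                break
    return len(basis)

print(24 - rank_gf2(HX) - rank_gf2(HZ))          # k = n - rk H_X - rk H_Z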
**Distances of GB codes**: Several existence bounds for unrestricted GB codes (without the limit on row weight) are given in Ref. [20]. In particular, with \(g(x)\equiv(x^{\ell}-1)/h(x)\) irreducible, cf. Eq. (17), a counting argument in the style of Gilbert-Varshamov bound proves the existence of GB codes with \(k=2\) and linear distance scaling (Example 8 in Ref. [20]), and rate-\(1/4\) GB codes with \(d\geq\sqrt{\ell}\) related to quadratic-residue cyclic codes (Example 9 in Ref. [20]). This should be contrasted with, e.g., HP codes whose distances satisfy the upper bound \(d<\sqrt{n}\).
In practice, we are more interested in quantum LDPC codes, with weight of stabilizer generators not exceeding some fixed \(w\). Unfortunately, the regular structure of GB codes is a disadvantage in this case, as any such code is equivalent to a code local on a hypercubic lattice \(\mathbb{Z}^{D}\), with \(D\leq w-1\), or \(D\leq w-2\) if \(\ell\) is prime (Statement 13 from Ref. [20]). With general results from Refs. [33, 34], this gives upper bounds
\[d\leq\mathcal{O}(n^{1-1/D})\text{ and }kd^{2/(D-1)}\leq\mathcal{O}(n). \tag{18}\]
Numerically, for a family of GB codes with \(k=2\), the distance scaling is consistent with \(d=A(w)n^{1/2}+B(w)\), with \(A(w)\) an increasing function of \(w\), although \(d=\mathcal{O}(n^{\alpha})\) with \(\alpha=1/2+\epsilon\) for some small \(\epsilon>0\) cannot be excluded [20].
**Lower distance bounds for 2BGA codes**: Best known are the usual CSS bounds,
\[d_{Z}\geq d(C_{H_{X}}^{\perp}),\quad d_{X}\geq d(C_{H_{Z}}^{\perp}). \tag{19}\]
However, since the rows of \(H_{X}\) and \(H_{Z}\) are mutually orthogonal, we have, e.g., \(d(C_{H_{X}}^{\perp})\leq w_{Z}\), the minimum row weight of the matrix \(H_{Z}\). Since our main interest is in highly-degenerate quantum LDPC codes with bounded stabilizer weights and diverging distances, the CSS bounds (19) are not very useful.
Consider the special case of a 2BGA code \(\mathrm{LP}[a,b]\), with \(a,b\in F[G]\) such that the intersection subgroup \(N\equiv G_{a}\cap G_{b}\) is central in \(G\). In such a case, if we choose two transversal sets of coset representatives, \(\mathcal{A}\) from \(G_{a}/N\) and \(\mathcal{B}\) from \(G_{b}/N\), any element of a double coset \(G_{a}xG_{b}\) can be written as \(\alpha x\beta\,\gamma\), with \(\alpha\in\mathcal{A}\), \(\beta\in\mathcal{B}\), and \(\gamma\in N\). This gives matrices \(A\) and \(B\) with individual square blocks of size \(c\equiv|N|\) given by, respectively, \(\mathrm{L}_{N}(a_{\alpha,\alpha^{\prime}})\) and \(\mathrm{R}_{N}(b_{\beta,\beta^{\prime}})\), with matrix elements \(a_{\alpha,\alpha^{\prime}},b_{\beta,\beta^{\prime}}\in F[N]\) defined by the action of the two group algebra elements on the corresponding cosets, and indices \(\alpha,\alpha^{\prime}\in\mathcal{A}\) and \(\beta,\beta^{\prime}\in\mathcal{B}\). Explicitly, e.g., given the expansion (2) of \(a\in F[G]\), \(a_{\alpha^{\prime},\alpha}=\sum_{\gamma\in N}a_{\alpha^{\prime}\alpha^{-1}\gamma}\,\gamma\). This gives exactly the structure of a square-matrix _quasi-abelian_ LP code [9] over the group algebra \(F[N]\), and also the following lower bound:
**Statement 3** (Version of Theorem 5 from Ref. [16]).: _Given any two group algebra elements \(a,b\in F[G]\) such that the intersection subgroup \(N\equiv G_{a}\cap G_{b}\) of size \(c\equiv|N|\) is central in \(G\), consider classical codes with parity check matrices \(A=\mathrm{L}(a)\) and \(B=\mathrm{R}(b)\). Let \(d_{0}=\min\left\{d(C_{A}^{\perp}),d(C_{B}^{\perp})\right\}\) be the minimum of their distances. Then, the distance \(d_{Z}\) of the 2BGA code \(\mathrm{LP}[a,b]\) satisfies the inequality \(d_{Z}\geq\lceil d_{0}/c\rceil\)._
In fact, this lower bound becomes exact when the intersection subgroup is trivial, \(N=\{1\}\). In this case each double-coset subcode of the 2BGA code \(\mathrm{LP}[a,b]\) is equivalent to a hypergraph-product code constructed from classical codes with parity-check matrices \(\mathrm{L}_{G_{a}}(a)\) and \(\mathrm{R}_{G_{b}}(b)\) over the corresponding subgroups, the individual blocks of \(\mathrm{L}(a)\) and \(\mathrm{R}(b)\).
It is known [26] that group algebra codes include good codes with finite rates and finite relative distances. This guarantees the existence of finite-rate 2BGA codes with distance scaling as a square root of block length. Unfortunately, we do not have a matching upper bound for finite-rate 2BGA codes.
## IV Conclusions
In conclusion, we considered a family of quantum two-block codes, an ansatz particularly suitable for constructing short and intermediate-length quantum LDPC codes. This family includes previously studied GB codes and their generalization, 2BGA codes, which may be based on an abelian or a non-abelian group. Compared to "single-block" quantum cyclic codes [35, 36, 37] and a related construction based on a general finite group [38], the 2BGA codes have much more freedom: here the CSS orthogonality constraint is naturally satisfied for any pair of group algebra elements, and it is much easier to construct highly-degenerate quantum LDPC codes.
We constructed a general expression relating the dimension of a two-block code to those of single-block codes and, in the case of 2BGA code \(\mathrm{LP}[a,b]\), identified the cases of abelian, semi-abelian, and non-abelian 2BGA codes, depending on the group \(G\), the chosen group algebra elements \(a,b\in F[G]\), and the associated support groups \(G_{a}\) and \(G_{b}\). We also constructed a lower distance bound applicable when the subgroup \(N\equiv G_{a}\cap G_{b}\) is central in \(G\). The bound becomes exact when \(N=\{1\}\), a trivial subgroup, in which case the 2BGA code is equivalent to an HP code constructed from a pair of group algebra codes.
Research in progress [39] includes enumeration of 2BGA codes with row weights \(w\leq 8\) for all inequivalent small groups of size \(\ell\leq 50\). Of particular interest are 2BGA codes with larger \(k\) which have many redundant minimum-weight stabilizer generators and are expected to perform well in a fault-tolerant setting as data-syndrome codes [40, 41, 42, 43]. This could enable single-shot fault-tolerant quantum error correction [44, 45].
## Acknowledgments
We are grateful to Pavel Panteleev for helpful comments on an early version of the manuscript. This work was supported in part by the APS M. Hildred Blewett Fellowship (HKL) and the NSF Division of Physics via the grant 2112848 (LPP).
|
2306.17215 | Using the motion of S2 to constrain scalar clouds around SgrA* | The motion of S2, one of the stars closest to the Galactic Centre, has been
measured accurately and used to study the compact object at the centre of the
Milky Way. It is commonly accepted that this object is a supermassive black
hole but the nature of its environment is open to discussion. Here, we
investigate the possibility that dark matter in the form of an ultralight
scalar field ``cloud'' clusters around Sgr A*. We use the available data for S2
to perform a Markov Chain Monte Carlo analysis and find the best-fit estimates
for a scalar cloud structure. Our results show no substantial evidence for such
structures. When the cloud size is of the order of the size of the orbit of S2,
we are able to constrain its mass to be smaller than $0.1\%$ of the central
mass, setting a strong bound on the presence of new fields in the galactic
centre. | GRAVITY Collaboration, A. Foschi, R. Abuter, N. Aimar, P. Amaro Seoane, A. Amorim, M. Bauböck, J. P. Berger, H. Bonnet, G. Bourdarot, W. Brandner, V. Cardoso, Y. Clénet, Y. Dallilar, R. Davies, P. T. de Zeeuw, D. Defrère, J. Dexter, A. Drescher, A. Eckart, F. Eisenhauer, M. C. Ferreira, N. M. Förster Schreiber, P. J. V. Garcia, F. Gao, E. Gendron, R. Genzel, S. Gillessen, T. Gomes, M. Habibi, X. Haubois, G. Heißel, T. Henning, S. Hippler, S. F. Hönig, M. Horrobin, L. Jochum, L. Jocou, A. Kaufer, P. Kervella, L. Kreidberg, S. Lacour, V. Lapeyrère, J. B. Le Bouquin, P. Léna, D. Lutz, F. Millour, T. Ott, T. Paumard, K. Perraut, G. Perrin, O. Pfuhl, S. Rabien, D. C. Ribeiro, M. Sadun Bordoni, S. Scheithauer, J. Shangguan, T. Shimizu, J. Stadler, O. Straub, C. Straubmeier, E. Sturm, C. Sykes, L. J. Tacconi, F. Vincent, S. von Fellenberg, F. Widmann, E. Wieprecht, E. Wiezorrek, J. Woillez | 2023-06-29T18:00:01Z | http://arxiv.org/abs/2306.17215v4 | # Using the motion of S2 to constrain scalar clouds around Sgr A*
###### Abstract
The motion of S2, one of the stars closest to the Galactic Centre, has been measured accurately and used to study the compact object at the centre of the Milky Way. It is commonly accepted that this object is a supermassive black hole but the nature of its environment is open to discussion. Here, we investigate the possibility that dark matter in the form of an ultralight scalar field "cloud" clusters around Sgr A*. We use the available data for S2 to perform a Markov Chain Monte Carlo analysis and find the best-fit estimates for a scalar cloud structure. Our results show no substantial evidence for such structures. When the cloud size is of the order of the size of the orbit of S2, we are able to constrain its mass to be smaller than 0.1% of the central mass, setting a strong bound on the presence of new fields in the galactic centre.
keywords: black holes physics - dark matter - gravitation - celestial mechanics - Galaxy: centre
## 1 Introduction
The orbit of the star S2 in the Galactic Centre (GC) has been monitored for almost 30 years with both spectroscopic and astrometric measurements, the latter reaching a precision of \(\simeq 50\,\mu\)as since the GRAVITY instrument at the Very Large Telescope Interferometer (VLTI) has been put into operation (GRAVITY Collaboration, 2017). S2 is a star with mass around \(10-15\,\mathrm{M}_{\odot}\) orbiting Sgr A\({}^{*}\) with a period of roughly 16 years and apparent magnitude \(K\sim 14\)(Ghez et al., 2003; Habibi et al., 2017). It is part of the so-called Sagittarius A\({}^{*}\) cluster, consisting of about 40 stars, known as S-stars, whose orbits are all located within one arcsecond distance from Sgr A\({}^{*}\)(Eckart & Genzel, 1996; Schodel et al., 2002; Ghez et al., 2003; Gillessen et al., 2009, 2012). The data collected has allowed constraining with unprecedented accuracy both the mass \(M\) of the central object and the GC distance \(R_{0}\). In particular, the trajectory of the S2 star, together with those of other stars in the S-cluster, showed that their motion is determined by a potential generated by a dark object with mass \(M\sim 4.3\cdot 10^{6}\mathrm{M}_{\odot}\) at a distance \(R_{0}\sim 8.3\,\mathrm{kpc}\)(Ghez et al., 2008; GRAVITY Collaboration, 2019, 2022), widely believed to be a supermassive black hole (SMBH, Genzel et al., 2010). This hypothesis has been supported by the direct observations of near-IR flares in the relativistic accretion zone of Sgr A\({}^{*}\), corresponding to the innermost stable circular orbit of a black hole (BH) (GRAVITY Collaboration, 2018), and, most recently, analysing the image of Sgr A\({}^{*}\) taken by the Event Horizon Telescope (EHT) which is compatible with the expected appearance of a Kerr BH with such a mass (Akiyama et al., 2022).
While the nature of the central object seems to be well established, its surrounding environment remains mostly unknown. In this context, an especially exciting prospect is that dark matter (DM) may cluster around supermassive BHs, producing spikes in the local density (Gondolo & Silk, 1999; Sadeghian et al., 2013), leaving imprints in the orbits of stars. The scattering of DM by passing stars or BHs, or accretion by the central BH induced by heating in its vicinity, may significantly soften the spike distribution (Merritt et al., 2002; Merritt, 2004; Bertone & Merritt, 2005). Given the outstanding challenge that DM represents, it is especially important to test the presence of new forms of matter in the GC (for a review on the GC and how it can be used to constrain DM see De Laurentis et al. (2022)).
Data collected for S2 has been used to test the presence of an extended mass within its apocenter (\(r_{\mathrm{apo,S2}}=14\,\mathrm{mas}\)) with particular attention to spherically symmetric DM density distributions (see e.g. Lacroix (2018); Bar et al. (2019); Heissel et al. (2022); GRAVITY Collaboration (2022)).
Lacroix (2018) used data up to 2016 to fit the size of a DM spike within a halo described by a density profile (Zhao, 1996):
\[\rho_{\mathrm{NFW}}=\rho_{\mathrm{s}}\left(\frac{r}{r_{s}}\right)^{-\gamma} \left(1+\frac{r}{r_{s}}\right)^{\gamma-3}\,, \tag{1}\]
where \(r_{s}\) is the scale radius and \(\rho_{s}\) is the scale density, which can be trivially related to the local DM density. Lacroix was able to exclude a spike with a radius greater than \(10^{3}\) pc (Figure 2, last plot), corresponding to \(R_{\mathrm{BP}}\approx 4.8\cdot 10^{9}\,M\); this can be translated into an upper bound on the total "environmental" mass \(\delta M\) within the characteristic size of the orbit, \(\delta M\lesssim 4-5\cdot 10^{4}\,M_{\odot}\), i.e. \(\sim 1\%\,M\).
Bar et al. (2019) used similar data to constrain the presence of ultralight dark matter, i.e., matter in the form of a self-gravitating scalar condensate. This assumption fixes the density distribution of the mass profile, and they were able to set an upper bound on the soliton mass of \(\delta M\sim 5\cdot 10^{4}\,M_{\odot}\) for a fundamental scalar field with
mass \(m_{s}\sim 4\cdot 10^{-19}\) eV. For \(m_{s}\gtrsim 10^{-18}\) eV the soliton is confined inside S2 periastron and is degenerate with the BH mass.
Della Monica & de Martino (2023) used a similar procedure to derive an upper limit of \(10^{-19}\) eV on the mass of an ultralight boson at the 95% confidence level.
Recently, GRAVITY Collaboration (2022) provided the current \(1\sigma\) upper bound on the environmental mass \(\delta M\) within the orbit of S2, namely \(\delta M\sim 4000\,M_{\odot}\), or \(0.1\%\) of the BH mass. This limit was obtained assuming a Plummer model for the matter profile,
\[\rho_{\rm Plummer}=\frac{3f_{\rm PL}M}{4\pi a_{0}^{3}}\left(1+\left(\frac{r}{a_ {0}}\right)^{2}\right)^{-5/2}\,, \tag{2}\]
with \(a_{0}\) a length scale of the external matter distribution, which has mass \(f_{\rm PL}M\). In fact, considering a scale length given by roughly S2's apoastron (\(a_{0}=0.3''\)), a best-fit value for the fraction of extended mass within S2's orbit of \(f_{\rm PL}=(2.7\pm 3.5)\cdot 10^{-3}\) was found, i.e. \(f_{\rm PL}\) is compatible with zero at the \(1\sigma\) confidence level, and it can be interpreted as a null result. Using, in addition, the orbits of the other four S-stars, upper limits on the extended mass were imposed, of order \(10^{3}\,M_{\odot}\), equivalent to \(0.1\%\) of the central mass \(M\).
Thus far, the profile of the matter distribution has been mostly ad-hoc. Here, we study the possibility that new fundamental fields exist and that they "condense" in a bound state around the BH (for a review, see Brito et al. (2015)). These fields might be a significant component of dark matter, or simply as-yet unobserved forms of matter. It is a tantalizing possibility that supermassive BHs might then be used as particle detectors; we explore it here, using the motion of S2 as a probe of the matter content. In this context, the matter profile is known and given by the spatial profile of bound states around spinning BHs (Detweiler, 1980; Cardoso & Yoshida, 2005; Dolan, 2007; Witek et al., 2013; Brito et al., 2015). It can be argued that also in the context of fuzzy dark matter, composed of an ultralight scalar, the near-horizon region is controlled by BH physics, hence governed by the same type of profile we consider here (Cardoso et al., 2022). The suggestion that the stars' motion can be used to probe light fields around BHs is not new (Cardoso et al., 2011; Ferreira et al., 2017; Fujita & Cardoso, 2017), but is here explored explicitly with data from the GRAVITY instrument.
## 2 The Setup
Light bosonic fields can arise in a variety of contexts, for example, in string-inspired theories (Arvanitaki et al., 2010). However, early examples arose out of the need to explain in a natural way the smallness of the neutron electric dipole moment. They invoked the existence of a new axionic, light, degree of freedom (Peccei & Quinn, 1977; Wilczek, 1978; Weinberg, 1978; Preskill et al., 1983; Abbott & Sikivie, 1983; Dine & Fischler, 1983).
In the presence of a spinning BH, small fluctuations of a massive scalar field can be exponentially amplified via superradiance, leading to a condensate - a bound state - outside the horizon (Brito et al., 2015). This structure can carry up to \(\sim 10\%\) of the BH mass if grown from vacuum. It is also possible that the scalar soliton existed on its own, for example, if it is part of dark matter, in which case the placing of a BH at its centre will lead to a long-lived structure (a "cloud") which on BH scales resembles the superradiant bound states (Cardoso et al., 2022, 2020). Here we will be agnostic regarding the origin of the scalar structure, but we will use our knowledge about the spatial profile of bound states around BHs.
### The scalar field profile
Consider a particle moving in a potential given by a central mass \(M\) surrounded by a scalar field cloud. Our starting point is the setup developed in GRAVITY Collaboration (2019), and here we recall the most relevant steps of their procedure.
A system composed of a central BH with mass \(M\) and a scalar field minimally coupled to gravity is described by the action
\[S=\int d^{4}x\sqrt{-g}\left(\frac{R}{16\pi G}-\frac{1}{2}g^{\alpha\beta}\psi_{,\alpha}^{*}\psi_{,\beta}-\frac{\mu^{2}}{2}\psi\psi^{*}\right)\,, \tag{3}\]
where \(R\) is the Ricci scalar, \(g_{\mu\nu}\) and \(g\) are the metric and its determinant. We assume that the BH spins along the \(z-\)axis, with adapted spherical coordinates \((t,r,\theta,\phi)\), with \(\theta=\pi/2\) defining the equator. The scalar \(\psi(t,r,\theta,\phi)\) is a complex field, and \(\mu\) is a mass parameter for the scalar field. It is related to the physical mass \(m_{s}\) via \(\mu=m_{s}c/\hbar\) and to the (reduced) Compton wavelength of the particle via \(\lambda_{C}=\mu^{-1}\). The principle of least action results in the Einstein-Klein-Gordon system of equations, where the energy-momentum tensor of the scalar field can be written as
\[T_{\mu\nu}=\frac{1}{2}\left[\psi_{,\mu}\psi_{,\nu}^{*}+\psi_{,\nu}\psi_{,\mu}^{*}-g_{\mu\nu}\left(\psi^{,\sigma}\psi_{,\sigma}^{*}+\mu^{2}|\psi|^{2}\right)\right]\,. \tag{4}\]
In the low-energy limit, i.e. neglecting terms of \(\mathcal{O}(c^{-4})\), the energy density of the field reads
\[\rho=\frac{m_{s}^{2}c^{2}}{\hbar^{2}}|\psi|^{2}=\mu^{2}|\psi|^{2}=\left(\frac {\alpha}{M}\right)^{2}|\psi|^{2}\,, \tag{5}\]
where we have defined the dimensionless mass coupling \(\alpha\) as
\[\alpha=\left(\frac{GM}{c^{2}}\right)\left(\frac{m_{s}c}{\hbar}\right)\,. \tag{6}\]
From now on we will use natural units (\(G=c=\hbar=1\)) unless otherwise stated.
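For orientation, Eq. (6) is straightforward to evaluate numerically; the short sketch below (our own illustration, using standard values of the physical constants in SI units) gives \(\alpha\approx 0.03\) for a scalar of mass \(m_{s}c^{2}=10^{-18}\) eV around Sgr A*:

```python
# Sketch: evaluating Eq. (6) for Sgr A* and m_s c^2 = 10^-18 eV.
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34   # SI units
eV, Msun = 1.602e-19, 1.989e30

M = 4.3e6 * Msun                 # Sgr A* mass
ms = 1e-18 * eV / c**2           # scalar field mass in kg

alpha = (G * M / c**2) * (ms * c / hbar)
print(alpha)                     # ~ 0.03, inside the range of Eq. (12)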
The solution of the Klein-Gordon equation for the field \(\psi\) on a Kerr background can be decomposed into a radial and an angular part, as \(\psi=e^{-i\omega t+im\phi}S_{lm}(\theta)R_{lm}(r)\), where \(l,m\) are the angular modes, and \(\omega\sim\mu\) defines the frequency of the field. In the limit of small coupling (\(\alpha\ll 1\)), the radial part is proportional to the generalised Laguerre polynomials \(L_{n}^{2l+1}\) and the angular part becomes \(S_{lm}(\theta)=P_{l}^{m}(\cos\theta)\) with \(P_{l}^{m}(\cos\theta)\) being the associated Legendre
Figure 1: Comparison between the scalar field density in Eq. (5) with \(\alpha=0.01\), \(\Lambda=10^{-3}\) and \(\theta=\pi/2\) (blue dashed line) and the Plummer density in Eq. (2) with \(a_{0}=0.3''\) and \(f_{\rm PL}=10^{-3}\) (orange solid line). Black dotted lines correspond to S2’s periastron (\(r_{\rm peri}\sim 3000\,M\)) and apoastron (\(r_{\rm apo}\sim 50000\,M\)).
polynomials. In this approximation, the fundamental mode \(n=0\), \(l=m=1\) of the scalar field is given by (Brito et al., 2015)
\[\psi=A_{0}e^{-i(\omega t-\phi)}\frac{r}{M}\alpha^{2}e^{-\frac{r\alpha^{2}}{2M}}\sin\theta\,, \tag{7}\]
where the amplitude of the field \(A_{0}\) is related to the mass of the cloud via
\[M_{\rm cloud}=\int\rho\,r^{2}\sin\theta\,dr\,d\theta\,d\phi=\frac{64\pi A_{0}^{2}}{\alpha^{4}}\,M\,. \tag{8}\]
We can now use the energy density of the field to solve Poisson's equation \(\nabla^{2}U_{\rm sca}=4\pi\rho\), using the usual harmonic decomposition implemented in Poisson & Will (2012), i.e., expanding all quantities in spherical harmonics \(Y_{lm}(\theta,\phi)\). For the energy density computed in (5) the only non-zero terms that contribute to the scalar potential are the \(l=m=0\) and \(l=2\), \(m=0\) terms, resulting in a potential given by
\[\begin{split} U_{\rm sca}&=4\pi\left[\frac{q_{00}}{r}Y_{00}+p_{00}Y_{00}\right]+\frac{4\pi}{5}\left[\frac{q_{20}}{r^{3}}Y_{20}+p_{20}r^{2}Y_{20}\right]\\ &=\Lambda\left(P_{1}(r)+P_{2}(r)\cos^{2}\theta\right)\,,\end{split} \tag{9}\]
where \(\Lambda=M_{\rm cloud}/M\) is the fractional mass of the scalar field cloud to the BH mass,
\[\begin{split} P_{1}(r)&=\frac{M}{r}+\frac{3M^{3}}{r^{3}\alpha^{4}}-\frac{e^{-\frac{r\alpha^{2}}{M}}}{16M^{2}r^{3}\alpha^{4}}\left(48M^{5}+48M^{4}r\alpha^{2}+40M^{3}r^{2}\alpha^{4}\right.\\ &\left.+20M^{2}r^{3}\alpha^{6}+6Mr^{4}\alpha^{8}+r^{5}\alpha^{10}\right)\,,\end{split} \tag{10}\]
and
\[\begin{split} P_{2}(r)&=-\frac{9M^{3}}{r^{3}\alpha^{4}}+e^{-\frac{r\alpha^{2}}{M}}\left(\frac{9M}{2r}+\frac{9M^{3}}{r^{3}\alpha^{4}}+\frac{9M^{2}}{r^{2}\alpha^{2}}+\frac{3\alpha^{2}}{2}\right.\\ &\left.+\frac{3r\alpha^{4}}{8M}+\frac{r^{2}\alpha^{6}}{16M^{2}}\right)\,.\end{split} \tag{11}\]
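For reference, the sketch below (our own transcription, in units \(G=c=M=1\)) implements Eqs. (10), (11) and (14). As a sanity check, far outside the cloud \(rP_{1}(r)\to 1\), so the cloud term contributes \(\Lambda M/r\) and acts as a point mass \(M_{\rm cloud}\):

```python
import numpy as np

# Sketch: scalar-cloud potential of Eqs. (10)-(11) in units G = c = M = 1.
def P1(r, a):                       # a = alpha, the mass coupling
    x = r * a**2                    # radius in units of the cloud size M/alpha^2
    poly = 48 + 48*x + 40*x**2 + 20*x**3 + 6*x**4 + x**5
    return 1/r + (3 - np.exp(-x) * poly / 16) / (r**3 * a**4)

def P2(r, a):
    x = r * a**2
    inner = 9/(2*r) + 9/(r**3 * a**4) + 9/(r**2 * a**2) \
            + 3*a**2/2 + 3*r*a**4/8 + r**2 * a**6/16
    return -9/(r**3 * a**4) + np.exp(-x) * inner

def U(r, theta, lam, a):            # Eq. (14): Newtonian + cloud potential
    return 1/r + lam * (P1(r, a) + P2(r, a) * np.cos(theta)**2)

# sanity check: far outside the cloud the monopole dominates, r * P1 -> 1
a = 0.01
r = 50 / a**2
print(r * P1(r, a))                 # ~ 1.001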
In Figure 1 we show the difference between the scalar field density in (5) along the equator (\(\theta=\pi/2\), with \(\Lambda=10^{-3}\) and \(\alpha=0.01\)) and the density given by a Plummer profile (2), where we use the same values as in GRAVITY Collaboration (2022): \(a_{0}=0.3\,^{\prime\prime}\) and \(f_{\rm PL}=10^{-3}\).
GRAVITY Collaboration (2019) showed that a scalar field cloud described by the potential (9) can leave imprints in the orbital elements of S2 if its mass coupling constant is in the range
\[0.005\lesssim\alpha\lesssim 0.05\,, \tag{12}\]
assuming a fixed direction of the BH spin axis with respect to the plane of the sky, which corresponds to an effective mass of the field in the range \(10^{-20}\,\mathrm{eV}\lesssim\mu\lesssim 10^{-18}\,\mathrm{eV}\). However, Kodama & Yoshino (2012) showed that for an SMBH with the mass of Sgr A*, the allowed range of effective masses that can engage a superradiant instability on a timescale smaller than the cosmic age is \(10^{-18}\,\mathrm{eV}\lesssim\mu\lesssim 10^{-15}\,\mathrm{eV}\). Hence, if a cloud exists and leaves detectable imprints in the orbit of S2, then its formation and existence must be explained by means of a different physical process, as discussed in Sec. 2. However, since the variations in the orbital elements induced by the cloud are potentially detectable with the current precision of the GRAVITY instrument, it is worth comparing these theoretical expectations with the available data. In particular we are interested in fitting the fractional mass of the cloud \(\Lambda=M_{\rm cloud}/M\) for a fixed value of the mass coupling constant \(\alpha\).
### The equations of motion
To obtain the equations of motion of a particle moving in a central potential plus the toroidal scalar field distribution described by (7) we started from the Lagrangian
\[\mathcal{L}=\frac{1}{2}\left(\dot{r}^{2}+r^{2}\dot{\theta}^{2}+r^{2}\sin^{2}\theta\,\dot{\phi}^{2}\right)+U(r,\theta)\,, \tag{13}\]
where
\[U(r,\theta)=\frac{M}{r}+\Lambda\left(P_{1}(r)+P_{2}(r)\cos^{2}\theta\right)\,, \tag{14}\]
is the sum of the Newtonian and the scalar potential. Solving the Euler-Lagrange equations translates into having the following equations of motion,
\[\begin{split}\ddot{r}&=-\frac{M}{r^{2}}+r\left(\dot{\theta}^{2}+\sin^{2}\theta\,\dot{\phi}^{2}\right)+\Lambda\left(P_{1}^{\prime}(r)+P_{2}^{\prime}(r)\cos^{2}\theta\right)\\ \ddot{\theta}&=\cos\theta\sin\theta\,\dot{\phi}^{2}-\frac{2}{r}\dot{r}\dot{\theta}-\frac{\Lambda P_{2}(r)\sin 2\theta}{r^{2}}\\ \ddot{\phi}&=-\frac{2\dot{\phi}}{r}\left(\dot{r}+\cot\theta\,r\dot{\theta}\right)\end{split}\,, \tag{15}\]
where the prime (dot) indicates a derivative with respect to the radial (time) coordinate. Since the Schwarzschild precession has been detected in the orbit of S2 at \(7\sigma\) confidence level (GRAVITY Collaboration, 2022), we also included the first Post Newtonian correction in the equations of motion. The acceleration term is given by (Will, 2008)
\[\boldsymbol{a}_{\rm 1PN}=f_{\rm SP}\frac{M}{r^{2}}\left[\left(\frac{4M}{r}-v^{2}\right)\frac{\boldsymbol{r}}{r}+4\dot{r}\,\boldsymbol{v}\right]\,, \tag{16}\]
where \(\boldsymbol{r}=r\,\hat{\boldsymbol{r}}\),
\[\boldsymbol{v}=\left(\dot{r},\;r\dot{\theta},\;r\sin\theta\,\dot{\phi}\right)\,, \tag{17}\]
and \(v=|\boldsymbol{v}|\). Here we have also introduced the dimensionless parameter \(f_{\rm SP}\) that quantifies the Schwarzschild precession, and it is found to be \(f_{\rm SP}=0.99\pm 0.15\)(GRAVITY Collaboration, 2022). In this work we fixed \(f_{\rm SP}=1\).
If we impose \(\Lambda=0\) and \(f_{\rm SP}=0\) we recover the classical motion of a particle orbiting a central point mass. The 6 initial conditions for the set of equations in (15) can be obtained from the analytical solution of the Keplerian two-body problem, namely
\[\begin{split} r_{0}&=\frac{a_{\rm sma}(1-e^{2})}{1+e\cos\phi_{0}}\,,&\dot{r}_{0}=\frac{2\pi ea_{\rm sma}\sin\mathcal{E}}{P(1-e\cos\mathcal{E})}\\ \theta_{0}&=\frac{\pi}{2}\,,&\dot{\theta}_{0}=0\\ \phi_{0}&=2\arctan\left(\sqrt{\frac{1+e}{1-e}}\tan\frac{\mathcal{E}}{2}\right)\,,&\dot{\phi}_{0}=\frac{2\pi(1-e)}{P(e\cos\mathcal{E}-1)^{2}}\sqrt{\frac{1+e}{1-e}}\end{split} \tag{18}\]
where \(e,a_{\rm sma},P\) are the eccentricity, the semi-major axis and the period of the orbit, respectively, while \(\mathcal{E}\) is the eccentric anomaly evaluated from Kepler's equation: \(\mathcal{E}-e\sin\mathcal{E}-\mathcal{M}=0\), where \(\mathcal{M}=n(t-t_{P})\) is the mean anomaly, \(n=2\pi/P\) is the mean angular velocity and \(t_{P}\) is the time of periastron passage. Details about how we performed the numerical integration and how we solved Kepler's equation are reported in Appendix A. The solution of the previous equations of motion gives the spherical coordinates of the star in the BH reference frame, related with Cartesian coordinates \(\{x_{\rm BH},y_{\rm BH},z_{\rm BH}\}\) via the usual transformation. In this frame, \(z_{\rm BH}\) is aligned with the BH spin axis. Following Grould et al. (2017) we can define a new reference frame \(\{x^{\prime},y^{\prime},z_{\rm obs}\}\) such that \(x^{\prime}=\) DEC,
\(y^{\prime}=\mathrm{R.A.}\) are the collected astrometric data, \(z_{\mathrm{obs}}\) points towards the BH and \(v_{z_{\mathrm{obs}}}\) corresponds to the radial velocity. Although most of the S2 motion occurs in a Newtonian regime (i.e. with \(v\ll 1\)), making the above classical approximation appropriate, near periastron the star reaches a total space velocity of \(v\approx 7650\,\mathrm{km/s}\sim 10^{-2}\). In this region the numerical solution \(v_{z_{\mathrm{obs}}}\) obtained from Eqs. (15) must be corrected. We include the two main relativistic effects in order to model the measured radial velocity \(V_{R}\): the relativistic Doppler shift and the gravitational redshift. Moreover, due to the finite speed of light propagation, the dates of observation \(t_{\mathrm{obs}}\) are generally different from the dates of emission \(t_{\mathrm{em}}\). This is a purely classical effect known as Romer's delay, and for S2 we have \(\Delta t=t_{\mathrm{em}}-t_{\mathrm{obs}}\approx 8\,\mathrm{days}\) on average over the entire orbit. Including this effect in our simulation requires solving the so-called Romer's equation, namely:
\[t_{\mathrm{obs}}-t_{\mathrm{em}}-z_{\mathrm{obs}}(t_{\mathrm{em}})=0 \tag{19}\]
(here we corrected a minus sign in Grould et al. (2017)) that we solved using its first-order Taylor's expansion, as already done in GRAVITY Collaboration (2018a); Heissel et al. (2022).
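Explicitly, the first-order expansion amounts to a single Newton step on Eq. (19); a minimal sketch (ours, in units \(c=1\)):

```python
# Sketch: first-order (single Newton step) solution of Eq. (19), with c = 1:
#   t_obs - t_em - z_obs(t_em) = 0   =>   t_em ~ t_obs - z / (1 + zdot)
def emission_time(t_obs, z, zdot):
    """z, zdot: line-of-sight position and velocity of the star at t_obs."""
    return t_obs - z / (1.0 + zdot)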
Details about how to implement the transformation between the orbital frame and the observer frame, how to include the relativistic corrections and how we solved Eq. (19) are reported in Appendix B.
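As a minimal end-to-end illustration (our own sketch, not the implementation of Appendices A and B), the following solves Kepler's equation with a Newton iteration, builds the initial conditions (18), and integrates Eqs. (15) in the Keplerian limit \(\Lambda=0\), \(f_{\rm SP}=0\), in units \(G=M=1\) and with illustrative orbital elements:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: Keplerian limit (Lambda = 0, f_SP = 0) of Eqs. (15) with the
# initial conditions (18), in units G = M = 1 (illustrative numbers).
e, a_sma = 0.88, 1.0
P = 2 * np.pi * a_sma**1.5              # Kepler's third law
t0, t_p = 0.0, 0.3 * P                  # start time and periastron time

def ecc_anomaly(t, tol=1e-13):          # Newton iteration for E - e sinE = M(t)
    Mt = 2 * np.pi / P * (t - t_p)
    E = Mt
    for _ in range(100):
        dE = (E - e * np.sin(E) - Mt) / (1 - e * np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = ecc_anomaly(t0)
phi0 = 2 * np.arctan(np.sqrt((1 + e) / (1 - e)) * np.tan(E / 2))
r0 = a_sma * (1 - e**2) / (1 + e * np.cos(phi0))
rdot0 = 2 * np.pi * e * a_sma * np.sin(E) / (P * (1 - e * np.cos(E)))
phidot0 = 2 * np.pi * (1 - e) / (P * (e * np.cos(E) - 1)**2) * np.sqrt((1 + e) / (1 - e))

def rhs(t, u):                          # u = (r, rdot, theta, thetadot, phi, phidot)
    r, rd, th, thd, ph, phd = u
    rdd = -1 / r**2 + r * (thd**2 + np.sin(th)**2 * phd**2)
    thdd = np.sin(th) * np.cos(th) * phd**2 - 2 * rd * thd / r
    phdd = -2 * phd * (rd + r * thd / np.tan(th)) / r
    return [rd, rdd, thd, thdd, phd, phdd]

u0 = [r0, rdot0, np.pi / 2, 0.0, phi0, phidot0]
sol = solve_ivp(rhs, (t0, t0 + P), u0, rtol=1e-11, atol=1e-12)
print(r0, sol.y[0, -1])                 # r returns to r0 after one period
print((sol.y[4, -1] - phi0) / (2 * np.pi))   # phi advanced by one full turn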
### Data
The set of available data \(D\) can be divided as follows:
* Astrometric data DEC, R.A.
* 128 data points collected using both the SHARP camera at the New Technology Telescope (NTT) between 1992 and 2002 (\(\sim 10\) data points, accuracy \(\approx 4\,\mathrm{mas}\)) and the NACO imager at the VLT between 2002 and 2019 (118 data points, accuracy \(\approx 0.5\,\mathrm{mas}\));
* 76 data points collected by GRAVITY at the VLTI between 2016 and April 2022 (accuracy \(\approx 50\,\mu\mathrm{as}\)).
* Spectroscopic data \(V_{R}\)
* 102 data points collected by SINFONI at the VLT (100 points) and NIRC2 at Keck (2 points) collected between 2000 and March 2022 (accuracy in good conditions \(\approx 10-15\,\mathrm{km/s}\)).
### Model fitting approach
To fit S2 data we perform a Markov Chain Monte Carlo (MCMC) analysis using the Python package emcee(Foreman-Mackey et al., 2013). The fitting procedure is as follows: we set the value of the mass coupling \(\alpha\) roughly within the range reported in (12). For any given value of \(\alpha\) we fit for the following set of parameters,
\[\Theta_{i}=\{e,a_{\mathrm{sma}},\Omega_{\mathrm{orb}},i_{\mathrm{orb}},\omega _{\mathrm{orb}},t_{p},R_{0},M,x_{0},y_{0},v_{x_{0}},v_{y_{0}},v_{z_{0}},\Lambda \}\,, \tag{20}\]
where \(\Omega_{\mathrm{orb}}\), \(i_{\mathrm{orb}}\) and \(\omega_{\mathrm{orb}}\) are the three angles used to project the orbital frame onto the observer reference frame using the procedure reported in Appendix B. The additional parameters \(\{x_{0},y_{0},v_{x_{0}},v_{y_{0}},v_{z_{0}}\}\) characterise the NACO/SINFONI data reference frame with respect to Sgr A* (Plewa et al., 2015). The log-likelihood is given by
\[\ln\mathcal{L}=\ln\mathcal{L}_{\mathrm{pos}}+\ln\mathcal{L}_{\mathrm{vel}}\,, \tag{21}\]
where
\[\ln\mathcal{L}_{\mathrm{pos}}=-\sum_{i=1}^{N}\left[\frac{(\mathrm{DEC}_{\mathrm{i}}-\mathrm{DEC}_{\mathrm{model},\mathrm{i}})^{2}}{\sigma_{\mathrm{DEC}_{\mathrm{i}}}^{2}}+\frac{(\mathrm{R.A.}_{\mathrm{i}}-\mathrm{R.A.}_{\mathrm{model},\mathrm{i}})^{2}}{\sigma_{\mathrm{R.A.}_{\mathrm{i}}}^{2}}\right]\,, \tag{22}\]
and
\[\ln\mathcal{L}_{\mathrm{vel}}=-\sum_{i=1}^{N}\frac{(V_{R,i}-V_{\mathrm{model}, \mathrm{i}})^{2}}{\sigma_{V_{R,i}}^{2}}\,. \tag{23}\]
The priors we used are listed in Table 1. We used uniform priors for the physical parameters, i.e. we only imposed physically motivated bounds, and Gaussian priors for the additional parameters describing the NACO data, since the latter have already been well constrained by Plewa et al. (2015) and are not expected to change. The initial points \(\Theta_{i}^{0}\) in the MCMC are chosen such that they minimise the \(\chi^{2}\) with \(f_{\mathrm{SP}}=1\) and \(\Lambda=0\). The minimisation is performed using the Python package **lmfit.minimize** (Newville et al., 2016) with the Levenberg-Marquardt method. In the sampling phase of the MCMC implementation, we used 64 walkers and \(10^{5}\) iterations. Since we started our MCMC at the minimum found by **minimize**, we skipped the burn-in phase and used the last 80% of the chains to compute the mean and standard deviation of the posterior distributions. The convergence of the MCMC analysis is assured by means of the auto-correlation time \(\tau_{c}\), i.e. we ran \(N\) iterations such that \(N\gg 50\,\tau_{c}\).
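Schematically, the sampling stage can be set up as below. This is a minimal sketch rather than our production code: `model` is a trivial stand-in for the routine that integrates Eqs. (15) and projects onto DEC, R.A. and \(V_{R}\), the data are fake placeholders, and the priors are reduced to the uniform bound on \(\Lambda\):

```python
import numpy as np
import emcee

# Minimal sketch of the sampling stage (illustrative names and numbers).
rng = np.random.default_rng(0)
data, err = np.array([0.1, -0.2, 0.3]), np.array([0.05, 0.05, 0.05])

def model(theta):
    return theta[:3]                 # stand-in for the orbit model

def log_prob(theta):
    lam = theta[-1]
    if not (0.0 <= lam <= 1.0):      # uniform prior on Lambda (Table 1)
        return -np.inf
    resid = (data - model(theta)) / err
    return -np.sum(resid**2)         # Eqs. (22)-(23); the real analysis also
                                     # adds the Gaussian priors of Table 2

ndim, nwalkers, nsteps = 14, 64, 2000
p0 = 0.5 + rng.normal(scale=1e-2, size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, nsteps, progress=True)
chain = sampler.get_chain(discard=nsteps // 5, flat=True)   # keep last 80%
tau = sampler.get_autocorr_time(tol=0)   # convergence: nsteps >> 50 * tau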
In a first preliminary check we set \(\Lambda=0\) and we fit for the first 13 parameters of (20) imposing \(f_{\mathrm{SP}}=1\). In Figure 2 we report the corner plot of the parameters, which are in very good agreement with the previous best estimates obtained in GRAVITY Collaboration (2022). In the following, we assume that \(z_{\mathrm{BH}}\) is aligned with \(z_{\mathrm{orb}}\), i.e. the direction of the BH spin axis is aligned with the angular momentum of the S2 orbit. This means that the motion happens in the equatorial plane (\(\theta=\pi/2\)) of the BH and the initial conditions for the numerical integration of the orbit are those reported in (18). We fit for the 14 parameters listed in (20).
\begin{table}
\begin{tabular}{l c c c} \hline Parameter & \(\Theta_{i}^{0}\) & Lower bound & Upper bound \\ \hline \(e\) & 0.88441 & 0.83 & 0.93 \\ \(a_{\mathrm{sma}}\) [as] & 0.12497 & 0.119 & 0.132 \\ \(i_{\mathrm{orb}}\) [\({}^{\circ}\)] & 134.69241 & 100 & 150 \\ \(\omega_{\mathrm{orb}}\) [\({}^{\circ}\)] & 66.28411 & 40 & 90 \\ \(\Omega_{\mathrm{orb}}\) [\({}^{\circ}\)] & 228.19245 & 200 & 250 \\ \(t_{p}\) [yr] & 2018.37902 & 2018 & 2019 \\ \(M\) [\(10^{6}\,M_{\odot}\)] & 4.29950 & 4.1 & 4.8 \\ \(R_{0}\) [\(10^{3}\,\mathrm{pc}\)] & 8.27795 & 8.1 & 8.9 \\ \(\Lambda\) & 0.001 & 0 & 1 \\ \hline \end{tabular}
\end{table}
Table 1: Uniform priors used in the MCMC analysis. Initial guesses \(\Theta_{i}^{0}\) coincide with the best-fit parameters found by **minimize**.
\begin{table}
\begin{tabular}{l c c c} \hline Parameter & \(\Theta_{i}^{0}\) & \(\xi\) & \(\sigma\) \\ \hline \(x_{0}\) [mas] & -0.244 & -0.055 & 0.25 \\ \(y_{0}\) [mas] & -0.618 & -0.570 & 0.15 \\ \(v_{x_{0}}\) [mas/yr] & 0.059 & 0.063 & 0.0066 \\ \(v_{y_{0}}\) [mas/yr] & 0.074 & 0.032 & 0.019 \\ \(v_{z_{0}}\) [km/s] & -2.455 & 0 & 5 \\ \hline \end{tabular}
\end{table}
Table 2: Gaussian priors used in the MCMC analysis. Initial guesses \(\Theta_{i}^{0}\) coincide with the best-fit parameters found by **minimize**. \(\xi\) and \(\sigma\) represent the mean and the standard deviation of the distributions, respectively, and they come from Plewa et al. (2015).
## 3 Results
Before running the MCMC algorithm we used a \(\chi^{2}\) minimiser to evaluate the best-fit values of \(\Lambda\) and to quantify how accurately we can constrain the scalar cloud mass. Results are summarised in Fig. 3. For very small (\(\alpha\lesssim 0.0035\)) or large (\(\alpha\gtrsim 0.045\)) values of \(\alpha\), \(\Lambda\) has very large uncertainties, and the results are compatible with \(\Lambda=0\), i.e., having a vacuum environment.
Uncertainties on \(\Lambda\) become much smaller in the range \(0.01\lesssim\alpha\lesssim 0.03\). The underlying reason for this can be understood from the effective peak position of the scalar density distribution
\[R_{\rm peak}=\frac{\int_{0}^{\infty}\rho\,r\,dr}{\int_{0}^{\infty}\rho\,dr}=\frac{3M}{\alpha^{2}}\;. \tag{24}\]
For the range of \(\alpha\) above, one finds \(3000M\lesssim R_{\rm peak}\lesssim 30000\,M\), i.e. \(R_{\rm peak}\) is located between S2's periastron and apoastron, so that the star crosses regions of higher density. This analysis is illustrated in Fig. 3, where we show the behaviour of \(R_{\rm peak}\) as a function of \(\alpha\), dictated by Eq. (24), together with S2's apoastron and periastron.
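Equation (24) follows from the radial moments of the equatorial density, \(\rho\propto r^{2}e^{-r\alpha^{2}/M}\); a short numerical check (our own illustration, with the substitution \(x=r\alpha^{2}/M\)):

```python
import numpy as np
from scipy.integrate import quad

# Sketch: check R_peak = 3M/alpha^2 (Eq. (24)) for M = 1.
num, _ = quad(lambda x: x**3 * np.exp(-x), 0, np.inf)   # Gamma(4) = 6
den, _ = quad(lambda x: x**2 * np.exp(-x), 0, np.inf)   # Gamma(3) = 2
alpha = 0.01
print(num / den / alpha**2, 3 / alpha**2)               # both 3e4 (units of M)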
Notice that Fig. 3 seems to indicate that the motion of S2 is
Figure 2: Corner plot of the fitted parameters with \(f_{\rm SP}=1\) and \(\Lambda=0\). Red lines represent values from **minimize**, while dashed black lines represent the mean value and \(1\sigma\) interval of the posterior distributions.
compatible with a cloud of scalar field for \(0.01<\alpha<0.03\). However, as we now discuss, the statistical evidence for a nonzero \(\Lambda\) is not significant.
MCMC results confirm the trend observed in Fig. 3 but provide more insight into how \(\Lambda\) is distributed in the range of \(\alpha\) considered. In particular, we looked for the maximum likelihood estimator (MLE) of \(\Lambda\), i.e. \(\hat{\Lambda}=\arg\max\mathcal{L}(\Lambda;\mathrm{D})\). Results are summarised in Fig. 4. For \(0.006<\alpha<0.075\) the posteriors \(P(\Lambda_{\alpha}|D)\) look like normal distributions. Here \(\hat{\Lambda}\) and the associated uncertainties coincide with the mean and standard deviation of the distributions, and they are roughly the same as those reported in Fig. 3. However, when we move away from this range, the posteriors start to be peaked around zero and \(\hat{\Lambda}\) no longer coincides with the mean value of the distributions, as a result of the prior bounds we imposed on \(\Lambda\). Since in these cases \(\hat{\Lambda}\) is always very close to zero (and far below the precision of current instruments), we estimated \(\Lambda_{1}\) and \(\Lambda_{2}\) such that \(P(\Lambda_{\alpha}<\Lambda_{1}|D)\approx 68\%\) and \(P(\Lambda_{\alpha}<\Lambda_{2}|D)\approx 99\%\). In this way we were able to obtain rough upper bounds on the fractional mass at the \(1\sigma\) and \(3\sigma\) confidence levels, reported in parentheses in Table 3. We notice also that for smaller values of \(\alpha\), \(P(\Lambda_{\alpha}|D)\) flattens out, showing the difficulty of finding a meaningful MLE \(\hat{\Lambda}\) as soon as the cloud is located far away from S2's apoastron. These features are shown in Figure 4, where we report the one-dimensional projection of the (marginalised) posterior distributions of \(\Lambda\) for the values of \(\alpha\) reported in Table 3. We also show the mean (red dashed line) when distributions are normal and the \(1\sigma\) confidence interval (orange band, evaluated as explained above when the distribution is non-normal). Not surprisingly, we noticed that basically no relevant information can be extracted from those confidence intervals when \(R_{\mathrm{peak}}\) is far from S2's apoastron. However, in the case with \(\alpha=0.075\), which corresponds to \(R_{\mathrm{peak}}\approx 530\,M\), we found that \(\Lambda\lesssim 5\cdot 10^{-3}\) at the \(3\sigma\) confidence level, roughly recovering the upper bound \(\delta M\lesssim 10^{-3}\,M\) found in GRAVITY Collaboration (2022).
In order to determine the statistical significance of our results we computed the Bayes factor \(K\), i.e. the ratio of the maximum likelihood computed for the different values of \(\alpha\) and \(\hat{\Lambda}\) reported in Table 3 (that we call model \(\alpha\)) to the maximum likelihood associated with the non-perturbative case (model 0). According to Kass & Raftery (1995), if \(1\leq\log_{10}K\leq 2\) there is strong evidence that model \(\alpha\) is preferred over model 0, while if \(\log_{10}K>2\) the strength of evidence is decisive. Negative values of \(\log_{10}K\) correspond to negative evidence, i.e. model 0 is preferred over model \(\alpha\). As expected, we found \(\log_{10}K\ll 1\) every time the cloud is located far away from S2's orbital range. In contrast, when \(r_{\mathrm{peri,S2}}\lesssim R_{\mathrm{peak}}\lesssim r_{\mathrm{apo,S2}}\) there is only mild evidence that model \(\alpha\) is preferred over model 0 (we found \(\log_{10}K<2\) always).
Figure 3: Best-fit values of \(\Lambda\) with \(1\sigma\) uncertainties for fixed values of \(\alpha\) varied over the range \([6\cdot 10^{-4},10^{-1}]\). The dashed grey line represents \(R_{\mathrm{peak}}\) as a function of \(\alpha\), as given by Eq. (24). The yellow band represents the orbital range of S2, delimited by its apoastron and periastron positions. Although a nonzero value of \(\Lambda\) is apparent for a restricted range of \(\alpha\), this finding is not statistically significant, see Table 3.
## 4 Discussion
Precision observations by the GRAVITY instrument can now be used to set exquisite constraints on possible dark matter structures around Sgr A*. We have shown that with current observations, scalar clouds - possibly of superradiant origin - with mass couplings in the range \(\alpha\in[0.015,0.045]\) can be ruled out for cloud masses \(\Lambda\gtrsim 0.1\%\) of the central BH mass (equivalent to \(\delta M\sim 4000\,M_{\odot}\)). This bound is similar to that of GRAVITY Collaboration (2022), who provided a \(1\sigma\) upper bound of \(0.1\%\) of \(M\) on the dark mass within the orbit of S2, assuming a Plummer profile for the distribution.
We also note that, for certain scalar couplings \(\alpha\), observational data are well fitted by a non-zero value of \(\Lambda\) of order \(10^{-3}\). However, all these values of \(\Lambda\) are consistent with zero within the \(3\,\sigma\) confidence interval. The computation of the Bayes' factor showed that this perturbed model is only mildly preferred over the non-perturbed model predicting a single central BH without a cloud. We conclude that there is no strong evidence to claim the existence of a scalar cloud around Sgr A* described by our setup.
Stronger constraints - or a detection - require more observations or the inclusion of other stars of the S-cluster in the fit. However, since the potential describing the cloud is non-spherically symmetric, the relative inclination of the stars plays a fundamental role - at least in theory - and this same analysis cannot be performed straightforwardly. For the same reason, we were forced to set an initial angular position for S2 co-planar with the BH equator (\(\theta=\pi/2\)). This is the simplest choice but also the one that maximises the scalar potential in Eq. (9), i.e. our chances of actually detecting the cloud. We can try to quantify the error we are making in setting the initial angular position of the star by looking at the difference in the orbits for two different initial inclinations, \(\theta=\pi/2\) and \(\theta=0\), focusing on the interesting range of \(\alpha\): \(0.01\leq\alpha\leq 0.045\). We found that the maximum relative (percentage) difference in the astrometry is achieved for \(\alpha=0.01\), where \(\Delta\)DEC \(\sim\Delta\)R.A. \(\approx 25\%\), while the maximum difference in the radial velocity is found to be \(\Delta V_{R}\approx 15\%\) for \(\alpha=0.045\). Although these differences may seem significant, we point out that: (i) they would be smaller for any value of \(\theta\in[0,\,\pi/2]\) and (ii) they are only reached near the two periastron passages, while they remain much smaller over the rest of the orbit. Hence we are relatively confident that there will be no significant changes in the best-fit parameters we found for different initial inclinations of S2. In addition, GRAVITY Collaboration (2019) showed that the inclination of Sgr A*'s spin with respect to the observer frame also plays an important role in the effects the cloud has on the motion of S2. Results including the motion of other S-stars and Sgr A*'s spin direction are therefore left for future work.
Recently, Sengo et al. (2023) studied constraints on scalar structures using EHT data. Not surprisingly, the bounds are of order \(\Lambda\sim 10\%\), compatible with the measurement precision of the telescope. Our results considerably improve on this estimate for Sgr A*, showing that a bosonic structure can only exist with a maximum (fractional) mass of \(\Lambda\approx 10^{-3}\), at least for spin-0 fields.
Yuan et al. (2022) used the motion of S2 to derive an upper limit of \(\delta M\lesssim 10^{-4}\,M\) for a scalar cloud with particle mass \(m_{s}=10^{-18}\,\mathrm{eV}\) (\(\alpha\sim 0.015\)) interacting with either the Higgs boson or the photon. Their estimate only uses publicly available data and not GRAVITY data, which, due to their very small uncertainties, dominate our likelihood. This is reflected in the best-fit parameters they found, which are not compatible (within \(3\sigma\) uncertainties) with the most recent ones reported in GRAVITY Collaboration (2022). We argue that this difference, already present at the non-perturbative level, may lead to misleading results when the cloud is included in the fit.
Finally, we point out that the spin of Sgr A* is relevant when discussing superradiant phenomena, since it affects the possible origin of the scalar cloud. Despite a recent work by Fragione & Loeb (2020) placing a strong constraint on Sgr A*'s spin parameter (\(\chi\lesssim 0.1\)), other studies (Qi et al., 2021) question this result and show that the current astrometric measurements are not yet sufficient to constrain the value of the spin. On the other hand, Kato et al. (2010) used quasi-periodic oscillations in the radio emission of Sgr A* to claim that its spin is \(\chi=0.44\pm 0.08\). The current best estimate for Sgr A*'s spin comes from the EHT observations (Broderick et al., 2011), which reported a measurement of \(\chi=0.00\pm 0.64\), where the error is the \(1\sigma\)
Figure 4: Posterior probability densities \(P(\Lambda_{\alpha}|D)\) for different values of \(\alpha\). Red dashed lines represent the mean value of Gaussian distributions (which coincides with the MLE \(\hat{\Lambda}\)), while orange bands correspond to the \(1\sigma\) confidence level, i.e. \(\approx 68\%\) of \(P(\Lambda_{\alpha}|D)\) lies in that region.
uncertainty. Due to the high uncertainty of these results and the ongoing discussion about them, it can be assumed without loss of generality that Sgr A* is (or was) in fact spinning enough to engage a superradiant instability. We note, however, that even a non-spinning BH can bind a scalar "cloud" if it was grown via some other mechanism (for example, primordial, Cardoso et al. (2022)).
An upgrade of the GRAVITY experiment towards GRAVITY+ is ongoing at the time of writing, as well as the commissioning of the ERIS instrument. The increased sensitivity of GRAVITY+ and the patrol field of view of ERIS strongly increase the prospects of detecting and tracking further stars on inner orbits, putting stronger constraints on the scalar cloud.
## Acknowledgements
We are very grateful to our funding agencies (MPG, ERC, CNRS [PNCG, PNGRAM], DFG, BMBF, Paris Observatory [CS, PhyFOG], Observatoire des Sciences de l'Univers de Grenoble, and the Fundação para a Ciência e a Tecnologia), to ESO and the Paranal staff, and to the many scientific and technical staff members in our institutions, who helped to make NACO, SINFONI, and GRAVITY a reality. V.C. is a Villum Investigator and a DNRF Chair, supported by VILLUM Foundation (grant no. VIL37766) and the DNRF Chair program (grant no. DNRF162) by the Danish National Research Foundation. V.C. acknowledges the financial support provided under the European Union's H2020 ERC Advanced Grant "Black holes: gravitational engines of discovery" grant agreement no. Gravitas-101052587. Views and opinions expressed are, however, those of the author only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 101007855. We acknowledge the financial support provided by FCT/Portugal through grants 2022.01324.PTDC, PTDC/FIS-AST/7002/2020, UIDB/00099/2020 and UIDB/04459/2020.
## Data Availability
Publicly available data for astrometry and radial velocity up to 2016.38 can be found in Table 5 the electronic version of Gillessen et al. (2017) at this link: [https://iopscience.iop.org/article/10.3847/1538-4357/aa5c41/meta#apjaa5c41t5](https://iopscience.iop.org/article/10.3847/1538-4357/aa5c41/meta#apjaa5c41t5).
|
2307.00440 | Friezes over $\mathbb Z[\sqrt{2}]$ | A frieze on a polygon is a map from the diagonals of the polygon to an
integral domain which respects the Ptolemy relation. Conway and Coxeter
previously studied positive friezes over $\mathbb{Z}$ and showed that they are
in bijection with triangulations of a polygon. We extend their work by studying
friezes over $\mathbb Z[\sqrt{2}]$ and their relationships to dissections of
polygons. We largely focus on the characterization of unitary friezes that
arise from dissecting a polygon into triangles and quadrilaterals. We identify
a family of dissections that give rise to unitary friezes and conjecture that
this gives a complete classification of dissections which admit a unitary
frieze. | Esther Banaian, Libby Farrell, Amy Tao, Kayla Wright, Joy Zhichun Zhang | 2023-07-01T23:28:27Z | http://arxiv.org/abs/2307.00440v2 | # Friezes over \(\mathbb{Z}[\sqrt{2}]\)
###### Abstract
A frieze on a polygon is a map from the diagonals of the polygon to an integral domain which respects the Ptolemy relation. Conway and Coxeter previously studied positive friezes over \(\mathbb{Z}\) and showed that they are in bijection with triangulations of a polygon. We extend their work by studying friezes over \(\mathbb{Z}[\sqrt{2}]\) and their relationships to dissections of polygons. We largely focus on the characterization of unitary friezes that arise from dissecting a polygon into triangles and quadrilaterals. We identify a family of dissections that give rise to unitary friezes and conjecture that this gives a complete classification of dissections which admit a unitary frieze.
## 1 Introduction
In this paper, we will study friezes. A frieze is a ring homomorphism from a cluster algebra \(\mathcal{A}(Q)\) to an integral domain \(R\). When the cluster algebra arises from a surface \(S\) with marked points \(M\), the generators of the algebra correspond to arcs on the surface, with relations provided by skein relations [10]. Therefore, a frieze from such a cluster algebra can instead be viewed as a map from the arcs on \((S,M)\) to \(R\) which respects skein relations. We will mainly take this latter point of view.
The study of friezes in fact predates the study of cluster algebras. Finite frieze patterns were first studied by Coxeter in [11]. Frieze patterns are certain arrays of numbers which satisfy a local relation (the _diamond condition_). Finite frieze patterns and friezes on polygons are in bijection; we can interpret a frieze pattern as listing all the values of a frieze. We will usually use the language of a frieze in this article for notational convenience.
Conway and Coxeter showed that finite frieze patterns with entries in \(\mathbb{Z}_{\geq 0}\) are in bijection with triangulated polygons [11]. Caldero and Chapoton show that these finite frieze patterns also have a connection to both cluster algebras of type \(A\) and the module category of a path algebra from a type \(A\) quiver [11]. Friezes with values in \(\mathbb{Z}_{\geq 0}\) associated to cluster algebras of types \(\widetilde{A}\) and \(D\) were studied in [12] and [13] respectively.
Recently, Holm and Jorgensen investigated friezes associated to dissections of polygons [1]. The sizes of the subpolygons involved in the dissection determine the integral domain the frieze takes values in. Holm and Jorgensen show that there is a bijection between dissections that divide an \(n\)-gon into \(p\)-gons and friezes on an \(n\)-gon with values in \(\mathbb{Z}[2\cos(\pi/p)]\); that is, every frieze from this ring on an \(n\)-gon can be seen as arising from a dissection. This leads to the natural question: Is there a useful characterization of the friezes from more general dissections? In this article, we focus our attention on dissections into triangles and quadrilaterals. In Section 2, we provide examples to show that some natural first choices for characterizations do not adequately describe the set of friezes from dissections into triangles and quadrilaterals. That is, for each characterization we exhibit an example of such a frieze which does not arise from a dissection.
One type of frieze investigated in Section 2 is a _unitary frieze_. We say that a frieze \(f\) on a surface \(S\) over a ring \(R\) is unitary if there exists a triangulation \(T=\{\tau_{1},\ldots,\tau_{n}\}\) of \(S\) such that each \(f(\tau_{i})\) is a unit in \(R\). We let \(R^{\times}\) denote the set of units in \(R\). In Conjecture 1, we propose that a dissection of a polygon into triangles and quadrilaterals will produce a unitary frieze if and only if we can decompose the dissection into _towers_. A tower, defined in Section 3, is a dissection of a polygon into a straight row of quadrilaterals with a triangle on one end. In Theorem 1, we verify that every dissection which can be decomposed into towers provides a unitary frieze.
We prove that the opposite direction of this conjecture is true in several cases. In Section 4, we consider dissections where the sets of triangles and quadrilaterals are separated from each other. In Section 5, we consider dissections where every vertex is adjacent to at most three subpolygons. Part of our proof of the result for the latter type of dissection (Theorem 2) involves casework to show various types of arcs that form triangles with arcs from towers cannot have unit weight under the frieze from the dissection; more details about these cases are provided in Appendix A.
## 2 Friezes
### Background
We think of a _polygon_ or _\(n\)-gon_ as a finite set \(V=\{0,1,\ldots,n-1\}\) of vertices with the natural cyclic ordering. Arcs in the polygon are denoted by \((i,j)\) for two vertices \(i\neq j\). We say that two arcs \((i,j)\) and \((k,\ell)\)_cross_ when either \(i<k<j<\ell\) or \(i<\ell<j<k\), working cyclically modulo \(n\). Arcs of the form \((i,i+1)\) are boundary arcs and thus never cross any other arcs. Note that, while some figures in this article will not look convex in order to stress certain patterns, we always will assume we are working with convex polygons.
While friezes can be defined in more general settings, our article focuses on friezes from polygons so we will define a frieze in this context. Our definition largely follows the definition of a frieze in [1].
**Definition 1**.: Let \(P\) be a polygon with vertex set \(V\) and let \(R\) be an integral domain. A _frieze on \(P\)_ is a map \(f:V\times V\to R\) where the following conditions are satisfied:
1. \(f(i,j)=0\) if and only if \(i=j\)
2. \(f(i,i+1)=f(i,i-1)=1\)
3. \(f(i,j)=f(j,i)\)
4. If \((i,j)\) and \((k,\ell)\) are crossing diagonals of \(P\), then we have the Ptolemy relation \(f(i,j)f(k,\ell)=f(i,\ell)f(j,k)+f(i,k)f(j,\ell)\).
Let \(U_{n}\) denote the normalized Chebyshev polynomials of the second kind, so that \(U_{n}(2\cos\theta)=\sin((n+1)\theta)/\sin\theta\), and let \(\lambda_{p}:=2\cos(\pi/p)\). Then, the _Euclidean frieze_, \({\cal L}_{p}\) on a \(p\)-gon is given by setting \({\cal L}_{p}(i,j)=U_{|j-i|-1}(\lambda_{p})\). For example, if \(p=5\), we define \({\cal L}_{p}(i,i+2)=2\cos(\pi/5)=\frac{1+\sqrt{5}}{2}\) for all \(0\leq i\leq 4\). Including this with the boundary conditions, \({\cal L}_{p}(i,i)=0\) and \({\cal L}_{p}(i,i+1)=1\) completely defines the frieze on the pentagon.
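Since \(U_{n}\) satisfies the recurrence \(U_{0}=1\), \(U_{1}=\lambda_{p}\), \(U_{n}=\lambda_{p}U_{n-1}-U_{n-2}\), the values of \({\cal L}_{p}\) are easy to generate; the short sketch below (ours, purely illustrative) lists the distinct values \({\cal L}_{p}(i,i+k)\) for small \(p\), with the last entry equal to \(1\) because \((i,i+p-1)\) is a boundary arc:

```python
import numpy as np

# Sketch: values of the Euclidean frieze L_p via the Chebyshev recurrence
# U_0 = 1, U_1 = lambda_p, U_n = lambda_p * U_{n-1} - U_{n-2}.
def frieze_row(p):
    lam = 2 * np.cos(np.pi / p)
    U = [1.0, lam]
    for _ in range(p - 3):
        U.append(lam * U[-1] - U[-2])
    return U                        # U[k] = L_p(i, i + k + 1)

for p in (3, 4, 5):
    print(p, np.round(frieze_row(p), 6))   # last entry is always 1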
### Types of Friezes
Recall that a _dissection_ (also called a _partial triangulation_) of a polygon \(P\) is a set of pairwise non-crossing arcs on \(P\). A \(p\)_-angulation_ is a dissection that divides \(P\) into \(p\)-gons. Given a dissection \({\cal D}\) of a polygon \(P\), Holm and Jorgensen define a frieze \(f_{\cal D}\) on \(P\) which restricts to the Euclidean frieze \({\cal L}_{p_{i}}\) whenever evaluated on a pair of vertices which lie on the same \(p_{i}\)-gon [1]. In particular, if \((i,j)\) is an arc in \({\cal D}\), \(f_{\cal D}(i,j)=1\) since this arc is a boundary arc of multiple subpolygons. If \((i,j)\) crosses at least one arc in \({\cal D}\), we can determine \(f_{\cal D}(i,j)\) by iteratively using Condition 4 from Definition 1 at each intersection of \((i,j)\) and \({\cal D}\). We call \(f_{\cal D}\) the _frieze from dissection \({\cal D}\)_.
For instance, Example 1 is a frieze from a dissection as it arises from the dissection using arcs \((0,2)\) and \((0,3)\). This dissection is in fact a triangulation, and thus the entries of the frieze are all in \(\mathbb{Z}\). We give an example of a frieze from a dissection which is not a triangulation.
**Example 2**.: Consider the following dissection of a pentagon.
Since \((0,2)\) and \((1,3)\) are diagonals of a sub-quadrilateral, we set \(f_{\cal D}(0,2)=f_{\cal D}(1,3)=U_{1}(\lambda_{4})=2\cos(\pi/4)=\sqrt{2}\). We also know \(f_{\cal D}(0,3)=1\) since \((0,3)\in{\cal D}\). We can compute \(f_{\cal D}(2,4)\) by resolving the intersection of \((2,4)\) and \((0,3)\),
\[f_{\cal D}(2,4)=f_{\cal D}(2,4)\cdot f_{\cal D}(0,3)=f_{\cal D}(0,4)\cdot f_{ \cal D}(2,3)+f_{\cal D}(0,2)\cdot f_{\cal D}(3,4)=1\cdot 1+\sqrt{2}\cdot 1=1+ \sqrt{2}.\]
A similar calculation finds \(f_{\cal D}(1,4)=1+\sqrt{2}\).
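The same computations can be carried out exactly by representing an element \(a+b\sqrt{2}\in\mathbb{Z}[\sqrt{2}]\) as the integer pair \((a,b)\); the sketch below (our own illustration) redoes the Ptolemy resolution above and checks that \(f_{\cal D}(2,4)=1+\sqrt{2}\) has norm \(1\), i.e. is a unit:

```python
# Sketch: exact arithmetic in Z[sqrt(2)]; a + b*sqrt(2) is the pair (a, b).
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def mul(u, v):                      # (a+b√2)(c+d√2) = (ac+2bd) + (ad+bc)√2
    return (u[0]*v[0] + 2*u[1]*v[1], u[0]*v[1] + u[1]*v[0])

one, rt2 = (1, 0), (0, 1)

# Example 2: resolve the crossing of (2,4) with (0,3), where f(0,3) = 1:
# f(2,4) = f(0,4) f(2,3) + f(0,2) f(3,4) = 1*1 + sqrt(2)*1
f24 = add(mul(one, one), mul(rt2, one))
print(f24)                          # (1, 1), i.e. 1 + sqrt(2)

def norm(u):                        # |a^2 - 2 b^2|; units have norm 1
    return abs(u[0]**2 - 2*u[1]**2)

print(norm(f24))                    # 1: f(2,4) is a unit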
We will focus on friezes from dissections that divide a polygon into triangles and quadrilaterals. Accordingly, our corresponding friezes will have values in \(\mathbb{Z}[\sqrt{2}]\). We define several other types of friezes over \(\mathbb{Z}[\sqrt{2}]\) and compare them to friezes from dissections. We first give two descriptions of friezes over \(\mathbb{Z}[\sqrt{2}]\) with elementary properties.
**Definition 2**.: Let \(f\) be a frieze on an \(n\)-gon \(P\) over \(\mathbb{Z}[\sqrt{2}]\).
1. We say \(f\) is a \(\mathbb{Z}[\sqrt{2}]_{\geq 1}\) frieze if, for all \(0\leq i<j\leq n-1\), \(f(i,j)\geq 1\).
2. We say \(f\) is a \(\mathbb{Z}_{\geq 0}[\sqrt{2}]\) frieze if, for all \(0\leq i<j\leq n-1\), \(f(i,j)=a+b\sqrt{2}\) and \(a,b\in\mathbb{Z}_{\geq 0}\). By the definition of a frieze, we cannot have \(a=b=0\).
The first frieze defined in Definition 2 could also be called a _super-unital frieze_. A related type of frieze is a _unitary frieze_; these were studied in connection to cluster algebras in [10]. Unitary friezes will play a leading role in the remainder of the article.
**Definition 3**.: Let \(f\) be a frieze on a polygon \(P\) over an integral domain \(R\). We say that \(f\) is _unitary_ if there exists a triangulation \(T=\{\tau_{1},\ldots,\tau_{m}\}\) of \(P\) such that \(f(\tau_{i})\in R^{\times}\) for all \(1\leq i\leq m\). In this case, we will also refer to \(T\) as a _unitary triangulation_.
Recall the norm of an element \(a+b\sqrt{2}\in\mathbb{Z}[\sqrt{2}]\) is given by \(|a^{2}-2b^{2}|\). The units in \(\mathbb{Z}[\sqrt{2}]\) (that is, elements with norm \(1\)) are exactly the elements of the form \(\pm(1+\sqrt{2})^{m}\) for \(m\in\mathbb{Z}\); since the frieze entries we consider are positive, the relevant units are those of the form \((1+\sqrt{2})^{m}\). Let \(\ell_{m}=(1+\sqrt{2})^{m}\).
Note that the frieze in Example 2 is a \(\mathbb{Z}[\sqrt{2}]_{\geq 1}\) frieze and a \(\mathbb{Z}_{\geq 0}[\sqrt{2}]\) frieze. Moreover, this frieze is unitary, with \((2,4)\) and \((1,4)\) forming a triangulation with unit weights under \(f_{\mathcal{D}}\).
We now describe relationships amongst the defined types of friezes. For convenience, in our examples for non-containments, we will give the values of a frieze in the form of a frieze pattern; see [11] for the definition of a frieze pattern.
**Proposition 1**.:
1. _The set of friezes from dissections is strictly contained in the set of_ \(\mathbb{Z}_{\geq 0}[\sqrt{2}]\) _friezes._
2. _The set of_ \(\mathbb{Z}_{\geq 0}[\sqrt{2}]\) _friezes is strictly contained in the set of_ \(\mathbb{Z}[\sqrt{2}]_{\geq 1}\) _friezes._
3. _The set of unitary friezes is incomparable with the sets of friezes from dissections,_ \(\mathbb{Z}_{\geq 0}[\sqrt{2}]\) _friezes, and_ \(\mathbb{Z}[\sqrt{2}]_{\geq 1}\) _friezes._
Proof.: 1) The fact that every frieze from a dissection is also a \(\mathbb{Z}_{\geq 0}[\sqrt{2}]\) frieze is a consequence of the concept of _traditionally-weighted matchings_, a combinatorial interpretation of entries of a frieze pattern from a dissection given in [1]. To see that this containment is strict, consider the following frieze pattern which gives the values of a frieze on an octagon.
Since there are no entries \(1\) in the middle five rows of the frieze pattern, there is no non-boundary arc \(\tau\) in the octagon such that \(f_{\mathcal{D}}(\tau)=1\). This means that if this frieze came from a dissection, there would be no arcs in the dissection. However, this is not the Euclidean frieze \(\mathcal{L}_{8}\), since it is not \(1\)-periodic and the value \(2\cos(\pi/8)\) does not appear in the first row. Therefore, the frieze given by this frieze pattern cannot come from a dissection.
2) It is clear that every \(\mathbb{Z}_{\geq 0}[\sqrt{2}]\) frieze is a \(\mathbb{Z}[\sqrt{2}]_{\geq 1}\) frieze. To see that this containment is strict, consider the following frieze pattern giving the values of a frieze on a hexagon.
\[\begin{array}{cccccccccccc}0&&0&&0&&0&&0&&0\\ &1&&1&&1&&1&&1&&1\\ 1+\sqrt{2}&&\sqrt{2}&&3-\sqrt{2}&&1+\sqrt{2}&&\sqrt{2}&&3-\sqrt{2}\\ &1+\sqrt{2}&&-3+3\sqrt{2}&&2\sqrt{2}&&1+\sqrt{2}&&-3+3\sqrt{2}&&2\sqrt{2}\\ 1+\sqrt{2}&&\sqrt{2}&&3-\sqrt{2}&&1+\sqrt{2}&&\sqrt{2}&&3-\sqrt{2}\\ &1&&1&&1&&1&&1&&1\\ 0&&0&&0&&0&&0&&0\end{array}\]

Every entry in the middle rows is at least \(1\), so this is a \(\mathbb{Z}[\sqrt{2}]_{\geq 1}\) frieze, but entries such as \(3-\sqrt{2}\) have a negative coefficient on \(\sqrt{2}\), so it is not a \(\mathbb{Z}_{\geq 0}[\sqrt{2}]\) frieze.
3) To see that there are unitary friezes that are not \(\mathbb{Z}[\sqrt{2}]_{\geq 1}\) friezes, consider the frieze \(f\) on a quadrilateral with vertices \(0,1,2,3\) which has \(f(0,2)=-1+\sqrt{2}\) and \(f(1,3)=2+2\sqrt{2}\). Since the diagonal \((0,2)\) triangulates the quadrilateral and \(-1+\sqrt{2}\in\mathbb{Z}[\sqrt{2}]^{\times}\), \(f\) is a unitary frieze, but \(-1+\sqrt{2}<1\), so \(f\) is not a \(\mathbb{Z}[\sqrt{2}]_{\geq 1}\) frieze, implying it is also not a \(\mathbb{Z}_{\geq 0}[\sqrt{2}]\) frieze nor a frieze from a dissection.
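One can check directly that these values satisfy the Ptolemy relation on the quadrilateral (a verification we add for concreteness):

\[f(0,2)\,f(1,3)=(-1+\sqrt{2})(2+2\sqrt{2})=-2-2\sqrt{2}+2\sqrt{2}+4=2=f(0,1)f(2,3)+f(0,3)f(1,2).\]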
The Euclidean frieze \(\mathcal{L}_{4}\) on a quadrilateral is an example of a frieze from a dissection that is not unitary.
**Remark 1**.: Coxeter and Conway's Theorem in [1] shows that all of the types of friezes considered here are equivalent when working over \(\mathbb{Z}\). That is, all friezes on a polygon with entries in \(\mathbb{Z}_{\geq 0}\) are unitary; the arcs with unit weight comprise a triangulation of the polygon.
## 3 Tower Dissections give Unitary Friezes
In this section, we investigate friezes which are both from a dissection and unitary. We begin with a set of dissections that never give a unitary frieze. Recall a \(4\)-angulation of a polygon \(P\) is a dissection that divides \(P\) into quadrilaterals; necessarily, \(P\) must be a \(2n\)-gon for \(n>1\).
**Lemma 1**.: _A frieze from a \(4\)-angulation of a polygon is never unitary._
Proof.: Let \(\mathcal{D}\) be a \(4\)-angulation of a polygon \(P\) with vertices \(0,\ldots,n-1\). One can show by induction that, if \(k\) is odd, then \(f_{\mathcal{D}}(i,i+k+1)=b\sqrt{2}\) with \(b\in\mathbb{Z}_{\geq 1}\), and if \(k\) is even, then \(f_{\mathcal{D}}(i,i+k+1)\in\mathbb{Z}_{\geq 1}\). Recall that the positive units of \(\mathbb{Z}[\sqrt{2}]\) are of the form \((1+\sqrt{2})^{n}\) for \(n\in\mathbb{Z}\); thus, the only possible unit value in a frieze from a \(4\)-angulation is \(1\). Holm and Jorgensen show that the only time that \(f_{\mathcal{D}}(\gamma)=1\) is if \(\gamma\) is an arc in \(\mathcal{D}\) [10]. However, the set of arcs from \(\mathcal{D}\) is not large enough to triangulate \(P\): every complementary region of \(\mathcal{D}\) is a quadrilateral rather than a triangle.
Next, we introduce a family of dissections, called _towers_, which produce unitary friezes. Informally, a tower is a dissection into a straight string of quadrilaterals with a triangle on one end.
**Definition 4**.: An _\(n\)-tower_ is a dissection of a \((2n+3)\)-gon which consists of one triangle and \(n\) quadrilaterals such that no vertex is incident to more than two sub-polygons. We call the portion of the tower excluding the triangle a _stack_. The _roof point_ is the unique vertex of the tower which is incident only to the triangle of the tower. A _tower arc_ is an arc which is contained in the tower and has one endpoint at the roof point; see the dashed lines below.
We allow \(n=0\) in the definition of a tower, in which case a \(0\)-tower is simply a triangle. To show how towers yield unitary friezes, we first define two families of arcs in a stack.
**Definition 5**.: Given a stack, consider an arc \(\sigma_{n}\) of the form \((i,i+n)\) which goes between two vertices on the same side of the stack, so that, when \(n>0\), \(\sigma_{n}\) crosses \(n-1\) arcs from \(\mathcal{D}\). We define \(s_{n}\) to be the frieze value \(s_{n}:=f_{\mathcal{D}}(\sigma_{n})\). In particular, \(\sigma_{0}\) is a trivial arc with the same start and endpoint, so we have \(s_{0}=0\).
Similarly, for \(n\geq 1\), consider an arc \(\delta_{n}\) between two vertices on the opposite side of the stack which crosses \(n-1\) arcs in the stack. We define \(d_{n}\) to be the frieze value \(d_{n}:=f_{\mathcal{D}}(\delta_{n})\). We let \(\delta_{0}\) be an arc from \(\mathcal{D}\) of the stack, so that \(d_{0}=1\).
For example, below the thick arc would have weight \(d_{4}\) and the dashed arc would have weight \(s_{3}\). The first four values of \(s_{n}\), starting at \(n=0\), are \(0,1,2\sqrt{2},7\), and the first few values of \(d_{n}\) are \(1,\sqrt{2},3,5\sqrt{2}\). The Ptolemy relation implies a simple recurrence amongst these quantities.
**Lemma 2**.: _The quantities \(s_{n},d_{n}\) satisfy the initial conditions \(s_{0}=0,d_{0}=1,s_{1}=1\) and \(d_{1}=\sqrt{2}\) and for \(n\geq 2\),_
\[s_{n}=\sqrt{2}s_{n-1}+d_{n-1}\qquad d_{n}=\sqrt{2}d_{n-1}+s_{n-1}.\]
Proof.: Suppose that we are working in a stack of size \(m\) for \(m\gg n\). Label the vertices of this stack \(0,1,\ldots,2m+1\) in such a way that the arcs of the dissection are of the form \((i,2m+3-i)\) for \(1\leq i\leq m-1\).
[Figure: the stack, drawn with vertices \(1,2,\ldots,m,m+1\) along one side and \(2m+2,2m+1,\ldots,m+3,m+2\) along the other, with the dissection arcs running between the two sides.]
The initial conditions are clear. We focus on the arc \((0,n)\), since we know \(f_{\mathcal{D}}(0,n)=s_{n}\), and apply the Ptolemy relation to its intersection with the dissection arc \((n-1,2m+3-(n-1))\),
\[f_{\mathcal{D}}(0,n)f_{\mathcal{D}}(n-1,2m+3-(n-1)) =f_{\mathcal{D}}(0,n-1)f_{\mathcal{D}}(n,2m+3-(n-1))\] \[+f_{\mathcal{D}}(0,2m+3-(n-1))f_{\mathcal{D}}(n-1,n)\]
By definition \(f_{\mathcal{D}}(0,n-1)=s_{n-1}\) and \(f_{\mathcal{D}}(0,2m+3-(n-1))=d_{n-1}\). Moreover, the arc \((n-1,n)\) is either from \(\mathcal{D}\) or on the boundary, and \((n,2m+3-(n-1))\) is a diagonal in a quadrilateral in the dissection. Therefore we can conclude that \(s_{n}=f_{\mathcal{D}}(\sigma_{n})=\sqrt{2}s_{n-1}+d_{n-1}\). The other recurrence can be proven similarly.
The recurrence in Lemma 2 will allow us to show that tower arcs have unit weight. Recall we set \(\ell_{n}=(1+\sqrt{2})^{n}\).
**Lemma 3**.: _The quantities \(s_{n}\) and \(d_{n}\) satisfy_
\[s_{n}+d_{n}=\ell_{n}\]
Proof.: We induct on \(n\). The claim is true for \(n=0\) since \(s_{0}=0\) and \(d_{0}=1\). Suppose we have shown this is true for \(n-1\). From Lemma 2, we can expand \(s_{n}+d_{n}\) as
\[s_{n}+d_{n}=(\sqrt{2}s_{n-1}+d_{n-1})+(s_{n-1}+\sqrt{2}d_{n-1})=(1+\sqrt{2})(s _{n-1}+d_{n-1})=(1+\sqrt{2})^{n}\]
where the last equality holds by our inductive hypothesis.
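These identities are easy to verify by machine for small \(n\); a minimal sketch (our own) using `sympy`:

```python
from sympy import expand, simplify, sqrt

s, d = 0, 1                                   # s_0 = 0, d_0 = 1
for n in range(1, 8):
    s, d = sqrt(2) * s + d, sqrt(2) * d + s   # recurrences from Lemma 2
    assert simplify(s + d - (1 + sqrt(2)) ** n) == 0   # Lemma 3: s_n + d_n = l_n
    print(n, expand(s), expand(d))            # 1, sqrt(2); 2*sqrt(2), 3; 7, 5*sqrt(2); ...
```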
**Corollary 1**.: _A tower arc \(\gamma\) which passes through an \(n\)-tower has weight \(f_{\mathcal{D}}(\gamma)=\ell_{n}\)._
Proof.: If \(n=0\), then \(\gamma\in\mathcal{D}\) and \(f_{\mathcal{D}}(\gamma)=1\) by definition. If \(n>0\), then we consider the intersection between \(\gamma\) and the arc from \(\mathcal{D}\) in the tower which borders both the triangle and the first quadrilateral in the tower. Applying the Ptolemy relation to this intersection and using Lemma 3 yields the result.
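Unwinding the \(n>0\) case explicitly (our own spelling-out of the proof, in the notation of Definition 5): write \(\beta\) for the arc of \(\mathcal{D}\) bordering the triangle and the first quadrilateral. For a tower arc \(\gamma\) crossing all \(n\) quadrilaterals, the two sides of the triangle meeting the roof point have weight \(1\), and the two arcs from the far endpoint of \(\gamma\) to the endpoints of \(\beta\) have weights \(s_{n}\) and \(d_{n}\), so the Ptolemy relation at the crossing of \(\gamma\) and \(\beta\) gives

\[f_{\mathcal{D}}(\gamma)\,f_{\mathcal{D}}(\beta)=1\cdot s_{n}+1\cdot d_{n}\quad\Longrightarrow\quad f_{\mathcal{D}}(\gamma)=s_{n}+d_{n}=\ell_{n}.\]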
Now that we know that tower arcs have unit weight, we use these to build unitary triangulations of dissections which are the result of combining multiple towers. We say a dissection \(\mathcal{D}\) of \(P\) is a gluing of towers if we can decompose \(P\) into a set of subpolygons such that \(\mathcal{D}\) restricted to each subpolygon is a tower.
**Theorem 1**.: _Let \(\mathcal{D}\) be a dissection of a polygon \(P\) which can be decomposed into a set of towers. Then, the frieze \(f_{\mathcal{D}}\) is unitary._
Proof.: We first show that a single tower yields a unitary frieze. Label the vertices of the tower as below.
By Corollary 1, every arc of the form \((0,i)\) for \(i\neq 0\) has unit weight under \(f_{\mathcal{D}}\). If \(1<i<2m+2\), the arc \((0,i)\) is not a boundary arc. Moreover, since no pair of distinct arcs from \(\{(0,i):1<i<2m+2\}\) will cross and this set is size \(2m\), we see that this set is a triangulation of the tower.
Now, suppose our dissection \(\mathcal{D}\) is composed of several towers, glued together along edges \(\tau_{1},\ldots,\tau_{\ell}\). We can form a unitary triangulation of \(\mathcal{D}\) by triangulating each tower, as described above, and then including the glued edges \(\tau_{1},\ldots,\tau_{\ell}\). Since \(f_{\mathcal{D}}(\tau_{i})=1\) by definition, this triangulation is unitary.
Since we consider a triangle as a \(0\)-tower, Theorem 1 recovers the fact that we can find a unitary triangulation for a frieze from a triangulation. When working with a frieze from a triangulation \(T\), the only choice for a unitary triangulation is the original triangulation, since any arc not in \(T\) must have weight strictly larger than \(1\). This is true since any arc not in \(T\) will cross at least one arc in \(T\), and so by the Ptolemy relation, the weight of the arc not in \(T\) must be a sum of two positive integers.
Gunawan and Schiffler show that, in the case of friezes over \(\mathbb{Z}\) on a polygon, there is a bijection between unitary friezes and triangulations of the polygon. The situation is different in our setting. For example, here we give a dissection of a \(10\)-gon which only admits two unitary triangulations.
**Example 3**.: We draw a dissection which is the result of gluing two towers.
Note that there are two ways we can decompose this dissection into towers. This gives two options for unitary triangulations. One can show that these are in fact the only two options. For example, there are no arcs of the form \((1,v)\) which have unit weight, so we need to include the arc \((0,2)\) in any unitary triangulation. Conversely, since \(f_{\mathcal{D}}(2,4)=\sqrt{2}\), we cannot include \((2,4)\) in any unitary triangulation, and as a result we must have at least one arc incident to vertex \(3\).
* \(\{(0,2),(0,3),(0,4),(0,8),(4,8),(6,4),(6,8)\}\)
* \(\{(0,2),(0,3),(2,8),(6,2),(6,3),(6,4),(6,8)\}\)
We conjecture that Theorem 1 can be made stronger and that dissections from gluings of towers are the only types of dissections into triangles and quadrilaterals admitting a unitary triangulation.
**Conjecture 1**.: _Let \(\mathcal{D}\) be a dissection of a polygon into triangles and quadrilaterals. Then, the frieze \(f_{\mathcal{D}}\) is unitary if and only if \(\mathcal{D}\) can be decomposed into a set of towers._
In the remainder of this article, we make progress towards Conjecture 1 by verifying it for a couple of families of dissections.
### Connection to Continued Fractions
We also note a relationship between the quantities \(s_{n}\) and \(d_{n}\) and the _continued fraction_ expansion of \(\sqrt{2}\). By continued fraction, we mean an expression
\[[t_{0},\dots,t_{n}]=t_{0}+\cfrac{1}{t_{1}+\cfrac{1}{t_{2}+\cfrac{1}{\ddots+ \cfrac{1}{t_{n}}}}},\]
where the \(t_{i}\in\mathbb{Z}_{>0}\). Every rational number can be written as a continued fraction with finitely many \(t_{i}\); this expansion can be calculated using the Euclidean algorithm. Moreover, the numerator and denominator obtained from evaluating a continued fraction are always relatively prime, so the resulting fraction is in lowest terms. We can also define an _infinite continued fraction_ as a limit of finite continued fractions; every irrational number can be written as an infinite continued fraction. For example, \(\sqrt{2}=[1,2,2,2,\ldots]\).
Let \(\frac{a_{n}}{b_{n}}=[1,2,\ldots,2]\) where there are \(n-1\) entries of \(2\). Set \(a_{0}=1\) and \(b_{0}=0\). Then, we have the following relationship between the sequences \(\{a_{n}\}_{n},\{b_{n}\}_{n},\{s_{n}\}_{n}\), and \(\{d_{n}\}_{n}\).
**Proposition 2**.: _If \(n\geq 0\) is even, then_
\[s_{n}=b_{n}\sqrt{2}\qquad d_{n}=a_{n},\]
_and if \(n>0\) is odd, then_
\[s_{n}=a_{n}\qquad d_{n}=b_{n}\sqrt{2}.\]
Proof.: Note that for \(n\geq 0\), \(\frac{a_{n}+b_{n}}{b_{n}}=\frac{a_{n}}{b_{n}}+1=[2,\ldots,2]\) where we have \(n\) entries of \(2\). This implies that \(a_{n}=a_{n-1}+2b_{n-1}\) and \(b_{n}=a_{n-1}+b_{n-1}\).
We induct on \(n\). The \(n=0\) case is immediate. Suppose we have shown the claim for the \(n-1\) case where \(n-1\) is even. Then, \(n\) is odd, and by Lemma 2 and the above recurrence on \(a_{n}\) and \(b_{n}\), we have
\[s_{n}=\sqrt{2}s_{n-1}+d_{n-1}=\sqrt{2}(b_{n-1}\sqrt{2})+a_{n-1}=a_{n}\]
and
\[d_{n}=\sqrt{2}d_{n-1}+s_{n-1}=\sqrt{2}a_{n-1}+\sqrt{2}b_{n-1}=\sqrt{2}b_{n}.\]
The even \(n\) case follows identically by swapping the roles of \(s_{n}\) and \(d_{n}\).
Since by definition \(\frac{a_{n}}{b_{n}}\to\sqrt{2}\), we see that the ratio of \(s_{n}\) to \(d_{n}\) approaches \(1\) as \(n\) gets large.
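Proposition 2 can be confirmed mechanically for small \(n\) (our own check, using the initial values \(a_{0}=1\), \(b_{0}=0\) from above):

```python
from sympy import simplify, sqrt

a, b = 1, 0          # a_0 = 1, b_0 = 0, so a_n/b_n are the convergents to sqrt(2)
s, d = 0, 1          # s_0 = 0, d_0 = 1
for n in range(1, 12):
    a, b = a + 2 * b, a + b                    # convergent recurrences
    s, d = sqrt(2) * s + d, sqrt(2) * d + s    # Lemma 2 recurrences
    if n % 2 == 0:
        assert simplify(s - b * sqrt(2)) == 0 and simplify(d - a) == 0
    else:
        assert simplify(s - a) == 0 and simplify(d - b * sqrt(2)) == 0
```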
The sequence \(\{b_{n}\}_{n}\) is exactly the sequence of _Pell numbers_. In Remark 3 we use the Pell numbers to exhibit another family of arcs which have unit weight.
## 4 Separated Dissections
We call a dissection of a polygon \(P\) into triangles and quadrilaterals _separated_ if (1) there is only one arc, \(\tau_{a}\), in \(\mathcal{D}\) which borders both a triangle and a quadrilateral and (2) the quadrilateral incident to this arc has sides \(\tau_{a},\tau_{b},\tau_{c},\tau_{d}\) in clockwise order such that \(\tau_{b}\) and \(\tau_{d}\) are boundary edges of \(P\).
**Proposition 3**.: _Conjecture 1 holds when the dissection is separated._
Proof.: By Theorem 1, we just need to show that if a separated dissection \(\mathcal{D}\) is not a gluing of towers, then the frieze \(f_{\mathcal{D}}\) is not unitary. Since the dissection is separated, this means that either there are no triangles in \(\mathcal{D}\) or there is at least one triangle but the set of quadrilaterals in the dissection does not form a stack. We know the claim is true in the former case by Lemma 1, so we assume we are in the latter case.
Let \(P\) be an \(n\)-gon with a separated dissection \(\mathcal{D}\) which cannot be decomposed into towers. Let \(Q_{0}\) be the unique quadrilateral in \(\mathcal{D}\) which shares vertices with a triangle, and let this triangle be \(\Delta_{0}\). Label the vertices of \(P\) in clockwise order such that the vertices of \(Q_{0}\) are \(1,m,m+1,n\) as below, with \(m<n\). Then, the vertex of \(\Delta_{0}\) which is not incident to \(Q_{0}\) is \(k\) with \(1<k<m\). By our assumption, there exists a quadrilateral \(Q_{1}\) in \(\mathcal{D}\) which is not in the stack containing \(Q_{0}\) but which does share vertices with a quadrilateral in this stack. Of the two vertices of \(Q_{1}\) which are not on the stack, let \(v\) be the one with larger index, as below. In the picture below, the hexagons represent arbitrary polygons with a triangulation; it is possible that one or both of these do not exist, so that \(k\) could be \(2\) or \(m-1\). It is also possible that there are \(4\)-angulated polygons glued on all edges of quadrilaterals except \((1,n)\) and \((m,m+1)\).
Assume for sake of contradiction that there exists a unitary triangulation \(T\) of \(P\). Notice that for every vertex \(1\leq i\leq n\), we either have at least one arc in \(T\) incident to \(i\) or we include the arc \((i-1,i+1)\) in \(T\). If \(m<i<n\), we cannot add the arc
\((i-1,i+1)\) because the weight of this arc will be a positive integer multiple of \(\sqrt{2}\), so it will not be a unit. This means that there must be at least one arc from \(T\) incident to each vertex \(i\) for \(m<i<n\).
We apply this observation to vertex \(v\). One can check that \(f_{\mathcal{D}}(v,k)=(1+2\sqrt{2})\ell_{a}\) where \(a+1\) is the number of quadrilaterals \((v,k)\) passes through. We see that \(N((1+2\sqrt{2})\ell_{a})=N(1+2\sqrt{2})N(\ell_{a})=7\) so \((v,k)\) cannot be in \(T\). If the triangulated region only consists of this triangle, we are done. Otherwise, let \(i\) be the smallest value such that \((v,i)\in T\). We first assume \(i<k\).
We cannot have \((n,k)\in T\) since this would cross \((v,i)\). Let \(j\) be the smallest value such that \((n,j)\in T\). We know such a value exists by the previous discussion, and we know that \(j\leq i\) since otherwise \(T\) would contain a pair of intersecting arcs.
Since \(j\) is minimal, \((n,j)\) is in the same triangle in the triangulation as the boundary arc \((1,n)\). The third side of this triangle, \((1,j)\), only passes through the triangulated part of \(\mathcal{D}\). This means that \(f_{\mathcal{D}}(1,j)\in\mathbb{Z}\). Since we need this to be a unit, it must be that \(f_{\mathcal{D}}(1,j)=1\). Now, since \((1,k)\) and \((n,j)\) cross, we can use the Ptolemy relation to analyze the relationship between \(f_{\mathcal{D}}(1,k)\) and \(f_{\mathcal{D}}(n,j)\):
\[f_{\mathcal{D}}(n,j)=f_{\mathcal{D}}(n,j)f_{\mathcal{D}}(1,k)=f_{\mathcal{D}} (1,n)f_{\mathcal{D}}(j,k)+f_{\mathcal{D}}(1,j)f_{\mathcal{D}}(k,n)=b+\ell_{1}\]
where \(b\geq 1\) is a positive integer since the arc \((j,k)\) only crosses triangles. We again see here that \(f_{\mathcal{D}}(n,j)\) is not a unit: its norm is \(|(b+1)^{2}-2|\geq 2\). Therefore, it is impossible to create such a triangulation.
If we had \(i>k\) instead, we could again show that such a triangulation is impossible by repeating the above arguments with vertices \(m\) and \(m+1\).
## 5 Type 3 Dissections
In this section we verify that Conjecture 1 is true for another family of dissections.
**Definition 6**.: A _type 3_ dissection is a polygon dissected into quadrilaterals and triangles such that each vertex belongs to no more than three subpolygons. Equivalently, viewing the dissected polygon as a graph, every vertex of the polygon has degree at most \(4\).
**Remark 2**.: Note that a similarly defined _type 2_ dissection would be a stack, a tower, or a gluing of two towers. The fact that a stack cannot be given a unitary triangulation follows from Lemma 1 while a tower or gluing of two towers can be given a unitary triangulation by Theorem 1. Thus, we know our conjecture is true for type 2 dissections.
We first introduce a useful class of triangles in a triangulation. Let a _basic triangle_ in a dissection of a surface be a triangle with exactly two sides along the boundary of the surface. The following fact can be found for example in [1].
**Lemma 4**.: _In any triangulation of an \(n\)-gon, \(n\geq 4\), there are at least two basic triangles._
Proof.: Let \(P\) be an \((n+3)\)-gon and let \(T\) be a triangulation of \(P\). Then, \(T\) consists of \(n\) arcs and divides \(P\) into \(n+1\) triangles.

We will count, with multiplicity, the number of non-boundary sides of triangles in two ways. On the one hand, since there are \(n\) arcs in \(T\), this number must be \(2n\). Let \(A_{1}\geq 0\) be the number of basic triangles, \(A_{2}\geq 0\) the number of triangles with one edge along the boundary, and \(A_{3}\geq 0\) the number of triangles with no edges along the boundary. Then, \(n+1=A_{1}+A_{2}+A_{3}\), and

\[2n=A_{1}+2A_{2}+3A_{3}\geq A_{1}+2(A_{2}+A_{3})=A_{1}+2(n+1-A_{1})=2n+2-A_{1}.\]

Rearranging gives \(A_{1}\geq 2\).
We will show Conjecture 1 holds for type 3 dissections by describing an algorithm for building triangulations of a polygon which are unitary with respect to \(f_{\mathcal{D}}\) and then showing that \(\mathcal{D}\) must be a gluing of towers in order for our algorithm to terminate in step (2).
**Triangulation Algorithm.** The input of our algorithm will be an \(n\)-gon \(P=P_{0}\) on vertices \(\{0,\ldots,n-1\}\) and a dissection \(\mathcal{D}\) of \(P_{0}\) into triangles and quadrilaterals. We initialize \(T=\emptyset\). Notice we do not change the dissection \(\mathcal{D}\) during the algorithm.
1. Let \(i=0\).
2. If \(i=n-3\), then \(P_{i}\) is a triangle and the algorithm terminates.
3. If \(i<n-3\), and there exists a diagonal \(\tau_{i}=\{a,b\}\) which forms a basic triangle in \(P_{i}\) such that \(f_{\mathcal{D}}(a,b)\in\mathbb{Z}[\sqrt{2}]^{\times}\), we add \(\tau_{i}\) to \(T\). We form an \((n-i-1)\)-gon \(P_{i+1}\) by removing the boundary edges of this basic triangle, so that \((a,b)\) is now a boundary edge. Add \(1\) to \(i\) and return to step \(2\).
4. If \(i<n-3\), and for every diagonal \((a,b)\) forming a basic triangle in \(P_{i}\), \(f_{\mathcal{D}}(a,b)\notin\mathbb{Z}[\sqrt{2}]^{\times}\), then the algorithm terminates.
If the algorithm terminates in step (2), then the set \(T\) is a unitary triangulation of \(P\). If the algorithm terminates in step (4), then it is impossible for the partial triangulation \(T\) to be completed to a unitary triangulation.
There will often be more than one possible arc that we could add in step (3) of the Triangulation Algorithm, and this choice can affect future choices. Therefore, we must run the Triangulation Algorithm multiple times to exhaustively test whether there exists a unitary triangulation of a polygon \(P\). The number of times we would need to run the algorithm is bounded above by the product of the number of triangulations
of \(P\) and the number of permutations of the arcs in each triangulation; in particular, we only need to run the algorithm a finite number of times.
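For concreteness, here is a minimal Python rendering of one run of the algorithm (our own sketch, not code from the article; `frieze` and `is_unit` are assumed oracles for \(f_{\mathcal{D}}\) and for testing units in \(\mathbb{Z}[\sqrt{2}]\)):

```python
def triangulation_algorithm(n, frieze, is_unit):
    """One run of the Triangulation Algorithm on an n-gon. Returns a
    unitary triangulation T, or None if it terminates in step (4)."""
    vertices = list(range(n))   # the current polygon P_i as a cyclic vertex list
    T = []
    while len(vertices) > 3:    # step (2): stop successfully at a triangle
        for idx in range(len(vertices)):
            a = vertices[idx - 1]                    # diagonal (a, b) cutting
            b = vertices[(idx + 1) % len(vertices)]  # off a basic triangle
            if is_unit(frieze(a, b)):                # at vertices[idx]
                T.append((a, b))                     # step (3): add the arc
                vertices.pop(idx)                    # (a, b) becomes boundary
                break
        else:
            return None                              # step (4): no unit ear
    return T
```

Note that this sketch makes one fixed sequence of ear choices; as discussed above, exhaustively testing for a unitary triangulation means rerunning it over the different choice orders.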
In Theorem 1, we showed that unitary triangulations always exist with respect to dissections which are the result of gluing towers. Thus, we will show Conjecture 1 is true for type 3 dissections by showing, conversely, that the Triangulation Algorithm will never terminate with a triangulation of \(P\) (i.e., it will never terminate in step (2)) if \(\mathcal{D}\) is not a gluing of towers.
**Theorem 2**.: _If \(\mathcal{D}\) is a type \(d\) dissection of a polygon \(P\), for \(d\leq 3\), which is not a gluing of towers, then the Triangulation Algorithm will never produce a unitary triangulation when it begins with \(P\) and \(\mathcal{D}\). That is, the algorithm will always terminate in step (4)._
We will prove a series of smaller results and then put them together to prove Theorem 2. We begin by showing that the first arcs which appear in the Triangulation Algorithm must be tower arcs or arcs from \(\mathcal{D}\).
**Lemma 5**.: _Consider a basic triangle with both boundary edges having weight 1. Then, the non-boundary edge of this triangle is either a tower arc, an arc in the dissection, or has non-unit weight._
Proof.: If \(e\) is a diagonal skipping exactly one vertex, \(v\), in a polygon \(P\), then \(f_{\mathcal{D}}(e)=\sum_{p_{i}}\lambda_{p_{i}}\), where we sum over the sizes \(p_{i}\) of the subpolygons that \(v\) is incident to (see [1]). Since the positive units in \(\mathbb{Z}[\sqrt{2}]\) are \((1+\sqrt{2})^{n}\), and a vertex can be incident to at most three subpolygons in a type 3 dissection, the only units we will see are \(1\) and \(1+\sqrt{2}\). The former occurs when \(e\) is an arc in the dissection bounding a triangle, and the latter occurs when \(e\) is a tower arc.
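The short case check in this proof can be reproduced mechanically; a sketch (our own) enumerating the possible weights \(\sum\lambda_{p_{i}}\) and their norms:

```python
from itertools import combinations_with_replacement
from sympy import Abs, expand, sqrt, sympify

lam = {3: sympify(1), 4: sqrt(2)}       # lambda_3 = 1, lambda_4 = sqrt(2)
for k in (1, 2, 3):                     # v lies in at most three subpolygons
    for combo in combinations_with_replacement((3, 4), k):
        w = sum(lam[p] for p in combo)
        # Field norm |a^2 - 2b^2|, computed via the conjugate sqrt(2) -> -sqrt(2).
        norm = Abs(expand(w * w.subs(sqrt(2), -sqrt(2))))
        print(combo, w, "unit" if norm == 1 else f"norm {norm}")
# Only (3,) -> 1 and (3, 4) -> 1 + sqrt(2) are units, as claimed.
```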
It is possible that, in the course of running the triangulation algorithm, arcs other than tower arcs appear. These will be the result of forming basic triangles with arcs which were not boundary arcs in the original polygon.
**Definition 7**.: Consider a dissection in which an \(i\)-tower and a \(j\)-tower, with \(i\geq 1\) and \(j\geq 0\), share one vertex. Moreover, this vertex is also part of a triangle; see Figure 1. We refer to the arc between the roof points of the towers (where we consider the roof point of the \(j\)-tower to be the vertex not adjacent to the \(i\)-tower) as a _Pell arc_. This is the arc \((a,b)\) in the notation of Figure 1.
A Pell arc between an \(i\)-tower and a \(j\)-tower has weight \(\ell_{i+j+1}\).
The name "Pell arc" is explained in the following remark, showing a relationship with the Pell numbers.
**Remark 3**.: We show here that, if we remove the restriction of a type 3 dissection, then Pell arcs, with general \(i\) and \(j=0\) in the notation of Figure 1, sit in a larger family of arcs with unit weight. Consider a \((2m)\)-gon, with vertices \(\{0,\dots,2m-1\}\), triangulated with the arcs \(\{(1,2m-1)\}\cup\{(2i,2m-2i),(2i,2m-2i+1):0<2i<m\}\cup\{(2i-1,2m-2i),(2i-1,2m-(2i-1)):m<2i+1<2m-1\}\). As a set of \(2m-3\) non-crossing arcs, this set gives a triangulation. Glue the boundary edge \((0,2m-1)\) of this triangulated polygon onto one of the boundary edges of the last quadrilateral of a tower as below. Then, if the roof point of the tower is \(w\), we claim that \(f_{\mathcal{D}}(w,m)=\ell_{i+m-1}\).
Using \(f_{\mathcal{D}}(0,2m-1)=1\), and the fact that \((0,2m-1)\) and \((w,m)\) cross, we can express \(f_{\mathcal{D}}(w,m)\) in terms of arcs in the triangulated subpolygon and tower arcs,
\[f_{\mathcal{D}}(w,m)=\ell_{i}f_{\mathcal{D}}(0,m)+\ell_{i-1}f_{\mathcal{D}}(2 m-1,m).\]
As a consequence of Theorem A in [1], \(\frac{f_{\mathcal{D}}(0,m)}{f_{\mathcal{D}}(2m-1,m)}\) is given by the continued fraction \([2,2,\dots,2]\) consisting of \(m-1\) \(2\)'s. It is well-known that these continued fractions have consecutive Pell numbers \(Q_{m}\) and \(Q_{m-1}\) in the numerator and denominator, and \(\gcd(Q_{m},Q_{m-1})=1\). Recall the Pell numbers \(Q_{k}\) are initialized by \(Q_{0}=0,Q_{1}=1\) and, for \(k\geq 2\), \(Q_{k}=2Q_{k-1}+Q_{k-2}\). Therefore, we have that \(f_{\mathcal{D}}(w,m)=Q_{m}\ell_{i}+Q_{m-1}\ell_{i-1}=\ell_{i-1}((Q_{m}+Q_{m-1})+Q_{m}\sqrt{2})\).

Figure 1: The arc between \(a\) and \(b\) is a Pell arc.
Now, recall our notation from Proposition 2, where \(a_{n}\) was the numerator of \([1,2,\ldots,2]\) with \(n-1\) entries \(2\) and \(b_{n}\) was the denominator of the same continued fraction. We can use the identities from the proof to show \(a_{n}=b_{n}+b_{n-1}\). Since the \(b_{n}\) are exactly the Pell numbers, we have that \(Q_{m}\sqrt{2}=b_{m}\sqrt{2}\) and \(Q_{m}+Q_{m-1}=b_{m}+b_{m-1}=a_{m}\). Thus, from Proposition 2, we conclude that \(Q_{m}+Q_{m-1}+Q_{m}\sqrt{2}=s_{m}+d_{m}=\ell_{m}\), so that \(f_{\mathcal{D}}(w,m)=\ell_{i+m-1}\).
When \(2m>4\), the dissections described here are not type 3 dissections as they require at least one vertex to be incident to four subpolygons. The existence of these arcs is part of our motivation to focus on type \(d\) dissections for \(d\leq 3\), given that our current techniques rely on checking a finite number of cases. However, the existence of these arcs does not lead us to believe Conjecture 1 is false.
We show next that Pell arcs are the only new type of arcs which can appear in our triangulation algorithm once we have a partial triangulation with arcs from \(\mathcal{D}\) and tower arcs.
**Lemma 6**.: _An arc that forms a triangle with two tower arcs or a tower arc and a boundary arc is either a tower arc, a Pell arc, or has non-unit weight._
Proof.: Since each vertex is incident to at most three subpolygons in the dissection, we can construct a finite list of ways we can form a triangle which has either two tower arcs as edges or a tower arc and a boundary arc as edges. See Appendix A for a table depicting every such case; note that some entries denote multiple cases. We can prove that, if the third arc is not a tower arc or a Pell arc, then the third arc has non-unit weight by analyzing each such case. The cases that give tower or Pell arcs are entry 2 where both optional shapes are triangles (this gives a tower arc), entry 3 where the optional shape is a quadrilateral (this gives a Pell arc where one tower is a 0-tower), and entry 17 where the optional shape is a triangle (this gives a more general Pell arc). Thus, we prove this Lemma by analyzing all other cases.
Here, we provide a sample calculation showing that, in entry 12 of the table in Appendix A, where the dashed line is deleted, the third arc does not have unit weight. Let \(i,j\geq 1\). Note that the arc \((u,v)\) forms a triangle with the tower arcs \((u,x)\) and \((x,v)\).
Using the Ptolemy relation on the crossing of \((u,v)\) and \((x,y)\), we have
\[f_{\mathcal{D}}(u,v) =f_{\mathcal{D}}(u,v)f_{\mathcal{D}}(x,y)=f_{\mathcal{D}}(u,x)f_{ \mathcal{D}}(v,y)+f_{\mathcal{D}}(u,y)f_{\mathcal{D}}(v,x)\] \[=\ell_{i-1}d_{j}+\ell_{i}\ell_{j}=\ell_{i-1}(d_{j}+\ell_{j+1}).\]
Indeed, \((u,x),(u,y)\), and \((v,x)\) are tower arcs, and \(f_{\mathcal{D}}(v,y)=d_{j}\) by definition.
Since the norm function is multiplicative and \(N(\ell_{i-1})=1\), we have \(N(f_{\mathcal{D}}(u,v))=N(d_{j}+\ell_{j+1})\). Then, by separately evaluating cases where \(j\) is even or odd and using Proposition 2, we can show in each case that \(N(d_{j}+\ell_{j+1})>1\), which implies that \(f_{\mathcal{D}}(u,v)\) is not a unit.
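For instance, at \(j=1\) (a check we add for concreteness):

\[d_{1}+\ell_{2}=\sqrt{2}+(3+2\sqrt{2})=3+3\sqrt{2},\qquad N(3+3\sqrt{2})=\left|3^{2}-2\cdot 3^{2}\right|=9>1.\]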
The remaining cases can be shown with similar calculations.
Even though Pell arcs can appear during the Triangulation Algorithm, the next result shows that they will not be present in a unitary triangulation of a polygon with respect to a frieze from a type 3 dissection.
**Lemma 7**.: _The Triangulation Algorithm cannot successfully produce a unitary triangulation if in at least one step it adds a Pell arc to the set \(T\)._
Proof.: As in Lemma 6, we can write a finite list of cases for how each vertex can appear in a triangle with one side a Pell arc and a second side which is either a Pell arc, a tower arc, or from \(\mathcal{D}\). Then, we can compute that, in each case, the third arc of the triangle cannot have unit weight. We omit the full list for sake of brevity and instead provide a few example calculations.
In some cases, we can compute this by direct calculation. Consider the following triangle; arc \((u,y)\) is a tower arc while arc \((y,v)\) is a Pell arc. We assume the tower containing \(u\) is an \(i\)-tower, the tower containing vertex \(x\) is a \(j\)-tower and the tower containing vertex \(v\) is a \(k\)-tower for \(i,j\geq 1\) and \(k\geq 0\).
If we apply the Ptolemy relation to the intersection of \((u,v)\) and \((x,y)\) and perform some simplifications, we find that
\[f_{\mathcal{D}}(u,v)=\ell_{i+k}(\ell_{j+1}+\ell_{j}+\ell_{j-1}+(d_{j}+\sqrt{2}d _{j-1})).\]
It suffices to check whether the expression \(\ell_{j+1}+\ell_{j}+\ell_{j-1}+(d_{j}+\sqrt{2}d_{j-1})\) is a unit. This expression clearly has the strict lower bound of \(\ell_{j+1}\), so the smallest unit this could be is \(\ell_{j+2}\). The expression would equal \(\ell_{j+2}\) if and only if
\[\sqrt{2}\ell_{j+1}=\ell_{j}+\ell_{j-1}+(d_{j}+\sqrt{2}d_{j-1}).\]
We reduce both sides to be in terms of \(s_{j-1}\) and \(d_{j-1}\). The left hand side reduces to
\[\sqrt{2}\ell_{j+1}=(4+3\sqrt{2})\ell_{j-1}=(4+3\sqrt{2})(s_{j-1}+d_{j-1}),\]
while the right-hand side reduces to

\[\ell_{j}+\ell_{j-1}+d_{j}+\sqrt{2}d_{j-1}=(3+\sqrt{2})s_{j-1}+(2+3\sqrt{2})d_{j-1}.\]
We see here that \(\sqrt{2}\ell_{j+1}>\ell_{j}+\ell_{j-1}+(d_{j}+\sqrt{2}d_{j-1})\), since the coefficients on \(s_{j-1}\) and \(d_{j-1}\) on the left are strictly larger. We thus have \(\ell_{j+2}>\ell_{j+1}+\ell_{j}+\ell_{j-1}+(d_{j}+\sqrt{2}d_{j-1})>\ell_{j+1}\), and we know there are no units strictly between \(\ell_{j+1}\) and \(\ell_{j+2}\). Therefore, the original expression for \(f_{\mathcal{D}}(u,v)\) cannot be equal to any unit.
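A quick numerical check (our own) that the expression is indeed trapped strictly between consecutive units:

```python
from sympy import sqrt

ell = lambda n: (1 + sqrt(2)) ** n
s, d = [0], [1]                           # s_0 = 0, d_0 = 1
for n in range(1, 8):
    s.append(sqrt(2) * s[-1] + d[-1])
    d.append(sqrt(2) * d[-1] + s[-2])     # s[-2] is still s_{n-1} here
for j in range(1, 7):
    expr = ell(j + 1) + ell(j) + ell(j - 1) + d[j] + sqrt(2) * d[j - 1]
    assert float(ell(j + 1)) < float(expr) < float(ell(j + 2))
```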
We note there are some ways to combine two Pell arcs which produce another arc with unit weight, as shown below.
One can compute that \(f_{\mathcal{D}}(u,x)\) is a unit for such a configuration. However, it is a consequence of Lemma 6 and the aforementioned case-work involving triangles with one Pell arc and one tower arc that \((u,x)\) could only be produced in the Triangulation Algorithm if we already had Pell arcs \((u,y)\) and \((y,x)\). Moreover, the Pell arc \((u,y)\)
would only exist in the triangulation if we already had \((y,z)\). But then we see that we will never include \((u,x)\) because \((u,x)\) and \((y,z)\) cross.
If the pair of triangles next to \(y\) were on the other side of the tower, so that the picture below \(y\) is reflected across a vertical axis, then the corresponding arc could be reached in the Triangulation Algorithm. But in this case, computation shows that the arc will not have a unit weight.
These cases show that, even though we know that a Pell arc can bound a side of one triangle, we cannot find a second triangle along a Pell arc whose other sides have unit weight. Therefore, if we choose a Pell arc in the Triangulation Algorithm, the algorithm cannot possibly terminate with a unitary triangulation of the polygon.
The final piece of our proof of Theorem 2 is showing that it is equivalent to say that a dissection is not a gluing of towers and a dissection \(\mathcal{D}\) does not admit a triangulation into tower arcs and arcs from \(\mathcal{D}\).
**Lemma 8**.: _If an \(n\)-gon \(P\) with a dissection \(\mathcal{D}\) can be triangulated using only tower arcs and arcs from the dissection, then \(\mathcal{D}\) must be a gluing of towers._
Proof.: If it is possible to triangulate a polygon \(P\) with dissection \(\mathcal{D}\) using only tower arcs and arcs from \(\mathcal{D}\), then we can see \(\mathcal{D}\) as a gluing of towers as follows. For every set of tower arcs which share a common endpoint at a roof point, we take the set of subpolygons triangulated by these arcs to be one tower. The arcs from \(\mathcal{D}\) glue these towers together.
We are now ready to prove the main result in this section.
Proof of Theorem 2.: Consider an \((n+3)\)-gon \(P_{0}\), for \(n>0\), with a dissection \(\mathcal{D}\) such that \(\mathcal{D}\) is a type 3 dissection and \(\mathcal{D}\) is not a gluing of towers. By Lemma 5, either the triangulation algorithm finds an arc \(\tau_{1}\) which is a tower arc or an arc from \(\mathcal{D}\), or the algorithm terminates at the first step. So suppose that the algorithm finds such an arc \(\tau_{1}\).
By Lemma 8, at some point the algorithm must either terminate or use an arc which is not a tower arc or from \(\mathcal{D}\). Suppose that at step \(k\), the algorithm finds such an arc \(\tau_{k}\) with \(f_{\mathcal{D}}(\tau_{k})\in\mathbb{Z}[\sqrt{2}]^{\times}\) where \(\tau_{k}\) is not a tower arc nor an arc in \(\mathcal{D}\); moreover, let \(k\) be the minimal number such that this is true. By Lemma 6, \(\tau_{k}\) must be a Pell arc since \(\tau_{1},\ldots,\tau_{k-1}\) were tower arcs or arcs from \(\mathcal{D}\). By Lemma 7, we then know the algorithm cannot terminate with a unitary triangulation after we have chosen \(\tau_{k}\).
Theorems 1 and 2 combine to show that a unitary triangulation is only possible for a type 3 dissection if it can be decomposed into a set of towers. Type 1 dissections are trivial to check, and Remark 2 explained the case of type 2 dissections.
**Corollary 2**.: _Conjecture 1 holds for type 1, 2, and 3 dissections._
## Acknowledgements
This project was largely completed at the University of Minnesota School of Mathematics Summer 2021 REU program and was partially supported by NSF RTG grant DMS-1148634 and NSF grant DMS-1949896. The authors would like to thank Vic Reiner for organizing the REU and Trevor Karn for helping with computations on Sage.
|
2307.10653 | Refining the Optimization Target for Automatic Univariate Time Series
Anomaly Detection in Monitoring Services | Time series anomaly detection is crucial for industrial monitoring services
that handle a large volume of data, aiming to ensure reliability and optimize
system performance. Existing methods often require extensive labeled resources
and manual parameter selection, highlighting the need for automation. This
paper proposes a comprehensive framework for automatic parameter optimization
in time series anomaly detection models. The framework introduces three
optimization targets: prediction score, shape score, and sensitivity score,
which can be easily adapted to different model backbones without prior
knowledge or manual labeling efforts. The proposed framework has been
successfully applied online for over six months, serving more than 50,000 time
series every minute. It simplifies the user's experience by requiring only an
expected sensitive value, offering a user-friendly interface, and achieving
desired detection results. Extensive evaluations conducted on public datasets
and comparison with other methods further confirm the effectiveness of the
proposed framework. | Manqing Dong, Zhanxiang Zhao, Yitong Geng, Wentao Li, Wei Wang, Huai Jiang | 2023-07-20T07:33:36Z | http://arxiv.org/abs/2307.10653v1 | Refining the Optimization Target for Automatic Univariate Time Series Anomaly Detection in Monitoring Services
###### Abstract
Time series anomaly detection is crucial for industrial monitoring services that handle a large volume of data, aiming to ensure reliability and optimize system performance. Existing methods often require extensive labeled resources and manual parameter selection, highlighting the need for automation. This paper proposes a comprehensive framework for automatic parameter optimization in time series anomaly detection models. The framework introduces three optimization targets: prediction score, shape score, and sensitivity score, which can be easily adapted to different model backbones without prior knowledge or manual labeling efforts. The proposed framework has been successfully applied online for over six months, serving more than 50,000 time series every minute. It simplifies the user's experience by requiring only an expected sensitive value, offering a user-friendly interface, and achieving desired detection results. Extensive evaluations conducted on public datasets and comparison with other methods further confirm the effectiveness of the proposed framework.
## 1 Introduction
Industrial monitoring services typically oversee millions of time series data points on a daily basis, making timely and accurate time series anomaly detection vital for maintaining reliability and optimizing the performance of diverse systems and applications.
Various methods have been employed in the field of time series anomaly detection. For instance, statistical models analyze patterns in time series data using a historical time window and identify time points with extreme deviations as anomalies [1, 13]. Forecasting-based approaches, including traditional models like moving average [23], as well as neural sequence models such as LSTM [1] and Transformer [15], aim to predict values based on a given range of time series inputs, with anomalies characterized by significant deviations from the predicted values. On the other hand, reconstruction-based methods concentrate on reconstructing time series data, assuming that anomalies exhibit high reconstruction errors [1].
Despite the effectiveness of existing time series anomaly detection methods, they often require extensive labeled resources to achieve optimal performance. Additionally, manual and careful parameter selection is necessary to accommodate diverse needs. This underscores the urgent need for automation in time series anomaly detection, as it can alleviate the reliance on labeled resources and automate the parameter selection process.
Existing strategies for automatic parameter tuning can be classified into three categories. The first approach, as demonstrated in Prophet [12], involves optimizing parameters based solely on prediction errors. However, relying solely on prediction metrics can result in overfitting to anomalies, leading to increased false negatives. The second approach treats parameter tuning as a prediction task itself, where a model is trained to directly predict the best parameters for a given algorithm [14, 15]. This method requires prior knowledge and a significant amount of labeled resources for each algorithm. Additionally, when a new model is introduced, manual parameter tuning is necessary again to generate the correct parameter labels. The third approach treats anomaly detection as a binary classification problem and optimizes parameters based on anomaly labels [11]. However, this approach faces challenges when applied to industrial monitoring platforms. It heavily relies on labeled anomalies in the training data, which are often missing or difficult to obtain in general monitoring services. Moreover, this approach is not suitable for handling new incoming time series data, rendering the model unavailable for real-time applications.
The aforementioned automatic parameter tuning solutions are not applicable to monitoring services due to the following reasons. First, monitoring services typically handle an immense volume of time series data, making it impractical to label anomaly points for each individual time series. Second, monitoring services cater to diverse user groups, each with varying needs and sensitivities towards anomalies. For instance, some users may prefer an anomaly detector that only reports the most severe anomalies, while others may want to monitor all potential anomalies. Furthermore, users often lack in-depth expertise in anomaly detection algorithms.
If the optimized parameters do not meet their requirements, users would need to invest significant time and effort to gain proficiency in the detection algorithms for further fine-tuning.
To address the aforementioned challenges, we present a comprehensive framework for the automatic optimization of parameters in time series anomaly detection models, irrespective of the model type. Our framework introduces three optimization targets: prediction score, shape score, and sensitivity score. The prediction score guides the optimization process for prediction-based methods, while the shape score evaluates the visual shape of the detection results. The sensitivity score measures whether the model's detection performance aligns with the user's expected number of anomaly points. The framework can seamlessly adapt to new model backbones or new time series data. It accomplishes this by optimizing either one or multiple optimization targets, without necessitating prior knowledge or manual labeling efforts. Through extensive evaluations and real-world deployment for over six months, our framework has demonstrated remarkable results. From the user's perspective, our framework simplifies the process by requiring only a sensitivity value, as illustrated in Figure 1, enabling fully automated time series anomaly detection without manual intervention. We also offer a user-friendly fine-tuning interface with a small set of easily understandable parameters. Currently, our automatic time series anomaly detection framework is the most widely utilized algorithm in our monitoring platform, effectively serving over 50,000 time series every minute. In summary, our work makes the following key contributions:
* We formalize the optimization targets for parameter tuning, namely the prediction score, shape score, and sensitivity score. This enables automatic optimization of different model backbones by focusing on one or multiple targets.
* We introduce the shape score as a novel optimization target, evaluating the performance of time series anomaly detection based on intuitive observations. Our framework facilitates achieving the most appropriate detection shape for anomaly detection results. We also provide a user-friendly fine-tuning function with a small set of simple parameters that are easily understandable by users. This fine-tuning feedback serves as a valuable resource to improve our shape score model.
* We have implemented our proposed framework online, effectively serving multiple user groups and processing over 50,000 time series data every minute. The effectiveness of our approach is evident from the online deployment, where users effortlessly obtain desired detection results by providing a sensitivity value.
## 2 Related Work
AutoML has gained significant popularity in machine learning to enhance performance metrics. In the realm of time series analysis, AutoML techniques have been employed to automate various aspects, such as data cleaning and filling missing points [2] from the data side, as well as model selection and parameter tuning [16] from the model side. However, in this study, we specifically concentrate on the automation of parameter tuning for time series anomaly detection.
One approach involves directly optimizing the model parameters based on prediction errors. For example, Prophet [14] provides a function to optimize the model's parameters using metrics like root mean squared error (RMSE) and mean absolute percentage error (MAPE). However, optimizing a model solely based on prediction errors can cause it to fit to every point, including anomalies, resulting in an increased number of false negatives. Another approach, exemplified by TODS [13], treats anomaly detection as a binary classification problem and optimizes parameters using anomaly labels, typically relying on metrics such as F1 score or precision. However, this kind of approach is challenging to apply to cases without labeled data, making it impractical for real-world monitoring services where obtaining all the necessary labels beforehand is impossible. The last kind of approach trains a model to directly predict the best hyperparameters for a given method. This approach has been successfully employed by industry leaders such as Microsoft [15] and Facebook [12]. For instance, Ying et al. [19] utilized a LightGBM [17] regression model to learn the optimal hyperparameters for various anomaly detection models, enabling the prediction of the best parameter values when encountering new time series. Similarly, Zhang et al. [2021] employed an offline exhaustive parameter tuning process to determine the best-performing hyperparameters for different model and data combinations. They trained a multi-task neural network, where each task focused on predicting a specific parameter value, which could be either categorical or numerical. It is important to note that this method requires prior knowledge and a significant amount of labeled resources for each algorithm. Furthermore, when introducing a new model, manual parameter tuning is still necessary to generate the correct parameter labels.
To address the challenges mentioned above, we propose three primary optimization targets for automatic univariate time series anomaly detection, which can be seamlessly applied to various model backbones while minimizing the need for extensive labeling efforts.
## 3 Optimization Targets
We propose three general optimization targets for effectively optimizing the parameters of the anomaly detection model.
Figure 1: Example of our user interface for automatic time series anomaly detection
These optimization targets serve as evaluation metrics to assess the performance of the detection results. For example, for forecasting models, the detection output resembles the illustration in Figure 2. The prediction score evaluates the disparity between the raw time series values \(x\) and their corresponding forecasted values \(\hat{x}\). On the other hand, the shape score measures the shape of the detection boundary by considering the raw value \(x\), the upper boundary \(u\), and the lower boundary \(l\) as inputs to a shape score model \(f\), denoted as \(f(x,u,l)\), which outputs a score indicating the performance of the detection boundary. The sensitivity score governs the number of anomalies detected in the results. For instance, if the detected anomalies are represented as \(\mathcal{A}\) and a user's desired anomaly proportion is set at \(p=1\%\), the model strives to identify a suitable threshold that yields detection results containing approximately 1% anomaly points, expressed as \(\hat{p}=|\mathcal{A}|/|x|\). Therefore, in the case of forecasting models, the sensitivity score directly controls the width of the detection boundary.
It is important to note that not all optimization targets are applicable to every method. For instance, the prediction score is not relevant for methods that do not involve forecasting values. Nonetheless, we demonstrate that nearly all methods can be optimized using at least one of the proposed optimization targets. We will provide further details on setting the optimization targets for different methods. In the subsequent sections, we will outline the specifics for each optimization target. Overall, we present an automatic parameter tuning framework that tackles the following problem:
**Problem definition.** Given a time series \(x\), an anomaly detection model \(d\), a set of parameters \(\theta\), and a desired sensitivity value \(p\), the parameter tuning framework aims to discover the optimal parameter set \(\hat{\theta}\) for the model \(d\) based on one or multiple optimization targets. These optimization targets consist of the prediction score, shape score, and sensitivity score.
### Prediction Score
Prediction score is used to optimize the parameter \(\theta\) for a forecasting model \(d_{\theta}\) by minimizing the prediction error:
\[min_{\theta}\quad\mathcal{L}(x,d_{\theta}(x)) \tag{1}\]
where the evaluation metrics are commonly chosen from the following (minimal implementations are sketched in code after the list):
* Mean absolute error (MAE): \(\frac{1}{N}\Sigma_{i=1}^{N}|x_{i}-\hat{x}_{i}|\)
* Median absolute error (MEDAE): \(\text{median}|x_{i}-\hat{x}_{i}|\)
* Root mean squared error (RMSE): \(\sqrt{\frac{1}{N}\Sigma_{i=1}^{N}(x_{i}-\hat{x}_{i})^{2}}\)
* Mean absolute percentage error (MAPE): \(\frac{100}{N}\Sigma_{i=1}^{N}|\frac{x_{i}-\hat{x}_{i}}{x_{i}}|\)
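Minimal `numpy` versions of these metrics (our own sketch; note that MAPE assumes the raw values \(x_{i}\) are nonzero):

```python
import numpy as np

def mae(x, x_hat):
    return np.mean(np.abs(x - x_hat))

def medae(x, x_hat):
    return np.median(np.abs(x - x_hat))

def rmse(x, x_hat):
    return np.sqrt(np.mean((x - x_hat) ** 2))

def mape(x, x_hat):
    return 100.0 * np.mean(np.abs((x - x_hat) / x))
```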
In contrast to traditional time series forecasting tasks, prediction-based time series anomaly detection tasks aim to predict the normal pattern rather than every individual point, including anomalies. In real-world time series data, the raw time series may indeed contain anomaly points, such as sudden spikes and dips. However, if a model is trained directly on raw inputs, it may inadvertently learn from both the noise and anomalies present in the data. Therefore, smoothing strategies are necessary to obtain suitable training prediction targets, allowing the model to learn to fit the normal pattern. Table 1 illustrates the performance of the model with different smoothing strategies. The smoothed inputs \(\widetilde{x}\) are considered the correct labels for prediction, and the loss is evaluated based on \(\mathcal{L}(\widetilde{x},f_{\theta}(x))\). It is evident that employing simple smoothing strategies, such as filtering extreme values and applying moving averages to each time point, significantly enhances model performance.
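The paper does not spell out the exact filter, but a plausible reading of "Filter + MA" is percentile clipping followed by a centered moving average; a sketch along those lines (the percentile and window choices below are illustrative assumptions):

```python
import numpy as np

def smooth(x, clip_pct=5, window=9):
    # "Filter": clip extreme values to robust percentile bounds.
    lo, hi = np.percentile(x, [clip_pct, 100 - clip_pct])
    filtered = np.clip(x, lo, hi)
    # "MA": centered moving average applied to each time point.
    kernel = np.ones(window) / window
    return np.convolve(filtered, kernel, mode="same")
```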
Apart from the smoothing strategy, we find that setting the correct prediction loss can further enhance the prediction performance. Table 2 shows the results of using different evaluation metrics as the optimization target. We can see that using MAPE as the optimization target yields a model with the best performance.
In summary, we use two strategies to make the optimization process for a prediction model resistant to noisy anomalies: one is to use the smoothed inputs as the prediction target, and the other is to use MAPE as the optimization target for model training.
### Shape Score
The shape score is utilized to evaluate the shape of the detection boundary and determine whether the outputs align with the ideal detection boundary as perceived by humans. For instance, in Figure 3, the left figure illustrates an example where a model has a lower prediction score but a higher shape score. In comparison, the right figure has a higher prediction score but the results appear to be overly fitted to each point, including the anomaly points, indicating a lower shape score. Intuitively, we prefer the model shown in the left figure because the spike in the figure is more likely to be an anomaly.
| Smooth Strategy | MAE | MEDAE | RMSE | MAPE |
| --- | --- | --- | --- | --- |
| None | 59.38\(\pm\)1.83 | 44.30\(\pm\)3.34 | 183.56\(\pm\)8.29 | 89.41\(\pm\)6.41 |
| Filter | 48.55\(\pm\)2.90 | 38.99\(\pm\)2.21 | 22.95\(\pm\)2.74 | 64.00\(\pm\)4.26 |
| Filter + MA | 44.89\(\pm\)1.70 | 35.99\(\pm\)1.31 | 2.93\(\pm\)0.27 | 59.31\(\pm\)2.79 |

Table 1: Model performance with different smooth strategies.
| Optimization Metric | MAE | MEDAE | RMSE | MAPE |
| --- | --- | --- | --- | --- |
| MAE | 47.73\(\pm\)1.83 | 37.72\(\pm\)1.33 | 8.24\(\pm\)0.24 | 65.95\(\pm\)2.72 |
| MEDAE | 47.89\(\pm\)1.64 | 57.84\(\pm\)1.24 | 8.09\(\pm\)0.23 | 64.48\(\pm\)2.59 |
| RMSE | 44.75\(\pm\)1.63 | 35.57\(\pm\)1.24 | 5.66\(\pm\)0.23 | 59.87\(\pm\)2.53 |
| MAPE | 44.61\(\pm\)1.63 | 35.59\(\pm\)1.25 | 2.11\(\pm\)0.23 | 59.54\(\pm\)2.59 |

Table 2: Model performance on different evaluation metrics with different optimization metrics.
Figure 2: Explanation of the three optimization targets for a forecasting model.
Thus, we can say that the shape of the detection results in the left figure is better than the shape of the detection results in the right figure. We quantify this intuition as the _shape score_ and train a shape score model to capture it. We formalize the shape score model \(f\) to take the raw values \(x\) and the boundaries \((u,l)\) as inputs and produce a single shape score value \(\hat{s}=f(x,u,l)\) ranging from 0 to 1, where a higher score indicates better performance in capturing the desired shape of the detection results. The shape score is used to optimize the parameter \(\theta\) for an anomaly detection model \(d_{\theta}\) by maximizing the shape score:
\[max_{\theta}\quad f(x,u,l)\quad u=g(d_{\theta}(x)),l=h(d_{\theta}(x)) \tag{2}\]
where \(g\) and \(h\) are transformations of the prediction values \(d_{\theta}(x)\).
**Dataset Formulation.** To train an effective shape score model, it is crucial to have a high-quality labeled dataset comprising both good and bad detection cases. We formulate the base training dataset using the following strategies. For the **good cases**, we employ a combination of data synthesis and manual labeling. Initially, we manually annotate a small subset from our monitoring services and assign shape scores ranging from 0 to 1 based on intuitive observations. For instance, in Figure 3 (a), a shape score of 1 is assigned, while in Figure 3 (b), a shape score of 0.6 is assigned. Additional instances will be marked as good cases via our user fine-tuning service mentioned in Section 4.3. A manual filtering strategy will be used to selectively add examples that exhibit new patterns compared to the existing labeled cases. In the synthesized dataset, we generate various base patterns \(x_{b}\) such as seasonal sine-like curves, sparse inputs, and random-walk-like patterns. To simulate anomalies in real-world monitoring services, we introduce noises and anomalies \(x_{a}\) to the base patterns, resulting in the synthesized data \(x=x_{b}+x_{a}\). Considering previous observations, we assert that the detection boundary should only learn from the base patterns. Thus, we set the ideal upper boundary as \(u=x_{b}+3\sigma\) and the ideal lower boundary as \(l=x_{b}-3\sigma\), where \(\sigma\) represents the standard deviation of the inputs. We label the shape score for this set of detection results \((x,u,l)\) as 1, indicating good performance. For the **bad cases**, we introduce eight types of anomalies into the detection boundaries of the good cases. These anomalies include inverting the upper and lower boundaries, positioning the lower boundary above the raw values, positioning the upper boundary below the raw values, excessively narrow or broad boundaries, boundaries with numerous high-deviation noises, boundaries with extreme value peaks, and boundaries aligned with significantly changed raw values. The shape scores assigned to these cases are lower than the original shape score, indicating poor performance in capturing the desired shape of the detection results.
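A sketch of how one good case can be synthesized along these lines (the base pattern, noise level, and spike magnitude below are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1440)                               # e.g. one day of minutes
x_b = 10 * np.sin(2 * np.pi * t / 288)            # seasonal base pattern
x_a = rng.normal(0, 1, t.size)                    # noise ...
x_a[rng.choice(t.size, 5, replace=False)] += 25   # ... plus spike anomalies
x = x_b + x_a                                     # synthesized series

sigma = np.std(x)
u, l = x_b + 3 * sigma, x_b - 3 * sigma           # ideal boundary: shape score 1
```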
Model Structure. The shape score model takes the raw value \(x\) and the boundaries \((u,l)\) as inputs. In real-world scenarios, the length \(N\) of the time series \(x\) can vary from days to weeks, requiring the shape score model to handle time series of different sizes. One approach is to train a shape score model that handles inputs with a fixed window size \(W\). If a time series is longer than this window, it is divided into multiple windows to obtain individual shape scores, and the sum of these scores is used as the final shape score. However, this approach has a limitation: the model can only focus on the shape score within a specific window, potentially missing anomalies that are only detectable when considering the entire time series. To address this issue, we need to reduce or increase the dimensionality of the time series to ensure consistent input dimensions for the shape score model. A suitable approach is to transform the input into images, since the shape score is also based on visual observations. Specifically, we utilize GASF (Gramian Angular Summation Field) [22] to represent the time series as an image. The core idea of GASF is to first use Piecewise Aggregate Approximation (PAA) [14] to smooth the time series while preserving its trends and reducing its size. Next, the reduced time series is projected onto a polar coordinate system, ensuring a bijective transformation that preserves the information. As a result, the inputs \((x,u,l)\) can be transformed into three 2-dimensional images, forming a composite image with three layers. This allows us to utilize deep learning frameworks, with a CNN (Convolutional Neural Network) chosen as the model backbone, to learn the shape score.
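A minimal numpy sketch of this PAA-plus-GASF transform; the image size, the rescaling to \([-1,1]\), and the channel-stacking convention are our assumptions, and the input is assumed to be at least as long as the target image size.

```python
import numpy as np

def paa(x, out_size):
    # Piecewise Aggregate Approximation: average over (near-)equal segments.
    return np.array([seg.mean() for seg in np.array_split(x, out_size)])

def gasf(x, image_size=32):
    # Gramian Angular Summation Field: GASF_ij = cos(phi_i + phi_j).
    x = paa(np.asarray(x, dtype=float), image_size)
    x_min, x_max = x.min(), x.max()
    x_tilde = 2 * (x - x_min) / (x_max - x_min + 1e-12) - 1   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x_tilde, -1.0, 1.0))              # polar-angle encoding
    return np.cos(phi[:, None] + phi[None, :])

# The three inputs become a three-layer image for the CNN backbone:
# image = np.stack([gasf(x), gasf(u), gasf(l)])               # shape (3, 32, 32)
```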
The optimization process for the shape score model involves minimizing the loss function \(\mathcal{L}(s,f(x,u,l))\). It is important to note that while the shape score model is still learned through supervised learning, our proposed method offers a distinct advantage compared to the methods described in [15] and [16]. Our approach presents a general model that can be applied to various methods and different time series datasets without the need for additional labeling efforts.
### Sensitive Score
Real-world monitoring services typically involve the monitoring of millions of time series data. In such scenarios, it becomes challenging for users to manually label all anomalies present in the data. Moreover, users may have varying degrees of sensitivity towards anomalies. Some users may prioritize capturing every possible anomaly, while others may focus only on the most extreme anomalies.
To accommodate these preferences, we introduce the notion of an anomaly ratio denoted as \(p\). The anomaly ratio represents the user's desired proportion of anomalies in the detection results. For instance, if a user sets \(p=0.05\), they expect roughly 5% of the points in the detection results to be flagged as anomalies. By adjusting the anomaly ratio, users can customize the sensitivity of the model to align with their specific requirements and priorities.

Figure 3: Examples of cases where a model has (a) a lower prediction score and a higher shape score, and (b) a higher prediction score and a lower shape score.
The most straightforward approach to tune a model based on the anomaly ratio \(p\) is to search for a threshold that precisely satisfies the desired ratio. However, this approach can result in two potential cases that lead to false negatives, as illustrated in Figure 4. In Figure 4(a), the missed peak point within the detection boundary could also be considered an anomaly. Similarly, in Figure 4(b), the points near the top 2% of anomaly points should also be identified as anomalies. To address this issue, it is crucial to consider the relationship between the threshold value and its corresponding anomaly ratio. Figure 5 illustrates this relationship for the time series data from Figure 4. We observe that increasing the threshold value at certain intervals can significantly reduce the detected anomaly ratio, while in other cases, increasing the threshold has only a slight effect on reducing the ratio. Based on this observation, we can determine the threshold value by identifying either 1) the starting points of low-derivative regions (knee points) or 2) the active points whose crossing significantly reduces the detected anomaly ratio. This transforms our problem into finding the threshold values for the knee points and active points in a given time series. To achieve this, we employ the methods proposed in [1] to detect the knee points, and we identify the active points by selecting those that contribute the most to the decrease in the anomaly ratio. As a result, we obtain a set of candidate threshold values \(T\), and the anomaly ratio for a given detection result at threshold \(t\) is \(p_{t}=P(f(x)|t)\). Our objective is to find a threshold that minimizes the distance to the user-defined anomaly ratio \(p\). We refer to this objective as the sensitivity score:
\[\min_{t}\;|p-p_{t}|\quad\text{s.t.}\quad t\in T \tag{3}\]
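A simplified sketch of this threshold search; here the candidate set \(T\) is built only from the active points (the thresholds whose crossing causes the largest drops in the detected anomaly ratio), standing in for the knee-point detector of [1]. The candidate count is an arbitrary choice.

```python
import numpy as np

def pick_threshold(scores, p, n_candidates=20):
    # scores: anomaly scores of all points; p: user-defined target anomaly ratio.
    ts = np.unique(scores)
    ratios = np.array([(scores > t).mean() for t in ts])   # p_t for each threshold t
    drops = -np.diff(ratios)                               # ratio decrease between steps
    active = ts[np.argsort(drops)[-n_candidates:]]         # thresholds just before the largest drops
    p_t = np.array([(scores > t).mean() for t in active])
    return active[np.argmin(np.abs(p - p_t))]              # min_t |p - p_t| over t in T
```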
## 4 The Automation Framework at eBay
At eBay, we have successfully integrated our proposed parameter tuning targets into our existing platform to optimize detection algorithm parameters (Figure 6). When a user submits a detection job, the data dumper retrieves the relevant time series data. Our automation framework uses a trained LightGBM [1] classifier to identify patterns, such as seasonality, sparsity, or randomness, and selects the appropriate detection model. The parameter tuning framework then determines optimal parameters for the selected method, categorized into prediction score, shape score, and sensitive score. The optimization is conducted sequentially, targeting each objective. If a method lacks parameters for a specific target, we optimize using the other targets. Overlapping parameters between prediction score and shape score are optimized based on their combined score. Once the algorithm and tuned parameters are determined, they are applied to the detection flow, generating the final results. In cases where users require further fine-tuning, their feedback becomes a valuable resource for updating the shape score model.
### Parameter Tuning for Different Methods
As mentioned earlier, we consider three types of time series patterns: seasonal, sparse, and random patterns. We employ three corresponding algorithms to detect anomalies in each pattern.
* **Random Pattern**. For random-walk-like patterns, we utilize a moving average method, similar to the approach proposed in [23], to detect anomalies. The prediction for the next point is based on the average value (mean or median) within a specified time window of size window_size, and the anomaly boundary is determined using a threshold-sigma calculation on the variation (a minimal sketch is given after this list). To optimize this method, we first select the parameters for average and window_size based on the shape score. Then, we optimize the threshold using the sensitive score.
* **Sparse Pattern**. In the case of sparse time series, we focus solely on extreme anomaly values. To detect such anomalies, we employ Extreme Value Theory (EVT) [16]. This method produces upper and lower boundaries without forecasting values. Two parameters control the initial location of the boundary: the first parameter truncates the original data distribution to focus on the higher values, while the second parameter sets the initial expected anomaly ratio. To optimize this method, we first tune these two parameters based on the shape score. Subsequently, we optimize another parameter, the threshold that directly controls the boundary, using the sensitive score.
* **Seasonal Pattern**. For seasonal time series data, we utilize a seasonal decomposition method [11] to detect anomalies. The prediction for a point is based on its value within the same past seasonal window, as well as values from the past time window. Parameters controlling the sizes of the seasonal, trend, and residual windows are tuned based on the prediction score. The anomaly boundary for this method is determined by the distribution of prediction residuals. Additionally, there is a threshold parameter that controls the width of the boundary, which we optimize using the sensitive score.

Figure 4: Examples of thresholds tuned based on the given anomaly ratio only.

Figure 5: The relationship between threshold value and anomaly ratio for the time series shown in Figure 4.

Figure 6: The automation framework at eBay.
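The sketch below illustrates the moving-average detector for random patterns (the first method above); the handling of the window at the start of the series is our own choice.

```python
import numpy as np

def moving_average_detect(x, window_size=30, average="median", threshold=3.0):
    x = np.asarray(x, dtype=float)
    agg = np.median if average == "median" else np.mean
    pred = np.empty_like(x)
    pred[0] = x[0]
    for i in range(1, len(x)):
        pred[i] = agg(x[max(0, i - window_size):i])   # predict from the past window
    sigma = (x - pred).std()                          # threshold-sigma boundary on the variation
    upper, lower = pred + threshold * sigma, pred - threshold * sigma
    return (x > upper) | (x < lower), upper, lower

# average and window_size are tuned with the shape score; threshold with the sensitive score.
```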
### Evaluation on eBay's monitoring service
We evaluate the parameter tuning performance using eBay's monitoring dataset. The dataset comprises 50 time series collected from eBay's production environment over the past month, with each time series representing minute-level data. In our evaluation, we employed the model selection algorithm that determines the most suitable algorithm for each time series based on its pattern features. Subsequently, we assessed the performance before and after applying the tuned parameters. To evaluate the effectiveness of the tuning process, we utilized two widely used evaluation metrics: the point-wise F1 score and the AUC measure, as mentioned in [19]. Table 3 presents the evaluation results for the time series classified into the three methods, as well as the overall performance. The findings clearly demonstrate that our proposed parameter tuning methods have a significant positive impact on the performance of the algorithms.
### User Fine-tuning Service
In some cases, customers may still find the detection results unsatisfactory even after tuning the parameters. This can happen for two reasons: first, the trained model may lack the necessary business knowledge, and second, there may be new cases that are not covered by the model. To address these situations, we offer a user-friendly fine-tuning service that enables users to directly adjust the detection results, which serve as valuable training cases for the shape score model. Specifically, examples that exhibit new patterns compared to the existing labeled cases are added to the training dataset. This enhances the model's ability to handle novel cases. Our fine-tuning service exposes four parameters, illustrated in Figure 7. The first parameter is the threshold, which controls the width of the detection boundary. The second parameter is the upper baseline: values below this line are not considered anomalies. Similarly, the third parameter, the lower baseline, ensures that values above this line are not classified as anomalies. The fourth parameter, called direction, allows users to specify the side of the anomalies they want to focus on. The fine-tuning process is based on the loaded model cache and does not require any additional training, making it a quick operation that can be completed within seconds. By providing these user-defined parameters and obtaining the updated detection results, we can further refine the shape score model through additional training. This iterative process enables continuous improvement and adaptation to specific user requirements and evolving anomaly patterns.
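A sketch of how the four exposed parameters could be applied to a model's raw predictions; the function signature and default values are illustrative, not eBay's actual API.

```python
import numpy as np

def apply_finetuning(x, pred, sigma, threshold=3.0,
                     upper_baseline=-np.inf, lower_baseline=np.inf,
                     direction="both"):
    upper = pred + threshold * sigma                # threshold controls boundary width
    lower = pred - threshold * sigma
    up_anom = (x > upper) & (x > upper_baseline)    # values below the upper baseline are ignored
    down_anom = (x < lower) & (x < lower_baseline)  # values above the lower baseline are ignored
    if direction == "up":
        return up_anom
    if direction == "down":
        return down_anom
    return up_anom | down_anom
```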
## 5 Extended Experiments
To further evaluate the performance of the proposed parameter tuning framework, we conduct experiments on several public datasets that are also used in monitoring services, and we test the parameter tuning performance on other time series anomaly detection methods.
Datasets. We evaluate parameter tuning on the following public datasets: **IOPS**1: 58 time series reflecting web service indicators, machine health, and scale. Average length: 100,000 data points. Anomaly ratio: 2%. **Yahoo**2: 367 real and synthetic time series based on production traffic. Average length: 1,561 data points. Anomaly ratio: 0.7%. **SMD**[20]: a 5-week dataset from a large Internet company, comprising 281 time series from 28 machines. Average length: 25,562 data points. Anomaly ratio: 3.52%.
Footnote 1: http://iops.ai/
Footnote 2: Yahoo dataset.
Methods. In addition to the previously mentioned moving average (MA) method and extreme value theory (EVT) method, we evaluate several other public methods for time series anomaly detection. We examine these methods both with and without our proposed parameter tuning optimization targets. However, we do not include results for Matrix Profile [20] because of its limited parameters, such as the time window, which can easily be determined through frequency analysis.
\begin{table}
\begin{tabular}{l c c c c} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{F1} & \multicolumn{2}{c}{AUC} \\ & Before & After & Before & After \\ \hline Random method & 0.404 & 0.874\(\dagger\) & 0.716 & 0.965\(\dagger\) \\ Sparse method & 0.458 & 0.905\(\dagger\) & 0.929 & 0.957\(\dagger\) \\ Seasonal method & 0.507 & 0.902\(\dagger\) & 0.692 & 0.965\(\dagger\) \\ \hline \end{tabular}
\end{table}
Table 3: Evaluation results before and after parameter tuning.
Figure 7: Our user interface for the fine-tuning service.
* Elastic Net [18] is a prediction-based anomaly detection algorithm that combines linear regression with L1 and L2 losses. The ratio of the two losses and the input window size are crucial parameters. We use the shape score to determine the optimal parameter combinations.
* Prophet [14] is a decomposition-based time series forecasting algorithm. Two parameters, namely the expected change-point ratio and the input window size, influence the shape of the detection results. We tune these parameters based on the shape score.
* Local Outlier Factor (LOF) [1] is a clustering-based anomaly detection approach that assigns a binary label to each point based on values within a specified window. The window size and the number of neighborhoods are two parameters with a significant impact on the detection results. We optimize these parameters step by step using the sensitive score.
* DBSCAN [1] is a clustering-based anomaly detection method. The sliding window length (controlling the number of values for calculation), epsilon (the radius of a circle), and min points (the minimum number of points in a circle) are key factors that affect the detection results. We tune each parameter individually using the sensitive score.
* CNN [15] is a forecasting-based approach for anomaly detection, where the shape score is used to determine the appropriate kernel size and stride.
* LSTM [1] is a forecasting-based approach for anomaly detection. The size of the hidden units and the number of neural layers are two crucial hyperparameters that impact the prediction performance. We utilize the shape score to search for the optimal combinations.
* Autoencoder (AE) [1] utilizes the reconstruction error to detect anomalies. The size of the sliding window is a significant hyperparameter that affects the shape of the reconstructed inputs. Therefore, we employ the shape score to find the best value for this hyperparameter.
* D-Linear [16] is a simple one-layer neural prediction model. The input sequence size and the forecasting window size are two key factors that impact the performance. We use the shape score to select a suitable value for these parameters.
* Informer [17] is a transformer-based time series forecasting model that encodes the inputs to hidden units and directly predicts the output by feeding masked inputs. The input sequence size, the number of encoder and decoder layers, and the dimension of the hidden units are key hyperparameters that contribute to the final results. We tune them using the shape score.
Results. We split the data into training and testing sets, using the initial 30% of the data for training and the remainder for testing. For methods that use the sensitive score, we set the expected anomaly ratio in the training data to match the actual anomaly ratio; if there are no anomaly points in the training data, we use a default ratio of 1%. Overall, we observe that using the proposed parameter tuning optimization targets improves the detection performance (Table 4). However, the extent of the improvement varies depending on the dataset and the method used, for two main reasons. First, the shape score model is trained on our production datasets, and the public datasets may exhibit different patterns. Second, some methods can only be tuned with the sensitive score, thereby missing the benefits of the other optimization targets.
## 6 Conclusion
In conclusion, our proposed comprehensive framework for automatic parameter optimization in time series anomaly detection on monitoring services offers three optimization targets: the prediction score, the shape score, and the sensitivity score. Through extensive evaluations and real-world deployment, our framework has showcased remarkable results, effectively reducing the need for manual expert fine-tuning and streamlining the detection process for users. However, it is important to acknowledge that the effectiveness of the framework may vary depending on the dataset and algorithm employed. Factors such as dataset characteristics and algorithmic limitations can impact the performance of the parameter tuning methods. Further research is warranted to explore additional optimization targets and their applicability to enhance time series anomaly detection in diverse scenarios.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline Dataset & Metrics & Method & MA & EVT & Elastic & Prophet & LOF & DBSCAN & CNN & LSTM & AE & D-Linear & Informer \\ \hline IOPS & F1 & Before & 0.171 & 0.266 & 0.282 & 0.311 & 0.080 & 0.140 & 0.273 & 0.328 & 0.130 & 0.241 & 0.307 \\ & & After & 0.195\(\dagger\) & 0.270\(\dagger\) & 0.287\(\dagger\) & 0.312\(\dagger\) & 0.095\(\dagger\) & 0.150\(\dagger\) & 0.282\(\dagger\) & 0.357\(\dagger\) & 0.309\(\dagger\) & 0.277\(\dagger\) & 0.311\(\dagger\) \\ & AUC & Before & 0.659 & 0.648 & 0.789 & 0.772 & 0.500 & 0.580 & 0.774 & 0.795 & 0.630 & 0.745 & 0.782 \\ & & After & 0.696\(\dagger\) & 0.672\(\dagger\) & 0.794\(\dagger\) & 0.778\(\dagger\) & 0.644\(\dagger\) & 0.700\(\dagger\) & 0.788\(\dagger\) & 0.811\(\dagger\) & 0.804\(\dagger\) & 0.782\(\dagger\) & 0.806\(\dagger\) \\ \hline Yahoo & F1 & Before & 0.240 & 0.063 & 0.598 & 0.573 & 0.110 & 0.050 & 0.454 & 0.477 & 0.060 & 0.607 & 0.534 \\ & & After & 0.243\(\dagger\) & 0.093\(\dagger\) & 0.646\(\dagger\) & 0.575\(\dagger\) & 0.085\(\dagger\) & 0.040\(\dagger\) & 0.479\(\dagger\) & 0.491\(\dagger\) & 0.065\(\dagger\) & 0.666\(\dagger\) & 0.561\(\dagger\) \\ & AUC & Before & 0.928 & 0.696 & 0.964 & 0.902 & 0.860 & 0.670 & 0.822 & 0.873 & 0.790 & 0.930 & 0.915 \\ & & After & 0.930\(\dagger\) & 0.700\(\dagger\) & 0.974\(\dagger\) & 0.904\(\dagger\) & 0.862\(\dagger\) & 0.630\(\dagger\) & 0.839\(\dagger\) & 0.886\(\dagger\) & 0.809\(\dagger\) & 0.941\(\dagger\) & 0.921\(\dagger\) \\ \hline SMD & F1 & Before & 0.178 & 0.162 & 0.240 & 0.235 & 0.180 & 0.360 & 0.267 & 0.257 & 0.090 & 0.283 & 0.266 \\ & & After & 0.185\(\dagger\) & 0.172\(\dagger\) & 0.245\(\dagger\) & 0.247\(\dagger\) & 0.184\(\dagger\) & 0.360\(\dagger\) & 0.269\(\dagger\) & 0.255\(\dagger\) & 0.152\(\dagger\) & 0.296\(\dagger\) & 0.265\(\dagger\) \\ & AUC & Before & 0.570 & 0.625 & 0.676 & 0.757 & 0.690 & 0.700 & 0.751 & 0.753 & 0.630 & 0.762 & 0.751 \\ & & After & 0.597\(\dagger\) & 0.624\(\dagger\) & 0.677\(\dagger\) & 0.754\(\dagger\) & 0.695\(\dagger\) & 0.710\(\dagger\) & 0.754\(\dagger\) & 0.749\(\dagger\) & 0.658\(\dagger\) & 0.772\(\dagger\) & 0.752\(\dagger\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Experiments before/after using our proposed optimization targets.
### Contribution Statement
Manqing Dong played a pivotal role in drafting, designing, and deploying the proposed parameter tuning optimization targets and the general automation framework. Additionally, she conducted experiments for AE and Informer using public datasets. Zhanxiang Zhao conducted the main experiments, including the evaluation on eBay's production dataset and experiments with MA, EVT, CNN, and LSTM on public datasets. Yitong Geng made contributions to the feature engineering for model selection and conducted experiments with Elastic Net, Prophet, and LOF on the public datasets. Wentao Li contributed to the experiments with DBSCAN on the public dataset, and provided valuable suggestions for the logical flow of the paper, as well as verifying the business and customer needs both on paper and in real-life scenarios. Wei Wang significantly contributed to the development of the user interface and backend engineering for the fine-tuning function. Huai Jiang provided numerous valuable suggestions regarding the overall idea for the automl service and the organization of the paper.
### Acknowledgments
We would like to acknowledge the contributions of Huibin Duan for his work on model management and job management, Yuting Tan for her contributions to the data dumper on our online platform, and Yuan Li for her substantial contribution to the user interface design of our anomaly detection platform. Their efforts and expertise have greatly contributed to the success of this research project.
|
2305.06648 | Generalization bounds for neural ordinary differential equations and
deep residual networks | Neural ordinary differential equations (neural ODEs) are a popular family of
continuous-depth deep learning models. In this work, we consider a large family
of parameterized ODEs with continuous-in-time parameters, which include
time-dependent neural ODEs. We derive a generalization bound for this class by
a Lipschitz-based argument. By leveraging the analogy between neural ODEs and
deep residual networks, our approach yields in particular a generalization
bound for a class of deep residual networks. The bound involves the magnitude
of the difference between successive weight matrices. We illustrate numerically
how this quantity affects the generalization capability of neural networks. | Pierre Marion | 2023-05-11T08:29:34Z | http://arxiv.org/abs/2305.06648v2 | # Generalization bounds for neural ordinary differential equations and deep residual networks
###### Abstract.
Neural ordinary differential equations (neural ODEs) are a popular family of continuous-depth deep learning models. In this work, we consider a large family of parameterized ODEs with continuous-in-time parameters, which include time-dependent neural ODEs. We derive a generalization bound for this class by a Lipschitz-based argument. By leveraging the analogy between neural ODEs and deep residual networks, our approach yields in particular a generalization bound for a class of deep residual networks. The bound involves the magnitude of the difference between successive weight matrices. We illustrate numerically how this quantity affects the generalization capability of neural networks.
## 1. Introduction
Neural ordinary differential equations (neural ODEs, Chen et al., 2018) are a flexible family of neural networks used in particular to model continuous-time phenomena. Along with variants such as neural stochastic differential equations (neural SDEs, Tzen and Raginsky, 2019) and neural controlled differential equations (Kidger et al., 2020), they have been used in diverse fields such as pharmacokinetics (Lu et al., 2021; Qian et al., 2021), finance (Gierjatowicz et al., 2020), and transportation (Zhou et al., 2021). We refer to Massaroli et al. (2020) for a self-contained introduction to this class of models.
Despite their empirical success, the statistical properties of neural ODEs have not yet been fully investigated. What is more, neural ODEs can be thought of as the infinite-depth limit of (properly scaled) residual neural networks (He et al., 2016), a connection made by, e.g., E (2017); Haber and Ruthotto (2017); Lu et al. (2017). Since standard measures of statistical complexity of neural networks grow with depth (see, e.g., Bartlett et al., 2019), it is unclear why infinite-depth models, including neural ODEs, should enjoy favorable generalization properties.
To better understand this phenomenon, our goal in this paper is to study the statistical properties of a class of time-dependent neural ODEs that write
\[dH_{t}=W_{t}\sigma(H_{t})dt, \tag{1}\]
where \(W_{t}\in\mathbb{R}^{d\times d}\) is a weight matrix that depends on the time index \(t\), and \(\sigma:\mathbb{R}\to\mathbb{R}\) is an activation function applied component-wise. Time-dependent neural ODEs were first introduced by Massaroli et al. (2020) and generalize time-independent neural ODEs
\[dH_{t}=W\sigma(H_{t})dt, \tag{2}\]
as formulated in Chen et al. (2018), where \(W\in\mathbb{R}^{d\times d}\) now denotes a weight matrix independent of \(t\). There are two crucial reasons to consider time-dependent neural ODEs rather than the more restrictive class of time-independent neural ODEs. On the one hand, the time-dependent formulation is more flexible, leading to competitive results on image classification tasks (Queiruga et al., 2020, 2021). As a consequence, obtaining generalization
guarantees for this family of models is a valuable endeavor by itself. On the other hand, time dependence is required for the correspondence with general residual neural networks to hold. More precisely, the time-dependent neural ODE (1) is the limit, when the depth \(L\) goes to infinity, of the deep residual network
\[H_{k+1}=H_{k}+\frac{1}{L}W_{k+1}\sigma(H_{k}),\quad 0\leqslant k\leqslant L-1, \tag{3}\]
where \((W_{k})_{1\leqslant k\leqslant L}\in\mathbb{R}^{d\times d}\) are weight matrices and \(\sigma\) is still an activation function. We refer to Marion et al. (2022); Sander et al. (2022); Thorpe and van Gennip (2022) for statements that make precise under what conditions and in which sense this limit holds, as well as its consequences for learning. These two key reasons compel us to consider the class of time-dependent ODEs (1) for our statistical study, which in turn will inform us on the properties of the models (2) and (3).
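To make this correspondence concrete, here is a minimal numpy sketch of the forward pass (3); when the weights are sampled from smooth functions of \(k/L\) (see Section 4.1), the output approaches the solution of the time-dependent neural ODE (1) as \(L\) grows. The tanh activation is an arbitrary choice.

```python
import numpy as np

def residual_forward(x, W, sigma=np.tanh):
    # W has shape (L, d, d); each layer is one explicit Euler step of (1).
    L = W.shape[0]
    h = np.asarray(x, dtype=float)
    for k in range(L):
        h = h + (1.0 / L) * W[k] @ sigma(h)   # H_{k+1} = H_k + (1/L) W_{k+1} sigma(H_k)
    return h
```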
In fact, we extend our study to the larger class of _parameterized ODEs_, which we define as the mapping from \(x\in\mathbb{R}^{d}\) to the value at time \(t=1\) of the solution of the initial value problem
\[H_{0}=x,\qquad dH_{t}=\sum_{i=1}^{m}\theta_{i}(t)f_{i}(H_{t})dt, \tag{4}\]
where \(H_{t}\) is the variable of the ODE, \(\theta_{i}\) are functions from \([0,1]\) into \(\mathbb{R}\) that parameterize the ODE, and \(f_{i}\) are fixed functions from \(\mathbb{R}^{d}\) into \(\mathbb{R}^{d}\). Time-dependent neural ODEs (1) are obtained by setting a specific entrywise form for the functions \(f_{i}\) in (4).
Since the parameters \(\theta_{i}\) belong to an infinite-dimensional space, in practice they need to be approximated in a finite-dimensional basis of functions. For example, the residual neural networks (3) can be seen as an approximation of the neural ODEs (1) on a piecewise-constant basis of functions. But more complex choices are possible, such as B-splines (Yu et al., 2022). However, the formulation (4) is agnostic to the choice of finite-dimensional approximation. This more abstract point of view is fruitful for deriving generalization bounds, for at least two reasons. First, the statistical properties of the parameterized ODEs (4) only depend on the characteristics of the functions \(\theta_{i}\) and not on the specifics of the approximation scheme, so it is more natural and convenient to study them at the continuous level. Second, their properties can then be transferred to any specific discretization, such as the deep residual networks (3), resulting in generalization bounds for the latter.
Regarding the characteristics of the functions \(\theta_{i}\), we make the structural assumption that they are Lipschitz-continuous and uniformly bounded. This is a natural assumption to ensure that the initial value problem (4) has a unique solution in the usual sense of the Picard-Lindelof theorem. Remarkably, this assumption on the parameters also enables us to obtain statistical guarantees despite the fact that we are working with an infinite-dimensional set of parameters.
Contributions. We provide a generalization bound for the large class of parameterized ODEs (4), which includes time-dependent and time-independent neural ODEs (1) and (2). To the best of our knowledge, this is the first available bound for neural ODEs. By leveraging the connection between (time-dependent) neural ODEs and deep residual networks, our approach allows us to provide a depth-independent generalization bound for the class of deep residual networks (3). The bound is precisely compared with earlier results. Our bound depends in particular on the magnitude of the difference between successive weight matrices, which is, to our knowledge, a novel way of controlling the statistical complexity of neural networks. Numerical illustration is provided to show the relationship between this quantity and the generalization ability of neural networks.
Organization of the paper. Section 2 presents additional related work. In Section 3, we specify our class of parameterized ODEs, before stating the generalization bound for this class and for neural ODEs as a corollary. The generalization bound for residual networks is presented in Section 4 and compared to other bounds, before some numerical illustration.
Section 5 concludes the paper. The proof technique is discussed in the main paper, but the core of the proofs is relegated to the Appendix.
## 2 Related work
Hybridizing deep learning and differential equations. The fields of deep learning and dynamical systems have recently benefited from sustained cross-fertilization. On the one hand, a large line of work is aimed at modeling complex continuous-time phenomena by developing specialized neural architectures. This family includes neural ODEs, but also physics-informed neural networks (Raissi et al., 2019), neural operators (Li et al., 2021) and neural flows (Bilos et al., 2021). On the other hand, successful recent advances in deep learning, such as diffusion models, are theoretically supported by ideas from differential equations (Huang et al., 2021).
Generalization for continuous-time neural networks. Obtaining statistical guarantees for continuous-time neural networks has been the topic of a few recent works. For example, Fermanian et al. (2021) consider a class of continuous-time recurrent neural networks (RNNs) that can be written as input-driven ODEs. They show that these models are actually kernel methods, which entails a generalization bound. Lim et al. (2021) also show a generalization bound for ODE-like RNNs, and argue that adding stochasticity (that is, replacing ODEs with SDEs) helps with generalization. Yin et al. (2021) propose a neural ODE model to enable transfer learning across multiple environments and provide a generalization bound in this setting.
Lipschitz-based generalization bounds for deep neural networks. From a high-level perspective, our proof technique is similar to previous works (Bartlett et al., 2017; Neyshabur et al., 2018) that show generalization bounds for deep neural networks, which scale at most polynomially with depth. More precisely, these authors show that the network satisfies some Lipschitz continuity property (either with respect to the input or to the parameters), then exploit results on the statistical complexity of Lipschitz function classes. Under stronger norm constraints, these bounds can even be made depth-independent (Golowich et al., 2018). However, their approach differs from ours insofar as we consider neural ODEs and the associated family of deep neural networks, whereas they are solely interested in finite-depth neural networks. As a consequence, their hypotheses on the class of neural networks differ from ours. Section 4 develops a more thorough comparison. Similar Lipschitz-based techniques have also been applied to obtain generalization bounds for deep equilibrium networks (Pabbaraju et al., 2021). Going beyond statistical guarantees, Bethune et al. (2022) study approximation and robustness properties of Lipschitz neural networks.
## 3 Generalization bounds for parameterized ODEs
We start by recalling the usual supervised learning setup and introduce some notation in Section 3.1, before presenting our parameterized ODE model and the associated generalization bound in Section 3.2. We then apply the bound to the specific case of time-invariant neural ODEs in Section 3.3.
### Learning procedure
We place ourselves in a supervised learning setting. Let us introduce the notation that is used throughout the paper (up to Section 4.1). The input data is a sample of \(n\) i.i.d. pairs \((x_{i},y_{i})\) with the same distribution as some generic pair \((x,y)\), where \(x\) (resp. \(y\)) takes its values in some bounded ball \(\mathcal{X}=B(0,R_{\mathcal{X}})\) (resp. \(\mathcal{Y}=B(0,R_{\mathcal{Y}})\)) of \(\mathbb{R}^{d}\), for some \(R_{\mathcal{X}},R_{\mathcal{Y}}>0\). This setting encompasses regression as well as classification tasks by (one-hot) encoding labels in \(\mathbb{R}^{d}\). Note that we assume for simplicity that the input and output have the same dimension, but our analysis easily extends to the case where they have different dimensions by adding (parameterized) projections at the beginning or at the end of our model. Given a parameterized class of models \(\mathcal{F}_{\Theta}=\{F_{\theta},\theta\in\Theta\}\), the parameter \(\theta\) is fitted by empirical risk minimization using a loss
function \(\ell:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}^{+}\) that we assume to be Lipschitz with respect to its first argument, with a Lipschitz constant \(K_{\ell}>0\). In the following, we write for the sake of concision that such a function is \(K_{\ell}\)-Lipschitz. We also assume that \(\ell(x,x)=0\) for all \(x\in\mathbb{R}^{d}\). The theoretical and empirical risks are respectively defined, for any \(\theta\in\Theta\), by
\[\mathscr{R}(\theta)=\mathbb{E}[\ell(F_{\theta}(x),y)]\quad\text{and}\quad \widehat{\mathscr{R}}_{n}(\theta)=\frac{1}{n}\sum_{i=1}^{n}\ell\big{(}F_{ \theta}(x_{i}),y_{i}\big{)},\]
where the expectation \(\mathbb{E}\) is evaluated with respect to the distribution of \((x,y)\). Letting \(\widehat{\theta}_{n}\) be a minimizer of the empirical risk, the generalization problem consists in providing an upper bound on the difference \(\mathscr{R}(\widehat{\theta}_{n})-\widehat{\mathscr{R}}_{n}(\widehat{\theta}_{n})\).
### Generalization bound
Model. We start by making more precise the parameterized ODE model introduced in Section 1. The setup presented here can easily be specialized to the case of neural ODEs, as we will see in Section 3.3. Let \(f_{1},\dots,f_{m}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be fixed \(K_{f}\)-Lipschitz functions for some \(K_{f}>0\). Denote by \(M\) their supremum on \(\mathcal{X}\) (which is finite since these functions are continuous). The parameterized ODE \(F_{\theta}\) is defined by the following initial value problem that maps some \(x\in\mathbb{R}^{d}\) to \(F_{\theta}(x)\in\mathbb{R}^{d}\):
\[H_{0} =x \tag{5}\] \[dH_{t} =\sum_{i=1}^{m}\theta_{i}(t)f_{i}(H_{t})dt\] \[F_{\theta}(x) =H_{1},\]
where the parameter \(\theta=(\theta_{1},\dots,\theta_{m})\) is a function from \([0,1]\) to \(\mathbb{R}^{m}\). We have to impose constraints on \(\theta\) for the model \(F_{\theta}\) to be well-defined. To this aim, we endow (essentially bounded) functions from \([0,1]\) to \(\mathbb{R}^{m}\) with the following \((1,\infty)\)-norm
\[\|\theta\|_{1,\infty}=\sup_{0\leqslant t\leqslant 1}\sum_{i=1}^{m}|\theta_{i}( t)|. \tag{6}\]
We can now define the set of parameters
\[\Theta=\{\theta:[0,1]\to\mathbb{R}^{m},\,\|\theta\|_{1,\infty}\leqslant R_{ \Theta}\text{ and }\theta_{i}\text{ is }K_{\Theta}\text{-Lipschitz for }i\in\{1,\dots,m\}\}, \tag{7}\]
for some \(R_{\Theta}>0\) and \(K_{\Theta}\geqslant 0\). Then, for \(\theta\in\Theta\), the following Proposition, which is a consequence of the Picard-Lindelof Theorem, shows that the mapping \(x\mapsto F_{\theta}(x)\) is well-defined.
**Proposition 1** (Well-posedness of the parameterized ODE).: _For \(\theta\in\Theta\) and \(x\in\mathbb{R}^{d}\), there exists a unique solution to the initial value problem (5)._
An immediate consequence of Proposition 1 is that it is legitimate to consider \(\mathcal{F}_{\Theta}=\{F_{\theta},\theta\in\Theta\}\) for our model class.
When \(K_{\Theta}=0\), the parameter space \(\Theta\) is finite-dimensional since each \(\theta_{i}\) is constant. This setting corresponds to the time-independent neural ODEs of Chen et al. (2018). In this case, the norm (6) reduces to the \(\|\cdot\|_{1}\) norm over \(\mathbb{R}^{m}\). Note that, to fit exactly the formulation of Chen et al. (2018), the time \(t\) can be added as a variable of the functions \(f_{i}\), which amounts to adding a new coordinate to \(H_{t}\). This does not change the subsequent analysis. In the richer time-dependent case where \(K_{\Theta}>0\), the set \(\Theta\) belongs to an infinite-dimensional space and therefore, in practice, \(\theta_{i}\) is approximated in a finite basis of functions, such as Fourier series, Chebyshev polynomials, and splines. We refer to Massaroli et al. (2020) for a more detailed discussion, including formulations of the back-propagation algorithm (a.k.a. the adjoint method) in this setting.
Note that we consider the case where the dynamics at time \(t\) are linear with respect to the parameter \(\theta_{i}(t)\). Nevertheless, we emphasize that the mapping \(x\mapsto F_{\theta}(x)\) remains a highly non-linear function of each \(\theta_{i}(t)\). To fix ideas, this setting can be seen as analogous to working with pre-activation residual networks instead of post-activation (see He et al., 2016b, for definitions of the terminology), which is a mild modification.
Statistical analysis. Since \(\Theta\) is a subset of an infinite-dimensional space, complexity measures based on the number of parameters cannot be used. Instead, our approach is to resort to Lipschitz-based complexity measures. More precisely, to bound the complexity of our model class, we propose two building blocks: we first show that the model \(F_{\theta}\) is Lipschitz-continuous with respect to its parameters \(\theta\). This allows us to bound the complexity of the model class depending on the complexity of the parameter class. In a second step, we assess the complexity of the class of parameters itself.
Starting with our first step, we show the following estimates for our class of parameterized ODEs. Here and in the following, \(\|\cdot\|\) denotes the \(\ell_{2}\) norm over \(\mathbb{R}^{d}\).
**Proposition 2** (The parameterized ODE is bounded and Lipschitz).: _Let \(\theta\) and \(\tilde{\theta}\in\Theta\). Then, for any \(x\in\mathcal{X}\),_
\[\|F_{\theta}(x)\|\leqslant R_{\mathcal{X}}+MR_{\Theta}\exp(K_{f}R_{\Theta})\]
_and_
\[\|F_{\theta}(x)-F_{\tilde{\theta}}(x)\|\leqslant 2MK_{f}R_{\Theta}\exp(2K_{f}R _{\Theta})\|\theta-\tilde{\theta}\|_{1,\infty}.\]
The proof, given in the Appendix, makes extensive use of Gronwall's inequality, a standard tool to obtain estimates in the theory of ODEs, in order to bound the magnitude of the solution \(H_{t}\) of (5).
The next step is to assess the magnitude of the covering number of \(\Theta\). Recall that, for \(\varepsilon>0\), the \(\varepsilon\)-covering number of a metric space is the number of balls of radius \(\varepsilon\) needed to completely cover the space, with possible overlaps.
**Proposition 3** (Covering number of the ODE parameter class).: _For \(\varepsilon>0\), let \(\mathcal{N}(\varepsilon)\) be the \(\varepsilon\)-covering number of \(\Theta\) endowed with the \((1,\infty)\)-norm (6). Then_
\[\log\mathcal{N}(\varepsilon)\leqslant m\log\Big{(}\frac{16mR_{\Theta}}{ \varepsilon}\Big{)}+\frac{m^{2}K_{\Theta}\log(4)}{\varepsilon}.\]
Proposition 3 is a consequence of a classical result, see, e.g., Kolmogorov and Tikhomirov (1959, example 3 of paragraph 2). A self-contained proof is given in the Appendix for completeness. We also refer to Gottlieb et al. (2017) for more general results on covering numbers of Lipschitz functions.
The two propositions above and an \(\varepsilon\)-net argument allow us to prove the first main result of our paper.
**Theorem 1** (Generalization bound for parameterized ODEs).: _Consider the class of parameterized ODEs \(\mathcal{F}_{\Theta}=\{F_{\theta},\theta\in\Theta\}\), where \(F_{\theta}\) is given by (5) and \(\Theta\) by (7). Let \(\delta>0\)._
_Then, for \(n\geqslant 9\max(m^{-2}R_{\Theta}^{-2},1)\), with probability at least \(1-\delta\),_
\[\mathscr{R}(\widehat{\theta}_{n})\leqslant\widehat{\mathscr{R}}_{n}(\widehat {\theta}_{n})+B\sqrt{\frac{(m+1)\log(R_{\Theta}mn)}{n}}+B\frac{m\sqrt{K_{ \Theta}}}{n^{1/4}}+\frac{B}{\sqrt{n}}\sqrt{\log\frac{1}{\delta}},\]
_where \(B\) is a constant depending on \(K_{\ell},K_{f},R_{\Theta},R_{\mathcal{X}},R_{\mathcal{Y}},M\). More precisely,_
\[B=6K_{\ell}K_{f}\exp(K_{f}R_{\Theta})\big{(}R_{\mathcal{X}}+MR_{\Theta}\exp(K_ {f}R_{\Theta})+R_{\mathcal{Y}}\big{)}.\]
Three terms appear in our upper bound of \(\mathscr{R}(\widehat{\theta}_{n})-\widehat{\mathscr{R}}_{n}(\widehat{\theta }_{n})\). The first and the third ones are classical (see, e.g. Bach, 2023, Sections 4.4 and 4.5). On the contrary, the second term is more surprising with its convergence rate in \(\mathcal{O}(n^{-1/4})\). This slower convergence rate is due to the fact that the space of parameters is infinite-dimensional. In particular, for \(K_{\Theta}=0\), corresponding to a finite-dimensional space of parameters, we recover the usual
\(\mathcal{O}(n^{-1/2})\) convergence rate, however at the cost of considering a much more restrictive class of models. Finally, it is noteworthy that the dimensionality appearing in the bound is not the input dimension \(d\) but the number of mappings \(m\).
Note that this result is general and may be applied in a number of contexts that go beyond deep learning, as long as the instantaneous dependence of the ODE dynamics to the parameters is linear. One such example is the predator-prey model, describing the evolution of two populations of animals, which reads \(dx_{t}=x_{t}(\alpha-\beta y_{t})dt\) and \(dy_{t}=-y_{t}(\gamma-\delta x_{t})dt\), where \(x_{t}\) and \(y_{t}\) are real-valued variables and \(\alpha\), \(\beta\), \(\gamma\) and \(\delta\) are model parameters. This ODE falls into the framework of this section, if one were to estimate the parameters by empirical risk minimization. We refer to Deuflhard and Roblitz (2015, section 3) for other examples of parameterized biological ODE dynamics and methods for parameter identification.
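For concreteness, here is a forward Euler simulation of this predator-prey system; the step size and horizon are arbitrary choices.

```python
import numpy as np

def lotka_volterra(x0, y0, alpha, beta, gamma, delta, T=1.0, n_steps=1000):
    dt = T / n_steps
    x, y = float(x0), float(y0)
    for _ in range(n_steps):
        dx = x * (alpha - beta * y)        # dx_t = x_t (alpha - beta y_t) dt
        dy = -y * (gamma - delta * x)      # dy_t = -y_t (gamma - delta x_t) dt
        x, y = x + dt * dx, y + dt * dy
    return x, y

# The parameters (alpha, beta, gamma, delta) could then be fitted by minimizing an
# empirical risk between simulated and observed trajectories.
```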
Nevertheless, for the sake of brevity, we focus on applications of this result to deep learning, and more precisely to neural ODEs, which is the topic of the next section.
### Application to neural ODEs
As explained in Section 1, parameterized ODEs include both time-dependent and time-independent neural ODEs. Since the time-independent model is more common in practice, we develop this case here and leave the time-dependent case to the reader. We thus consider the following neural ODE:
\[H_{0} =x \tag{8}\] \[dH_{t} =W\sigma(H_{t})dt\] \[F_{W}(x) =H_{1},\]
where \(W\in\mathbb{R}^{d\times d}\) is a weight matrix, and \(\sigma:\mathbb{R}\to\mathbb{R}\) is an activation function applied component-wise. We assume \(\sigma\) to be \(K_{\sigma}\)-Lipschitz for some \(K_{\sigma}>0\). This assumption is satisfied by all common activation functions. To put the model in the form of Section 3.2, denote \(e_{1},\ldots,e_{d}\) the canonical basis of \(\mathbb{R}^{d}\). Then the dynamics (8) can be reformulated as
\[dH_{t}=\sum_{i,j=1}^{d}W_{ij}\sigma_{ij}(H_{t})dt,\]
where \(\sigma_{ij}(x)=\sigma(x_{j})e_{i}\). Each \(\sigma_{ij}\) is itself \(K_{\sigma}\)-Lipschitz, hence we fall in the framework of Section 3.2. In other words, the functions \(f_{i}\) of our general parameterized ODE model form a shallow neural network with pre-activation. Denote by \(\|W\|_{1,1}\) the sum of the absolute values of the elements of \(W\). We consider the following set of parameters, which echoes the set \(\Theta\) of Section 3.2:
\[\mathcal{W}=\{W\in\mathbb{R}^{d\times d},\|W\|_{1,1}\leqslant R_{\mathcal{W}}\}, \tag{9}\]
for some \(R_{\mathcal{W}}>0\). We can then state the following result as a consequence of Theorem 1.
**Corollary 1** (Generalization bound for neural ODEs).: _Consider the class of neural ODEs \(\mathcal{F}_{\mathcal{W}}=\{F_{W},W\in\mathcal{W}\}\), where \(F_{W}\) is given by (8) and \(\mathcal{W}\) by (9). Let \(\delta>0\)._
_Then, for \(n\geqslant 9R_{\mathcal{W}}^{-1}\max(d^{-4}R_{\mathcal{W}}^{-1},1)\), with probability at least \(1-\delta\),_
\[\mathscr{R}(\widehat{W}_{n})\leqslant\widehat{\mathscr{R}}_{n}(\widehat{W}_{ n})+B(d+1)\sqrt{\frac{\log(R_{\mathcal{W}}dn)}{n}}+\frac{B}{\sqrt{n}}\sqrt{\log \frac{1}{\delta}},\]
_where \(B\) is a constant depending on \(K_{\ell},K_{\sigma},R_{\mathcal{W}},R_{\mathcal{X}},R_{\mathcal{Y}},M\). More precisely,_
\[B=6\sqrt{2}K_{\ell}K_{\sigma}\exp(K_{\sigma}R_{\mathcal{W}})\big{(}R_{ \mathcal{X}}+MR_{\mathcal{W}}\exp(K_{\sigma}R_{\mathcal{W}})+R_{\mathcal{Y}} \big{)}.\]
Note that the term in \(\mathcal{O}(n^{-1/4})\) from Theorem 1 is now absent. Since we consider a time-independent model, we are left with the other two terms, recovering a standard \(\mathcal{O}(n^{-1/2})\) convergence rate.
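As an illustration, \(F_{W}(x)\) from (8) can be evaluated with an off-the-shelf ODE solver; the tolerance and the tanh activation below are our choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def neural_ode_forward(x, W, sigma=np.tanh):
    # Integrate dH_t = W sigma(H_t) from t = 0 to t = 1 with H_0 = x.
    sol = solve_ivp(lambda t, h: W @ sigma(h), (0.0, 1.0),
                    np.asarray(x, dtype=float), rtol=1e-6)
    return sol.y[:, -1]

# Membership in the class (9) amounts to checking np.abs(W).sum() <= R_W,
# i.e., the norm constraint ||W||_{1,1} <= R_W.
```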
## 4 Generalization bounds for deep residual networks
As highlighted in Section 1, there is a strong connection between neural ODEs and discrete residual neural networks. The previous study of the continuous case in Section 3 paves the way for deriving a generalization bound in the discrete setting of residual neural networks, which is of great interest given the pervasiveness of this architecture in modern deep learning.
We begin by presenting our model and result in Section 4.1, before detailing the comparison of our approach with other papers in Section 4.2 and giving some numerical illustration in Section 4.3.
### Model and generalization bound
Model. We consider the following class of deep residual networks:
\[\begin{split} H_{0}&=x\\ H_{k+1}&=H_{k}+\frac{1}{L}W_{k+1}\sigma(H_{k}), \quad 0\leqslant k\leqslant L-1\\ F_{\mathbf{W}}(x)&=H_{L},\end{split} \tag{10}\]
where the parameter \(\mathbf{W}=(W_{k})_{1\leqslant k\leqslant L}\in\mathbb{R}^{L\times d\times d}\) is a set of weight matrices and \(\sigma\) is still a \(K_{\sigma}\)-Lipschitz activation function. To emphasize that \(\mathbf{W}\) is here a third-order tensor, as opposed to the case of time-invariant neural ODEs in Section 3.3, where \(W\) was a matrix, we denote it with a bold notation. We also assume in the following that \(\sigma(0)=0\). This assumption could be relaxed at the cost of additional technicalities. Owing to the \(\nicefrac{1}{L}\) scaling factor, the deep limit of this residual network is a (time-dependent) neural ODE of the form studied in Section 3. We refer to Marion et al. (2022) for further discussion on the link between scaling factors and deep limits. We simply note that this scaling factor is not common practice, but preliminary experiments show that it does not hurt performance and can even improve it in a weight-tied setting (Sander et al., 2022). The space of parameters is endowed with the following \((1,1,\infty)\)-norm
\[\|\mathbf{W}\|_{1,1,\infty}=\sup_{1\leqslant k\leqslant L}\sum_{i,j=1}^{d}|W_{ k,i,j}|. \tag{11}\]
Also denoting \(\|\cdot\|_{\infty}\) the element-wise maximum norm for a matrix, we consider the class of matrices
\[\begin{split}\mathcal{W}=\Big{\{}\mathbf{W}\in\mathbb{R}^{L \times d\times d},&\|\mathbf{W}\|_{1,1,\infty}\leqslant R_{ \mathcal{W}}\quad\text{and}\\ &\|W_{k+1}-W_{k}\|_{\infty}\leqslant\frac{K_{\mathcal{W}}}{L}\; \;\text{for}\;\;1\leqslant k\leqslant L-1\Big{\}},\end{split} \tag{12}\]
for some \(R_{\mathcal{W}}>0\) and \(K_{\mathcal{W}}\geqslant 0\), which is a discrete analogue of the set \(\Theta\) defined by (7). In particular, the upper bound on the difference between successive weight matrices is, to our knowledge, a novel way of constraining the parameters of a neural network. It corresponds to the discretization of the Lipschitz continuity of the parameters introduced in (7). By analogy, we refer to it as a constraint on the Lipschitz constant of the weights. Note that, for standard initialization schemes, the difference between two successive matrices is of the order \(\mathcal{O}(1)\) and not \(\mathcal{O}(1/L)\), or, in other words, \(K_{\mathcal{W}}\) scales as \(\mathcal{O}(L)\). This issue can be solved by adding correlations across layers at initialization by taking, for \(k\in\{1,\ldots,L\}\) and \(i,j\in\{1,\ldots,d\}\), \(\mathbf{W}_{k,i,j}=\frac{1}{\sqrt{d}}f_{i,j}(\frac{k}{L})\), where \(f_{i,j}\) is a smooth function, for example a Gaussian process with a RBF kernel. Note that such a non-i.i.d. initialization scheme is necessary for the correspondence between deep residual networks and neural ODEs to hold (Marion et al., 2022). Furthermore, Sander et al. (2022) prove that, with such an initialization scheme, the constraint on the Lipschitz constant also holds for the _trained_ network, with \(K_{\mathcal{W}}\) independent of \(L\).
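A sketch of this correlated initialization, sampling each entry trajectory \(f_{i,j}\) from a Gaussian process with an RBF kernel; the length scale and the jitter term are our choices.

```python
import numpy as np

def smooth_init(L, d, length_scale=0.2, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(1, L + 1) / L
    K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * length_scale ** 2))  # RBF kernel
    chol = np.linalg.cholesky(K + 1e-8 * np.eye(L))      # jitter for numerical stability
    samples = chol @ rng.standard_normal((L, d * d))     # one independent GP sample per entry (i, j)
    return samples.reshape(L, d, d) / np.sqrt(d)         # W_{k,i,j} = f_{i,j}(k/L) / sqrt(d)

# Smoothness of the GP samples keeps successive matrices close, in line with (12).
```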
Statistical analysis. At first sight, a reasonable strategy would be to bound the distance between the model (10) and its limit \(L\to\infty\), which is a parameterized ODE, and then apply Theorem 1. This strategy is straightforward, but comes at the cost of an additional \(\mathcal{O}(1/L)\) term in the generalization bound, as a consequence of the discretization error between the discrete iterations (10) and their continuous limit. For example, we refer to Fermanian et al. (2021), where this strategy is used to prove a generalization bound for discrete RNNs and where this additional error term is incurred. We follow another route by mimicking the whole proof for finite \(L\). This is a longer approach, but it yields a sharper result since we avoid the \(\mathcal{O}(1/L)\) discretization error. The proof structure is similar to Section 3: the following two propositions are the discrete counterparts of Propositions 2 and 3.
**Proposition 4** (The residual network is bounded and Lipschitz).: _Let \(\mathbf{W}\) and \(\tilde{\mathbf{W}}\in\mathcal{W}\). Then, for any \(x\in\mathcal{X}\),_
\[\|F_{\mathbf{W}}(x)\|\leqslant R_{\mathcal{X}}\exp(K_{\sigma}R_{\mathcal{W}})\]
_and_
\[\|F_{\mathbf{W}}(x)-F_{\tilde{\mathbf{W}}}(x)\|\leqslant\frac{R_{\mathcal{X}} }{R_{\mathcal{W}}}\exp(2K_{\sigma}R_{\mathcal{W}})\|\mathbf{W}-\tilde{\mathbf{ W}}\|_{1,1,\infty}.\]
**Proposition 5** (Covering number of the residual network parameter class).: _Let \(\mathcal{N}(\varepsilon)\) be the covering number of \(\mathcal{W}\) endowed with the \((1,1,\infty)\)-norm (11). Then_
\[\log\mathcal{N}(\varepsilon)\leqslant d^{2}\log\Big{(}\frac{16d^{2}R_{ \mathcal{W}}}{\varepsilon}\Big{)}+\frac{d^{4}K_{\mathcal{W}}\log(4)}{ \varepsilon}.\]
The proof of Proposition 4 is a discrete analogue of that of Proposition 2. On the other hand, Proposition 5 can be proven as a consequence of Proposition 3, by showing the existence of an injective isometry from \(\mathcal{W}\) into a set of the form (7). Equipped with these two propositions, we are now ready to state the generalization bound for our class of residual neural networks.
**Theorem 2** (Generalization bound for deep residual networks).: _Consider the class of neural networks \(\mathcal{F}_{\mathcal{W}}=\{F_{\mathbf{W}},\mathbf{W}\in\mathcal{W}\}\), where \(F_{\mathbf{W}}\) is given by (10) and \(\mathcal{W}\) by (12). Let \(\delta>0\)._
_Then, for \(n\geqslant 9R_{\mathcal{W}}^{-1}\max(d^{-4}R_{\mathcal{W}}^{-1},1)\), with probability at least \(1-\delta\),_
\[\mathscr{R}(\widehat{\mathbf{W}}_{n})\leqslant\widehat{\mathscr{R}}_{n}( \widehat{\mathbf{W}}_{n})+B(d+1)\sqrt{\frac{\log(R_{\mathcal{W}}dn)}{n}}+B \frac{d^{2}\sqrt{K_{\mathcal{W}}}}{n^{1/4}}+\frac{B}{\sqrt{n}}\sqrt{\log\frac {1}{\delta}}, \tag{13}\]
_where \(B\) is a constant depending on \(K_{\ell},K_{\sigma},R_{\mathcal{W}},R_{\mathcal{X}},R_{\mathcal{Y}}\). More precisely,_
\[B=6\sqrt{2}K_{\ell}\max\Big{(}\frac{\exp(K_{\sigma}R_{\mathcal{W}})}{R_{ \mathcal{W}}},1\Big{)}(R_{\mathcal{X}}\exp(K_{\sigma}R_{\mathcal{W}})+R_{ \mathcal{Y}}).\]
We emphasize that this result is non-asymptotic and valid for any width \(d\) and depth \(L\). Furthermore, the depth \(L\) does not appear in the upper bound (13). This should not surprise the reader since Theorem 1 can be seen as the deep limit \(L\to\infty\) of this result, hence we expect our bound to remain finite as \(L\to\infty\) (otherwise the bound of Theorem 1 would be infinite). However, \(L\) appears as a scaling factor in the definition of the neural network (10) and of the class of parameters (12). This is crucial for the depth independence to hold, as we comment further in the next section.
Furthermore, the depth independence comes at the price of a \(\mathcal{O}(n^{-1/4})\) convergence rate. Note that, by taking \(K_{\mathcal{W}}=0\), we obtain a generalization bound for weight-tied neural networks with a faster convergence rate in \(n\), since the term in \(\mathcal{O}(n^{-1/4})\) vanishes.
### Comparison with other bounds
As announced in Section 2, we now compare Theorem 2 with the results of Bartlett et al. (2017) and Golowich et al. (2018). Beginning with Bartlett et al. (2017), we first state a slightly weaker version of their result to match our notation and facilitate comparison.
**Corollary 2** (of Theorem 1.1 of Bartlett et al. (2017)).: _Consider the class of neural networks \(\mathcal{F}_{\tilde{\mathcal{W}}}=\{F_{\mathbf{W}},\mathbf{W}\in\tilde{ \mathcal{W}}\}\), where \(F_{\mathbf{W}}\) is given by (10) and \(\tilde{\mathcal{W}}=\{\mathbf{W}\in\mathbb{R}^{L\times d\times d},\|\mathbf{W }\|_{1,1,\infty}\leqslant R_{\mathcal{W}}\}\)._
_Assume that \(L\geqslant R_{\mathcal{W}}\) and \(K_{\sigma}=1\), and let \(\gamma,\delta>0\). Consider \((x,y),(x_{1},y_{1}),\ldots,(x_{n},y_{n})\) drawn i.i.d. from any probability distribution over \(\mathbb{R}^{d}\times\{1,\ldots d\}\) such that a.s. \(\|x\|\leqslant R_{\mathcal{X}}\)._
_Then, with probability at least \(1-\delta\), for every \(\mathbf{W}\in\tilde{\mathcal{W}}\),_
\[\mathbb{P}\Big{(}\operatorname*{arg\,max}_{1\leqslant j\leqslant d}F_{\mathbf{ W}}(x)_{j}\neq y\Big{)}\leqslant\widehat{\mathscr{R}}_{n}(\mathbf{W})+C\frac{R_{ \mathcal{X}}R_{\mathcal{W}}\exp(R_{\mathcal{W}})\log(d)\sqrt{L}}{\gamma\sqrt{n }}+\frac{C}{\sqrt{n}}\sqrt{\log\frac{1}{\delta}}, \tag{14}\]
_where \(\widehat{\mathscr{R}}_{n}(\mathbf{W})\leqslant n^{-1}\sum_{i=1}^{n}\mathbf{1}_{F_{\mathbf{W}}(x_{i})_{y_{i}}\leqslant\gamma+\max_{j\neq y_{i}}F_{\mathbf{W}}(x_{i})_{j}}\) and \(C\) is a universal constant._
We first note that the setting is slightly different from ours: they consider a large margin predictor for a multi-class classification problem, whereas we consider a general Lipschitz-continuous loss \(\ell\). This being said, the model class is identical to ours, except for one notable difference: the constraint on the Lipschitz constant of the weights appearing in equation (12) is not required here.
Comparing (13) and (14), we see that our bound enjoys a better dependence on the depth \(L\) but a worse dependence on the width \(d\). Regarding the depth, our bound (13) does not depend on \(L\), whereas the bound (14) scales as \(\mathcal{O}(\sqrt{L})\). This comes from the fact that we consider a smaller set of parameters (12), by adding the constraint on the Lipschitz norm of the weights. This constraint allows us to control the complexity of our class of neural networks independently of depth, as long as \(K_{\mathcal{W}}\) is independent of \(L\). If \(K_{\mathcal{W}}\) scales as \(\mathcal{O}(L)\), which is the case for i.i.d. initialization schemes, our result also features a scaling in \(\mathcal{O}(\sqrt{L})\). As for the width, Bartlett et al. (2017) achieve a better dependence by a subtle covering numbers argument that takes into account the geometry induced by matrix norms. Since our paper focuses on a depth-wise analysis by leveraging the similarity between residual networks and their infinite-depth counterpart, improving the scaling of our bound with width is left for future work. Finally, note that both bounds have a similar exponential dependence in \(R_{\mathcal{W}}\).
As for Golowich et al. (2018), they consider non-residual neural networks of the form \(x\mapsto M_{L}\sigma(M_{L-1}\sigma(\ldots\sigma(M_{1}x))).\) These authors show that the generalization error of this class scales as
\[\mathcal{O}\bigg{(}R_{\mathcal{X}}\frac{\Pi_{F}\sqrt{\log\big{(}\frac{\Pi_{F}}{ \pi_{S}}\big{)}}}{n^{1/4}}\bigg{)},\]
where \(\Pi_{F}\) is an upper-bound on the product of the Frobenius norms \(\prod_{k=1}^{L}\|M_{k}\|_{F}\) and \(\pi_{S}\) is a lower-bound on the product of the spectral norms \(\prod_{k=1}^{L}\|M_{k}\|\). Under the assumption that both \(\Pi_{F}\) and \(\nicefrac{{\Pi_{F}}}{{\pi_{S}}}\) are bounded independently of \(L\), their bound is indeed depth-independent, similarly to ours. Interestingly, like ours, their bound presents a \(\mathcal{O}(n^{-1/4})\) convergence rate instead of the more usual \(\mathcal{O}(n^{-1/2})\). However, the assumption that \(\Pi_{F}\) is bounded independently of \(L\) does not hold in our residual setting, since we have \(M_{k}=I+\frac{1}{L}W_{k}\) and thus we can lower-bound
\[\prod_{k=1}^{L}\|M_{k}\|_{F}\geqslant\prod_{k=1}^{L}\Big{(}\|I\|_{F}-\frac{1}{ L}\|W_{k}\|_{F}\Big{)}\geqslant\big{(}\sqrt{d}-\frac{R_{\mathcal{W}}}{L} \big{)}^{L}\approx d^{\frac{L}{2}}e^{-\frac{R_{\mathcal{W}}}{\sqrt{d}}}.\]
In our setting, a different assumption altogether, namely the constraint that two successive weight matrices should be close to one another, is what allows us to derive depth-independent bounds.
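The size of this obstruction is easy to check numerically. The following quick check (our illustration; the values of \(d\), \(L\), and \(R_{\mathcal{W}}\) are arbitrary) evaluates both sides of the approximation in log space, since \(d^{L/2}\) overflows ordinary floating-point arithmetic:

```python
import numpy as np

# Numeric check (ours) that Pi_F blows up like d^(L/2) for residual layers
# M_k = I + W_k / L, done in log space to avoid overflow.
d, L, R_W = 30, 1000, 3.0
log_lower = L * np.log(np.sqrt(d) - R_W / L)          # log of (sqrt(d) - R_W/L)^L
log_approx = (L / 2) * np.log(d) - R_W / np.sqrt(d)   # log of d^(L/2) e^(-R_W/sqrt(d))
print(log_lower, log_approx)   # both about 1.70e3, so Pi_F >= exp(1700)
```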
### Numerical illustration
The bound of Theorem 2 features two quantities that depend on the class of neural networks, namely \(R_{\mathcal{W}}\) that bounds a norm of the weight matrices and \(K_{\mathcal{W}}\) that bounds the maximum _difference_ between two successive weight matrices, a.k.a. the Lipschitz constant of the weights. The first one belongs to the larger class of norm-based bounds that has been extensively studied (see, e.g., Neyshabur et al., 2015). We are therefore interested in getting a better understanding of the role of the second quantity, which is much less common, in the generalization ability of deep residual networks.
To this aim, we train deep residual networks (10) (of width \(d=30\) and depth \(L=1000\)) on MNIST. We prepend the network with an initial weight matrix to project the data \(x\) from dimension \(784\) to dimension \(30\), and similarly append another matrix to project the output \(M_{\mathbf{W}}(x)\) into dimension \(10\) (i.e. the number of classes in MNIST). Finally, we consider two training settings: either the initial and final matrices are trained, or they are fixed random projections. We use the initialization scheme outlined in Section 4.1. Further experimental details are postponed to the Appendix.
We report in Figure 1(a) the generalization gap of the trained networks, that is, the difference between the test and train errors (in terms of cross-entropy loss), as a function of the maximum Lipschitz constant of the weights \(\sup_{0\leqslant k\leqslant L-1}(\|W_{k+1}-W_{k}\|_{\infty})\). We observe a positive correlation between these two quantities. To further analyze the relationship between the Lipschitz constant of the weights and the generalization gap, we then add the penalization term \(\lambda\cdot\big{(}\sum_{k=0}^{L-1}\|W_{k+1}-W_{k}\|_{F}^{2}\big{)}^{1/2}\) to the loss, for some \(\lambda\geqslant 0\). The obtained generalization gap is reported in Figure 1(b) as a function of \(\lambda\). We observe that this penalization allows us to reduce the generalization gap. Together, these two observations support the conclusion that a smaller Lipschitz constant improves the generalization power of deep residual networks, in accordance with Theorem 2.
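For concreteness, here is a minimal sketch (ours, not the authors' code) of this penalized objective; the residual update, the sizes, the data, and the task loss below are stand-ins for the actual parameterization (10) and the MNIST setup:

```python
import torch

# Sketch of the penalized objective: a deep residual network, written here for
# illustration as h <- h + sigma(W_k h)/L, trained with the extra term
# lambda * (sum_k ||W_{k+1} - W_k||_F^2)^(1/2).
L, d, lam = 100, 30, 1e-3                  # stand-in sizes; the paper uses L = 1000
W = torch.nn.Parameter(1e-2 * torch.randn(L, d, d))

def forward(x):
    h = x
    for k in range(L):
        h = h + torch.tanh(h @ W[k].T) / L # residual update with a 1/L factor
    return h

def lipschitz_penalty(W):
    # Frobenius norms of successive weight differences, aggregated as in the text
    return (W[1:] - W[:-1]).pow(2).sum().sqrt()

x, y = torch.randn(8, d), torch.randn(8, d)    # dummy batch instead of MNIST
loss = torch.nn.functional.mse_loss(forward(x), y) + lam * lipschitz_penalty(W)
loss.backward()
```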
However, note that we were not able to obtain an improvement on the test loss by adding the penalization term. This is not too surprising, since previous work has investigated a related penalization, in terms of the Lipschitz norm of the layer sequence \((H_{k})_{0\leq k\leq L}\), and was similarly not able to report any improvement on the test loss (Kelly et al., 2020).

Figure 1. Link between the generalization gap and the Lipschitz constant of the weights
## 5 Conclusion
We provide a generalization bound that applies to a wide range of parameterized ODEs. As a consequence, we obtain the first generalization bounds for time-independent and time-dependent neural ODEs. By discretizing our reasoning, we also provide a bound for a class of deep residual networks. In the future, it would also be interesting to extend our result to the more involved case of neural SDEs, which have also been found to be deep limits of a large class of residual neural networks (Cohen et al., 2021; Marion et al., 2022).
## Acknowledgments
The author is supported by a grant from Région Île-de-France, by a Google PhD Fellowship, and by MINES Paris - PSL. The author thanks Éloïse Berthier, Gérard Biau and Clément Mantoux for their thorough proofreading and suggestions on this paper.
|
2306.07006 | Singularity Categories of Higher Nakayama Algebras | For a higher Nakayama algebra $A$ in the sense of Jasso-K\"{u}lshammer, we
show that the singularity category of $A$ is triangulated equivalent to the
stable module category of a self-injective higher Nakayama algebra. This
generalizes a similar result for usual Nakayama algebras due to Shen. Our proof
relies on the existence of $d\mathbb{Z}$-cluster tilting subcategories in the
module category of $A$ and the result of Kvamme that each $d\mathbb{Z}$-cluster
tilting subcategory of $A$ induces a $d\mathbb{Z}$-cluster tilting subcategory
in its singularity category. Moreover, our result provides many concrete
examples of the triangulated Auslander-Iyama correspondence introduced by
Jasso-Muro, namely, there is a bijective correspondence between the equivalence
classes of the singularity categories of $d$-Nakayama algebras with its basic
$d\mathbb{Z}$-cluster tilting object and the isomorphism classes of
self-injective $(d+1)$-Nakayama algebras. | Wei Xing | 2023-06-12T10:19:51Z | http://arxiv.org/abs/2306.07006v2 | # Singularity categories of higher Nakayama algebras
###### Abstract.
For a higher Nakayama algebra \(A\) in the sense of Jasso-Kulshammer, we show that the singularity category of \(A\) is triangulated equivalent to the stable module category of a self-injective higher Nakayama algebra. This generalizes a similar result for usual Nakayama algebras due to Shen. Our proof relies on the existence of \(d\mathbb{Z}\)-cluster tilting subcategories in the module category of \(A\) and the result of Kvamme that each \(d\mathbb{Z}\)-cluster tilting subcategory of \(A\) induces a \(d\mathbb{Z}\)-cluster tilting subcategory in its singularity category. Moreover, our result provides many concrete examples of the triangulated Auslander-Iyama correspondence introduced by Jasso-Muro, namely, there is a bijective correspondence between the equivalence classes of the singularity categories of \(d\)-Nakayama algebras with its basic \(d\mathbb{Z}\)-cluster tilting object and the isomorphism classes of self-injective \((d+1)\)-Nakayama algebras.
Keywords: Higher Nakayama Algebra; \(n\mathbb{Z}\)-Cluster Tilting Subcategory; Singularity Category; Wide Subcategory
###### Contents
* 1 Introduction
* 2 Preliminaries
* 2.1 \(d\)-cluster tilting subcategories
* 2.2 \(d\)-abelian categories and wide subcategories
* 2.3 Higher Nakayama algebras
* 3 Singularity category of higher Nakayama algebras
* 3.1 Resolution quiver for \(A_{\underline{\ell}}^{(d)}\)
* 3.2 Singularity category of \(A_{\underline{\ell}}^{(d)}\)
* 4 Examples
## 1. Introduction
Auslander-Reiten theory is a fundamental tool to study representation theory from a homological point of view. A generalization of this theory, called higher Auslander-Reiten theory, was introduced by Iyama [14, 15, 16]. In this theory, the object of study is some category \(\mathcal{A}\), usually the module category of a finite dimensional algebra or its bounded derived category, equipped with a \(d\)-cluster tilting subcategory \(\mathcal{M}\subset\mathcal{A}\), possibly with some additional property. Depending on different settings, \(d\)-cluster tilting subcategories give rise to higher notions in homological algebra. For instance, if \(\mathcal{A}\) is abelian and \(\mathcal{M}\) is \(d\)-cluster tilting, then \(\mathcal{M}\) is a \(d\)-abelian category in the sense of Jasso [17]. If \(\mathcal{A}\) is triangulated and \(\mathcal{M}\) is \(d\)-cluster tilting with the additional property that \(\mathcal{M}\)
is closed under the \(d\)-fold suspension functor then \(\mathcal{M}\) is \((d+2)\)-angulated in the sense of Geiss-Keller-Oppermann [1].
Let \(A\) be a finite dimensional algebra, \(\operatorname{mod}A\) be the category of finitely generated right \(A\)-modules and \(D^{b}(\operatorname{mod}A)\) be the bounded derived category of \(\operatorname{mod}A\). Assume \(\mathcal{M}\) is a \(d\)-cluster tilting subcategory of \(\operatorname{mod}A\). A natural question is whether we can construct a \(d\)-cluster tilting subcategory \(\mathcal{U}\) of some triangulated category related to \(A\) out of \(\mathcal{M}\). If \(A\) has global dimension \(d\), then the subcategory
\[\mathcal{U}=\operatorname{add}\{M[di]\in D^{b}(\operatorname{mod}A)\mid M\in \mathcal{M}\text{ and }i\in\mathbb{Z}\}\]
is \(d\)-cluster tilting inside \(D^{b}(\operatorname{mod}A)\), see [10]. In general, if we drop the assumption that \(A\) has global dimension \(d\), there are no known cluster tilting subcategories inside \(D^{b}(\operatorname{mod}A)\). If \(A\) is self-injective, its stable module category \(\underline{\operatorname{mod}A}\) has a triangulated structure, see [11]. Then \(\mathcal{U}=\underline{\mathcal{M}}\) is \(d\)-cluster tilting. Indeed, all \(d\)-cluster tilting subcategories of \(\underline{\operatorname{mod}A}\) arise in this way.
Assume, in addition, that \(\mathcal{M}\) is \(d\mathbb{Z}\)-cluster tilting [12]. As shown in [1], the naive approach doesn't necessarily give a \(d\)-cluster tilting subcategory in \(D^{b}(\operatorname{mod}A)\). However, Kvamme [14] showed that every \(d\mathbb{Z}\)-cluster tilting subcategory of \(\operatorname{mod}A\) gives rise to a \(d\mathbb{Z}\)-cluster tilting subcategory of the singularity category \(D_{sg}(A)\). Note that if \(\operatorname{gldim}A=d\), then \(D_{sg}(A)=0\) and any \(d\)-cluster tilting subcategory is trivially \(d\mathbb{Z}\)-cluster tilting. If \(A\) is self-injective, then \(D_{sg}(A)=\underline{\operatorname{mod}A}\) and there is a bijective correspondence between \(d\mathbb{Z}\)-cluster tilting subcategories in \(\operatorname{mod}A\) and \(\underline{\operatorname{mod}A}\).
However, the existence of \(d\mathbb{Z}\)-cluster tilting subcategories imposes a strong restriction on \(A\). Nakayama algebras provide some examples of such algebras. Recently, Herschend-Kvamme-Vaso [1] gave a complete description of \(d\mathbb{Z}\)-cluster tilting subcategories of Nakayama algebras. Another typical class of such algebras is higher Nakayama algebras constructed by Jasso-Kulshammer [1]. As a generalization of usual Nakayama algebras, higher Nakayama algebras admit complex homological structures while being convenient to compute with combinatorially. For this reason, we restrict our objects of study to higher Nakayama algebras.
The singularity category of an algebra was introduced by Buchweitz [1]. The name "singularity category" is justified by the fact that an algebra \(A\) has finite global dimension if and only if \(D_{sg}(A)\) vanishes. The singularity category captures the stable homological properties of an algebra. As stated in the Buchweitz-Happel theorem, if \(A\) is Iwanaga-Gorenstein, the singularity category \(D_{sg}(A)\) is triangulated equivalent to the stable category of maximal Cohen-Macaulay modules over \(A\). However, for non-Iwanaga-Gorenstein algebras, the singularity category is more difficult to describe.
For a usual Nakayama algebra, Shen [1] showed that its singularity category is triangulated equivalent to the stable category of a self-injective Nakayama algebra. Moreover, an explicit construction of such an equivalence is given, which relies on the resolution quiver of a Nakayama algebra introduced by Ringel [15]. Motivated by the above results, we focus on higher Nakayama algebras and describe their singularity categories. The main result states as follows.
**Theorem 1.1**.: _Let \(A\) be a \(d\)-Nakayama algebra. Then the singularity category \(D_{sg}(A)\) is triangulated equivalent to the stable category \(\underline{\operatorname{mod}B}\) with \(B\) a self-injective \(d\)-Nakayama algebra._
Here \(B\) is actually an idempotent subalgebra of \(A\), i.e. \(B=\operatorname{End}_{A}(eA)\) for a certain idempotent \(e\in A\). To get \(B\), or equivalently, find the idempotent \(e\), we generalize the notion of resolution quiver to higher Nakayama algebras. In precise terms, if \(A\) is a \(d\)-Nakayama
algebra with Kupisch series \(\underline{\ell}=(\ell_{1},\ldots,\ell_{n})\), the vertex set of the resolution quiver \(R(A)\) is \(\{1,2,\ldots,n\}\), and there is an arrow from \(i\) to \(j\) if \(j\equiv i-\ell_{i}-d+1\mod n\). When \(d=1\), our definition coincides with the resolution quivers for usual Nakayama algebras defined in [11]. Let \(J\) be the subset of \(\{1,2,\ldots,n\}\) which consists all the numbers that lie in a cycle of \(R(A)\) and let \(I=J+n\mathbb{Z}\). It turns out that the idempotent \(e\) corresponds to the direct sum of indecomposable projective \(A\)-modules whose coordinates are in \(I\). Here by coordinates, we follow a slightly modified version of the notation given in [1] where indecomposable \(A\)-modules are indexed by ordered sequences of length \(d+1\) with certain restrictions given by \(\underline{\ell}\).
Our result provides concrete examples of the triangulated Auslander-Iyama correspondence introduced by Jasso-Muro [1], which states that there is a bijective correspondence between the equivalence classes of pairs \((\mathcal{T},c)\) consisting of an algebraic Krull-Schmidt triangulated category \(\mathcal{T}\) with a basic \(d\mathbb{Z}\)-cluster tilting object \(c\in\mathcal{T}\) and the Morita equivalence classes of twisted \((d+2)\)-periodic self-injective algebras by sending \(c\) to \(\mathcal{T}(c,c)\). In particular, this correspondence provides a method of recognizing such a pair \((\mathcal{T},c)\) via \(\mathcal{T}(c,c)\). Several recognition theorems for algebraic triangulated categories were discussed in [1, Section 6]. In particular, [1, Theorem 6.5.2] gave a recognition theorem of the stable module categories of self-injective higher Nakayama algebras. Theorem 1.1 extends this recognition theorem to the singularity categories of all higher Nakayama algebras. Thus, we extend the library of algebraic triangulated categories that can be recognized via the triangulated Auslander-Iyama correspondence.
**Notation and conventions.** Throughout this paper, we fix positive integers \(d\) and \(n\). We work over an arbitrary field \(k\). Unless stated otherwise, all algebras are finite dimensional \(k\)-algebras and all modules are finite dimensional right modules. We denote by \(D\) the \(k\)-duality \(\operatorname{Hom}_{k}(-,k)\).
All subcategories considered are supposed to be full. Let \(F:\mathcal{C}\to\mathcal{D}\) be a functor, the essential image of \(F\) is the full subcategory of \(\mathcal{D}\) given by
\[F\,\mathcal{C}=\{D\in\mathcal{D}\mid\exists C\in\mathcal{C}\text{ such that }FC\cong D\}.\]
Let \(\mathcal{C}\) be a triangulated category. We denote by \(\Sigma\) the suspension functor of \(\mathcal{C}\). By \(\operatorname{tri}(E)\) we mean the smallest triangulated subcategory of \(\mathcal{C}\) containing the set of objects \(E\) in \(\mathcal{C}\).
Let \(A\) be a finite dimensional algebra over \(k\) and \(\operatorname{mod}\!A\) the category of finitely generated right \(A\)-modules. We denote by \(\underline{\operatorname{mod}\!A}\) the projectively stable module category of \(A\), that is the category with the same objects as \(\operatorname{mod}\!A\) and morphisms given by \(\underline{\operatorname{Hom}}_{A}(M,N)=\operatorname{Hom}_{A}(M,N)/ \mathcal{P}(M,N)\) where \(\mathcal{P}(M,N)\) denotes the subspace of morphisms factoring through projective modules. We denote by \(\Omega:\underline{\operatorname{mod}\!A}\to\underline{\operatorname{mod}\!A}\) the syzygy functor defined by \(\Omega(M)\) being the kernel of the projective cover \(P(M)\twoheadrightarrow M\). Let \(\Omega^{0}(M)=M\) and \(\Omega^{i+1}(M)=\Omega(\Omega^{i}(M))\) for \(i\geq 0\). The injectively stable module category \(\overline{\operatorname{mod}\!A}\) of \(A\) and the cosyzygy functor \(\Omega^{-1}:\overline{\operatorname{mod}\!A}\to\overline{\operatorname{mod}\!A}\) are defined dually. When \(A\) is a self-injective algebra, \(\operatorname{mod}\!A\) is a Frobenius category, and thus \(\underline{\operatorname{mod}\!A}\) has a triangulated category structure with the suspension functor \(\Omega^{-1}\). We refer to [10] for more details.
We consider the \(d\)-Auslander-Reiten translations \(\tau_{d}:\underline{\operatorname{mod}\!A}\to\overline{\operatorname{mod}\!A}\) and \(\tau_{d}^{-}:\overline{\operatorname{mod}\!A}\to\underline{\operatorname{mod} \!A}\) defined by \(\tau_{d}=\tau\Omega^{d-1}\) and \(\tau_{d}^{-}=\tau^{-}\Omega^{-(d-1)}\) where \(\tau\) and \(\tau^{-}\) denote the usual Auslander-Reiten translations.
Recall that \(D^{b}(\operatorname{mod}\!A)\) denotes the bounded derived category of \(\operatorname{mod}\!A\). The category \(\operatorname{mod}\!A\) is a full subcategory of \(D^{b}(\operatorname{mod}\!A)\) by identifying an \(A\)-module with the corresponding stalk complex concentrated at degree zero. A complex in \(D^{b}(\operatorname{mod}\!A)\) is perfect provided that it is quasi-isomorphic to a bounded complex of finitely generated projective
\(A\)-modules. Perfect complexes form a thick subcategory of \(D^{b}(\operatorname{mod}\!A)\), which is denoted by \(\operatorname{perf}(A)\). The singularity category of \(A\), denoted by \(D_{sg}(A)\) is the quotient of triangulated categories given as
\[D_{sg}(A)=D^{b}(\operatorname{mod}\!A)/\operatorname{perf}(A).\]
Recall that \(K^{-,b}(\operatorname{proj}\!A)\) denotes the upper bounded homotopy category of \(\operatorname{proj}\!A\) and \(K^{b}(\operatorname{proj}\!A)\) denotes bounded homotopy category of \(\operatorname{proj}\!A\), which is a thick triangulated subcategory of \(K^{-,b}(\operatorname{proj}\!A)\). Via the equivalences \(K^{-,b}(\operatorname{proj}\!A)\cong D^{b}(\operatorname{mod}\!A)\) and \(K^{b}(\operatorname{proj}\!A)\cong\operatorname{perf}(A)\), we have that
\[D_{sg}(A)\cong K^{-,b}(\operatorname{proj}\!A)/K^{b}(\operatorname{proj}\!A).\]
Denote by \(q^{\prime}:D^{b}(\operatorname{mod}\!A)\to D_{sg}(A)\) the quotient functor. Observe that the functor \(\operatorname{mod}\!A\to D^{b}(\operatorname{mod}\!A)\xrightarrow{q^{\prime} }D_{sg}(A)\) vanishes on projective modules. Hence it induces a functor \(q:\underline{\operatorname{mod}\!A}\to D_{sg}(A)\).
By a singular equivalence between two algebras \(A\) and \(B\), we mean a triangle equivalence between their singularity categories.
## 2. Preliminaries
### \(d\)-cluster tilting subcategories
Let \(\mathcal{M}\) be a subcategory of a category \(\mathcal{C}\) and let \(C\in\mathcal{C}\). A right \(\mathcal{M}\)-approximation of \(C\) is a morphism \(f:M\to C\) with \(M\in\mathcal{M}\) such that all morphisms \(g:M^{\prime}\to C\) with \(M^{\prime}\in\mathcal{M}\) factor through \(f\). \(\mathcal{M}\) is contravariantly finite in \(\mathcal{C}\) if every \(C\in\mathcal{C}\) admits a right \(\mathcal{M}\)-approximation. The notions of left \(\mathcal{M}\)-approximation and covariantly finite are defined dually. We say that \(\mathcal{M}\) is functorially finite in \(\mathcal{C}\) if \(\mathcal{M}\) is both contravariantly finite and covariantly finite. In particular, if \(M\in\operatorname{mod}\!A\), then \(\operatorname{add}\!M\) is functorially finite. By a right (minimal) \(\operatorname{add}\!M\)-resolution of \(X\in\operatorname{mod}\!A\) we mean the following complex
\[\cdots\to M_{n}\xrightarrow{f_{n}}\cdots\to M_{1}\xrightarrow{f_{1}}M_{0}\xrightarrow{f_{0}}X\]
with \(f_{0}\) a (minimal) right \(\operatorname{add}\!M\)-approximation of \(X\) and \(f_{i}\) a (minimal) right \(\operatorname{add}\!M\)-approximation of \(\operatorname{Ker}\!f_{i-1}\) for all \(i\geq 1\). A left (minimal) \(\operatorname{add}\!M\)-resolution is defined dually. Recall in the case when \(\mathcal{C}\) is abelian, \(\mathcal{M}\) is called a generating (resp. cogenerating) subcategory if for any object \(C\in\mathcal{C}\), there exists an epimorphism \(M\to C\) (resp. monomorphism \(C\to M\)) with \(M\in\mathcal{M}\).
**Definition 2.1** ([17, 18, 19]).: _Let \(d\) be a positive integer. Let \(\mathcal{C}\) be an abelian or a triangulated category, and \(A\) a finite-dimensional \(k\)-algebra._
1. _We call a subcategory_ \(\mathcal{M}\) _of_ \(\mathcal{C}\) _a_ \(d\)_-cluster tilting subcategory if it is functorially finite, generating-cogenerating if_ \(\mathcal{C}\) _is abelian, and_ \[\mathcal{M} =\{C\in\mathcal{C}\mid\operatorname{Ext}_{\mathcal{C}}^{i}(C,\mathcal{M})=0 \text{ for }1\leq i\leq d-1\}\] \[=\{C\in\mathcal{C}\mid\operatorname{Ext}_{\mathcal{C}}^{i}(\mathcal{M},C)=0 \text{ for }1\leq i\leq d-1\}.\] _If moreover_ \(\operatorname{Ext}_{\mathcal{C}}^{i}(\mathcal{M},\mathcal{M})\neq 0\) _implies that_ \(i\in d\mathbb{Z}\)_, then we call_ \(\mathcal{M}\) _a_ \(d\mathbb{Z}\)_-cluster tilting subcategory._
2. _A finitely generated module_ \(M\in\operatorname{mod}\!A\) _is called a_ \(d\)_-cluster tilting module (respectively_ \(d\mathbb{Z}\)_-cluster tilting module) if_ \(\operatorname{add}\!M\) _is a_ \(d\)_-cluster tilting subcategory (respectively_ \(d\mathbb{Z}\)_-cluster tilting subcategory) of_ \(\operatorname{mod}\!A\)_._
**Remark 2.2**.: _If \(\mathcal{C}\) is a triangulated category, then_
\[\operatorname{Ext}_{\mathcal{C}}^{i}(X,Y)=\operatorname{Hom}_{\mathcal{C}}(X,\Sigma^{i}Y) \text{ for }X,Y\in\mathcal{C}.\]
_Therefore \(\mathcal{M}\) is a \(d\mathbb{Z}\)-cluster tilting subcategory of \(\mathcal{C}\) if and only if \(\mathcal{M}\) is \(d\)-cluster tilting and \(\Sigma^{d}\mathcal{M}\subset\mathcal{M}\). Recall that in this case, \(\mathcal{M}\) has a \((d+2)\)-angulated structure in the sense of [1]._
**Definition 2.3**.: _[_1_, Definition 2.5]_ _We call \((A,\mathcal{M})\) a \(d\)-homological pair if \(A\) is a finite dimensional \(k\)-algebra and \(\mathcal{M}\subset\operatorname{mod}\nolimits A\) is a \(d\)-cluster tilting subcategory._
**Proposition 2.4** ([1]).: _Let \((A,\mathcal{M})\) be a \(d\)-homological pair. Then each \(X\in\operatorname{mod}\nolimits A\) has a minimal right \(\mathcal{M}\)-resolution and a minimal left \(\mathcal{M}\)-resolution, which are exact._
The following result relates \(d\mathbb{Z}\)-cluster tilting subcategories of \(\operatorname{mod}\nolimits A\) and \(D_{sg}(A)\).
**Theorem 2.5** ([11]).: _Let \(A\) be a finite dimensional algebra and \(\mathcal{M}\) a \(d\mathbb{Z}\)-cluster tilting subcategory of \(\operatorname{mod}\nolimits A\). Then the subcategory_
\[\underline{\mathcal{M}}=\{X\in D_{sg}(A)\mid X\cong M[di]\text{ for some }M\in\mathcal{M}\text{ and }i\in\mathbb{Z}\}\]
_is a \(d\mathbb{Z}\)-cluster tilting subcategory of \(D_{sg}(A)\). In particular, \(\underline{\mathcal{M}}\) is a \((d+2)\)-angulated category._
### \(d\)-abelian categories and wide subcategories
The notion of \(d\)-abelian categories was introduced by Jasso [16]. It relies on the notion of \(d\)-kernel, \(d\)-cokernel and \(d\)-extension, which we now recall following [1].
**Definition 2.6**.: _[_1_, Definition 2.1]_ _Let \(\mathcal{M}\) be an additive category and_
\[\mathbb{E}:\ M_{d+1}\xrightarrow{f}M_{d}\to\cdots\to M_{1}\xrightarrow{g}M_{0}\]
_a sequence in \(\mathcal{M}\)._
1. _We call_ \[M_{d+1}\to M_{d}\to\cdots\to M_{1}\] _a \(d\)-kernel of \(g\) if_ \[0\to\mathcal{M}(M,M_{d+1})\to\mathcal{M}(M,M_{d})\to\cdots\to\mathcal{M}(M,M_{1})\to\mathcal{M}(M,M_{0})\] _is exact for all_ \(M\in\mathcal{M}\)_._
2. _We call_ \[M_{d}\to\cdots\to M_{1}\to M_{0}\] _a \(d\)-cokernel of \(f\) if_ \[0\to\mathcal{M}(M_{0},M)\to\mathcal{M}(M_{1},M)\to\cdots\to\mathcal{M}(M_{d+1},M)\] _is exact for all_ \(M\in\mathcal{M}\)_._
3. _If both (_1_) and (_2_) are satisfied we call_ \(\mathbb{E}\) \(a\) \(d\)_-exact sequence (or a_ \(d\)_-extension of_ \(M_{0}\) _by_ \(M_{d+1}\)_)._
4. _We say that_ \(\mathcal{M}\) _is_ \(d\)_-abelian if it is idempotent split, every morphism admits a_ \(d\)_-kernel and a_ \(d\)_-cokernel, and every monomorphism_ \(f\) _respectively epimorphism_ \(g\) _fits into a_ \(d\)_-exact sequence of the form_ \(\mathbb{E}\)_._
As shown in [16], \(d\)-cluster tilting subcategories of abelian categories are \(d\)-abelian. Now we recall the notion of wide subcategories of \(d\)-abelian categories.
**Definition 2.7** ([16]).: _An additive subcategory \(\mathcal{W}\) of a \(d\)-abelian category \(\mathcal{M}\) is called wide if it satisfies the following conditions:_
1. _Each morphism in_ \(\mathcal{W}\) _has a_ \(d\)_-kernel in_ \(\mathcal{M}\) _which consists of objects from_ \(\mathcal{W}\)_._
2. _Each morphism in_ \(\mathcal{W}\) _has a_ \(d\)_-cokernel in_ \(\mathcal{M}\) _which consists of objects from_ \(\mathcal{W}\)_._
3. _Each \(d\)-exact sequence in \(\mathcal{M}\),_ \[0\to W^{\prime\prime}\to M_{d}\to\cdots\to M_{1}\to W^{\prime}\to 0,\] _with \(W^{\prime},W^{\prime\prime}\in\mathcal{W}\), is Yoneda equivalent to a \(d\)-exact sequence in \(\mathcal{M}\),_ \[0\to W^{\prime\prime}\to W_{d}\to\cdots\to W_{1}\to W^{\prime}\to 0,\] _with \(W_{i}\in\mathcal{W}\) for each \(i\)._
In the following theorem, statement \((a)\) provides a construction of wide subcategories inside \(d\)-cluster tilting subcategories which was obtained in [16]. Here we have relaxed the condition \((iii)\). The same proof and same statement in [16, Theorem B] still apply. Moreover, based on the same setting, we obtain statement \((b)\), which provides a singular equivalence between two \(k\)-algebras under suitable conditions. If in addition, we require \(\mathcal{M}\) in the \(d\)-homological pair \((A,\mathcal{M})\) to be \(d\mathbb{Z}\)-cluster tilting, then by Theorem 2.5, we obtain statement \((c)\), which gives an equivalence between two \((d+2)\)-angulated categories in the sense of [1].
**Theorem 2.8**.: _Let \((A,\mathcal{M})\) be a \(d\)-homological pair. Let \(\mathcal{W}\subset\mathcal{M}\) be an additive subcategory. Let \(P\in\mathcal{W}\) be a module and set \(B=\operatorname{End}_{A}(P)\), so that \(P\) becomes a \(B\)-\(A\)-bimodule. Assume the following:_
1. _As an_ \(A\)_-module_ \(P\) _has finite projective dimension._
2. \(\operatorname{Ext}_{A}^{t}(P,P)=0\) _for all_ \(i\geq 1\)_._
3. _Each_ \(W\in\mathcal{W}\) _admits an exact_ add_\(P\)_-resolution_ \[\cdots\xrightarrow{}P_{m}\xrightarrow{}\cdots\xrightarrow{}P_{1} \xrightarrow{}P_{0}\xrightarrow{}W\xrightarrow{}0,\,P_{i}\in\operatorname{ add}P.\]
4. \((B,\mathcal{N})\) _is a_ \(d\)_-homological pair, where_ \(i_{\rho}=\operatorname{Hom}_{A}(P,-):\operatorname{mod}A\to \operatorname{mod}B\) _and_ \(\mathcal{N}=i_{\rho}(\mathcal{W})\)_._
_Then the following statements hold_
1. \(\mathcal{W}\) _is a wide subcategory of_ \(\mathcal{M}\) _and there is an equivalence of categories_ \[i_{\lambda}=-\otimes_{B}P:\mathcal{N}\xrightarrow{\sim}\mathcal{W}.\]
2. \(i_{\lambda}:\operatorname{mod}B\to\operatorname{mod}A\) _induces a fully faithful triangle functor between the singularity categories of_ \(A\) _and_ \(B\)_,_ \[D_{sg}(i_{\lambda}):D_{sg}(B)\to D_{sg}(A).\] _Moreover,_ \(D_{sg}(i_{\lambda})\) _is a triangle equivalence if for any indecomposable_ \(M\in\mathcal{M}\)_, there exists an integer_ \(n\in\mathbb{N}\) _such that_ \(\Omega^{n}(M)\in\mathcal{W}\)_._
3. _If in addition_ \(\mathcal{M}\) _is a_ \(d\mathbb{Z}\)_-cluster tilting subcategory of_ \(\operatorname{mod}\!A\)_, then_ \(\underline{\mathcal{M}}\subset D_{sg}(A)\)_,_ \(\underline{\mathcal{N}}\subset D_{sg}(B)\) _are_ \(d\mathbb{Z}\)_-cluster tilting and hence_ \((d+2)\)_-angulated. Moreover,_ \(D_{sg}(i_{\lambda})\) _restricts to an equivalence between_ \((d+2)\)_-angulated categories_ \[D_{sg}(i_{\lambda}):\underline{\mathcal{N}}\to\underline{\mathcal{M}}.\]
Proof.: We refer to [16, Section 3] for the proof of \((a)\). We remind the reader that condition \((iii)\) in [16, Theorem B] requires such a resolution to be finite, while we allow it here to be infinite. The same proof applies.
Now we prove \((b)\). As in the proof of [10, Theorem B], we may apply [10, Lemma 3.3] to get that \({}_{B}P\) is projective. Hence \(i_{\lambda}\) is an exact functor. Therefore \(i_{\lambda}\) induces a triangle functor
\[i_{\lambda}^{*}:D^{b}(\operatorname{mod}B)\to D^{b}(\operatorname{mod}A).\]
By [10, Lemma 3.6], \(i_{\lambda}^{*}\) is fully faithful.
Moreover \(i_{\rho}\) and \(i_{\lambda}\) restrict to quasi-inverse equivalences between \(\operatorname{add}P\) and \(\operatorname{add}B\). Hence \(i_{\lambda}\) preserves perfect complexes since \(P\in\operatorname{perf}(A)\) by (i). Therefore \(i_{\lambda}\) induces a triangle functor between singularity categories
\[D_{sg}(i_{\lambda}):D_{sg}(B)\to D_{sg}(A).\]
Next we show that \(D_{sg}(i_{\lambda})\) is fully faithful. Firstly we claim that
\[i_{\lambda}^{*}(\operatorname{perf}(B))=i_{\lambda}^{*}(D^{b}(\operatorname{ mod}B))\cap\operatorname{perf}(A).\]
Again \(P\in\operatorname{perf}(A)\) shows \(i_{\lambda}^{*}(\operatorname{perf}(B))\subset i_{\lambda}^{*}(D^{b}( \operatorname{mod}B))\cap\operatorname{perf}(A)\). To see the other inclusion, we take \(X_{\bullet}\in D^{b}(\operatorname{mod}B)\) such that \(i_{\lambda}^{*}(X_{\bullet})\in\operatorname{perf}(A)\). Choose \(Q_{\bullet}\in K^{-,b}(\operatorname{proj}B)\) such that \(Q_{\bullet}\cong X_{\bullet}\) in \(D^{b}(\operatorname{mod}B)\). Let \(n\) be the largest integer such that \(H_{n}(Q_{\bullet})\neq 0\). Denote by \(\sigma_{\geq n}Q_{\bullet}\) the brutal truncation of \(Q_{\bullet}\) at degree \(\geq n\). Then \(\sigma_{\geq n}Q_{\bullet}\cong\Sigma^{n}(M)\) in \(D^{b}(\operatorname{mod}B)\) for some \(M\in\operatorname{mod}B\). We have the following triangle
\[\sigma_{<n}Q_{\bullet}\to Q_{\bullet}\to\sigma_{\geq n}Q_{\bullet}\to\Sigma \sigma_{<n}Q_{\bullet}.\]
Applying \(i_{\lambda}^{*}\) to it yields a triangle in \(D^{b}(\operatorname{mod}A)\)
\[i_{\lambda}^{*}(\sigma_{<n}Q_{\bullet})\to i_{\lambda}^{*}(Q_{\bullet})\to i _{\lambda}^{*}(\sigma_{\geq n}Q_{\bullet})\to i_{\lambda}^{*}(\Sigma\sigma_{<n }Q_{\bullet}).\]
Note that \(\sigma_{<n}Q_{\bullet}\in\operatorname{perf}(B)\), so \(i_{\lambda}^{*}(\sigma_{<n}Q_{\bullet})\in\operatorname{perf}(A)\). By assumption \(i_{\lambda}^{*}(Q_{\bullet})\cong i_{\lambda}^{*}(X_{\bullet})\in\operatorname {perf}(A)\). Hence \(\Sigma^{-n}i_{\lambda}^{*}(\sigma_{\geq n}Q_{\bullet})\cong i_{\lambda}(M) \in\operatorname{perf}(A)\). If \(M\) were not in \(\operatorname{perf}(B)\), then \(\operatorname{proj}.\dim M_{B}=\infty\). That is, for any positive integer \(i\), \(\operatorname{Ext}_{B}^{i}(M,\Omega^{i}M)\neq 0\). Then \(\operatorname{Ext}_{A}^{i}(i_{\lambda}(M),i_{\lambda}(\Omega^{i}M))\cong \operatorname{Ext}_{B}^{i}(M,\Omega^{i}M)\neq 0\) by [10, Lemma 3.6]. But this contradicts the fact that \(i_{\lambda}(M)\in\operatorname{perf}(A)\). So \(M\in\operatorname{perf}(B)\) and \(i_{\lambda}^{*}(\sigma_{\geq n}Q_{\bullet})\in i_{\lambda}^{*}(\operatorname{ perf}(B))\). Using the above triangle we get \(i_{\lambda}^{*}(X_{\bullet})\cong i_{\lambda}^{*}(Q_{\bullet})\in i_{\lambda}^{*}( \operatorname{perf}(B))\).
So \(D_{sg}(i_{\lambda})\) factors through the induced equivalence \(D_{sg}(B)\xrightarrow{\sim}i_{\lambda}^{*}(D^{b}(\operatorname{mod}B))/(\operatorname{perf}(A)\cap i_{\lambda}^{*}(D^{b}(\operatorname{mod}B)))\).
Thus it suffices to show that the induced functor
\[i_{\lambda}^{*}(D^{b}(\operatorname{mod}B))/(\operatorname{perf}(A)\cap i_{ \lambda}^{*}(D^{b}(\operatorname{mod}B)))\to D_{sg}(A)\]
is fully faithful. To do this we apply [11, Proposition 10.2.6].
Let \(f:X_{\bullet}\to Y_{\bullet}\) be a morphism in \(D^{b}(\operatorname{mod}A)\) where \(X_{\bullet}\in\operatorname{perf}(A)\) and \(Y_{\bullet}\in i_{\lambda}^{*}(D^{b}(\operatorname{mod}B))\). We may assume that \(X_{\bullet}\in K^{b}(\operatorname{proj}A)\) and \(Y_{\bullet}\in C^{-}(\operatorname{add}P)\). Suppose \(X_{i}=0\) for all \(i>m\) and let \(Z_{\bullet}=\sigma_{\leq m}Y_{\bullet}\) be the brutal truncation of \(Y_{\bullet}\) at degree \(\leq m\). Since
\[\operatorname{Hom}_{D^{b}(\operatorname{mod}A)}(X_{\bullet},Y_{\bullet})\cong \operatorname{Hom}_{K^{-}(\operatorname{mod}A)}(X_{\bullet},Y_{\bullet}),\]
\(f\) factors through \(Z_{\bullet}\). Note that \(Z_{\bullet}\in C^{b}(\operatorname{add}P)\) so \(Z_{\bullet}\in\operatorname{perf}(A)\cap i_{\lambda}^{*}(D^{b}(\operatorname{ mod}B))\) as \(P\in\operatorname{perf}(A)\). Therefore [11, Proposition 10.2.6] applies and the functor \(D_{sg}(i_{\lambda})\) is fully faithful.
It remains to show that \(D_{sg}(i_{\lambda})\) is dense. Since by [1, Corollary 2.3], the singularity category of \(A\) is the stabilization of \(\underline{\text{mod}}A\), the essential image \(\mathcal{F}\) of \(D_{sg}(i_{\lambda})\) contains all objects \(X\in\underline{\text{mod}}A\) such that \(\Omega^{s}(X)\cong i_{\lambda}(Y)\) for some \(s\in\mathbb{N}\) and \(Y\in\underline{\text{mod}}B\).
By \((a)\), the functor \(i_{\lambda}\) induces an equivalence \(i_{\rho}(\mathcal{W})\xrightarrow{\sim}\mathcal{W}\). Thus by the condition given for \(\mathcal{M}\) in \((b)\), we have that \(\mathcal{M}\subset\mathcal{F}\).
By Proposition 2.4, each \(X\in\text{mod}A\) admits a right minimal \(\mathcal{M}\)-resolution
\[0\to M_{d-1}\to\cdots\to M_{1}\to M_{0}\to X\to 0,\quad M_{i}\in\mathcal{M},\]
which is exact and, since \(\mathcal{M}\) is \(d\)-cluster tilting, has length at most \(d-1\). Hence \(X\cong M_{\bullet}\) in \(D^{b}(\text{mod}A)\), where \(M_{\bullet}\) is the complex \(0\to M_{d-1}\to\cdots\to M_{0}\to 0\).
Therefore \(X\in\text{tri}(\mathcal{M})\), which implies that \(D^{b}(\text{mod}A)=\text{tri}(\mathcal{M})\).
Since \(\mathcal{F}\) is a triangulated subcategory of \(D_{sg}(A)\), it follows that \(\mathcal{F}=D_{sg}(A)\). In other words, \(D_{sg}(i_{\lambda})\) is dense.
To see (c), we have that \(\underline{\mathcal{M}}\) is a \(d\mathbb{Z}\)-cluster tilting subcategory of \(D_{sg}(A)\) and hence a \((d+2)\)-angulated category by Theorem 2.5. Then \(\Sigma^{d}\underline{\mathcal{M}}\subset\underline{\mathcal{M}}\) by Remark 2.2. Since \(i_{\lambda}:\mathcal{N}\xrightarrow{\sim}\mathcal{W}\) by (a), we have that \(D_{sg}(i_{\lambda})(\underline{\mathcal{N}})=\underline{\mathcal{W}}\). Moreover \(\underline{\mathcal{W}}=\underline{\mathcal{M}}\) by (b). Hence \(\Sigma^{\prime d}(\underline{\mathcal{N}})\subset\underline{\mathcal{N}}\), which yields that \(\underline{\mathcal{N}}\) is a \(d\mathbb{Z}\)-cluster tilting subcategory of \(D_{sg}(B)\). Here \(\Sigma^{\prime}\) is the suspension functor of \(D_{sg}(B)\). Therefore, by restricting \(D_{sg}(i_{\lambda})\) to \(\underline{\mathcal{N}}\), we have an equivalence \(\underline{\mathcal{N}}\cong\underline{\mathcal{M}}\) as \((d+2)\)-angulated categories.
**Remark 2.9**.: _If \(P\) is a projective \(A\)-module, conditions \((i)\) and \((ii)\) are automatically satisfied. Condition \((iii)\) is fulfilled if each \(W\in\mathcal{W}\) has a projective resolution with all terms in \(\text{add}P\)._
### Higher Nakayama algebras
We recall some definitions and basic facts about higher Nakayama algebras constructed by Jasso-Kulshammer [1]. We follow the notations in their paper with a slight modification (see Remark 2.10).
Recall that \(\ell_{\infty}=(\ldots,\ell_{-1},\ell_{0},\ell_{1},\ldots)\) is called a Kupisch series of type \(A_{\infty}^{\infty}\) if for all \(i\in\mathbb{Z}\) there are inequalities \(1\leq\ell_{i}\leq\ell_{i-1}+1\).
* \(\ell_{\infty}\) is connected if \(\ell_{i}\geq 2\) for all \(i\in\mathbb{Z}\).
* \(\ell_{\infty}\) is \(\ell\)-bounded for some positive integer \(\ell\) if \(\ell_{i}\leq\ell\) for all \(i\in\mathbb{Z}\).
* \(\ell_{\infty}\) is \(n\)-periodic if \(\ell_{i}=\ell_{i+n}\) for all \(i\in\mathbb{Z}\); in this case \(\underline{\ell}\) is called a Kupisch series of type \(\widetilde{\mathbb{A}}_{n-1}\) and we use the notation \(\underline{\ell}=(\ell_{1},\ell_{2},\ldots,\ell_{n})\).
Denote by \(\ell_{\infty}[1]=(\ldots,\ell_{-1}[1],\ell_{0}[1],\ldots)\) the Kupisch series obtained from \(\ell_{\infty}\) by letting \(\ell_{i}[1]=\ell_{i+1}\).
Let \(d\) be a positive integer. We recall the definition of ordered sequences \((os^{d}_{\ell_{\infty}},\preccurlyeq)\) from [1]
\[os^{d}_{\ell_{\infty}}:=\{x=(x_{1},x_{2},\ldots,x_{d})\mid x_{1}<x_{2}<\cdots<x _{d}\text{ and }x_{d}-x_{1}+1\leq\ell_{x_{d}}+d-1\},\]
with the relation \(\preccurlyeq\) defined as \(x\preccurlyeq y\) if \(x_{1}\leq y_{1}<x_{2}\leq y_{2}<\cdots<x_{d}\leq y_{d}\) for \(x=(x_{1},\ldots,x_{d}),y=(y_{1},\ldots,y_{d})\in os^{d}_{\ell_{\infty}}\).
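These sets are easy to enumerate; the following small helper (our illustration, for an \(n\)-periodic Kupisch series) lists the elements of \(os^{d}_{\ell_{\infty}}\) with last coordinate in a given range, one representative per shift by \(n\):

```python
from itertools import combinations

# Enumerate x in os^d with x_d in last_range, for an n-periodic Kupisch
# series ell (ell[i-1] = l_i); the condition x_d - x_1 + 1 <= l_{x_d} + d - 1
# reads x_1 >= x_d - l_{x_d} - d + 2.
def os_d(ell, d, last_range):
    n = len(ell)
    out = []
    for xd in last_range:
        lo = xd - ell[(xd - 1) % n] - d + 2
        out += [c + (xd,) for c in combinations(range(lo, xd), d - 1)]
    return out

# 46 sequences of length 3 per period, for the data of Example 2.13 below
print(len(os_d([3, 4, 4, 4, 4], 3, range(1, 6))))
```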
Now we describe the \(d\)-Nakayama algebra of type \(A_{\infty}^{\infty}\) with Kupisch series \(\ell_{\infty}\) by quiver with relations. The set of vertices of the quiver \(Q^{d}_{\ell_{\infty}}\) is the set \(os^{d}_{\ell_{\infty}[1]}\). Let \(\{e_{i}\mid 1\leq i\leq d\}\) be the standard basis of \(\mathbb{Z}^{d}\). There is an arrow \(a_{i}(x):x\to x+e_{i}\) whenever \(x+e_{i}\in os^{d}_{\ell_{\infty}[1]}\). Let \(I\) be the ideal of the path category \(kQ^{d}_{\ell_{\infty}}\) generated by \(a_{i}(x+e_{j})a_{j}(x)-a_{j}(x+e_{i})a_{i}(x)\) with \(1\leq i,j\leq d\). By convention, \(a_{i}(x)=0\) whenever \(x\) or \(x+e_{i}\) is not in \(os^{d}_{\ell_{\infty}[1]}\), hence some
of the relations are indeed zero relations. Then the \(d\)-Nakayama algebra of type \(A_{\infty}^{\infty}\) with Kupisch series \(\ell_{\infty}\) is given by \(\mathcal{A}_{\ell_{\infty}}^{(d)}=kQ_{\ell_{\infty}}^{d}/I\).
**Remark 2.10**.:
1. _The definition of_ \(os_{\ell_{\infty}}^{d}\) _is slightly different from that in_ _[_1_, Definition 1.9]__. We add_ \((1,2,\ldots,d)\) _to each of the ordered sequences defined in_ _[_1_]_ _to make them strictly increasing._
2. _By definition_ \(\mathcal{A}_{\ell_{\infty}}^{(d)}\) _is a locally bounded_ \(k\)_-linear category. By abuse of notation, we still call it an algebra. We also identify categories with finitely many objects and algebras._
By construction in [1], \(\mathcal{A}_{\ell_{\infty}}^{(d)}\) has a distinguished \(d\mathbb{Z}\)-cluster tilting subcategory
\[\mathcal{M}_{\ell_{\infty}}^{(d)}=\operatorname{add}\{\widehat{M}(x)\mid x\in \text{os}_{\ell_{\infty}}^{d+1}\}.\]
Here as a representation \(\widehat{M}(x)\) assigns \(k\) to vertex \(z\in\text{os}_{\ell_{\infty}[1]}^{d}\) if \((x_{1},\ldots,x_{d})\preccurlyeq z\preccurlyeq(x_{2}-1,\ldots,x_{d+1}-1)\) and \(0\) otherwise. All arrows \(k\to k\) act as identity, while other arrows act as zero.
Then
\[\text{Hom}_{\mathcal{A}_{\ell_{\infty}}^{(d)}}(\widehat{M}(x),\widehat{M}(y) )\cong\left\{\begin{array}{ll}kf_{yx}&x\preccurlyeq y\\ 0&\text{otherwise.}\end{array}\right.\]
Here \(f_{yx}\) is given by \(k\xrightarrow{1}k\) at vertices \(z\) where \(\widehat{M}(x)_{z}=\widehat{M}(y)_{z}=k\) and \(0\) otherwise. The composition of morphisms in \(\mathcal{M}_{\ell_{\infty}}^{(d)}\) is completely determined by
\[f_{zy}\circ f_{yx}=\left\{\begin{array}{ll}f_{zx}&x\preccurlyeq z\\ 0&\text{otherwise.}\end{array}\right.\]
In the following proposition, we recall some homological properties of modules in \(\mathcal{M}_{\ell_{\infty}}^{(d)}\) described by combinatorial data.
**Proposition 2.11**.: _[_1_, Proposition 2.22, Proposition 2.25, Theorem 3.16]_ _Let \(x\in\text{os}_{\ell_{\infty}}^{d+1}\). The following statements hold._
1. \(\operatorname{top}\widehat{M}(x)=S_{(x_{2}-1,\ldots,x_{d+1}-1)}\) _and_ \(\operatorname{soc}\widehat{M}(x)=S_{(x_{1},\ldots,x_{d})}\)_._
2. \(\widehat{M}(x)\) _is simple if and only if_ \(x=(i,i+1,\ldots,i+d)\) _for some integer_ \(i\)_._
3. \(\widehat{M}(x)\) _is projective if and only if_ \(x_{1}=\min\{y\mid(y,x_{2},\ldots,x_{d+1})\in\text{os}_{\ell_{\infty}}^{d+1}\}\)_, or equivalently,_ \(x_{1}=x_{d+1}-\ell_{x_{d+1}}-d+1\)_._
4. \(\widehat{M}(x)\) _is injective if and only if_ \(x_{d+1}=\max\{y\mid(x_{1},\ldots,x_{d},y)\in\text{os}_{\ell_{\infty}}^{d+1}\}\)_._
5. _If_ \(x_{1}>x_{d+1}-\ell_{x_{d+1}}-d+1\)_, then there exists an exact sequence_ \[0\to\Omega^{d}(\widehat{M}(x))\to P_{d}\to\cdots\to P_{1}\to\widehat{M}(x)\to 0\] _with_ \(P_{i}=\widehat{M}(x_{d+1}-\ell_{x_{d+1}}-d+1,x_{1},\ldots,x_{i-1},x_{i+1}, \ldots,x_{d+1})\) _for_ \(1\leq i\leq d\) _and_ \(\Omega^{d}(\widehat{M}(x))=\widehat{M}(x_{d+1}-\ell_{x_{d+1}}-d+1,x_{1}, \ldots,x_{d})\)_._
6. \(\tau_{d}(\widehat{M}(x))=\widehat{M}(x_{1}-1,x_{2}-1,\ldots,x_{d+1}-1)\)_._
7. _For each_ \(i\in\mathbb{Z}\) _the indecomposable projective_ \(\mathcal{A}_{\ell_{\infty}}^{(d)}\)_-module at the vertex_ \((i-d+1,\ldots,i)\) _has Loewy length_ \(\ell_{i}\)_._
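To illustrate \((iii)\), \((v)\) and \((vi)\), let \(d=2\) and let \(\ell_{\infty}\) be \(5\)-periodic with \((\ell_{1},\ldots,\ell_{5})=(3,4,4,4,4)\), and take \(x=(4,5,8)\). Since \(x_{3}-\ell_{x_{3}}-d+1=3<4=x_{1}\), the module \(\widehat{M}(458)\) is not projective, and \((v)\) gives the exact sequence
\[0\to\widehat{M}(345)\to\widehat{M}(348)\to\widehat{M}(358)\to\widehat{M}(458)\to 0\]
with \(\Omega^{2}(\widehat{M}(458))=\widehat{M}(345)\); by \((vi)\), \(\tau_{2}(\widehat{M}(458))=\widehat{M}(347)\).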
Recall that for a locally bounded \(k\)-linear category \(\mathcal{C}\), a group action given by \(G\) is called admissible if \(gx\ncong x\) for any indecomposable object \(x\) in \(\mathcal{C}\) and \(g\in G\backslash\{1\}\).
Let \(n\) be a fixed positive integer. From now on we assume \(\ell_{\infty}\) is \(n\)-periodic and let \(\underline{\ell}=(\ell_{1},\ldots,\ell_{n})\). Then \(G=\langle\sigma\rangle\), where \(\sigma=\tau_{d}^{n}\), is an admissible group acting on \(\mathcal{A}_{\ell_{\infty}}^{(d)}\).
Jasso-Kulshammer [1] constructed \(d\)-Nakayama algebras of type \(\widetilde{\mathbb{A}}_{n-1}\) as the orbit category
\[A_{\underline{\ell}}^{(d)}:=\mathcal{A}_{\ell_{\infty}}^{(d)}/G.\]
The covering functor:
\[F:\mathcal{A}_{\ell_{\infty}}^{(d)}\to A_{\underline{\ell}}^{(d)}\]
induces an exact functor
\[F^{*}:\mathrm{Mod}A_{\underline{\ell}}^{(d)}\to\mathrm{Mod}\mathcal{A}_{\ell_ {\infty}}^{(d)}\]
called pull-up, given by \(F^{*}(M)=M\circ F\). This functor has a left adjoint
\[F_{*}:\mathrm{Mod}\mathcal{A}_{\ell_{\infty}}^{(d)}\to\mathrm{Mod}A_{\underline {\ell}}^{(d)}\]
called push-down, which is also exact. In particular, \(F_{*}\) induces a functor \(F_{*}:\mathrm{mod}\mathcal{A}_{\ell_{\infty}}^{(d)}\to\mathrm{mod}A_{\underline {\ell}}^{(d)}\) between the category of finitely generated modules of \(\mathcal{A}_{\ell_{\infty}}^{(d)}\) and \(A_{\underline{\ell}}^{(d)}\) respectively. The \(d\mathbb{Z}\)-cluster tilting subcategory \(\mathcal{M}_{\ell_{\infty}}^{(d)}\) is \(G\)-equivariant, i.e. \(\mathcal{M}_{\ell_{\infty}}^{(d)}\) and \(\sigma_{*}(\mathcal{M}_{\ell_{\infty}}^{(d)})\) have the same isomorphism closure in \(\mathrm{mod}\mathcal{A}_{\ell_{\infty}}^{(d)}\) where \(\sigma_{*}:\mathrm{mod}\mathcal{A}_{\ell_{\infty}}^{(d)}\to\mathrm{mod} \mathcal{A}_{\ell_{\infty}}^{(d)}\) is the induced automorphism of module category defined by precomposition with \(\sigma^{-1}\). Then [1, Theorem 2.3] implies that \(F_{*}\mathcal{M}_{\ell_{\infty}}^{(d)}\) is a \(d\mathbb{Z}\)-cluster tilting subcategory of \(\mathrm{mod}A_{\underline{\ell}}^{(d)}\), which we denote by \(\mathcal{M}_{\underline{\ell}}^{(d)}\), that is
\[\mathcal{M}_{\underline{\ell}}^{(d)}=F_{*}\mathcal{M}_{\ell_{\infty}}^{(d)}= \mathrm{add}\{M(x)\mid x\in os_{\ell_{\infty}}^{d+1}\}\text{ where }M(x)=F_{*}\widehat{M}(x).\]
Note that \(M(x)\cong M(\sigma(x))\) and the orbit category \(\mathcal{M}_{\underline{\ell}}^{(d)}\) is a graded category with the natural grading given by \(G\). More precisely, take \(M(x),M(y)\in\mathcal{M}_{\underline{\ell}}^{(d)}\) for \(x,y\in os_{\ell_{\infty}}^{d+1}\). Then
\[\mathrm{Hom}_{A_{\underline{\ell}}^{(d)}}(M(x),M(y))=\bigoplus_{i=a_{yx}}^{b_ {yx}}\mathrm{Hom}_{\mathcal{A}_{\ell_{\infty}}^{(d)}}(\widehat{M}(x),\widehat{M}(\sigma ^{i}(y))).\]
Here \(a_{yx}\) (resp. \(b_{yx}\)) is the minimal (resp. maximal) integer \(i\) such that \(x\preccurlyeq\sigma^{i}(y)\). Denote by \(f_{yx}^{i}\in\mathrm{Hom}_{A_{\underline{\ell}}^{(d)}}(M(x),M(y))\) the image of \(f_{\sigma^{i}(y),x}\in\mathrm{Hom}_{\mathcal{A}_{\ell_{\infty}}^{(d)}}( \widehat{M}(x),\widehat{M}(\sigma^{i}(y)))\) in \(\mathcal{M}_{\underline{\ell}}^{(d)}\). By the composition law in \(\mathcal{M}_{\ell_{\infty}}^{(d)}\), we have
\[f_{zy}^{j}\circ f_{yx}^{i}=\left\{\begin{array}{ll}f_{zx}^{i+j}&x\preccurlyeq \sigma^{i+j}(z)\\ 0&\text{otherwise}.\end{array}\right.\]
Note that \(\dim_{k}\mathrm{Hom}_{A_{\underline{\ell}}^{(d)}}(M(x),M(y))=b_{yx}-a_{yx}+1\).
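For instance, take \(x=(3,5,8)\) and \(y=(3,6,8)\) with \(n=5\) and \(d=2\) as in Example 2.13 below. Since \(\sigma=\tau_{2}^{5}\) shifts every coordinate by \(-5\), the relation \(x\preccurlyeq\sigma^{i}(y)\) holds only for \(i=0\), so \(a_{yx}=b_{yx}=0\) and \(\dim_{k}\mathrm{Hom}_{A_{\underline{\ell}}^{(d)}}(M(x),M(y))=1\).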
**Remark 2.12**.: _A non-semisimple \(d\)-Nakayama algebra of type \(\widetilde{\mathbb{A}}_{n-1}\) is self-injective if and only if \(\underline{\ell}=(\ell,\ldots,\ell)\) for some integer \(\ell\geq 2\)[1, Theorem 4.10]. In this case, we denote \(A_{\underline{\ell}}^{(d)}\) by \(A_{n,\ell}^{(d)}\) and its distinguished \(d\mathbb{Z}\)-cluster tilting subcategory by \(\mathcal{M}_{n,\ell}^{(d)}\) following the notations in [1, Section 4.1]._
**Example 2.13**.: _Let \(d=2\), \(n=5\) and \(\underline{\ell}=(3,4,4,4,4)\) be a periodic Kupisch series. Then the Gabriel quiver \(Q_{\underline{\ell}}^{(2)}\) of \(A_{\underline{\ell}}^{(2)}\) is given as follows._
_Here the leftmost and the rightmost lines should be identified. The modules \(M(358)\) and \(M(368)\) are given as representations of \(Q_{\underline{\ell}}^{(2)}\): the module \(M(358)\) assigns \(k\) to the vertices \(35,36,37,45,46,47\), the module \(M(368)\) assigns \(k\) to the vertices \(36,37,46,47,56,57\), and in both cases all arrows between supported vertices act as the identity. Note that both of them are projective \(A_{\underline{\ell}}^{(2)}\)-modules._

_There is a nonzero morphism \(\phi:M(358)\to M(368)\) since \((358)\preccurlyeq(368)\). Indeed, \(\phi_{36}=\phi_{37}=\phi_{46}=\phi_{47}=1_{k}\) and \(\phi_{ij}=0\) for all other indices \(ij\)._
_The Auslander-Reiten quiver of the distinguished \(2\mathbb{Z}\)-cluster tilting subcategory \(\mathcal{M}_{\underline{\ell}}^{(2)}\) is given below._
**Example 2.14**.: _Let \(d=2\), \(n=4\) and \(\ell=3\). Then the Gabriel quiver \(Q_{4,3}^{(2)}\) of the self-injective \(2\)-Nakayama algebra \(A_{4,3}^{(2)}\) is given as follows._
_And the distinguished \(2\mathbb{Z}\)-cluster tilting subcategory \(\mathcal{M}_{4,3}^{(2)}\) is given below._
## 3. Singularity category of higher Nakayama algebras
### Resolution quiver for \(A_{\underline{\ell}}^{(d)}\)
In this section, we introduce the resolution quiver of a given higher Nakayama algebra, which is defined combinatorially but reflects certain homological properties of the algebra.
For fixed positive integers \(n\) and \(d\), let \(A=A_{\underline{\ell}}^{(d)}\) be the \(d\)-Nakayama algebra of type \(\widetilde{\mathbb{A}}_{n-1}\) with Kupisch series \(\underline{\ell}=(\ell_{1},\ldots,\ell_{n})\). Recall that \(\mathcal{M}_{\underline{\ell}}^{(d)}=\operatorname{add}\{M(x)\mid x\in os_{ \ell_{\infty}}^{d+1}\}\) is the distinguished \(d\mathbb{Z}\)-cluster tilting subcategory of mod\(A\) as we described in Section 2.3. Let \(\mathcal{P}=\operatorname{add}\{M(x)\in\mathcal{M}_{\underline{\ell}}^{(d) }\mid x_{d+1}-x_{1}+1=\ell_{x_{d+1}}+d\}\) be the additive category consisting of projective objects in \(\mathcal{M}_{\underline{\ell}}^{(d)}\).
Define the map
\[f:\mathbb{Z}\to\mathbb{Z}\]
by \(f(i)=i-\ell_{i}-d+1\).
**Remark 3.1**.: _The map \(f\) has the following properties._
1. _We have that_ \(f(i+n)=f(i)+n\) _since_ \(\ell_{i}=\ell_{i+n}\)_. Indeed,_ \[f(i+n)=(i+n)-\ell_{i+n}-d+1=i-\ell_{i}-d+1+n=f(i)+n.\]
2. \(f\) _is non-decreasing. We know that_ \(i-\ell_{i}\geq(i-1)-\ell_{i-1}\) _since_ \(\ell_{i}\leq\ell_{i-1}+1\)_. This implies_ \(f(i)\geq f(i-1)\)_._
By Remark 3.1\((i)\), \(f\) induces a function
\[\overline{f}:\{1,2,\ldots,n\}\to\{1,2,\ldots,n\}\]
such that \(\overline{f}(i)\equiv i-\ell_{i}-d+1\mod n\).
**Definition 3.2**.: _The resolution quiver \(R(A)\) of \(A\) is defined as follows:_
* _vertices:_ \(1,2,\ldots,n\)_,_
* _arrows:_ \(i\to\overline{f}(i)\) _for each vertex_ \(i\)_._
The name _resolution quiver_ is justified in the sense that \(\overline{f}\), which is a normalization of \(f\), detects periodic \(\Omega^{d}\)-orbits of \(\mathcal{M}_{\underline{\ell}}^{(d)}\), as shown in Proposition 3.3. Moreover, when \(d=1\), our definition coincides with the definition of the resolution quiver for usual Nakayama algebras in [10], which was originally introduced in [11].
**Proposition 3.3**.: _Let \(x=(x_{1},\ldots,x_{d+1})\in os_{\ell_{\infty}}^{d+1}\). Then \(M(x)\in\mathcal{P}\) if and only if \(x_{1}=f(x_{d+1})\). Otherwise \(\Omega^{d}(M(x))=M(f(x_{d+1}),x_{1},\ldots,x_{d})\). Moreover, we have the following exact sequence_
\[0\to\Omega^{d}(M(x))\to P_{d}\to\cdots\to P_{1}\to M(x)\to 0\]
_where \(P_{i}=M(f(x_{d+1}),x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{d+1})\) is projective for \(1\leq i\leq d\)._
Proof.: \(x=(x_{1},\ldots,x_{d+1})\in os_{\ell_{\infty}}^{d+1}\) implies that \(f(x_{d+1})\leq x_{1}\). As shown in Section 2.3, \(M(x)=F_{*}\widehat{M}(x)\) with \(\widehat{M}(x)\in\mathcal{M}_{\ell_{\infty}}^{(d)}\). Since \(F_{*}\) is left adjoint to an exact functor, \(F_{*}\) preserves projective modules. Thus \(M(x)\) is projective if and only if \(\widehat{M}(x)\) is projective if and only if \(x_{1}=f(x_{d+1})\) by Proposition 2.11\((iii)\).
In the case \(f(x_{d+1})<x_{1}\), we have the beginning of the minimal projective resolution of \(\widehat{M}(x)\) by Proposition 2.11\((v)\),
\[0\to\Omega^{d}(\widehat{M}(x))\to Q_{d}\to\cdots\to Q_{1}\to\widehat{M}(x)\to 0,\]
with \(Q_{i}=\widehat{M}(f(x_{d+1}),x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{d+1})\) for \(1\leq i\leq d\) and \(\Omega^{d}(\widehat{M}(x))=\widehat{M}(f(x_{d+1}),x_{1},\ldots,x_{d})\). We shall apply \(F_{*}\), which is exact and preserves projectivity, to obtain the following exact sequence
\[0\to\Omega^{d}(M(x))\to P_{d}\to\cdots\to P_{1}\to M(x)\to 0\]
where \(P_{i}=F_{*}Q_{i}\) for \(1\leq i\leq d\). The claim follows.
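To illustrate, let \(\underline{\ell}=(3,4,4,4,4)\) and \(d=2\) as in Example 2.13. For \(x=(1,2,3)\) we have \(f(3)=-2<1=x_{1}\), so \(\Omega^{2}(M(123))=M(-2,1,2)\) and the above sequence reads
\[0\to M(-2,1,2)\to M(-2,1,3)\to M(-2,2,3)\to M(123)\to 0.\]
Iterating, \(\Omega^{4}(M(123))=M(-3,-2,1)\) is projective, so \(\operatorname{proj.dim}M(123)=4\).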
**Example 3.4**.: _Let \(n=5\) and \(\underline{\ell}=(3,4,4,4,4)\). When \(d=1\), we have that \(\overline{f}(i)\equiv i-\ell_{i}\mod 5\), so \(\overline{f}\) sends the vertices \(1,2,3,4,5\) to \(3,3,4,5,1\) respectively; thus the resolution quiver \(R(A_{\underline{\ell}}^{(1)})\) consists of the cycle \(3\to 4\to 5\to 1\to 3\) together with the arrow \(2\to 3\)._

_When \(d=2\), \(\overline{f}(i)\equiv i-\ell_{i}-1\mod 5\) sends \(1,2,3,4,5\) to \(2,2,3,4,5\); thus \(R(A_{\underline{\ell}}^{(2)})\) consists of loops at \(2,3,4,5\) together with the arrow \(1\to 2\)._

_When \(d=3\), \(\overline{f}(i)\equiv i-\ell_{i}-2\mod 5\) sends \(1,2,3,4,5\) to \(1,1,2,3,4\); thus \(R(A_{\underline{\ell}}^{(3)})\) consists of a loop at \(1\) together with the arrows \(2\to 1\), \(3\to 2\), \(4\to 3\) and \(5\to 4\)._
We wish to capture homological information of singularity categories of higher Nakayama algebras using their resolution quivers. To do this, we explore more properties of the functions \(f\) and \(\overline{f}\).
Let \(J=\{i\mid 1\leq i\leq n,\ \overline{f}^{N}(i)=i\text{ for some }N\in\mathbb{N}_{>0}\}\) and \(I=J+n\mathbb{Z}\). For \(i\in\mathbb{Z}\), denote by \(\overline{i}\in\{1,2,\ldots,n\}\) the representative of \(i\) such that \(i\equiv\overline{i}\bmod n\). Note that \(i\in I\) if and only if \(\overline{i}\in J\).
**Proposition 3.5**.: _The following statements hold._
1. \(\overline{f}|_{J}:J\to J\) _is bijective._
2. \(f|_{I}:I\to I\) _is a bijection. Moreover, there exist_ \(s\in\mathbb{N}\) _and_ \(t\in\mathbb{Z}\) _such that_ \(f^{s}(i)=i+tn\) _for all_ \(i\in I\)_._
3. _There exists some_ \(N\in\mathbb{N}\) _such that for all_ \(i\in\mathbb{Z}\)_,_ \(f^{N}(i)\in I\)_._
Proof.: (i) Since \(\overline{f}(J)=J\) by definition and \(J\) is a finite set, the statement follows.
(ii) For \(i\in I\), there is an \(N\in\mathbb{N}\) such that \(\overline{f}^{N}(\overline{i})=\overline{i}\). So we have \(i=f^{N}(i)+pn=f^{N}(i+pn)\) for some \(p\in\mathbb{Z}\). Thus the surjectivity of \(f|_{I}\) follows. For injectivity, suppose \(f(i)=f(j)\) with \(i,j\in I\). Then \(\overline{f}(\overline{i})=\overline{f}(\overline{j})\) which implies that \(j=i+qn\) for some \(q\in\mathbb{Z}\). But \(f(i)=f(j)=f(i+qn)=f(i)+qn\) forces \(q=0\). Therefore, \(i=j\).
If \(|J|=1\), then \(f(i)=i+t_{0}n\) for all \(i\in I\) and some fixed \(t_{0}\in\mathbb{Z}\), so we may take \(s=1\) and \(t=t_{0}\). Otherwise, consider \(i,j\) which are adjacent in \(I\). Since \(f\) is non-decreasing and bijective on \(I\), \(f(i),f(j)\) are also adjacent. By induction, \(f^{m}(i),f^{m}(j)\) are adjacent for any \(m\in\mathbb{N}\).
As before, there are \(M,N\in\mathbb{N}\) such that \(f^{M}(i)=i+pn\) and \(f^{N}(j)=j+qn\) for some \(p,q\in\mathbb{Z}\). Hence we can choose \(s\in\mathbb{N}\) such that \(f^{s}(i)=i+t_{i}n\) and \(f^{s}(j)=j+t_{j}n\) for some \(t_{i},t_{j}\in\mathbb{Z}\). Since \(f^{s}(i),f^{s}(j)\) are adjacent, we have \(t_{i}=t_{j}\). Therefore, we can choose \(s\in\mathbb{N}\) and \(t\in\mathbb{Z}\) such that \(f^{s}(i)=i+tn\) for all \(i\in I\).
(iii) Let \(i\in\mathbb{Z}\). Then there is \(N_{i}\in\mathbb{N}\) such that \(\overline{f}^{N_{i}}(\overline{i})\in J\). So \(f^{N_{i}}(\overline{i})\in I\) and \(f^{N_{i}}(i)\in I\). Take \(N=N_{1}N_{2}\cdots N_{n}\). Then \(f^{N}(i)\in I\) for all \(1\leq i\leq n\) and thus for all \(i\in\mathbb{Z}\).
Let \(J^{\prime}=\{1,2,\ldots,n^{\prime}\}\) with \(n^{\prime}=|J|\). Denote by \(\mathfrak{l}:\mathbb{Z}\to I\) the order-preserving bijection with \(\mathfrak{l}(J^{\prime})=J\). Let \(\ell^{\prime}_{\infty}=(\ldots,\ell^{\prime}_{-1},\ell^{\prime}_{0},\ldots)\) where \(\ell^{\prime}_{k}=|[f(\mathfrak{l}(k)),\mathfrak{l}(k)]\cap I|-d\). The following proposition shows \(\ell^{\prime}_{\infty}\) is a series with constant values.
**Proposition 3.6**.: _Notations are as above. There exists an integer \(\ell^{\prime}\) such that \(\ell^{\prime}_{k}=\ell^{\prime}\) for all \(k\in\mathbb{Z}\)._
Proof.: Firstly we show that \(\ell^{\prime}_{k}\leq\ell^{\prime}_{k-1}+1\) for all \(k\in\mathbb{Z}\). Since \(f\) is non-decreasing, we have the following inequalities
\[\ell^{\prime}_{k} =|[f(\mathfrak{l}(k)),\mathfrak{l}(k)]\cap I|-d\] \[\leq|[f(\mathfrak{l}(k-1)),\mathfrak{l}(k)]\cap I|-d\] \[=|[f(\mathfrak{l}(k-1)),\mathfrak{l}(k-1)]\cap I|-d+1\] \[=\ell^{\prime}_{k-1}+1.\]
We claim that \(\ell^{\prime}_{k}\leq\ell^{\prime}_{k-1}\). Otherwise, \(\ell^{\prime}_{k}=\ell^{\prime}_{k-1}+1\). Then
\[|[f(\mathfrak{l}(k)),\mathfrak{l}(k-1)]\cap I|+1=|[f(\mathfrak{l}(k)), \mathfrak{l}(k)]\cap I|=|[f(\mathfrak{l}(k-1)),\mathfrak{l}(k-1)]\cap I|+1.\]
Since \(f\) is bijective on \(I\), this implies \(f(\mathfrak{l}(k-1))=f(\mathfrak{l}(k))\) and thus \(k=k-1\) which is impossible. Therefore,
\[\ell^{\prime}_{k}\leq\ell^{\prime}_{k-1}\leq\cdots\leq\ell^{\prime}_{k-n^{ \prime}}=\ell^{\prime}_{k}\]
which implies that \(\ell^{\prime}_{k}=\ell^{\prime}_{k-1}\) for all \(k\in\mathbb{Z}\).
**Example 3.7**.: _Let \(n=5\), \(d=2\) and \(\underline{\ell}=(3,4,4,4,4)\). By Example 3.4, we have that \(J=\{2,3,4,5\}\) and \(I=J+5\mathbb{Z}\). Recall that \(f(i)=i-\ell_{i}-1\). Then_
\[f(1)=-3,f(2)=-3,f(3)=-2,f(4)=-1,f(5)=0.\]
_Following our construction, \(J^{\prime}=\{1,2,3,4\}\) and_
\[\mathfrak{l}(1)=2,\mathfrak{l}(2)=3,\mathfrak{l}(3)=4,\mathfrak{l}(4)=5.\]
_Hence_
\[\ell^{\prime}_{1} =|[f(\mathfrak{l}(1)),\mathfrak{l}(1)]\cap I|-2=|[-3,2]\cap I|-2=3,\] \[\ell^{\prime}_{2} =|[f(\mathfrak{l}(2)),\mathfrak{l}(2)]\cap I|-2=|[-2,3]\cap I|-2=3,\] \[\ell^{\prime}_{3} =|[f(\mathfrak{l}(3)),\mathfrak{l}(3)]\cap I|-2=|[-1,4]\cap I|-2=3,\] \[\ell^{\prime}_{4} =|[f(\mathfrak{l}(4)),\mathfrak{l}(4)]\cap I|-2=|[0,5]\cap I|-2=3.\]
_We have that \(\ell^{\prime}_{\infty}=(\ldots,3,3,3,\ldots)\)._
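The whole computation is mechanical; a short script (our illustration, not part of the paper) reproduces the resolution quiver of Example 3.4 and the constant series \(\ell^{\prime}_{\infty}\) computed above:

```python
# Compute fbar, the set J of vertices on cycles, and the values l'_k for the
# d-Nakayama algebra with n-periodic Kupisch series ell (ell[i-1] = l_i).
def resolution_data(ell, d):
    n = len(ell)
    f = lambda i: i - ell[(i - 1) % n] - d + 1
    fbar = {i: (f(i) - 1) % n + 1 for i in range(1, n + 1)}
    J = set(range(1, n + 1))
    for _ in range(n):                      # iterate fbar; only cyclic vertices survive
        J = {fbar[i] for i in J}
    lp = [sum(1 for j in range(f(i), i + 1) if (j - 1) % n + 1 in J) - d
          for i in sorted(J)]               # |[f(l(k)), l(k)] cap I| - d
    return fbar, sorted(J), lp

print(resolution_data([3, 4, 4, 4, 4], 2))
# ({1: 2, 2: 2, 3: 3, 4: 4, 5: 5}, [2, 3, 4, 5], [3, 3, 3, 3])
```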
**Remark 3.8**.: _The function \(\mathfrak{l}:\mathbb{Z}\to I\) is actually a relabelling of \(I\). We have the following commutative diagram_
\[\begin{array}{ccc}\mathbb{Z}&\xrightarrow{f^{\prime}}&\mathbb{Z}\\ \mathfrak{l}\big\downarrow&&\big\downarrow\mathfrak{l}\\ I&\xrightarrow{f|_{I}}&I\end{array}\]
_where \(f^{\prime}=\mathfrak{l}^{-1}\circ f|_{I}\circ\mathfrak{l}\). Since \(f|_{I}:I\to I\) is bijective we have that \(f^{\prime}\) is bijective. Moreover, \(\ell^{\prime}_{i}=i-f^{\prime}(i)-d+1\) for \(i\in\mathbb{Z}\)._
_As can be seen from the above formula, we obtained \(\ell^{\prime}_{\infty}\) by forcing \(f^{\prime}\) to play the role of \(f|_{I}\). Observe that when \(\ell^{\prime}\geq 2\), \(\ell^{\prime}_{\infty}\) is a connected Kupisch series with constant values. The case \(\ell^{\prime}\leq 1\) is addressed below._
We extend \(f\) to \(os^{d+1}_{\ell_{\infty}}\cup\{0\}\) in the following way. For \(x=(x_{1},\ldots,x_{d+1})\in os^{d+1}_{\ell_{\infty}}\), define \(f(x)=(f(x_{1}),\ldots,f(x_{d+1}))\) if \(f(x_{1})<f(x_{2})<\cdots<f(x_{d+1})<x_{1}\) and \(f(x)=0\) otherwise. Further define \(f(0)=0\). Note that if \(f(x)\neq 0\), then \(f(x)\in os^{d+1}_{\ell_{\infty}}\) since \(f(x_{d+1})<x_{1}\) implies \(f^{2}(x_{d+1})\leq f(x_{1})\).
**Lemma 3.9**.: _Let \(x\in os^{d+1}_{\ell_{\infty}}\). Then \(\operatorname{proj.dim}M(x)\leq d^{2}\) if \(f(x)=0\). Otherwise \(\Omega^{d(d+1)}M(x)=M(f(x))\). In particular, if \(f^{N}(x)=0\) for some \(N\geq 1\) then \(\operatorname{proj.dim}M(x)<\infty\)._
Proof.: If \(f(x_{d+1})=x_{1}\), then \(M(x)\) is projective. Thus \(f(x)=0\) and the statement holds. If \(f(x_{d+1})<x_{1}\), then \(\Omega^{d}M(x)=M(f(x_{d+1}),x_{1},\ldots,x_{d})\) by Proposition 3.3. Iterating this process, we find either that some \(\Omega^{di}M(x)\) with \(1\leq i\leq d\) is projective and \(f(x)=0\), or that \(f(x)\neq 0\) and \(\Omega^{d(d+1)}M(x)=M(f(x))\). By induction, the second claim follows.
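As a concrete illustration of the lemma (ours), in the running example \(\underline{\ell}=(3,4,4,4,4)\), \(d=2\), iterating \(f\) either reaches \(0\) (finite projective dimension) or becomes periodic up to the shift by \(-n\), i.e. up to the \(\sigma\)-action:

```python
# One step of the extended map f on os^{d+1}; returns 0 when f(x) = 0.
def f_seq(x, ell, d):
    n = len(ell)
    f = lambda i: i - ell[(i - 1) % n] - d + 1
    y = tuple(f(c) for c in x)
    increasing = all(y[i] < y[i + 1] for i in range(d))
    return y if increasing and y[d] < x[0] else 0

ell, d = [3, 4, 4, 4, 4], 2
for x in [(1, 2, 3), (4, 5, 8)]:
    orbit = [x]
    while orbit[-1] != 0 and len(orbit) < 4:
        orbit.append(f_seq(orbit[-1], ell, d))
    print(orbit)
# [(1, 2, 3), 0]: proj.dim M(123) is finite (indeed 4, see above)
# [(4, 5, 8), (-1, 0, 3), (-6, -5, -2), ...]: each step shifts by -5 = -n,
# so Omega^6 M(458) = M(-1,0,3) = M(sigma(4,5,8)), i.e. M(458) is Omega-periodic
```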
**Proposition 3.10**.: _If \(\ell^{\prime}\leq 1\), then \(D_{sg}(A)=0\)._
Proof.: We claim that for all \(x\in os^{d+1}_{\ell_{\infty}}\), \(f^{s}(x)=0\) for some \(s\gg 0\). By Proposition 3.5\((iii)\), there exists \(N\gg 0\) such that \(f^{N}(x_{i})\in I\) for all \(1\leq i\leq d+1\). In the case \(\ell^{\prime}\leq 0\), we have \(f^{N+1}(x)=0\). If not, then \((f^{N+1}(x_{d+1}),f^{N}(x_{1}),\ldots,f^{N}(x_{d}))\in os^{d+1}_{\ell_{\infty}}\) which is impossible. When \(\ell^{\prime}=1\), we have \(f^{N+1}(x_{d+1})=f^{N}(x_{1})\). In other words, \(M(f^{N}(x))\) is projective thus \(f^{N+1}(x)=0\).
In both cases, \(\operatorname{proj.dim}M(x)<\infty\) by Lemma 3.9. Thus \(M(x)\in K^{b}(\operatorname{proj}A)\). As we have seen in the proof of Theorem 2.8, we have that \(D^{b}(\operatorname{mod}A)=\operatorname{tri}(\mathcal{M}_{\underline{\ell}}^{(d)})\). Therefore, \(D^{b}(\operatorname{mod}A)=K^{b}(\operatorname{proj}A)\) and \(D_{sg}(A)=0\).
**Remark 3.11**.: _From now on we assume \(\ell^{\prime}\geq 2\). Then \(\ell^{\prime}_{\infty}\) is a connected Kupisch series with constant values._
### Singularity category of \(A^{(d)}_{\underline{\ell}}\)
In this section, we will define a self-injective higher Nakayama algebra \(A^{\prime}\) by the Kupisch series obtained from section 3.1. By identifying \(A^{\prime}\) with an idempotent subalgebra \(B\) of \(A\) and applying Theorem 3.18, we obtain a singular equivalence between \(A\) and \(B\). Therefore the singularity category of a \(d\)-Nakayama algebra is triangulated equivalent to the stable module category of a selfinjective \(d\)-Nakayama algebra.
Let \(A^{\prime}=A^{(d)}_{n^{\prime},\ell^{\prime}}\) be the selfinjective \(d\)-Nakayama algebra of type \(\widetilde{\mathbb{A}}_{n^{\prime}-1}\) with Kupisch series \((\ell^{\prime},\ldots,\ell^{\prime})\) and
\[\mathcal{M}^{(d)}_{n^{\prime},\ell^{\prime}}=F_{*}\mathcal{M}^{(d)}_{\ell^{\prime}_{\infty}}=\operatorname{add}\{M(x)=F_{*}\widehat{M}(x)\mid x\in os^{d+1}_{\ell^{\prime}_{\infty}}\}\]
be the distinguished \(d\mathbb{Z}\)-cluster tilting subcategory of \(\operatorname{mod}\!A^{\prime}\).
Let
\[os_{I}^{d+1}=\{x=(x_{1},\ldots,x_{d+1})\in os^{d+1}_{\ell_{\infty}}\mid x_{i}\in I\text{ for all }1\leq i\leq d+1\}\]
be the set of ordered sequences whose coordinates are in \(I\). Let
\[\mathcal{W}=\operatorname{add}\{M(x)\in\mathcal{M}_{\underline{\ell}}^{(d)} \mid x\in\operatorname{\mathit{os}}_{I}^{d+1}\}\subset\mathcal{M}_{ \underline{\ell}}^{(d)}\]
and \(\mathcal{P}_{I}=\mathcal{W}\cap\mathcal{P}\), i.e. the full subcategory of projective objects in \(\mathcal{W}\).
Recall that \(\mathfrak{u}:\mathbb{Z}\to I\) is the unique order-preserving bijection. Now we extend \(\mathfrak{u}\) to a bijection preserving the relation \(\preccurlyeq\) as follows
\[\mathfrak{u}:os^{d+1}_{\ell^{\prime}_{\infty}}\to os_{I}^{d+1}\]
where \(\mathfrak{u}(x)=(\mathfrak{u}(x_{1}),\ldots,\mathfrak{u}(x_{d+1}))\) for \(x=(x_{1},\ldots,x_{d+1})\in os^{d+1}_{\ell^{\prime}_{\infty}}\).
We define a \(k\)-linear functor induced by \(\mathfrak{u}\) as follows.
\[\mathfrak{u}^{*}:\mathcal{M}_{n^{\prime},\ell^{\prime}}^{(d)} \to\mathcal{W}\] \[M(x) \mapsto M(\mathfrak{u}(x))\] \[[M(x) \xrightarrow{f_{yx}^{i}}M(y)] \mapsto[M(\mathfrak{u}(x))\xrightarrow{f_{\mathfrak{u}(y)\mathfrak{u}(x)}^{i}}M(\mathfrak{u}(y))].\]
Observe that \(\mathfrak{u}^{*}(Id_{M(x)})=\mathfrak{u}^{*}(f_{xx}^{0})=f_{\mathfrak{u}(x)\mathfrak{u}(x)}^{0}=Id_{\mathfrak{u}^{*}(M(x))}\) and \(\mathfrak{u}^{*}(f_{zy}^{j}f_{yx}^{i})=\mathfrak{u}^{*}(f_{zy}^{j})\mathfrak{u}^{*}(f_{yx}^{i})\) since \(\mathfrak{u}\) preserves the relation \(\preccurlyeq\). Moreover \(\mathfrak{u}^{*}\) is fully faithful since it maps a \(k\)-basis of \(\operatorname{Hom}_{A^{\prime}}(M(x),M(y))\) to a \(k\)-basis of \(\operatorname{Hom}_{A}(M(\mathfrak{u}(x)),M(\mathfrak{u}(y)))\). For an indecomposable object \(M(z)\in\mathcal{W}\), we have that \(\mathfrak{u}^{-1}(z)\in os^{d+1}_{\ell^{\prime}_{\infty}}\) since \(\mathfrak{u}\) is bijective. Thus \(M(\mathfrak{u}^{-1}(z))\in\mathcal{M}_{n^{\prime},\ell^{\prime}}^{(d)}\) and \(\mathfrak{u}^{*}(M(\mathfrak{u}^{-1}(z)))=M(z)\). This implies that \(\mathfrak{u}^{*}\) is dense. Therefore, \(\mathfrak{u}^{*}\) is a \(k\)-linear equivalence.
**Proposition 3.12**.: _We have a \(k\)-linear equivalence \(\mathfrak{u}^{*}:\mathcal{M}_{n^{\prime},\ell^{\prime}}^{(d)}\to\mathcal{W}\). Moreover, when restricted to projective objects, we obtain the equivalence \(\mathfrak{u}^{*}:\operatorname{add}\!A^{\prime}\to\operatorname{add}\!P\) where \(P\) is a basic additive generator of \(\mathcal{P}_{I}\)._
Proof.: By Proposition 3.3, \(M(x)\in\mathcal{M}_{n^{\prime},\ell^{\prime}}^{(d)}\) is projective if and only if \(x_{1}=f^{\prime}(x_{d+1})\). Since \(f^{\prime}=\mathfrak{u}^{-1}\circ f|_{I}\circ\mathfrak{u}\) by Remark 3.8, this is equivalent to \(\mathfrak{u}(x_{1})=f\circ\mathfrak{u}(x_{d+1})\), which is fulfilled if and only if \(\mathfrak{u}^{*}(M(x))\) is projective. Therefore, \(\mathfrak{u}^{*}\) restricts to an equivalence \(\operatorname{add}\!A^{\prime}\xrightarrow{\sim}\operatorname{add}\!P\).
We denote by \(B=\operatorname{End}_{A}(P)\) the endomorphism algebra of \(P\). Since \(P\) is a basic projective \(A\)-module, there exists an idempotent \(e\in A\) such that \(P=eA\) and \(B=eAe\). In other words, \(B\) is an idempotent subalgebra of \(A\).
We have the canonical functor
\[i_{\lambda}=-\otimes_{B}P:\operatorname{mod}\!B\to\operatorname{mod}\!A,N \mapsto N\otimes_{B}P,\]
which admits an exact right adjoint functor
\[i_{\rho}=\operatorname{Hom}_{A}(P,-):\operatorname{mod}\!A\to\operatorname{ mod}\!B,M\mapsto Me.\]
Since \(i_{\lambda}(B)=P\) and \(i_{\rho}(P)=B\), \(i_{\lambda}\) and \(i_{\rho}\) restrict to an additive equivalence \(\operatorname{add}\!B\simeq\operatorname{add}\!P\).
**Proposition 3.13**.: _We have that \(B\cong A^{\prime}\). That is, \(B\) is a self-injective \(d\)-Nakayama algebra._
Proof.: Write \(A^{\prime}=\bigoplus_{x\in X}M(x)\) where \(X\) is the set of indices of indecomposable \(A^{\prime}\)-projective modules. Consider the equivalence \(i_{\rho}\circ\mathfrak{u}^{*}:\operatorname{add}A^{\prime}\to\operatorname{add}B,M(x)\mapsto\operatorname{Hom}_{A}(P,M(\mathfrak{u}(x)))\). We have that
\[A^{\prime} =\operatorname{End}_{A^{\prime}}(A^{\prime})\] \[\cong\bigoplus_{x,y\in X}\operatorname{Hom}_{A^{\prime}}(M(x),M(y))\] \[\cong\bigoplus_{x,y\in X}\operatorname{Hom}_{A}(M(\mathfrak{u}(x)),M(\mathfrak{u}(y)))\] \[\cong\bigoplus_{x,y\in X}\operatorname{Hom}_{B}(\operatorname{Hom}_{A}(P,M(\mathfrak{u}(x))),\operatorname{Hom}_{A}(P,M(\mathfrak{u}(y))))\] \[\cong\operatorname{Hom}_{B}(\operatorname{Hom}_{A}(P,\bigoplus_{x\in X}M(\mathfrak{u}(x))),\operatorname{Hom}_{A}(P,\bigoplus_{y\in X}M(\mathfrak{u}(y))))\] \[\cong\operatorname{End}_{B}(\operatorname{Hom}_{A}(P,P))\] \[=B.\]
We use the above isomorphism to identify \(B\) with \(A^{\prime}\). In the same way we identify \(\operatorname{add}B=\operatorname{add}A^{\prime}\) and the distinguished \(d\mathbb{Z}\)-cluster tilting subcategory of \(B\) with \(\mathcal{M}^{(d)}_{n^{\prime},\ell^{\prime}}\).
**Proposition 3.14**.: _For a nonprojective module \(M(x)\in\mathcal{W}\), there is a positive integer \(s\) and an exact sequence_
\[(*)\quad 0\to M(x)\to P_{d(d+1)s}\to\cdots\to P_{1}\to M(x)\to 0\]
_with \(P_{i}\in\mathcal{P}_{I}\)._
Proof.: Since \(f(I)\subset I\) and \(x\in os_{I}^{d+1}\), the sequence
\[0\to M(f(x_{d+1}),x_{1},\ldots,x_{d})\to P_{d}\to\cdots\to P_{1}\to M(x)\to 0\]
from Proposition 3.3 lies in \(\mathcal{W}\). In particular, \(P_{i}\in\mathcal{P}_{I}\) for \(1\leq i\leq d\). Moreover, since \(f|_{I}:I\to I\) is bijective by Proposition 3.5\((ii)\), we get that \(\Omega^{d}M(x)=M(f(x_{d+1}),x_{1},\ldots,x_{d})\in\mathcal{W}\) is not projective. Indeed, \(x_{d}<x_{d+1}\) implies \(f(x_{d})<f(x_{d+1})\). By iteration, we get that \(\Omega^{dr}M(x)\in\mathcal{W}\) is not projective
for any \(r\geq 1\). By Lemma 3.9, we have that \(\Omega^{d(d+1)s}M(x)=M(f^{s}(x))\) for all \(s\geq 1\). By Proposition 3.5\((ii)\), there exist \(s,t\in\mathbb{N}\) such that \(f^{s}(i)=i-tn\) for all \(i\in I\). Then
\[\Omega^{d(d+1)s}(M(x))=M(f^{s}(x))\cong M(\sigma^{t}(x))\cong M(x).\]
In particular, we have the following exact sequence
\[0\to M(x)\to P_{d(d+1)s}\to\cdots\to P_{1}\to M(x)\to 0\]
with \(P_{i}\in\mathcal{P}_{I}\) for all \(1\leq i\leq d(d+1)s\).
**Proposition 3.15**.: _There is a \(k\)-linear equivalence \(i_{\rho}:\mathcal{W}\xrightarrow{\sim}\mathcal{M}^{(d)}_{n^{\prime},\ell^{ \prime}}\)._
Proof.: By our identification \(\operatorname{add}\!B\cong\operatorname{add}\!A^{\prime}\) given by Proposition 3.13, it follows that \(i_{\rho}\circ\mathfrak{u}^{*}\) restricted to \(\operatorname{add}\!B\) is isomorphic to \(Id_{\operatorname{add}\!B}\). Hence, for \(M(x),M(y)\in\operatorname{add}\!P\), \(i_{\rho}(M(x))=M(\mathfrak{u}^{-1}(x))\) and \(i_{\rho}(f_{yx}^{i})=f_{\mathfrak{u}^{-1}(y)\mathfrak{u}^{-1}(x)}^{i}\) for \(f_{yx}^{i}:M(x)\to M(y)\). For a nonprojective indecomposable object \(M(x)\in\mathcal{W}\) with \(x=(x_{1},\ldots,x_{d+1})\), we take the minimal projective presentation of \(M(x)\)
\[M(x^{1})\to M(x^{0})\to M(x)\to 0\]
where \(x^{0}=(f(x_{d+1}),x_{2},\ldots,x_{d+1})\) and \(x^{1}=(f(x_{d+1}),x_{1},x_{3},\ldots,x_{d+1})\). Applying the exact functor \(i_{\rho}\) to it gives us the minimal projective presentation of \(i_{\rho}(M(x))\)
\[M(\mathfrak{u}^{-1}(x^{1}))\to M(\mathfrak{u}^{-1}(x^{0}))\to i_{\rho}(M(x))\to 0.\]
Thus \(i_{\rho}(M(x))=M(\mathfrak{u}^{-1}(x))\). For \(f_{yx}^{i}:M(x)\to M(y)\), we obtain, using the above identification, that \(i_{\rho}(f_{yx}^{i})=f_{\mathfrak{u}^{-1}(y)\mathfrak{u}^{-1}(x)}^{i}:M(\mathfrak{u}^{-1}(x))\to M(\mathfrak{u}^{-1}(y))\). Since \(i_{\rho}\) is \(k\)-linear, it follows that \(i_{\rho}\) is an additive equivalence, namely the quasi-inverse is given by \(\mathfrak{u}^{*}\).
**Proposition 3.16**.: _The following statements hold._
(a) \(\mathcal{W}\) _is a wide subcategory of_ \(\mathcal{M}_{\underline{\ell}}^{(d)}\)_._
(b) \(i_{\lambda}\) _and_ \(i_{\rho}\) _restrict to a quasi-inverse equivalence of_ \(d\)_-abelian categories_ \(\mathcal{M}^{(d)}_{n^{\prime},\ell^{\prime}}\simeq\mathcal{W}\)_._
Proof.: We consider \(P\in\mathcal{W}\). To apply Theorem 2.8, conditions \((i)\) and \((ii)\) are trivial since \(P_{A}\) is projective. Take a nonprojective object \(M(x)\in\mathcal{W}\). By splicing the exact sequences \((*)\) in Proposition 3.14, we obtain an exact (periodic) \(\operatorname{add}\!P\)-resolution of \(M(x)\). Proposition 3.15 verifies condition \((iv)\). Therefore \(\mathcal{W}\) is a wide subcategory of \(\mathcal{M}_{\underline{\ell}}^{(d)}\). Part (b) also follows from Theorem 2.8.
**Example 3.17**.: _Let \(n=5\), \(d=2\) and \(\underline{\ell}=(3,4,4,4,4)\). By Example 2.13 and Example 3.4, we have the Auslander-Reiten quiver of \(\mathcal{W}\) as follows. Note that it is the same as the Auslander-Reiten quiver of \(\mathcal{M}_{4,3}^{(2)}\) as in Example 2.14._
_[Auslander-Reiten quiver of \(\mathcal{W}\); figure omitted.]_

**Theorem 3.18**.: _The functor \(i_{\lambda}:\operatorname{mod}\!B\to\operatorname{mod}\!A\) induces a triangle equivalence \(D_{sg}(i_{\lambda}):D_{sg}(B)\to D_{sg}(A)\)._

Proof.: [...] By Theorem 2.8\((c)\), \(D_{sg}(i_{\lambda})\) restricts to an equivalence between the \((d+2)\)-angulated categories \(\underline{\mathcal{M}}_{n^{\prime},\ell^{\prime}}^{(d)}\) and \(\underline{\mathcal{M}}_{\underline{\ell}}^{(d)}\).
**Corollary 3.19**.: _The singularity category of \(A\) is triangulated equivalent to the stable module category of \(B\). More precisely, \(i_{\lambda}:\operatorname{mod}\!B\to\operatorname{mod}\!A\) induces a triangle equivalence_
\[D_{sg}(i_{\lambda}):\underline{\operatorname{mod}}B\to D_{sg}(A).\]
Proof.: Since \(B\) is self-injective, we have that \(D_{sg}(B)\cong\underline{\operatorname{mod}}B\). By Theorem 3.18, the statement follows.
**Remark 3.20**.: _We have the following commutative diagram_
_Hence \(D_{sg}(i_{\lambda})=q\circ\dot{i}_{\underline{\lambda}}\)._
By [13, Theorem A], there is a bijective correspondence between the equivalence classes of pairs \((\mathcal{T},c)\) with \(\mathcal{T}\) an algebraic Krull-Schmidt triangulated category with finite dimensional morphism spaces and \(c\) a basic \(d\mathbb{Z}\)-cluster tilting object of \(\mathcal{T}\) and the equivalence classes of pairs \((\Lambda,I)\) with \(\Lambda\) a basic twisted \((d+2)\)-periodic self-injective algebra and \(I\) an invertible \(\Lambda\)-bimodule, such that \(I\cong\Omega_{\Lambda^{e}}^{d+2}(\Lambda)\) in \(\underline{\operatorname{mod}}\Lambda^{e}\) where \(\Lambda^{e}=\Lambda\otimes_{k}\Lambda^{op}\). By restricting this correspondence to our setting, we have the following proposition.
**Proposition 3.21**.: _Let \(A,B\) be as above. Let \(M\) (resp. \(N\)) be the distinguished \(d\mathbb{Z}\)-cluster tilting module of \(A\) (resp. \(B\)). Then_
\[D_{sg}(A)(M,M)\cong\underline{\operatorname{End}}_{B}(N)=A_{n^{\prime},\ell^{\prime}-1}^{(d+1)}.\]
_Moreover:_
1. \(D_{sg}(A)\) _has a unique_ \(dg\)_-enhancement._
2. _Let_ \(\mathcal{T}\) _be an algebraic Krull-Schmidt triangulated category with finite dimensional morphism spaces. If there exists a basic_ \(d\mathbb{Z}\)_-cluster tilting object_ \(c\in\mathcal{T}\) _such that_ \(\mathcal{T}(c,c)\cong A_{n^{\prime},\ell^{\prime}-1}^{(d+1)}\)_, then_ \[\mathcal{T}\simeq D_{sg}(A)\] _as triangulated categories._
Proof.: Combined with Corollary 3.19, this is parallel to [13, Theorem 6.5.2].
Let \(\Lambda=A_{n,\ell-1}^{(d+1)}\) be a self-injective \((d+1)\)-Nakayama algebra with \(n\geq 1,\ell\geq 2\). Denote by \(Q_{\Lambda}\) the Gabriel quiver of \(\Lambda\). We define an automorphism \(\Phi\) of \(Q_{\Lambda}\) as follows.
\[\Phi:Q_{\Lambda} \to Q_{\Lambda}\] \[(x_{1},\ldots,x_{d+1}) \mapsto(f(x_{d+1}),x_{1},\ldots,x_{d})\] \[[a_{i}(x):x\to x+e_{i}] \mapsto[a_{i+1}(\Phi(x)):\Phi(x)\to\Phi(x)+e_{i+1}]\]
where \(f(i)=i-\ell-d+1\) for \(i\in\mathbb{Z}\). By convention, let \(a_{d+2}=a_{1}\) and \(e_{d+2}=e_{1}\). This is well-defined since \(x_{1}\geq x_{d+1}-\ell-d+2>f(x_{d+1})\). Moreover, the relations of \(Q_{\Lambda}\) are invariant under \(\Phi\) since
\[0 =\Phi(a_{j}(x+e_{i})a_{i}(x)-a_{i}(x+e_{j})a_{j}(x))\] \[=a_{j+1}(\Phi(x)+e_{i+1})a_{i+1}(\Phi(x))-a_{i+1}(\Phi(x)+e_{j+1} )a_{j+1}(\Phi(x)).\]
Hence we can extend \(\Phi\) linearly to get an algebra automorphism \(\Phi:\Lambda\xrightarrow{\sim}\Lambda\). Additionally, we denote by \((-)_{\Phi}:\operatorname{mod}\Lambda\to\operatorname{mod}\Lambda\) the auto-equivalence induced by \(\Phi\).
**Proposition 3.22**.: _Let \(\Lambda\) and \(\Phi\) be as above. Then \(\Lambda\) is twisted \((d+2)\)-periodic, that is, \(\Omega^{d+2}_{\Lambda}\cong(-)_{\Phi}\) as functors on \(\underline{\operatorname{mod}}\Lambda\)._
Proof.: By Proposition 3.21, \(\Lambda\cong\underline{\operatorname{End}}_{\Gamma}(M)\) where \(\Gamma=A^{(d)}_{n,\ell}\) and \(M\) is the distinguished \(d\mathbb{Z}\)-cluster tilting module of \(\Gamma\). Let \(H=\underline{\operatorname{Hom}}_{\Gamma}(M,-)\).
We identify the Gabriel quiver \(Q_{\Lambda}\) of \(\Lambda\) with the Auslander-Reiten quiver of \(\underline{\operatorname{add}}M\). By Proposition 3.3, it follows that \(\Omega^{d}_{\Gamma}(M(x))=M(\Phi(x))\) with \(M(x)\in\underline{\operatorname{add}}M\).
Let \(f^{i}_{yx}:M(x)\to M(y)\) be a nonzero morphism in \(\underline{\operatorname{add}}M\). We claim that \(H\Omega^{d}_{\Gamma}(f^{i}_{yx})=(H(f^{i}_{yx}))_{\Phi}=f^{i}_{\Phi(y)\Phi(x)}\). Without loss of generality, we may assume \(i=0\). It follows that \(f(y_{d+1})<x_{1}\). If not, then \(x\preccurlyeq x^{\prime}=(x_{1},\ldots,x_{d},x_{1}+\ell+d-1)\preccurlyeq y\). Hence \(f^{0}_{yx}=f^{0}_{yx^{\prime}}f^{0}_{x^{\prime}x}\). Note that \(M(x^{\prime})\) is projective. This contradicts \(f^{0}_{yx}\) being nonzero in \(\underline{\operatorname{add}}M\).
Applying Proposition 3.3 to \(M(x),M(y)\) and lifting \(f^{0}_{yx}\) we obtain
Observe that \(g_{i}\neq 0\) for \(1\leq i\leq d\) since \(f(y_{d+1})<x_{1}\) as shown above. As claimed \(H\Omega^{d}_{\Gamma}(f^{i}_{yx})\cong(H(f^{i}_{yx}))_{\Phi}\). Hence we have the following commutative diagram, for a more general statement, cf [1, Proposition 2.2.7].
Denote by \(\varepsilon:H\Omega^{d}_{\Gamma}\xrightarrow{\sim}(-)_{\Phi}H\) the natural isomorphism.
Now we show that \(\Omega^{d+2}_{\Lambda}\cong(-)_{\Phi}\) on \(\underline{\operatorname{mod}}\Lambda\).
Let \(N\) be an indecomposable object in \(\underline{\operatorname{mod}}\Lambda\) and take a minimal projective presentation of \(N\) in \(\operatorname{mod}\Lambda\)
\[P_{1}\xrightarrow{g}P_{0}\to N\to 0.\]
Then there exist \(M_{0},M_{1}\in\operatorname{add}M\) and \(\beta:M_{1}\to M_{0}\) such that \(P_{i}\cong HM_{i}\) for \(i=0,1\) and \(g=H\beta\).
Since \(\underline{\operatorname{add}}M\) is a \(d\mathbb{Z}\)-cluster tilting subcategory of \(\underline{\operatorname{mod}}\Gamma\), it has a \((d+2)\)-angulated structure. Thus we embed \(\beta\) into a \((d+2)\)-angle
\[(**)\quad\Omega^{d}_{\Gamma}M_{0}\to M_{d+1}\to\cdots\to M_{2}\to M_{1}\xrightarrow{\beta}M_{0}\,.\]
Applying \(H\) to \((**)\), we have the following exact sequence
\[H(\Omega_{\Gamma}^{d}M_{1})\xrightarrow{h}H(\Omega_{\Gamma}^{d}M_{0})\to H(M_{d+1})\to\cdots\to H(M_{1})\xrightarrow{g}H(M_{0})\to N\to 0\,.\]
Consider the following commutative diagram with exact rows.
Since \(\varepsilon_{M_{1}}\) and \(\varepsilon_{M_{0}}\) are isomorphisms, we have that \(\Omega_{\Lambda}^{d+2}(N)\cong N_{\Phi}\).
Let \(\varphi:N\to N^{\prime}\) be a morphism in \(\underline{\mathrm{mod}}\Lambda\). We have the following diagram.
The rightmost vertical square commutes since all the other squares commute. This shows that \(\Omega_{\Lambda}^{d+2}\cong(-)_{\Phi}\) as functors on \(\underline{\mathrm{mod}}\Lambda\).
**Example 3.23**.: _Let \(n=5\), \(d=2\) and \(\underline{\ell}=(3,4,4,4,4)\). By Example 3.17, the Gabriel quiver of \(\Lambda=A_{4,2}^{(3)}\) is given as follows._
\(\Lambda\) _is twisted \(4\)-periodic and the twist is induced by the automorphism \(\Phi\) which sends \((x_{1},x_{2},x_{3})\) to \((x_{3}-4,x_{1},x_{2})\)._
## 4. Examples
In this section, we give more examples.
**Example 4.1**.: _(Compare to [1, Example 5.4]) Let \(n=4\), \(d=1\) and \(\underline{\ell}=(5,6,7,6)\). Let \(A=A_{\underline{\ell}}^{(1)}\) be the usual Nakayama algebra and \(\operatorname{mod}A\) the \(1\mathbb{Z}\)-cluster tilting subcategory. The resolution quiver is given as follows._
_Then \(J=\{2,4\}\) and \(I=J+4\mathbb{Z}\). Thus \(\mathfrak{u}(1)=2\) and \(\mathfrak{u}(2)=4\). Therefore \(\ell^{\prime}=|[f(\mathfrak{u}(1)),\mathfrak{u}(1)]\cap I|-1=|[-4,2]\cap I|-1=3\) and \(B=A^{(1)}_{2,3}\). The Auslander-Reiten quiver of the wide subcategory \(\mathcal{W}\) of \(\operatorname{mod}\!\!A\) is as follows._
_Hence we have that \(\Lambda=A^{(2)}_{2,2}\) which is twisted \(3\)-periodic._
**Example 4.2**.: _Let \(n=5\), \(d=4\) and \(\underline{\ell}=(5,5,6,6,5)\). Let \(A=A^{(4)}_{\underline{\ell}}\) be the \(4\)-Nakayama algebra defined by \(\underline{\ell}\) and \(\mathcal{M}\) the distinguished \(4\mathbb{Z}\)-cluster tilting subcategory. Recall that \(\overline{f}(i)\equiv i-\ell_{i}-3\mod 5\). Thus we have the resolution quiver_
_Then \(J=\{2,4,5\}\) and \(I=J+5\mathbb{Z}\). We have the Auslander-Reiten quiver of the wide subcategory \(\mathcal{W}\) of \(\mathcal{M}\) as follows._
_Thus \(B=A^{(4)}_{3,2}\) and \(\Lambda=k\oplus k\oplus k\)._
## Acknowledgments
The author would like to thank her advisor Martin Herschend for many helpful comments and discussions.
|
2305.11334 | Writing your own book: A method for going from closed to open book QA to
improve robustness and performance of smaller LLMs | We introduce two novel methods, Tree-Search and Self-contextualizing QA,
designed to enhance the performance of large language models (LLMs) in
question-answering tasks. Tree-Search is a sampling technique specifically
created to extract diverse information from an LLM for a given prompt.
Self-contextualizing QA leverages Tree-Search to enable the model to create its
own context using a wide range of information relevant to the prompt, evaluate
it explicitly and return a open book answer to the initial prompt . We
demonstrate that the quality of generated answers improves according to various
metrics, including accuracy, informativeness, coherence, and consistency, as
evaluated by GPT3.5(text-davinci-003). Furthermore, we show that our methods
result in increased robustness and that performance is positively correlated
with tree size, benefiting both answer quality and robustness. Finally, we
discuss other promising applications of Tree-Search, highlighting its potential
to enhance a broad range of tasks beyond question-answering.
We also discuss several areas for future work, including refining
the Tree-Search and Self-Contextualizing QA methods, improving the coherence of
the generated context, and investigating the impact of bootstrapping on model
robustness | Giorgi Kokaia, Pratyush Sinha, Yutong Jiang, Nozha Boujemaa | 2023-05-18T22:47:06Z | http://arxiv.org/abs/2305.11334v1 | Writing your own book: A method for going from closed to open book QA to improve robustness and performance of smaller LLMs
###### Abstract
We introduce two novel methods, Tree-Search and Self-contextualizing QA, designed to enhance the performance of large language models (LLMs) in question-answering tasks. Tree-Search is a sampling technique specifically created to extract diverse information from an LLM for a given prompt. Self-contextualizing QA leverages Tree-Search to enable the model to create its own context using a wide range of information relevant to the prompt, evaluate it explicitly and return an open book answer to the initial prompt. We demonstrate that the quality of generated answers improves according to various metrics, including accuracy, informativeness, coherence, and consistency, as evaluated by GPT3.5(text-davinci-003). Furthermore, we show that our methods result in increased robustness and that performance is positively correlated with tree size, benefiting both answer quality and robustness. Finally, we discuss other promising applications of Tree-Search, highlighting its potential to enhance a broad range of tasks beyond question-answering.
We also discuss several areas for future work, including refining the Tree-Search and Self-Contextualizing QA methods, improving the coherence of the generated context, and investigating the impact of bootstrapping on model robustness
## 1 Introduction
The most notable breakthrough in artificial intelligence research over the past few years has been the significant progress in natural language processing (NLP) driven by large language models (LLMs). These transformer-based neural networks Vaswani et al. (2017) are trained on enormous corpora of web-text data, utilizing a self-supervised objective that involves predicting the next word in a given partial sentence. As a result, LLMs, such as BERT Devlin et al. (2019), GPT-3 Brown et al. (2020), and GPT-4, have demonstrated exceptional performance across a wide array of NLP tasks, including machine translation, sentiment analysis, text summarization, and question-answering (QA) (Yang et al., 2019; Bubeck et al., 2023).
Despite the remarkable achievements of LLMs, they still face challenges in robustness, context understanding, and generalization, particularly in question-answering tasks under closed
book and open book settings (Chen et al., 2017). Closed book QA systems derive answers solely based on the internal knowledge gained during the pre-training phase of the model, while open book QA systems leverage external information sources, such as knowledge bases or documents, to provide more accurate and contextually relevant responses.
In this study, we introduce two novel methodologies: Tree-Search and Self-Contextualizing QA. Tree-Search is a new decoding strategy designed to extract a diverse range of information from a given model, enabling the generation of richer context for question-answering tasks. Self-Contextualizing QA refers to the process of transforming closed book QA into open book QA by creating context from the model's own outputs. By combining these two methodologies, we aim to enhance the performance and robustness of large-scale language models in QA tasks.
## 2 Closed book vs Open book
There is a large number of different tasks on which LLMs are trained, all of which help build their capabilities and knowledge (see e.g. Raffel et al., 2020 for a good breakdown). We consider two of them, closed book QA and open book QA.
Closed book QA refers to a setting where the model generates answers based on its internal knowledge acquired during the pre-training process, without access to external information (e.g. Chen et al., 2017). LLMs, such as GPT-3 (Brown et al., 2020), have shown remarkable performance in closed book QA tasks due to their ability to store and recall vast amounts of knowledge.
Open book QA involves providing the model with access to external information, such as documents or databases, which it can utilize to generate more accurate and up-to-date answers (e.g. Lewis et al., 2020). This extra information is generally referred to as _context_.
There are of course some advantages and disadvantages to each approach. Closed book QA will be faster and require less computational resources as it accesses knowledge stored within its weights. This also means that it is limited to this knowledge, which can result in outdated or incomplete answers (Roberts et al., 2020), making open book QA more reliable for the types of tasks that require up-to-date, domain-specific knowledge (Thorne et al., 2018). Additionally, given the required time as well as the computational and environmental costs associated with training the largest LLMs (as discussed in e.g. Touvron et al., 2023) it becomes completely unfeasible to constantly retrain them to keep them up to date, making some form of open book QA a must.
A crucial aspect distinguishing open book QA from closed book QA is the increased robustness exhibited by the former. In this context, robustness refers to the ability of the model to generate answers that are less sensitive to small changes in the input prompt. This issue has been identified even in the largest models, such as GPT-4 (Bubeck et al., 2023).
In yet to be published work by _Jiang et al., (2023)_, an experiment was conducted where a model (T5) was asked the same question with and without a provided context. The results indicated that the open book QA exhibited a larger difference in probabilities between the top two tokens in the first position, signifying greater confidence in the prediction. This increased certainty is crucial because small variations in the input prompt can lead to different probabilities for the top predicted tokens. Given the auto-regressive nature of language model predictions, this shift in probabilities can cause a cascading change in the entire prediction. Open book QA's enhanced stability in token predictions makes it a more robust approach, which is a key reason for why this study focuses on developing a method to transform closed book QA into open book QA.
By transforming closed book QA to open book QA, we aim to not only improve the quality of generated answers but also enhance the overall robustness of the model's behavior. This increased robustness should result in more reliable and consistent predictions, which are less sensitive to minor variations in the input prompts. We reproduce and extend the experiment by _Jiang et al., (2023)_ in our work to further demonstrate the benefits of this approach.
## 3 Tree-Search Method
Tree-Search is a sampling method which aims to extract the most varied information possible from an LLM when given a specific prompt. This approach is particularly useful when seeking diverse responses from the model in order to explore a broader range of solutions or insights. The Tree-Search method can be broken down into three main steps (identifying high entropy positions, creating branches, and iterating the process), which are described in detail below.
#### 3.0.1 Identifying High Entropy Positions
The first step in the Tree-Search method is to identify high entropy positions within the model's decoded output. High entropy positions represent points where the model has low confidence in its predictions, making them ideal for branching and exploration. We propose two ways to identify these positions, sketched in code after the list below:
1. **Relative Probability Threshold:** Calculate the relative probabilities between the top tokens at each position. If the relative probability falls below a pre-defined threshold (near unity), it indicates that the model has low confidence at that position, and hence, it is a high entropy position.
2. **Probability Cut-off:** Set a probability cut-off at a reasonably low value (e.g., 0.01). Count the number of remaining tokens at each position after applying this cut-off. If the number of tokens exceeds a certain threshold, it indicates that the position is a high entropy position.
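A minimal sketch of both criteria, assuming access to the per-position next-token logits of the decoder (the function name and the PyTorch framing are ours, not the paper's):

```python
import torch

def high_entropy_positions(logits, rel_threshold=1.4, prob_cutoff=0.01, max_tokens=1):
    """Flag decoding positions where the model has low confidence.

    logits: (seq_len, vocab_size) tensor of next-token logits per position.
    Criterion 1: top-2 probability ratio below `rel_threshold` (near unity).
    Criterion 2: more than `max_tokens` tokens survive `prob_cutoff`.
    """
    probs = torch.softmax(logits, dim=-1)
    top2 = probs.topk(2, dim=-1).values            # (seq_len, 2), sorted descending
    ratio_flag = (top2[:, 0] / top2[:, 1]) < rel_threshold
    count_flag = (probs > prob_cutoff).sum(dim=-1) > max_tokens
    return ratio_flag | count_flag                 # boolean mask over positions
```

The default `rel_threshold=1.4` matches the value used in the experiments of Section 5.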
#### 3.0.2 Creating Branches
Once high entropy positions are identified, the next step is to create branches in the decoding process. This can be done in one of two ways:
1. **Non-Greedy Token Selection:** For each high entropy position, select any token below the threshold instead of the highest probability one. This encourages exploration of less probable but potentially interesting and relevant outputs.
2. **Random Token Selection:** Alternatively, for each high entropy position, randomly select a token above the probability cut-off. This approach promotes diversity in the generated responses.
#### 3.0.3 Iterating the Process
The final step of the Tree-Search method involves repeating the branching process. Continue to create branches either until no more branches can be formed (as the criteria for high entropy positions are no longer met) or until a desired depth is reached. The depth represents the number of times each sequence has been branched and can be adjusted to control the extent of exploration. We call a complete output (i.e. an output that includes the end-of-sequence token \(\langle\mathrm{eos}\rangle\)) a leaf.
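Putting the three steps together, a depth-limited version of the procedure can be sketched as follows; `next_token_probs` is an assumed callback wrapping any autoregressive LM, and this is a conceptual sketch rather than the authors' implementation:

```python
def tree_search(next_token_probs, seq, eos_id, depth,
                prob_cutoff=0.01, max_len=128, leaves=None):
    """Branch at high entropy positions, decode greedily elsewhere,
    and collect completed outputs (leaves) up to the given depth."""
    if leaves is None:
        leaves = []
    while len(seq) < max_len:
        probs = next_token_probs(seq)          # probability vector over the vocabulary
        survivors = [t for t, p in enumerate(probs) if p > prob_cutoff]
        if depth > 0 and len(survivors) > 1:   # high entropy position: branch
            for tok in survivors:              # each branch consumes one unit of depth
                tree_search(next_token_probs, seq + [tok], eos_id, depth - 1,
                            prob_cutoff, max_len, leaves)
            return leaves
        tok = max(range(len(probs)), key=probs.__getitem__)  # greedy elsewhere
        seq = seq + [tok]
        if tok == eos_id:
            leaves.append(seq)                 # a complete output is a leaf
            break
    return leaves
```

Capping the number of leaves (the experiments below use a maximum of 20 branches) is a straightforward extension.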
### Tree-Search versus Traditional Beam Search
Beam search is a widely used technique for generating sequences in AI models. It works by expanding the search space in a breadth-first manner, maintaining a fixed number of top candidate sequences (called "beams") at each step. Beam search aims to strike a balance between computational efficiency and the quality of generated sequences. However, it tends to produce less diverse outputs, as it follows a more focused search strategy, retaining only the most likely sequences at each step, which is a fundamentally different strategy from the one we propose in this study.
Contrarily, the proposed Tree-Search approach aims for diverse and exploratory outputs by targeting high entropy positions during the model's decoding process. By branching at these positions and adopting non-greedy or random token selection, Tree-Search explores less probable but potentially intriguing solutions, thereby generating a wider range of responses.
The distinguishing factors between Tree-Search and beam search include the output diversity, with Tree-Search yielding more varied results; exploration versus exploitation, where Tree-Search fosters search space exploration while beam search exploits the model's most probable predictions; and customizability, where Tree-Search provides more user control over exploration depth and output diversity, whereas beam search primarily focuses on maintaining a fixed number of top sequences.
We illustrate the differences by showing the resulting tree from a very simple prompt, "describe the features of a dog", in figure 1. Whilst the answers are not what one might expect, they do answer the prompt and they do so with a rather large variety. In fact, the greedy answer, which is "The dog is a member of the Canidae family", is both a worse answer to the prompt and incidentally does not even appear in the tree. We do not show the comparison with beam search in the figure as performing a beam search with 100 beams and picking the top 10 beams gives essentially the same greedy answer with minor variations.
## 4 Setup of the experiment
Throughout this study we conduct all the experiments using the model T0_3B (Sanh et al., 2022). In the main experiment we apply Tree-Search to create context that transforms closed book QA prompts into open book QA prompts. The goal of this experiment is to assess whether providing context generated by Tree-Search leads to better answers and more robust behavior. The experiment is as follows:
Figure 1: The tree generated from the prompt “Describe the features of a dog” using tree search with random token selection as described in section 3.0.2. It should be noted that in this particular case the tree branched at the very first token, giving it the appearance of two separate outputs, although this is not the case.
Our process begins with the assembly of a QA dataset composed of general knowledge, open-ended questions such as "What caused the French Revolution?" We initially prompt the model with these questions, storing the responses as a baseline for comparison with answers obtained using Tree-Search and the subsequent open book QA approach. Next, we apply Tree-Search to the model using the same dataset, yielding a variety of potential answers to each question. We then take each Tree-Search output, prune all the duplicate text and concatenate all unique outputs. This becomes the context for the model that takes it from closed to open book QA.
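The context-assembly step admits a very simple realization; the sketch below (assuming a Hugging Face style tokenizer) deduplicates whole outputs, which is one plausible reading of "prune all the duplicate text", since the pruning granularity is not specified:

```python
def build_context(leaves, tokenizer):
    """Concatenate the unique Tree-Search outputs into a single context string."""
    texts = (tokenizer.decode(leaf, skip_special_tokens=True) for leaf in leaves)
    unique = list(dict.fromkeys(t.strip() for t in texts))  # drop duplicates, keep order
    return " ".join(unique)

def open_book_prompt(context, question):
    """Fill the simple template given in Section 5."""
    return f"Context: {context}\nQuestion: {question}\nAnswer:"
```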
With this context, the model is prompted again, this time following an open book QA approach that utilizes the provided context to generate answers. These responses are stored for further analysis and comparison with the initial closed book QA outputs. Lastly, we evaluate both sets of responses using GPT3.5(text-davinci-003), a model demonstrated to perform on par with humans in text annotation (Huang et al., 2023). The entire process is illustrated by the flowchart in figure 2
## 5 Results
We have put together a dataset of 1475 open-ended general knowledge questions. We have applied the process described in the previous section to this dataset, building the tree using a relative probability threshold of 1.4 and performing an exhaustive Tree-Search to a depth of 3. We do, however, set the maximum number of branches to 20 for computational efficiency. Once the context is created, we prompt the model according to this very simple template that we arrived at following some experimentation.
**Context: {{context}}**
**Question: {{question}}**
**Answer:**
We then compare the original answer with the new one using GPT3.5(text-davinci-003) in four different ways: 1) which answer is the most informative, 2) which is the most accurate, 3) which is the most coherent and 4) which is the most consistent. In the prompt to text-davinci-003 we provide the question, the original as well as the new answer and then ask it to evaluate which answer is best, given the way in which we are comparing them, or if they are very similar. In an effort to minimise bias we then ask the opposite question (i.e. "which answer is the least informative"), we set the true value to be the average of these and then we bootstrap the outputs in order to get an estimate of the uncertainty. The results of this experiment can be found in table 1.
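The bootstrap can be realized with a standard percentile resampling scheme; the interval below (the 16th to 84th percentiles, roughly a 1-sigma band) is our assumption, as the paper does not state which interval its asymmetric errors correspond to:

```python
import numpy as np

def bootstrap_interval(prefs, n_boot=10_000, seed=0):
    """Percentile bootstrap for a preference rate.

    prefs: 0/1 array of per-question judgments
           (e.g. 1 if the open book answer was preferred).
    """
    prefs = np.asarray(prefs)
    rng = np.random.default_rng(seed)
    stats = np.array([rng.choice(prefs, size=len(prefs), replace=True).mean()
                      for _ in range(n_boot)])
    point = prefs.mean()
    lo, hi = np.percentile(stats, [16, 84])
    return point, point - lo, hi - point   # estimate with asymmetric errors
```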
Figure 2: The flowchart illustrates the process of going from closed book to open book QA described in section 4.
Below, two typical outputs are shown for two different prompts. In the first one we see that the open book provides a more detailed and informative answer, as we would expect it to do. In the second one the open book also provides a more informative answer; however, there we see that some errors are propagated from the simplistic way in which the context is put together.
\begin{table}
\begin{tabular}{l c c c} Metric & Closed Book[\%] & Open Book[\%] & Same/Similar[\%] \\ \hline Informative & \(8.4^{+1.0}_{-1.4}\) & \(53.3^{+21.0}_{-17.8}\) & \(38.3^{+13.4}_{-11.7}\) \\ Accuracy & \(12.5^{+1.3}_{-1.8}\) & \(31.2^{+8.2}_{-7.1}\) & \(56.3^{+17.8}_{-16.1}\) \\ Coherent & \(11.9^{+1.5}_{-1.1}\) & \(29.6^{+8.7}_{-7.4}\) & \(58.5^{+32.7}_{-29.9}\) \\ Consistent & \(14.1^{+3.2}_{-2.7}\) & \(26.4^{+7.4}_{-7.8}\) & \(59.5^{+16.7}_{-14.2}\) \\ \hline \end{tabular}
\end{table}
Table 1: The table shows the evaluation of the closed book as well as the open book answers by GPT3.5(text-davinci-003). We evaluate the original answers as well as the new answers in four different ways: how much information is in them, their accuracy, their coherence and their consistency. The evaluation is done on a total of 1475 questions.
## 6 Discussion
### Testing robustness with our methodology
We first reproduce the experiment described in section 2 with the resulting data from our experiment, and the results are shown in figure 3. We observe that when providing context to a QA task, the model demonstrates increased certainty in the answers it produces and is therefore also more robust to small changes in the prompt.
We conduct another experiment to verify this where we modify the prompt in one of two ways: 1) We perturb the dataset by introducing typos, grammatical errors, or replacing individual words with synonyms, regardless of whether the synonym changes the implicit meaning of the question or not (as can happen in English). 2) We rephrase the question. Then, we get the answers and evaluate the new answers w.r.t. the original question. If Tree-Search is more robust, it should perform better than what is shown in table 1.
For each alternative, we repeat the process illustrated in figure 2 with the altered question. The underlying idea is that the changed prompts should generate similar trees, as Tree-Search aims to sample relevant parts of the answer space effectively, regardless of the exact wording of the initial prompt. Subsequently, the model should evaluate the same best answer (to the unchanged question), as something close to it should appear in the tree even though the greedy answer often differs. We then compare the new answers using the same metrics as the previous experiment: informativeness, accuracy, coherence, and consistency. The results are displayed in table 2, which shows the change caused by each alternative. The table also shows the effect of increasing the tree size, which we examined in this experiment. With an increased maximum, the difference between closed book and open book QA becomes even more significant. We interpret this as the change in question pushing the model away from the correct answer, but a larger tree allows for wider (and well-sampled) exploration of the answer space, and when the tree reaches a given size, it also samples the correct answer.
Examining the results in the table, it becomes quite apparent that there is an increase in robustness as well as performance with increased tree size. The fact that we see the largest gains in the "consistent" metric is another indication of increased robustness as this measures alignment between answer and question (but does not require it to be _correct_ like the accuracy metric). As for the "informative" metric, this is where the room for further gains was the smallest, which is likely why we see such a small difference.
Figure 3: The figure shows the distribution of the odds between the top 2 tokens in position 1, both for the open book as well as the closed book generation. The flatter curve and fat tail for the open book response indicate that the model is more certain in its response.
### Tree-Search
We introduced Tree-Search, a novel decoding strategy designed to extract diverse information from large language models. Its robustness in enhancing question-answering tasks is particularly noteworthy. However, its application extends beyond that, proving beneficial across a variety of NLP tasks, such as text summarization, machine translation, and creative text generation. The strategy is flexible, allowing for enhanced diversity and controllability of outputs, which are critical for the quality and usefulness of the results. Furthermore, by adjusting key parameters like the entropy threshold, probability cut-off, and search depth, or controlling token selection at high entropy positions, Tree-Search provides a way to steer the model towards desired responses. Therefore, while the strategy warrants further exploration of its nuances, it already strikes a balance between diversity and controllability, demonstrating its potential as a significant advancement in decoding strategies.
### Closed to open book QA
Upon examining table 1, it becomes evident that our method of self-contextualisation successfully extracts more information from the model compared to traditional closed book prompting. This outcome demonstrates the effectiveness of Tree-Search in generating diverse outputs and highlights the benefits of providing context to the model for more informative answers, even when that additional context comes from the model itself.
Whilst the methodology often provides more informative answers, it does not show equally significant improvements in the other metrics. We have identified two primary reasons for this.
When utilizing Tree-Search to obtain information from the model, there is a possibility of extracting incorrect or irrelevant information. When re-prompting the model with this context, it struggles to identify and filter out incorrect details, which affects the accuracy and consistency of the generated answers. This issue highlights the limitations of the model's ability to discern the reliability of the extracted information within the context provided.
When it comes to open book QA, this type of model is primarily trained for extractive QA, i.e., processing context and extracting answers from it. Our method of constructing context consistently introduces syntactical and grammatical errors. The model often extracts the answer from this imperfect context without correcting the errors, which could negatively impact the consistency and coherence of the generated responses.
We attempted to alleviate this issue by having the model summarise the tree, rather than using it for context in a QA, using the prompt shown below.
**Document: {{context}} Summary:**
\begin{table}
\begin{tabular}{l l r r r} \hline \hline & & \multicolumn{3}{c}{**Maximum Tree Size**} \\ \cline{3-5}
**Category** & **Q change** & \(\mathbf{20[\%]}\) & \(\mathbf{50[\%]}\) & \(\mathbf{100[\%]}\) \\ \hline \multirow{2}{*}{**Informative**} & Perturb & \(-12.0\pm 8.5\) & \(-1.8\pm 1.3\) & \(0.8\pm 0.6\) \\ & Rephrase & \(-3.2\pm 2.6\) & \(1.7\pm 1.1\) & \(9.1\pm 7.1\) \\ \multirow{2}{*}{**Accuracy**} & Perturb & \(8.5\pm 3.5\) & \(15.2\pm 5.9\) & \(25.1\pm 10.3\) \\ & Rephrase & \(9.6\pm 3.5\) & \(8.8\pm 4.3\) & \(36.2\pm 16.0\) \\ \multirow{2}{*}{**Coherent**} & Perturb & \(3.5\pm 1.5\) & \(8.7\pm 4.2\) & \(13.4\pm 6.2\) \\ & Rephrase & \(11.5\pm 5.2\) & \(13.0\pm 6.1\) & \(8.1\pm 4.2\) \\ \multirow{2}{*}{**Consistent**} & Perturb & \(-1.7\pm 0.8\) & \(4.0\pm 2.1\) & \(13.0\pm 7.4\) \\ & Rephrase & \(7.1\pm 3.7\) & \(10.1\pm 5.3\) & \(15.2\pm 8.3\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The change in proportion of preferred open book answers, for two different changes to the prompt and different tree sizes.
Whilst this does eliminate a large proportion of the syntactical errors from the answer, it does often lead to a misalignment between question and answer and a lower score on most of our metrics. However, this does indicate that a lot of improvement could be achieved by further refining the prompts as even with the basic context assembly and prompting we see enhanced performance.
### The Potential of Bootstrapping
The promising results of our Tree-Search based approach open the possibility for a bootstrapping method. This technique would create a feedback loop for iterative improvement, using the enhanced outputs of the model for its retraining, potentially leading to substantial growth in the model's performance and robustness.
One area worth exploring in this context is incremental retraining. Instead of retraining the model from scratch, it could be incrementally retrained with the superior outputs yielded by our self-contextualizing QA approach. This strategy could streamline the iterative improvement process, possibly leading to quicker convergence to a more robust and high-performing model.
Further, assessing the impact of bootstrapping on model robustness would shed light on its potential for performance enhancement. This could involve evaluating the model's robustness metrics, such as out-of-distribution generalization and adversarial resilience, pre and post-bootstrapping. These investigations would offer valuable insights into the effectiveness of bootstrapping as a tool for iterative model enhancement.
### Summary and Conclusions
This study demonstrated a notable improvement in QA tasks for smaller LLMs, specifically transforming from a closed book to an open book QA format. The application of Tree-Search played a crucial role in this transformation, where the model's initial responses were effectively utilized to create a rich context, enabling it to augment its responses.
While the process encountered certain limitations, the quality of responses exhibited a measurable improvement in terms of accuracy, consistency, informativeness and coherence. In addition to providing improved responses, it also demonstrates an increase in robustness.
The transformation of the model into an open book QA system via Tree-Search certainly shows potential. Future work could focus on refining the context assembly, improving error correction in the model, and investigating alternative context selection strategies. Although promising, this approach requires further exploration and refinement to fully realize its potential in diverse domains.
|
2305.07542 | Yang-Mills form factors on self-dual backgrounds | The construction of perturbative quantities on non-linear backgrounds leads
to the possibility of incorporating strong field effects in perturbation
theory. We continue a programme to construct QFT observables on self-dual
backgrounds. The approach works with asymptotic data for fields defined at null
infinity $\mathscr{I}$, extending earlier work on Yang-Mills amplitudes on
self-dual backgrounds to form factors and incorporating supersymmetry. Since
our analysis is based on reconstruction from data at null infinity, it
naturally ties into work on celestial and twisted holography. We study form
factors both in pure Yang-Mills and their supersymmetric counterparts in
$\mathcal{N}=4$ SYM, giving a full treatment of $\mathcal{N}=4$
super-Yang-Mills at null infinity and their self-dual nonlinear backgrounds. We
obtain tree-level MHV form factors around these backgrounds using new formulae
for lifting operators to twistor space leading to simple dressings of the
corresponding form factors around the vacuum. We give brief indications on how
to go beyond the MHV sector by introducing dressed versions of the MHV diagram
propagator. We discuss generating functionals of the MHV all plus 1-loop
amplitude in this context together with its various dual conformal
representations. | Giuseppe Bogna, Lionel Mason | 2023-05-12T15:08:30Z | http://arxiv.org/abs/2305.07542v1 | # Yang-Mills form factors on self-dual backgrounds
###### Abstract
The construction of perturbative quantities on non-linear backgrounds leads to the possibility of incorporating strong field effects in perturbation theory. We continue a programme to construct QFT observables on self-dual backgrounds. The approach works with asymptotic data for fields defined at null infinity \(\mathscr{I}\), extending earlier work on Yang-Mills amplitudes on self-dual backgrounds to form factors and incorporating supersymmetry. Since our analysis is based on reconstruction from data at null infinity, it naturally ties into work on celestial and twisted holography. We study form factors both in pure Yang-Mills and their supersymmetric counterparts in \(\mathcal{N}=4\) SYM, giving a full treatment of \(\mathcal{N}=4\) super-Yang-Mills at null infinity and their self-dual nonlinear backgrounds. We obtain tree-level MHV form factors around these backgrounds using new formulae for lifting operators to twistor space leading to simple dressings of the corresponding form factors around the vacuum. We give brief indications on how to go beyond the MHV sector by introducing dressed versions of the MHV diagram propagator. We discuss generating functionals of the MHV all plus 1-loop amplitude in this context together with its various dual conformal representations.
## 1 Introduction
Integrability is a powerful tool for the study of non-linear problems, and, although it doesn't apply directly to generic gauge and gravity theories, in four dimensions such theories possess self-dual sectors that are integrable [1; 2; 3]. There is by now a long tradition of exploiting the integrability of the self-dual sector to provide non-perturbative results such as the construction of instantons and monopoles [4]. These structures are also intimately related to the rich structures discovered in scattering amplitudes, from the famous Parke-Taylor formula for the tree-level MHV scattering of gluons [5; 6], to the more general constructions of [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17], see [18; 19; 20; 21; 22] for reviews. The role of integrability was made explicit in studies of scattering amplitudes using twistor actions defined on twistor space that allow the direct
exploitation of the integrability of the self-dual sectors of Yang-Mills and gravity theories. Moreover, the non-self-dual theory can be formulated as a perturbation around the self-dual sector [23; 24; 25; 26; 27; 28] on twistor space, so many interesting results for the full theory can be readily obtained via perturbation theory; see the reviews [29; 30] for further details. From a different perspective, the existence of an integrable self-dual sector was at the heart of recent, exciting developments in flat-space holography, most notably celestial [31] and twisted [32] holography. These sectors host chiral symmetry algebras - whose existence is underpinned by infinitely many soft symmetries [33; 34; 35; 36] - that can be used to significantly constrain celestial correlators. The non-local nature of twistor constructions means that they can be naturally formulated at null infinity [37; 38; 39; 40], so these features become apparent if one adopts a twistorial description of flat holography; for example, twistor methods give a nice way to understand the symmetries of self-dual gravity [41] and can be used to derive the gluon celestial OPE at all orders [42].
These approaches have by now been extended to obtain formulae for form factors. Form factors are expectation values of local composite operators between the vacuum and an \(n\)-particle on-shell state and therefore represent intermediate observables between on-shell amplitudes and off-shell correlators. They have important physical applications as well, for example, arising as scattering amplitudes after additional fields have been integrated out in effective field theories and, since form factors are only partially off-shell, many amplitudes techniques have been extended to the construction of these observables. This includes the MHV formalism, recursion relations, and methods inspired by twistor theory in both \(\mathcal{N}=4\) super-Yang-Mills and pure Yang-Mills, both at tree and at loop level, see [43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54]. The twistor action approach has been pursued also for form factors leading to many further results [55; 56; 57; 58; 59; 60; 61; 62], building on work on correlation functions in twistor space [29; 63; 64]. Form factors have also played an interesting role in celestial and twisted holography where they were recently used to resolve singularities in the celestial amplitudes [65], to deform the soft symmetry algebras [66], as well as to compute both amplitudes and form factors in terms of correlators of a 2d chiral algebra reminiscent of the celestial soft algebra [32].
The above works concern observables around a flat background. In this paper, we address the question of extending the computations of form factors around non-trivial backgrounds. Although work already exists in this direction in the case of scattering amplitudes [67; 68; 69; 70], these face difficulties as soon as symmetries are lost and explicit formulae for background-coupled fields are no longer easily available; standard techniques and constraints based on momentum space and hence the rationality of tree diagrams, such as BCFW recursion and unitarity, cannot be applied. However, progress can be made on simple backgrounds: it is for example possible to extract much information about scattering around plane wave backgrounds [71; 72; 73; 74; 75]. In this work, we work around _self-dual_ and _radiative_ backgrounds, continuing the programme established in [76; 77]. In these works, all-multiplicity expressions for tree-level MHV gluon scattering amplitude around such backgrounds were obtained for the first time, together with conjectural formulae for the N\({}^{k}\)MHV amplitudes. Analogous gravitational formulae were found in [78].
The backgrounds we consider are _radiative_ in the sense of being determined by their data at \(\mathscr{I}\), but they are otherwise generic. They could be taken to be a sum of plane
waves at infinity, to make contact with higher-point formulae, but there is no particular reason to do so and one can choose data for more general backgrounds, such as instantons. We use the complete integrability of the self-dual sector to construct background-coupled fields from their asymptotic data at null infinity and study their interactions. Without local symmetries of the background, there will now be no straightforward local definition of plane waves in the interior, but fields can be taken to be plane waves at infinity; they can be characterized by the same data at null infinity that leads to momentum eigenstates on the trivial background. Complete integrability implies that these solutions do not themselves scatter as they pass through the background field, so their values from past null infinity \(\mathscr{I}^{-}\) can be identified with their values in the future at \(\mathscr{I}^{+}\) with no ambiguity, thus preserving crossing symmetry. Moreover, the backgrounds themselves are by assumption determined by their data at null infinity, so the whole setting is intrinsically holographic and fits well into the celestial holography programme.
As we explain below, the argument in [77] that leads to the MHV amplitude is closely related to the construction of an MHV form factor: the generating functional for the MHV amplitude can be viewed as the \(q\to 0\) limit of the generating functional for the form factor of the operator \(\,\mathrm{tr}\,B^{2}\), where \(B_{\alpha\beta}\) is the anti-self-dual component of the field strength in the chiral Yang-Mills formulation of Chalmers and Siegel [23, 79] and \(q\) is the momentum associated to the local operator \(\,\mathrm{tr}\,B^{2}\) in the expectation value. Similar reasoning was also the basis of the twistor action approach of [26] to the construction of the MHV formalism of [10], as well as recent works in celestial holography on a trivial background [32, 80, 81], but see also [82] for an intriguing example on Burns space. Even away from \(q=0\), the tree-level, colour-ordered MHV form factor for \(\,\mathrm{tr}\,B^{2}\) around a Cartan-valued self-dual radiative background is extremely simple
\[\mathscr{F}_{\,\mathrm{tr}\,B^{2}}(1^{+},\ldots,i^{-},\ldots,j^{-},\ldots n^{+};q)=\frac{\langle ij\rangle^{4}}{\langle 12\rangle\ldots\langle n1\rangle}\int_{\mathbb{M}}\mathrm{d}^{4}x\,e^{\mathrm{i}(Q-q)\cdot x+\sum_{l}e_{l}g(x,\kappa_{l})}\,. \tag{1}\]
Here \(Q=k_{1}+\ldots+k_{n}\) is the sum of the gluon momenta measured at null infinity, \(g\) is a function that depends on the background field (for a self-dual plane wave background, \(g\) is known as the _Volkov exponent_ [83, 84]), and the \(e_{l}\) are the charges of the gluons with respect to the background, determined by the Cartan subalgebra of the gauge group in which the background is valued; relatively simple formulae are available for generic backgrounds as well. The corresponding expression for the form factor of the operator \(\,\mathrm{tr}\,\tilde{F}^{2}\) is quite different, owing to the chirality inherent in our focus on MHV form factors. We show that the tree-level MHV form factor around a Cartan-valued self-dual radiative background has the compact form
\[\mathscr{F}_{\,\mathrm{tr}\,\tilde{F}^{2}}(1^{+},\ldots,n^{+};q)=\frac{(q\cdot Q)^{2}}{\langle 12\rangle\ldots\langle n1\rangle}\int_{\mathbb{M}}\mathrm{d}^{4}x\,e^{\mathrm{i}(Q-q)\cdot x+\sum_{l}e_{l}g(x,\kappa_{l})}\,. \tag{2}\]
In both these examples, the space-time integral represents a simple dressing of the form factor around the flat background; in the limit \(g\to 0\) (corresponding to the flat-background limit) we recover momentum conservation in the presence of the local operators, i.e. the momentum of either \(\,\mathrm{tr}\,B^{2}\) or \(\,\mathrm{tr}\,\tilde{F}^{2}\) is constrained to be equal to the sum of the gluon momenta, as a consequence of the translational invariance of the trivial background. In
that limit, the expression of the \(\,\mathrm{tr}\,\tilde{F}^{2}\) form factor reduces to the well-known formula for the tree-level scattering of a massive Higgs and arbitrarily many positive-helicity gluons [85]. The form factor around a general background thus retains much of the simplicity observed in the trivial-background case: it involves a single residual space-time integral, because the background is not translation invariant, while the kinematical prefactors coincide on the support of momentum conservation. Similar considerations apply to other tree-level form factors in the MHV sector, see Equations (5.21) and (5.12) for \(k=3\) below for explicit examples of \(\,\mathrm{tr}\,\tilde{F}^{3}\) and \(\,\mathrm{tr}\,B^{3}\), as well as for tree-level MHV form factors in \(\mathcal{N}=4\) SYM, see Equation (5.29).
This work is organized as follows. We review some elementary results in twistor theory and develop the necessary tools to describe self-dual radiative gauge fields in Section 2, as well as the extension to self-dual radiative backgrounds in \(\mathcal{N}=4\) SYM in §3, paying special regard to the fermionic expansion for the asymptotic data near \(\mathscr{I}\). The key technique is introduced in §4, where we present new explicit integral representations for the background fields in terms of their radiative data and for the linear fields propagating around these backgrounds. These are then used to lift, for example, \(\,\mathrm{tr}\,\tilde{F}^{2}\) and \(\,\mathrm{tr}\,\tilde{F}^{3}\) to twistor space in Section 5. We use these to show how expressions for the tree-level MHV form factors around non-trivial backgrounds in pure Yang-Mills can be readily obtained from their expressions around the trivial background. We conclude by showing that similar results hold for tree-level MHV super form factors in \(\mathcal{N}=4\) SYM around gluonic self-dual radiative backgrounds. In the discussion, §6.1 briefly explains how the MHV-diagram propagator can be dressed; this can in principle be used to compute higher-MHV-degree and loop-level expressions on backgrounds. The machinery developed in the text is used in §6.2 to discuss generating functions for the one-loop all-plus amplitude and its extension to backgrounds, and we show how the formulation can naturally be used to obtain dual conformal invariant region-momentum formulae for the amplitude such as those obtained in [86, 87]. In Appendix A we show an equivalence between the equations of motion for \(\mathcal{N}=4\) SYM and constraint equations for super-connections on chiral superspace. Finally, we defer some more computational details of our construction to Appendix B.
## 2 Self-dual radiative backgrounds in pure Yang-Mills
In this section, we review self-dual radiative Yang-Mills backgrounds in four-dimensional complexified Minkowski space-time and their twistor theory; see also [3, 77, 88, 89] for further details. Working on complex Minkowski space-time \(\mathbb{M}\cong\mathbb{C}^{4}\) with coordinates \(x^{\mu}\), \(\mu=0,1,2,3\), it's useful to recall the local isomorphism between \(\mathrm{SO}(4,\mathbb{C})\) and \(\mathrm{SL}(2,\mathbb{C})\times\mathrm{SL}(2,\mathbb{C})\) and to introduce the 2-spinor notation, trading tensor indices for pairs of spinor indices of opposite chirality
\[x^{\alpha\dot{\alpha}}=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}x^{0}+x^{3}& x^{1}-\mathrm{i}x^{2}\\ x^{1}+\mathrm{i}x^{2}&x^{0}-x^{3}\end{array}\right)\,. \tag{2.1}\]
The \(\mathrm{SL}(2,\mathbb{C})\)-invariant Levi-Civita symbols \(\varepsilon_{\alpha\beta}\) and \(\varepsilon_{\dot{\alpha}\dot{\beta}}\) are used to raise and lower spinor indices. Following standard spinor-helicity notation, we denote the contractions between
spinors by \(\langle ab\rangle\coloneqq a^{\alpha}b_{\alpha}=\varepsilon^{\alpha\beta}a_{\beta}b_{\alpha}\) and \([\tilde{a}\tilde{b}]\coloneqq\tilde{a}^{\dot{\alpha}}\tilde{b}_{\dot{\alpha}}=\varepsilon^{\dot{\alpha}\dot{\beta}}\tilde{a}_{\dot{\beta}}\tilde{b}_{\dot{\alpha}}\). Given a gauge field on \(\mathbb{M}\), its field strength admits the decomposition
\[F_{\alpha\dot{\alpha}\beta\dot{\beta}}=\varepsilon_{\alpha\beta}\tilde{F}_{\dot{\alpha}\dot{\beta}}+\varepsilon_{\dot{\alpha}\dot{\beta}}F_{\alpha\beta}\,, \tag{2.2}\]
where \(F_{\alpha\beta}\) and \(\tilde{F}_{\dot{\alpha}\dot{\beta}}\) are symmetric spinors and represent the anti-self-dual (ASD) and self-dual (SD) parts of the field strength, respectively.
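Note for later use that contracting the field strength with two copies of an undotted spinor kills the SD part and isolates the ASD curvature:
\[\lambda^{\alpha}\lambda^{\beta}F_{\alpha\dot{\alpha}\beta\dot{\beta}}=\varepsilon_{\dot{\alpha}\dot{\beta}}\,\lambda^{\alpha}\lambda^{\beta}F_{\alpha\beta}\,,\]
since \(\lambda^{\alpha}\lambda^{\beta}\varepsilon_{\alpha\beta}=0\); this elementary observation is what drives the self-duality argument around (2.28) below.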
A source-free gauge field will be said to be radiative if it extends to null infinity inside the conformal compactification of Minkowski space-time and is completely determined by its free characteristic data at either past or future null infinity. Here we will in fact assume that we are working with complex fields on the conformal compactification of \(\mathbb{M}\) that includes \(\mathscr{I}_{\mathbb{C}}=\mathscr{I}_{\mathbb{C}}^{+}\cup\mathscr{I}_{\mathbb{C}}^{-}\). This is a partial complexification of standard real null infinity \(\mathscr{I}=\mathscr{I}^{+}\cup\mathscr{I}^{-}\) of \(\mathbb{R}^{1,3}\), where advanced and retarded fields are allowed to be complex. Focusing on future null infinity, recall that in the real case, \(\mathscr{I}^{+}\cong\mathbb{R}\times S^{2}\) can be understood as the inversion of the light-cone of the origin of \(\mathbb{R}^{1,3}\); \(\mathscr{I}_{\mathbb{C}}^{+}\) is obtained by complexifying the \(\mathbb{R}\) factor to \(\mathbb{C}\) while keeping the \(S^{2}\) base (this guarantees that each \(\alpha\)-plane representing a twistor in \(\mathbb{M}\) intersects \(\mathscr{I}_{\mathbb{C}}^{+}\) in a unique point). In order to connect with homogeneous coordinates on twistor space, we will use a homogeneous version \((u,\lambda_{\alpha},\bar{\lambda}_{\dot{\alpha}})\) of Bondi coordinates subject to the equivalence relation
\[(u,\lambda_{\alpha},\bar{\lambda}_{\dot{\alpha}})\sim(b\bar{b}u,b\lambda_{\alpha},\bar{b}\bar{\lambda}_{\dot{\alpha}})\,, \tag{2.3}\]
for any \(b\in\mathbb{C}^{*}\). \(u\) is a complexification of the standard Bondi retarded time, while \((\lambda_{\alpha},\bar{\lambda}_{\dot{\alpha}})\) are homogeneous coordinates on the celestial sphere thought of as the complex projective line \(\mathbb{CP}^{1}\). The homogeneous coordinates allow us to encode spin and conformal weights in terms of homogeneous line bundles \(\mathcal{O}(p,q)\to\mathscr{I}_{\mathbb{C}}^{+}\), where a section of \(\mathcal{O}(p,q)\) is represented by a function \(f_{p,q}(u,\lambda,\bar{\lambda})\) with weights \((p,q)\) under rescaling of the homogeneous coordinates
\[f_{p,q}(|b|^{2}u,b\lambda,\bar{b}\bar{\lambda})=b^{p}\bar{b}^{q}f_{p,q}(u,\lambda,\bar{\lambda})\,. \tag{2.4}\]
The conformal and spin weights \((h,s)\) are then given by \(h=(p+q)/2\) and \(s=(p-q)/2\), respectively. A similar description can be set up on \(\mathscr{I}_{\mathbb{C}}^{-}\) by replacing the retarded time \(u\) with the advanced time \(v\).
Within this projective formalism, the restriction of a gauge field \(A\) on \(\mathbb{M}\) to \(\mathscr{I}_{\mathbb{C}}^{+}\) in temporal gauge \(A_{u}=0\) is [90, 91, 40, 92]
\[A|_{\mathscr{I}^{+}}=A_{-}(u,\lambda,\bar{\lambda})\mathrm{D}\lambda+A_{+}(u,\lambda,\bar{\lambda})\mathrm{D}\bar{\lambda}\,, \tag{2.5}\]
where \(\mathrm{D}\lambda\coloneqq\langle\lambda\,\mathrm{d}\lambda\rangle\), \(\mathrm{D}\bar{\lambda}\coloneqq[\bar{\lambda}\,\mathrm{d}\bar{\lambda}]\). The restriction of the leading components of the SD and ASD parts of the curvature are
\[F_{+}^{0} = \partial_{u}A_{+}\,\mathrm{d}u\wedge\mathrm{D}\bar{\lambda}\,, \tag{2.6a}\] \[F_{-}^{0} = \partial_{u}A_{-}\,\mathrm{d}u\wedge\mathrm{D}\lambda\,, \tag{2.6b}\]
thus \(A_{+}\), \(A_{-}\) are the free data for the SD and ASD components of the field strength, respectively. In terms of the line bundles above, \(A_{+}\) is a section of \(\mathcal{O}(-2,0)\otimes\mathfrak{g}\), while \(A_{-}\) is a section of \(\mathcal{O}(0,-2)\otimes\mathfrak{g}\), where \(\mathfrak{g}\) is the Lie algebra of the gauge group. A self-dual, radiative gauge field is one completely characterized by the free data \(\,A|_{\mathscr{I}_{\mathbb{C}}^{+}}=A_{+}\,\mathrm{D}\bar{\lambda}\), with \(A_{-}=0\).
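For instance, in terms of the weights introduced in (2.4), \(A_{+}\) carries \((h,s)=(-1,-1)\) while \(A_{-}\) carries \((h,s)=(-1,+1)\): the two pieces of data share the same conformal weight but have opposite spin weights, as appropriate for data of opposite helicity.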
Radiative fields from their asymptotic data. We can understand the previous discussion in terms of the peeling properties of the gauge fields and by means of the Kirchhoff-d'Adhemar integral formula [93, 94, 95]; these can in turn be regarded as twistor integral formulae using twistor representatives built from asymptotic data at future null infinity. This perspective will also provide a natural way to introduce radiative data for scalars and fermions in \(\mathcal{N}=4\) SYM.
Recall that the radiative data for a field \(\Phi_{\alpha_{1}\ldots\alpha_{2|h|}}\) of helicity \(h\leq 0\) consist of a function \(\Phi_{2h}=\Phi_{2h}(u,\lambda,\bar{\lambda})\) of weight \((-2|h|-1,-1)\) on (future) null infinity; it is well known that the characteristic data \(\Phi_{2h}\) is the leading-order component of the field that decays as \(r^{-1}\) as \(r\to\infty\), where \(r\) is an affine parameter; the \(2|h|+1\) components of \(\Phi_{\alpha_{1}\ldots\alpha_{2|h|}}\)_peel_ at different rates as \(r\to\infty\)[95]. In terms of \(\Phi_{2h}\), the bulk field can be reconstructed using the Kirchhoff-d'Adhemar formula
\[\Phi_{\alpha_{1}\ldots\alpha_{2|h|}}(x)=\int_{\mathbb{CP}^{1}}\frac{\mathrm{D}\lambda\wedge\mathrm{D}\bar{\lambda}}{2\pi\mathrm{i}}\ \lambda_{\alpha_{1}}\ldots\lambda_{\alpha_{2|h|}}\ \frac{\partial\Phi_{2h}}{\partial u}\bigg{|}_{u=\langle\lambda|x|\bar{\lambda}]}\, \tag{2.7}\]
where the integral is evaluated on the _light-cone cut_ of \(x\), that is the intersection of the null cone with apex \(x\) and future null infinity
\[u=x^{\alpha\dot{\alpha}}\lambda_{\alpha}\bar{\lambda}_{\dot{\alpha}}\,. \tag{2.8}\]
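Note that (2.7) manifestly yields a solution of the zero-rest-mass equation: each space-time derivative acting under the integral produces a factor of \(\lambda_{\alpha}\bar{\lambda}_{\dot{\alpha}}\) from the light-cone cut (2.8), so that
\[\partial^{\beta\dot{\beta}}\Phi_{\beta\alpha_{2}\ldots\alpha_{2|h|}}(x)=\int_{\mathbb{CP}^{1}}\frac{\mathrm{D}\lambda\wedge\mathrm{D}\bar{\lambda}}{2\pi\mathrm{i}}\ \lambda^{\beta}\lambda_{\beta}\,\lambda_{\alpha_{2}}\ldots\lambda_{\alpha_{2|h|}}\,\bar{\lambda}^{\dot{\beta}}\ \frac{\partial^{2}\Phi_{2h}}{\partial u^{2}}\bigg{|}_{u=\langle\lambda|x|\bar{\lambda}]}=0\,,\]
by \(\lambda^{\beta}\lambda_{\beta}=0\).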
In particular, for \(h=-1\), we identify the radiative data \(\Phi_{-2}\) for an ASD gauge field \(\tilde{\mathcal{A}}\) by
\[\Phi_{-2}(u,\lambda,\bar{\lambda})=\partial_{u}A_{-}(u,\lambda,\bar{\lambda})\,, \tag{2.9}\]
that is, \(\Phi_{-2}\) is precisely the leading part at \(\mathscr{I}_{\mathbb{C}}^{+}\) of the ASD curvature. For positive helicities, we can take the conjugate of the above and the radiative data \(\Phi_{2h}\) are valued in \(\mathcal{O}(-1,2h-1)\) (in particular, we identify \(\Phi_{2}=\partial_{u}A_{+}\)), but the corresponding Kirchhoff-d'Adhemar formula will have a less direct connection with twistor representatives.
The \(J\) and \(K\) potentials for self-dual gauge fields. Self-dual gauge fields can also be described in terms of scalar second potentials. Given a reference spinor \(\iota^{\alpha}\), the vanishing of the ASD curvature component, \(\iota^{\alpha}\iota^{\beta}F_{\alpha\beta}=0\), implies flatness in the two-planes tangent to \(\iota^{\alpha}\beta^{\dot{\alpha}}\) for all \(\beta^{\dot{\alpha}}\), so we can work with the ansatz
\[A_{\alpha\dot{\alpha}}=\iota_{\alpha}A_{\dot{\alpha}}\,, \tag{2.10}\]
for the gauge connection. In particular, the gauge field is in light-cone gauge with respect to any null vector of the form \(n^{\alpha\dot{\alpha}}=\iota^{\alpha}\beta^{\dot{\alpha}}\). The equation \(\iota^{\alpha}F_{\alpha\beta}=0\) then implies the existence of a matrix-valued scalar potential \(K\), the \(K\)-matrix, so that
\[A_{\dot{\alpha}}=\iota^{\alpha}\partial_{\alpha\dot{\alpha}}K\,. \tag{2.11}\]
The gauge field is automatically in Lorentz gauge as well and the curvature can now be written in terms of \(K\) as
\[F_{\alpha\beta} = -\frac{1}{2}\iota_{\alpha}\iota_{\beta}(\square K+\mathrm{i}[\mathrm{d}_{\dot{\alpha}}K,\mathrm{d}^{\dot{\alpha}}K])\,, \tag{2.12a}\] \[\tilde{F}_{\dot{\alpha}\dot{\beta}} = \mathrm{d}_{\dot{\alpha}}\mathrm{d}_{\dot{\beta}}K\,, \tag{2.12b}\]
where we introduced the notation \(\mathrm{d}_{\dot{\alpha}}\coloneqq\iota^{\alpha}\partial_{\alpha\dot{\alpha}}\). The self-duality equation is therefore
\[\square K+\mathrm{i}[\mathrm{d}_{\dot{\alpha}}K,\mathrm{d}^{\dot{\alpha}}K]=0\,. \tag{2.13}\]
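In the abelian case, or at the linearized level, the commutator term drops and (2.13) reduces to the free wave equation
\[\square K=0\,,\qquad\tilde{F}_{\dot{\alpha}\dot{\beta}}=\mathrm{d}_{\dot{\alpha}}\mathrm{d}_{\dot{\beta}}K\,,\]
so that the entire self-dual field is encoded in a single free massless scalar, in a manner reminiscent of the scalar-potential formulations of self-dual gravity.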
The \(J\)-matrix potential requires the choice of a second spinor \(o_{\alpha}\), which we normalize by \(\langle\iota o\rangle=1\). Using \(o^{\alpha}o^{\beta}F_{\alpha\beta}=0\), we can deduce the existence of a matrix function \(J\) so that
\[A_{\alpha\dot{\alpha}}=-\mathrm{i}\,\iota_{\alpha}J^{-1}o^{\beta}\partial_{\beta\dot{\alpha}}J\,. \tag{2.14}\]
Defining \(\tilde{\mathrm{d}}_{\dot{\alpha}}\coloneqq o^{\alpha}\partial_{\alpha\dot{ \alpha}}\), the curvatures become
\[F_{\alpha\beta} = -\mathrm{i}\,o_{(\alpha}\iota_{\beta)}\mathrm{d}^{\dot{\alpha}}(J^{-1}\tilde{\mathrm{d}}_{\dot{\alpha}}J)\,, \tag{2.15a}\] \[\tilde{F}_{\dot{\alpha}\dot{\beta}} = -\mathrm{i}\,\mathrm{d}_{(\dot{\alpha}}(J^{-1}\tilde{\mathrm{d}}_{\dot{\beta})}J)\,, \tag{2.15b}\]
so that the self-dual Yang-Mills equations for this potential become
\[\mathrm{d}^{\dot{\alpha}}(J^{-1}\tilde{\mathrm{d}}_{\dot{\alpha}}J)=0\,. \tag{2.16}\]
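As a quick consistency check, in the abelian case we may write \(J=e^{\mathrm{i}\varphi}\), for which (2.16) linearizes to
\[\mathrm{d}^{\dot{\alpha}}\tilde{\mathrm{d}}_{\dot{\alpha}}\varphi=\iota^{\alpha}o^{\beta}\partial_{\alpha}{}^{\dot{\alpha}}\partial_{\beta\dot{\alpha}}\varphi\propto\langle\iota o\rangle\,\square\varphi=0\,,\]
so both scalar potentials reduce to free fields around the trivial background, consistent with the linearization of (2.13).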
### Twistor-space description
To define twistor space, introduce homogeneous coordinates \(Z^{A}=(\mu^{\dot{\alpha}},\lambda_{\alpha})\) on \(\mathbb{CP}^{3}\) subject to the equivalence relation \(Z^{A}\sim tZ^{A}\) for \(t\in\mathbb{C}^{*}\). The twistor space \(\mathbb{PT}\) of \(\mathbb{M}\) is the open subset of \(\mathbb{CP}^{3}\) given by
\[\mathbb{PT}=\{[Z^{A}]\in\mathbb{CP}^{3}:\lambda_{\alpha}\neq 0\}\,, \tag{2.17}\]
and can thus be described as the total space of the holomorphic bundle \(\mathcal{O}(1)\oplus\mathcal{O}(1)\to\mathbb{CP}^{1}\) over the Riemann sphere. Its relationship with \(\mathbb{M}\) is encoded in the incidence relations
\[\mu^{\dot{\alpha}}=x^{\alpha\dot{\alpha}}\lambda_{\alpha}\,. \tag{2.18}\]
For fixed \(x\in\mathbb{M}\), the incidence relations describe a _twistor line_, that is a linearly and holomorphically embedded Riemann sphere \(X\subset\mathbb{PT}\), while for constant \(Z^{A}\in\mathbb{PT}\) they give an \(\alpha\)-plane in \(\mathbb{M}\)[96], i.e. a totally null 2-plane with self-dual tangent bivector. The definition (2.17) removes the twistor line \(I\subseteq\mathbb{CP}^{3}\) corresponding to spatial infinity \(i^{0}\).
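Concretely, if \(x_{0}\) solves the incidence relations for a fixed \(Z^{A}\), then so does every point of the two-parameter family
\[x^{\alpha\dot{\alpha}}=x_{0}^{\alpha\dot{\alpha}}+\lambda^{\alpha}\rho^{\dot{\alpha}}\,,\qquad\rho^{\dot{\alpha}}\in\mathbb{C}^{2}\,,\]
since \(\lambda_{\alpha}\lambda^{\alpha}=0\); the tangent bivector of this 2-plane is \(\lambda^{\alpha}\lambda^{\beta}\varepsilon^{\dot{\alpha}\dot{\beta}}\), which is indeed self-dual.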
Radiative linear fields admit a natural description in twistor space [97]. This can be obtained by pulling back asymptotic data on \(\mathscr{I}_{\mathbb{C}}^{+}\) to twistor space \(\mathbb{PT}\) via the natural projection \(p\) to \(\mathscr{I}_{\mathbb{C}}^{+}\) given by
\[p:(\mu^{\dot{\alpha}},\lambda_{\alpha})\mapsto(u=\mu^{\dot{\alpha}}\bar{\lambda}_{\dot{\alpha}},\lambda_{\alpha},\bar{\lambda}_{\dot{\alpha}})\,. \tag{2.19}\]
We can define line bundles \(\mathcal{O}(n)\to\mathbb{PT}\) whose sections can be represented by functions of homogeneity-degree \(n\) in the homogeneous coordinates. These line bundles are identified
with both the pull-backs \(p^{*}\mathcal{O}(n,0)\) by \(p\) from \(\mathscr{I}_{\mathbb{C}}^{+}\) and the pull-backs of the line bundles \(\mathcal{O}(n)\to\mathbb{CP}^{1}\) by the holomorphic projection \(\mathbb{PT}\to\mathbb{CP}^{1}\). For fields of helicity \(h\leq 0\), we can pull back the characteristic data at \(\mathscr{I}_{\mathbb{C}}^{+}\) to twistor space to give a \((0,1)\)-form of holomorphic weight \(2h-2\) that defines a Dolbeault cohomology class
\[\omega_{2h}=p^{*}\left(\frac{\partial\Phi_{2h}}{\partial u}{\rm D}\bar{ \lambda}\right)\,\in H^{0,1}(\mathbb{PT},{\cal O}(2h-2)). \tag{2.20}\]
Following [97], the Kirchhoff-d'Adhemar integral formula (2.7) can now be re-interpreted as a version of the twistor integral formula or Penrose transform [98, 99] using Dolbeault cohomology
\[\Phi_{\alpha_{1}\ldots\alpha_{2|h|}}(x)=\int_{X}{\rm D}\lambda\wedge\lambda_ {\alpha_{1}}\ldots\lambda_{\alpha_{2|h|}}\left.\omega_{2h}\right|_{X}\,. \tag{2.21}\]
Holomorphicity of \(\omega_{2h}\) then implies that \(\Phi_{\alpha_{1}\ldots\alpha_{2|h|}}\) solves the zero-rest-mass (ZRM) equation, so that it correctly represents a helicity-\(h\) field. For \(h=1/2\), we take \(\omega_{1}=p^{*}(\Phi_{1}{\rm D}\bar{\lambda})\in H^{0,1}(\mathbb{PT},{\cal O}(-1))\), but the integral formula now requires a derivative
\[\Phi_{\dot{\alpha}}(x)=\int_{X}{\rm D}\lambda\wedge\frac{\partial\omega_{1}}{ \partial\mu^{\dot{\alpha}}}\Big{|}_{X}\, \tag{2.22}\]
and similarly for other positive-helicity linear fields.
In order to extend this construction to fully non-linear, non-abelian self-dual gauge fields, we follow a strategy due to Sparling [39]. We first use the asymptotic gauge field at \(\mathscr{I}_{\mathbb{C}}^{+}\) to construct the \(\mathfrak{g}\)-valued \((0,1)\)-form
\[\mathsf{a}\coloneqq p^{*}(A_{+}\,\mathrm{D}\bar{\lambda})=A_{+}(\mu^{\dot{\alpha}}\bar{\lambda}_{\dot{\alpha}},\lambda,\bar{\lambda})\mathrm{D}\bar{\lambda}\,, \tag{2.23}\]
valued in \(\Omega^{0,1}(\mathbb{PT},\mathcal{O}(0)\otimes\mathfrak{g})\) on twistor space. Since \(\mathsf{a}\) points only along the \(\mathrm{D}\bar{\lambda}\) direction and is holomorphic in \(\mu^{\dot{\alpha}}\), the \(\bar{\partial}\) operator \(\bar{D}=\bar{\partial}+\mathsf{a}\) satisfies \(\bar{D}^{2}=0\), so it defines a holomorphic vector bundle \(E\) on twistor space and gives a direct method to construct the Ward transform of the self-dual gauge field from the characteristic data at null infinity. In more detail, recall that self-dual gauge fields are described on twistor space by the Ward correspondence [2, 40]:
**Theorem 1**: _There exists a one-to-one correspondence between_
* _self-dual gauge fields on_ \(\mathbb{M}\) _with gauge group_ \(G={\rm GL}(r,\mathbb{C})\)_,_
* _holomorphic rank-_\(r\) _vector bundles_ \(E\to\mathbb{PT}\) _such that_ \(\left.E\right|_{X}\) _is trivial for each_ \(x\in\mathbb{M}\)_._
We can use the reconstruction part of the Ward construction to solve the characteristic data initial value problem. We give some details here as they will be needed in the calculations that follow. If \(X\subseteq\mathbb{PT}\) is any line in twistor space, \(\left.E\right|_{X}\) is topologically trivial by assumption, and holomorphic with \(\bar{\partial}\)-operator \(\left.\bar{D}\right|_{X}\). For small \(\mathsf{a}\), that is, in perturbation theory, this implies that \(\left.E\right|_{X}\) is _holomorphically_ trivial, so that there exists a frame \(\mathsf{H}\colon\left.E\right|_{X}\to\mathbb{C}^{r}\) satisfying the Sparling equation [39]
\[\left.\bar{D}\right|_{X}\mathsf{H}(x,\lambda,\bar{\lambda})\coloneqq\left.(\bar{\partial}+\mathsf{a})\right|_{X}\mathsf{H}=0\,. \tag{2.24}\]
It is possible to understand this equation in both geometric and holographic terms as follows [100]: the image of \(X\) under \(p\) is the light-cone cut of \(x\) and it can be shown that the Sparling equation is satisfied when \(\mathsf{H}\) is taken to be the parallel propagator from the point \(x\) to \(\mathscr{I}_{\mathbb{C}}^{+}\) along the light-cone. As we will see below, this parallel propagator determines the bulk gauge field, giving a holographic interpretation to the Ward correspondence. Note also that the frame is defined up to a matrix-valued function \(g(x)\) on \(\mathbb{M}\), \(\mathsf{H}(x,\lambda)\to\mathsf{H}(x,\lambda)g(x)\), with the resulting ambiguity identified with gauge transformations on \(\mathbb{M}\). We can remove the ambiguity by requiring that \(\mathsf{H}(x,\iota)\) is the identity matrix for some fixed spinor \(\iota_{\alpha}\). The incidence relations and the chain rule imply that \(\lambda^{\alpha}\partial_{\alpha\dot{\alpha}}\left.\mathsf{a}\right|_{X}=0\), as \(\left.\mathsf{a}\right|_{X}\) only depends on \(x\) through \(\mu^{\dot{\alpha}}\), which is annihilated by \(\lambda^{\alpha}\partial_{\alpha\dot{\alpha}}\) on the support of the incidence relations (2.18). Thus differentiating (2.24) along \(\lambda^{\alpha}\partial_{\alpha\dot{\alpha}}\) we quickly find
\[\bar{\partial}\big{|}_{X}\left(\mathsf{H}^{-1}\lambda^{\alpha}\partial_{\alpha\dot{\alpha}}\mathsf{H}\right)=0\,, \tag{2.25}\]
that is, \(\mathsf{H}^{-1}\lambda^{\alpha}\partial_{\alpha\dot{\alpha}}\mathsf{H}\) is a holomorphic function on \(X\) of weight \(+1\) in \(\lambda_{\alpha}\). Liouville's theorem ensures the existence of a \(\mathfrak{g}\)-valued function \(A_{\alpha\dot{\alpha}}\) on \(\mathbb{M}\) such that
\[\mathsf{H}^{-1}(x,\lambda)\lambda^{\alpha}\partial_{\alpha\dot{\alpha}}\mathsf{H}(x,\lambda)=-\mathrm{i}\lambda^{\alpha}A_{\alpha\dot{\alpha}}(x)\,. \tag{2.26}\]
\(A_{\alpha\dot{\alpha}}(x)\) is the desired self-dual gauge potential transforming in the normal way under the gauge transformation \(g\). Defining the covariant derivative \(\nabla=\mathrm{d}-\mathrm{i}A\), equation (2.26) can be recast as
\[\lambda^{\alpha}\nabla_{\alpha\dot{\alpha}}\mathsf{H}^{-1}\coloneqq\lambda^{\alpha}(\partial_{\alpha\dot{\alpha}}-\mathrm{i}A_{\alpha\dot{\alpha}})\mathsf{H}^{-1}=0\,. \tag{2.27}\]
This directly implies
\[\lambda^{\alpha}\lambda^{\beta}F_{\alpha\dot{\alpha}\beta\dot{\beta}}\mathsf{H}^{-1}=[\lambda^{\alpha}\nabla_{\alpha\dot{\alpha}},\lambda^{\beta}\nabla_{\beta\dot{\beta}}]\mathsf{H}^{-1}=0\,. \tag{2.28}\]
Since this equation holds for any value of \(\lambda_{\alpha}\), we deduce that \(A_{\alpha\dot{\alpha}}\) is indeed self-dual. As promised, this equation implies that \(\mathsf{H}\) is parallel propagated along light-rays from \(x\) to infinity in the direction \(\lambda_{\alpha}\tilde{\lambda}_{\dot{\alpha}}\). If we fix the gauge freedom with the choice \(\mathsf{H}(x,\iota)=1\) for the frame, (2.26) can be used to express the scalar potentials as well, namely
\[J = \mathsf{H}(x,o)\,, \tag{2.29a}\] \[K = \mathrm{i}\,o^{\alpha}\left.\frac{\partial}{\partial\lambda^{\alpha}}\mathsf{H}\right|_{\lambda=\iota}\,. \tag{2.29b}\]
Finally, in the following, it will be important to know the Green's function for \(\left.\bar{D}\right|_{X}\) acting on sections of \(\mathcal{O}(-1)\), which can be immediately found in terms of the holomorphic frame as
\[\mathsf{U}_{X}(\lambda,\lambda^{\prime})=\frac{1}{2\pi\mathrm{i}}\frac{\mathsf{H}(x,\lambda)\mathsf{H}^{-1}(x,\lambda^{\prime})}{\langle\lambda\lambda^{\prime}\rangle}\,. \tag{2.30}\]
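One can verify that \(\mathsf{U}_{X}\) inverts \(\left.\bar{D}\right|_{X}\): with the convention \(\bar{\delta}(z)\coloneqq\frac{1}{2\pi\mathrm{i}}\bar{\partial}(1/z)\) (the same one underlying the holomorphic delta functions of Section 4), the Sparling equation gives
\[\left.\bar{D}\right|_{X}\mathsf{U}_{X}(\lambda,\lambda^{\prime})=\mathsf{H}(x,\lambda)\mathsf{H}^{-1}(x,\lambda^{\prime})\,\bar{\delta}(\langle\lambda\lambda^{\prime}\rangle)=\bar{\delta}(\langle\lambda\lambda^{\prime}\rangle)\,,\]
the frames cancelling on the support \(\lambda\propto\lambda^{\prime}\) of the delta function, since \(\mathsf{H}\) has homogeneity zero in \(\lambda\).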
## 3 Self-dual radiative backgrounds in \(\mathcal{N}=4\) SYM
We now extend the discussion from the previous section to \(\mathcal{N}=4\) super-Yang-Mills. In order to have a discussion adapted to twistor theory, we consider the chiral formulation of the theory [23, 79], where the \(\mathcal{N}=4\) supermultiplet can be described by fields
\[\{A_{\alpha\dot{\alpha}},\tilde{\psi}_{a\dot{\alpha}},\phi_{ab},\psi_{\alpha}^{a},B_{\alpha\beta}\}\,. \tag{3.1}\]
In the full \(\mathcal{N}=4\) theory, \(B_{\alpha\beta}=B_{(\alpha\beta)}\) is an ASD 2-form proportional to the ASD part of the curvature \(F_{\alpha\beta}\), while in the self-dual theory it is a linear field imposing the self-duality condition. In both cases, we can make supersymmetry manifest by working with a superspace description: the superspace that is best adapted to make contact with twistor theory is chiral Minkowski superspace.
### Chiral super-fields
We enlarge Minkowski space \(\mathbb{M}\) to chiral Minkowski super-space \(\mathbb{M}^{4|8}\) with coordinates \(x^{\alpha A}\coloneqq(x^{\alpha\dot{\alpha}},\theta^{\alpha a})\) and also define the corresponding coordinate derivatives \(\partial_{\alpha A}\coloneqq(\partial_{\alpha\dot{\alpha}},\partial_{\alpha a})\), where we used the \(2|4\) index \(A=(\dot{\alpha},a)\). We can then introduce the super-connection \(\underline{\nabla}_{\alpha A}=\partial_{\alpha A}-\mathrm{i}\underline{A}_{\alpha A}\), with \(\underline{A}_{\alpha A}(x,\theta)\) taking values in the Lie algebra of the gauge group; here and below we underline super-fields on \(\mathbb{M}^{4|8}\) to make the distinction with space-time fields on \(\mathbb{M}\) clear.
On non-chiral superspace, the \(\mathcal{N}=4\) equations of motion for the non-chiral superfields are equivalent to constraint equations for the super-connection [101, 102]. On chiral superspace, an analogous statement can be derived by imposing the following constraints on the super-connection
\[[\underline{\nabla}_{a(\alpha},\underline{\nabla}_{\beta)A}\}=0\,. \tag{3.2}\]
However, if we wish to ensure that the super-fields are a solution to the full \(\mathcal{N}=4\) SYM equations of motion, we must require the further constraint on the ASD part of the bosonic supercurvature
\[\underline{F}_{\alpha\beta}=\lambda\underline{B}_{\alpha\beta}\, \tag{3.3}\]
where the fields can be scaled so that \(\lambda\) is the 't Hooft coupling (not to be confused with the spinor \(\lambda_{\alpha}\)) and \(\underline{B}_{\alpha\beta}\) is defined as a consequence of the first set of constraints (3.2), see Equation (3.5b) below and appendix A for an extensive discussion. The self-dual theory is recovered in the limit \(\lambda\to 0\): in this limit, it is well known that the constraint equations can be supplemented with \([\underline{\nabla}_{A(\alpha},\underline{\nabla}_{\beta)B}\}=0\) and give the \(\mathcal{N}=4\) self-dual SYM equations of motion for the super-fields, both on non-chiral [101, 102, 103] and chiral [104, 30] superspace.
In terms of the super-connection, \(\underline{\tilde{F}}_{\dot{\alpha}\dot{\beta}}\) and \(\underline{F}_{\alpha\beta}\) are defined as usual. \(\underline{\phi}_{ab}\) and \(\underline{\tilde{\psi}}_{a\dot{\alpha}}\) are also defined as superspace curvatures by
\[-\varepsilon_{\alpha\beta}\underline{\phi}_{ab} \coloneqq\{\underline{\nabla}_{\alpha a},\underline{\nabla}_{\beta b}\}\,, \tag{3.4a}\] \[\varepsilon_{\alpha\beta}\underline{\tilde{\psi}}_{a\dot{\alpha}} \coloneqq[\underline{\nabla}_{\alpha a},\underline{\nabla}_{\beta\dot{\alpha}}]\,. \tag{3.4b}\]
The remaining super-fields \(\underline{\psi}_{\alpha}^{a}\) and \(\underline{B}_{\alpha\beta}\) are given by consistency conditions following from the constraints (3.2) and suitable Jacobi identities and are defined by
\[\epsilon_{abcd}\underline{\psi}_{\alpha}^{d} \coloneqq\underline{\nabla}_{\alpha a}\underline{\phi}_{bc}\,, \tag{3.5a}\] \[\underline{B}_{\alpha\beta} \coloneqq\frac{1}{4}\underline{\nabla}_{a(\alpha}\underline{ \psi}_{\beta)}^{a}\,, \tag{3.5b}\]
see appendix A for more details. In each case the super-field at \(\theta^{a\alpha}=0\) will be the corresponding \(\mathcal{N}=4\) field on \(\mathbb{M}\); for example, the scalar fields \(\phi_{ab}\) are the lowest components of \(\underline{\phi}_{ab}\). The Jacobi identity for \(\underline{\nabla}_{a\alpha}\), \(\underline{\nabla}_{b\beta}\) and \(\underline{\nabla}_{\gamma\dot{\gamma}}\) gives
\[\underline{\nabla}_{\alpha\dot{\alpha}}\underline{\phi}_{ab}=-\underline{ \nabla}_{a\alpha}\underline{\tilde{\psi}}_{b\dot{\alpha}}\,. \tag{3.6}\]
### Chiral super-fields at \(\mathscr{I}\)
At null infinity, we take the tangent to the generators of \(\mathscr{I}^{+}_{\mathbb{C}}\) to be \(\iota^{\alpha}\tilde{\iota}^{\dot{\alpha}}\) and impose the gauge condition
\[\underline{A}_{\alpha a} = \iota_{\alpha}\underline{A}_{a}\,, \tag{3.7a}\] \[\underline{A}_{\alpha\dot{\alpha}} = \iota_{\alpha}\underline{A}_{\dot{\alpha}}+\tilde{\iota}_{\dot{\alpha}}\underline{A}_{\alpha}\,. \tag{3.7b}\]
In this gauge we can now investigate the \(\theta\) dependence of the various super-fields at null infinity. It's useful to separate the fermionic variables into the variables
\[\chi^{a}\coloneqq\theta^{a\alpha}o_{\alpha}\,,\qquad\tilde{\chi}^{a}\coloneqq-\theta^{a\alpha}\iota_{\alpha}\,. \tag{3.8}\]
At null infinity, we focus on the \(\chi^{a}\) dependence of the super-fields and set \(\tilde{\chi}^{a}=0\) accordingly. Peeling means that, as one approaches \(\mathscr{I}\), fields align with \(o_{\alpha}\) to leading order: \(B_{\alpha\beta}\sim B_{2}\,o_{\alpha}o_{\beta}/r+O(1/r^{2})\). Equivalently, we are considering a supersymmetrization \(\mathscr{I}^{+}_{\mathbb{C},\,\mathcal{N}=4}\) of \(\mathscr{I}^{+}_{\mathbb{C}}\) coordinatized by homogeneous coordinates \((u,\lambda_{\alpha},\bar{\lambda}_{\dot{\alpha}},\chi^{a})\) defined up to the equivalence relation
\[(u,\lambda_{\alpha},\bar{\lambda}_{\dot{\alpha}},\chi^{a})\sim(b\bar{b}\,u,b\lambda_{\alpha},\bar{b}\bar{\lambda}_{\dot{\alpha}},b\chi^{a})\,, \tag{3.9}\]
for any \(b\in\mathbb{C}^{*}\). To obtain the \(\chi^{a}\) dependence, we contract the spinor indices in (3.7a), (3.7b), and (3.8) with \(\iota^{\alpha}\) and, using \(\partial/\partial\chi^{a}=\iota^{\alpha}\partial_{a\alpha}\), integrate with respect to \(\chi^{a}\) to get
\[\langle\iota\underline{\psi}^{a}\rangle = \langle\iota\psi^{a}\rangle+\chi^{a}B_{2}+\mathcal{O}(\chi^{2})\,, \tag{3.10a}\] \[\underline{\phi}_{ab} = \phi_{ab}+\epsilon_{abcd}\chi^{c}\langle\iota\psi^{d}\rangle+\frac{1}{2}\epsilon_{abcd}\chi^{c}\chi^{d}B_{2}+\mathcal{O}(\chi^{3})\,, \tag{3.10b}\] \[[\tilde{\iota}\underline{\tilde{\psi}}_{a}] = [\tilde{\iota}\tilde{\psi}_{a}]+\chi^{b}\partial_{u}\phi_{ab}+\frac{1}{2}\epsilon_{abcd}\chi^{b}\chi^{c}\langle\iota\psi^{d}\rangle+\chi^{3}_{a}B_{2}+\mathcal{O}(\chi^{4})\,, \tag{3.10c}\]
where
\[B_{2} \coloneqq \iota^{\alpha}\iota^{\beta}B_{\alpha\beta}\,, \tag{3.11a}\] \[\chi^{3}_{a} \coloneqq \frac{1}{3!}\epsilon_{abcd}\chi^{b}\chi^{c}\chi^{d}\,, \tag{3.11b}\]
and where we have taken the integration constants to be the space-time field associated with the corresponding super-field.
The same computation applied to (3.7a) and (3.7b) leads to the expansion of the super-connection
\[\underline{A}_{a} = \partial_{u}^{-1}[\tilde{\iota}\,\tilde{\psi}_{a}]+\phi_{ab}\chi^{b}+\langle\iota\psi^{d}\rangle\frac{1}{2}\epsilon_{abcd}\chi^{b}\chi^{c}+B_{2}\chi^{3}_{a}+\mathcal{O}(\chi^{4})\,, \tag{3.12a}\] \[\tilde{\iota}^{\dot{\alpha}}\underline{A}_{\dot{\alpha}} = \tilde{\iota}^{\dot{\alpha}}A_{\dot{\alpha}}+[\tilde{\iota}\tilde{\psi}_{a}]\chi^{a}+\partial_{u}\phi_{ab}\frac{1}{2}\chi^{a}\chi^{b}+\langle\iota\,\partial_{u}\psi^{a}\rangle\chi^{3}_{a}+\partial_{u}B_{2}\chi^{1}\chi^{2}\chi^{3}\chi^{4}\,. \tag{3.12b}\]
On restriction to \(\mathscr{I}^{+}_{\mathbb{C}}\), the super-connection thus determines the 1-form \(\underline{A}\,\mathrm{D}\bar{\lambda}\), where we identify \(\underline{A}\) with \(\underline{A}_{\dot{\alpha}}|_{\mathscr{I}_{\mathbb{C}}}\). This means that we take
\[\underline{A}(u,\lambda,\bar{\lambda},\theta)=A_{+}+\Phi_{1,a}\chi^{a}+\partial_{u}\Phi_{0,ab}\chi^{a}\chi^{b}+\partial_{u}\Phi_{-1}^{a}\chi^{3}_{a}+\partial_{u}\Phi_{-2}\chi^{1}\chi^{2}\chi^{3}\chi^{4}\,, \tag{3.13}\]
where \(\chi^{a}\coloneqq\theta^{a\alpha}\lambda_{\alpha}\) and the coefficients are the characteristic data at \(\mathscr{I}^{+}_{\mathbb{C}}\) for the \(\mathcal{N}=4\) super Yang-Mills multiplet
\[\{\Phi_{2}=\partial_{u}A_{+},\Phi_{1,a},\Phi_{0,ab}=\Phi_{0,[ab]},\Phi_{-1}^{a}, \Phi_{-2}\}\,. \tag{3.14}\]
Note that the definition of the radiative data doesn't require any self-duality condition to be valid, in complete analogy with (2.5).
On the other hand, if we consider \(\mathcal{N}=4\) _self-dual_ super Yang-Mills, the equations of motion are equivalent to the graded integrability conditions [101, 102, 103, 104]
\[[\underline{\nabla}_{A(\alpha},\underline{\nabla}_{\beta)B}\}=0\,, \tag{3.15}\]
and they imply that there exists a gauge for which \(\underline{A}_{\alpha}=0\), as in the bosonic case. The integrability condition (3.15) implies the existence of supersymmetrized versions of the scalar potentials \(\underline{J}(x,\theta)\) and \(\underline{K}(x,\theta)\), so that
\[\underline{A}_{\alpha A}=\iota_{\alpha}\iota^{\beta}\partial_{\beta A}\underline{K}=-\mathrm{i}\,\iota_{\alpha}\underline{J}^{-1}o^{\beta}\partial_{\beta A}\underline{J}\,. \tag{3.16}\]
As before, the space-time fields \(\tilde{F}_{\dot{\alpha}\dot{\beta}}\), \(\tilde{\psi}_{a\dot{\alpha}}\), and \(\phi_{ab}\) can also be understood as the lowest components of the self-dual super-curvature, defined by \(\varepsilon_{\alpha\beta}\underline{\mathcal{F}}_{AB}\coloneqq[\underline{\nabla}_{\alpha A},\underline{\nabla}_{\beta B}\}\). We now have, with the definitions \(\mathrm{d}_{A}\coloneqq\iota^{\beta}\partial_{\beta A}\) and \(\tilde{\mathrm{d}}_{A}\coloneqq o^{\beta}\partial_{\beta A}\)
\[\underline{\mathcal{F}}_{AB}=\mathrm{d}_{A}\mathrm{d}_{B}\underline{K}=- \mathrm{i}\,\mathrm{d}_{(A}(\underline{J}^{-1}\tilde{\mathrm{d}}_{B)} \underline{J})\,, \tag{3.17}\]
in addition to (2.12b) and (2.15b). Similarly, the \(\mathcal{N}=4\) self-duality equations are the obvious super-symmetrizations of (2.13) and (2.16)
\[2\tilde{\mathrm{d}}_{[A}\mathrm{d}_{B]}\underline{K}+\mathrm{i}[\mathrm{d}_{A}\underline{K},\mathrm{d}_{B}\underline{K}\} = 0\,, \tag{3.18a}\] \[\mathrm{d}_{[A}(\underline{J}^{-1}\tilde{\mathrm{d}}_{B]}\underline{J}) = 0\,. \tag{3.18b}\]
In this gauge, the expansion of the \(\underline{K}\) matrix at \(\tilde{\chi}=0\) can be obtained by integrating the fermionic part of (3.16) using (3.12) to obtain
\[\underline{K}=K-\partial_{u}^{-1}[\tilde{\iota}\tilde{\psi}_{a}]\chi^{a}+\frac{1}{2}\phi_{ab}\chi^{a}\chi^{b}-\langle\iota\psi^{a}\rangle\chi_{a}^{3}+\iota^{\alpha}\iota^{\beta}B_{\alpha\beta}\chi^{1}\chi^{2}\chi^{3}\chi^{4}\,. \tag{3.19}\]
### Super-twistor space description
As in the pure Yang-Mills case, solutions to the \(\mathcal{N}=4\) self-dual SYM equations are compactly obtained from a supersymmetric version of the Ward correspondence. Introducing homogeneous coordinates \(Z^{A}=(\mu^{\dot{\alpha}},\lambda_{\alpha},\chi^{a})\), \(a=1,\dots,4\), on \(\mathbb{CP}^{3|4}\) subject to \(Z^{A}\sim tZ^{A}\) for \(t\in\mathbb{C}^{*}\), the super-twistor space of \(\mathbb{M}^{4|8}\) is defined to be [105]
\[\mathbb{PT}=\{[Z]\in\mathbb{CP}^{3|4}:\lambda_{\alpha}\neq 0\}\,, \tag{3.20}\]
and we still denote it as \(\mathbb{PT}\). The incidence relations are
\[\mu^{A}\coloneqq(\mu^{\dot{\alpha}},\chi^{a})=(x^{\alpha\dot{\alpha}}\lambda_{ \alpha},\theta^{\alpha a}\lambda_{\alpha})\,, \tag{3.21}\]
and we denote the line in super-twistor space again by \(X\), even though it now depends on \((x,\theta)\). The Ward correspondence becomes [106, 107]
**Theorem 2**: _There exists a one-to-one correspondence between_
* _solutions to the_ \(\mathcal{N}=4\) _self-dual Yang-Mills equations on_ \(\mathbb{M}^{4|8}\) _with gauge group_ \(G=\mathrm{GL}(r,\mathbb{C})\)_,_
* _holomorphic rank-_\(r\) _vector bundles_ \(E\) _over super-twistor space such that_ \(\left.E\right|_{X}\) _is trivial for each_ \((x,\theta)\in\mathbb{M}^{4|8}\)_._
In the Dolbeault framework, these bundles are equipped with an integrable super-connection \(\underline{\mathsf{a}}\) and Dolbeault operator \(\bar{D}=\bar{\partial}+\underline{\mathsf{a}}\), which can be obtained from the characteristic data at null infinity. Together with the gluonic background twistor connection \(\mathsf{a}\) of the previous section, we can construct the connection on supertwistor space
\[\underline{\mathsf{a}}\coloneqq\mathsf{a}+\mathsf{a}_{a}\chi^{a}+\frac{1}{2} \mathsf{a}_{ab}\chi^{a}\chi^{b}+\mathsf{a}^{a}\chi^{3}_{a}+\tilde{\mathsf{a}} \chi^{1}\chi^{2}\chi^{3}\chi^{4}\,, \tag{3.22}\]
valued in \(\Omega^{0,1}(\mathbb{PT},\mathcal{O}(0)\otimes\mathfrak{g})\). Here \(\{\mathsf{a},\mathsf{a}_{a},\mathsf{a}_{ab},\mathsf{a}^{a},\tilde{\mathsf{a}}\}\) are \((0,1)\)-forms of respective homogeneity \(0,-1,-2,-3,-4\) in the bosonic twistor variables and can be obtained as pullbacks of the radiative data for the multiplet, respectively \(p^{*}\{A_{+},\Phi_{1,a},\partial_{u}\Phi_{0,ab},\partial_{u}\Phi_{-1}^{a},\partial_{u}\Phi_{-2}\}\), on \(\mathscr{I}_{\mathbb{C}}^{+}\).
The reconstruction part of the Ward correspondence is unaltered from the pure Yang-Mills case, the only difference being the promotion of every field to a super-field. In this way, at least for small data, there exists a holomorphic frame \(\underline{\mathsf{H}}(x,\theta,\lambda)\) satisfying
\[\bar{D}\big{|}_{X}\,\underline{\mathsf{H}}(x,\theta,\lambda)=0\,, \tag{3.23}\]
in terms of which we can construct the super-connection \(\underline{A}_{\alpha A}\) on \(\mathbb{M}^{4|8}\) as
\[\underline{\mathsf{H}}^{-1}(x,\theta,\lambda)\lambda^{\alpha}\partial_{\alpha A }\underline{\mathsf{H}}(x,\theta,\lambda)=-\mathrm{i}\lambda^{\alpha} \underline{A}_{\alpha A}(x,\theta)\,. \tag{3.24}\]
## 4 Integral formulae for the curvature and background coupled fields
In order to lift space-time formulae to twistor space, in this Section we obtain explicit expressions for the space-time gauge field in terms of the twistor connection \(\mathsf{a}\) and the frame \(\mathsf{H}\); although these are in principle already determined by the previous Section, we will need more explicit formulae. Similarly, it has been known for some time how to obtain formulae for background-coupled linear fields and super-fields, but here we give more detailed formulae for momentum eigenstates.
### Connection and curvature formulae
We first introduce Green's functions on the Riemann sphere for inverting the \(\bar{\partial}\) operators on different line bundles. Let \(\mathcal{O}(n)\to\mathbb{CP}^{1}\) be the line bundle of homogeneous functions of degree \(n\) in \(\lambda_{\alpha}\). Provided \(n\geq-1\), we can invert
\[\bar{\partial}g_{n}=f_{n}\,, \tag{4.1}\]
for any \(f_{n}\in\Omega^{0,1}(\mathbb{CP}^{1},\mathcal{O}(n))\) to find a solution \(g_{n}\in\Omega^{0,0}(\mathbb{CP}^{1},\mathcal{O}(n))\), namely
\[g_{n}(\lambda)=\int\frac{\mathrm{D}\lambda^{\prime}}{2\pi\mathrm{i}}\;f_{n}( \lambda^{\prime})\frac{1}{\langle\lambda\lambda^{\prime}\rangle}\left(\frac{ \langle\iota\lambda\rangle}{\langle\iota\lambda^{\prime}\rangle}\right)^{n+1}\,. \tag{4.2}\]
The integral is over \(\mathbb{CP}^{1}\), on which \(\lambda_{\alpha},\lambda^{\prime}_{\alpha}\) are homogeneous coordinates, and the reference spinor \(\iota_{\alpha}\) is used to fix the freedom in adding polynomials of degree \(n\) in \(\lambda\) to \(g_{n}\) by making it vanish to \(n\)-th order at \(\iota_{\alpha}\); note that for \(n=-1\) the solution is unique, while for \(n\geq 0\) the ambiguity in \(g_{n}\) is a consequence of \(H^{0}(\mathbb{CP}^{1},\mathcal{O}(n))\cong\mathbb{C}^{n+1}\).
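For \(n=-1\), for instance, (4.2) is the standard reproducing formula: using \(\bar{\partial}\langle\lambda\lambda^{\prime}\rangle^{-1}=2\pi\mathrm{i}\,\bar{\delta}(\langle\lambda\lambda^{\prime}\rangle)\) one finds
\[\bar{\partial}g_{-1}(\lambda)=\int\mathrm{D}\lambda^{\prime}\ \bar{\delta}(\langle\lambda\lambda^{\prime}\rangle)\,f_{-1}(\lambda^{\prime})=f_{-1}(\lambda)\,,\]
as required.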
We can now find an integral formula for \(A_{\alpha\dot{\alpha}}\) by differentiating the Sparling equation (2.24) and eliminating the twistor connection via the definition of the holomorphic frame. In this way, we find
\[\bar{\partial}\big{|}_{X}\left(\partial_{\alpha\dot{\alpha}}\mathsf{H}^{-1} \,\mathsf{H}\right)=\lambda_{\alpha}\mathsf{H}^{-1}\frac{\partial\mathsf{a}}{ \partial\mu^{\dot{\alpha}}}\mathsf{H}\,, \tag{4.3}\]
and using the Green's functions (4.2)
\[\partial_{\alpha\dot{\alpha}}\mathsf{H}^{-1}(x,\lambda)\,\mathsf{H}(x,\lambda )=\frac{1}{2\pi\mathrm{i}}\int_{X}\frac{\mathrm{D}\lambda^{\prime}}{\langle \lambda\lambda^{\prime}\rangle}\frac{\langle\iota\lambda\rangle}{\langle \iota\lambda^{\prime}\rangle}\lambda^{\prime}_{\alpha}\mathsf{H}^{-1}(x, \lambda^{\prime})\left.\frac{\partial\mathsf{a}}{\partial\mu^{\dot{\alpha}}} \right|_{X}\mathsf{H}(x,\lambda^{\prime})\,. \tag{4.4}\]
The possible ambiguity in adding a constant to the right-hand side at homogeneity degree zero is fixed by the vanishing of both sides of the equation at \(\lambda_{\alpha}=\iota_{\alpha}\) for the gauge \(\mathsf{H}(x,\iota)=I\) where \(A_{\alpha\dot{\alpha}}=\iota_{\alpha}A_{\dot{\alpha}}\). Contracting this equation with \(\lambda^{\alpha}\) using (2.27) yields
\[A_{\alpha\dot{\alpha}}(x)=\frac{\iota_{\alpha}}{2\pi}\int_{X}\frac{\mathrm{D} \lambda^{\prime}}{\langle\iota\lambda^{\prime}\rangle}\mathsf{H}^{-1}(x, \lambda^{\prime})\left.\frac{\partial\mathsf{a}}{\partial\mu^{\dot{\alpha}}} \right|_{X}\mathsf{H}(x,\lambda^{\prime})\,. \tag{4.5}\]
Comparing this last equation with (2.11), we can identify the integral in (4.5) with \(\mathrm{d}_{\dot{\alpha}}K\). The associated field strength can be straightforwardly checked to be self-dual, with SD component
\[\tilde{F}_{\dot{\alpha}\dot{\beta}}=\int_{X}\frac{\mathrm{D}\lambda_{1}}{2\pi \mathrm{i}}\mathsf{H}_{1}^{-1}\partial_{\dot{\alpha}}\partial_{\dot{\beta}} \mathsf{a}_{1}\mathsf{H}_{1}-\int_{X^{2}}\frac{\mathrm{D}\lambda_{1}\mathrm{D} \lambda_{2}}{(2\pi\mathrm{i})^{2}\langle\lambda_{1}\lambda_{2}\rangle}[ \mathsf{H}_{1}^{-1}\partial_{\dot{\alpha}}\mathsf{a}_{1}\mathsf{H}_{1},\mathsf{ H}_{2}^{-1}\partial_{\dot{\beta}}\mathsf{a}_{2}\mathsf{H}_{2}]\,, \tag{4.6}\]
where \(\mathsf{H}_{i}=\mathsf{H}(x,\lambda_{i})\), \(\mathsf{a}_{i}=\left.\mathsf{a}\right|_{X}(x,\lambda_{i})\), and we denoted \(\mu^{\dot{\alpha}}\) derivatives as \(\partial_{\dot{\alpha}}\coloneqq\partial/\partial\mu^{\dot{\alpha}}\). The expression for \(\tilde{F}_{\dot{\alpha}\dot{\beta}}\) has the advantage of being now both Lorentz and gauge invariant; the linear Penrose transform would lead to the first term in (4.6) only, but for the fully non-linear field we need also the second, double integral over \(X\).
For \(\mathcal{N}=4\) SYM, we similarly have
\[\underline{A}_{\alpha A}(x,\theta)=\frac{\iota_{\alpha}}{2\pi}\int_{X}\frac{\mathrm{D}\lambda^{\prime}}{\langle\iota\lambda^{\prime}\rangle}\underline{\mathsf{H}}^{-1}(x,\theta,\lambda^{\prime})\left.\partial_{A}\underline{\mathsf{a}}\right|_{X}\underline{\mathsf{H}}(x,\theta,\lambda^{\prime})\,. \tag{4.7}\]
Here \(\partial_{A}\coloneqq(\partial/\partial\mu^{\dot{\alpha}},\partial/\partial\chi^{a})\). The supersymmetrized \(J\)- and \(K\)-matrices and the propagator on the line \(X\) can be defined in this context as well by replacing the holomorphic frame with its supersymmetrized version in (2.29a), (2.29b), (2.30). The covariant derivative \(\underline{\nabla}_{\alpha A}\) is self-dual by construction, with \([\underline{\nabla}_{\alpha A},\underline{\nabla}_{\beta B}\}=\varepsilon_{\alpha\beta}\underline{\mathcal{F}}_{AB}\); the gluon self-dual curvature, the positive-helicity fermions and the scalar fields arise as the lowest components of this self-dual super-curvature. Moreover, the supersymmetrized Ward correspondence now gives
gauge and Lorentz invariant expressions for both the fermions and the scalars, in complete analogy with the pure Yang-Mills case. Explicitly, the space-time fermion \(\tilde{\psi}_{a\dot{\alpha}}\) is the lowest component of the superfield
\[\underline{\tilde{\psi}}_{a\dot{\alpha}}=\int_{X}\frac{\mathrm{D}\lambda_{1}}{2\pi\mathrm{i}}\,\underline{\mathsf{H}}_{1}^{-1}\partial_{a}\partial_{\dot{\alpha}}\underline{\mathsf{a}}_{1}\underline{\mathsf{H}}_{1}-\int_{X^{2}}\frac{\mathrm{D}\lambda_{1}\mathrm{D}\lambda_{2}}{(2\pi\mathrm{i})^{2}\langle\lambda_{1}\lambda_{2}\rangle}[\underline{\mathsf{H}}_{1}^{-1}\partial_{a}\underline{\mathsf{a}}_{1}\underline{\mathsf{H}}_{1},\underline{\mathsf{H}}_{2}^{-1}\partial_{\dot{\alpha}}\underline{\mathsf{a}}_{2}\underline{\mathsf{H}}_{2}]\,, \tag{4.8}\]
whilst the scalar \(\phi_{ab}\) is the lowest component of
\[\underline{\phi}_{ab}=\int_{X}\frac{\mathrm{D}\lambda_{1}}{2\pi\mathrm{i}}\,\underline{\mathsf{H}}_{1}^{-1}\partial_{a}\partial_{b}\underline{\mathsf{a}}_{1}\underline{\mathsf{H}}_{1}-\int_{X^{2}}\frac{\mathrm{D}\lambda_{1}\mathrm{D}\lambda_{2}}{(2\pi\mathrm{i})^{2}\langle\lambda_{1}\lambda_{2}\rangle}\{\underline{\mathsf{H}}_{1}^{-1}\partial_{a}\underline{\mathsf{a}}_{1}\underline{\mathsf{H}}_{1},\underline{\mathsf{H}}_{2}^{-1}\partial_{b}\underline{\mathsf{a}}_{2}\underline{\mathsf{H}}_{2}\}\,. \tag{4.9}\]
This is also the prescription given in [55, 56, 57] for the construction of vertices for composite operators in \(\mathcal{N}=4\) SYM.
### Perturbations, linearized modes and momentum eigenstates
Massless fields of helicity \(n/2\) on a self-dual background are well-known to be given as first cohomology classes on twistor space with values in the appropriate representation of the Ward bundle \(E\) twisted by \(\mathcal{O}(n-2)\). In our radiative framework, for \(n\leq 0\), these can be represented by their \(\mathscr{I}\) data \(f_{n-2}\coloneqq\partial_{u}\Phi_{n}\mathrm{D}\bar{\lambda}\). These are \(\bar{\partial}\) closed around the non-trivial self-dual background too, because both \(\mathsf{a}\) and \(f_{n-2}\) point only along the \(\mathrm{D}\bar{\lambda}\) direction and \(\Phi_{n}\) is holomorphic in \(\mu^{\dot{\alpha}}\). For \(n\leq 0\), we can obtain a space-time linear field via the standard integral representation of the Penrose transform; the coupling with the background arises because the bundle \(E\) must first be trivialized before performing the twistor integral. Taking \(f_{n-2}\) to be in the adjoint, we obtain
\[\Phi_{\alpha_{1}\ldots\alpha_{|n|}}=\frac{1}{2\pi\mathrm{i}}\int_{X}\lambda_{ \alpha_{1}}\mathsf{H}^{-1}(x,\lambda)\left.f_{n-2}\right|_{X}\mathsf{H}(x, \lambda)\wedge\mathrm{D}\lambda\,. \tag{4.10}\]
For concrete calculations, we will take our \(f_{n-2}\) to be momentum eigenstates with colour \(T_{j}\in\mathfrak{g}\) and null momentum \(k_{j}^{\alpha\dot{\alpha}}=\kappa_{j}^{\alpha}\tilde{\kappa}_{j}^{\dot{\alpha}}\):
\[f_{n-2}^{j}(Z)=T_{j}\int_{\mathbb{C}^{*}}\frac{\mathrm{d}s}{s^{n-1}}\bar{ \delta}^{2}(\kappa_{j}-s\lambda)e^{\mathrm{i}s[\mu\tilde{\kappa}_{j}]}\,, \tag{4.11}\]
where the holomorphic \(\delta\)-function is defined by
\[\bar{\delta}^{2}(\kappa_{j}-s\lambda)\coloneqq\frac{1}{(2\pi \mathrm{i})^{2}}\bigwedge_{\alpha=1,2}\bar{\partial}\left(\frac{1}{\kappa_{j \,\alpha}-s\lambda_{\alpha}}\right)\,. \tag{4.12}\]
Negative-helicity gluons. The ASD linearized field strength itself is the case \(n=-2\), where the integral formula is immediately performed against the delta functions to give the ASD field strength
\[f_{\alpha\beta}^{j}(x)=\kappa_{j\alpha}\kappa_{j\beta}\mathsf{H}_{j}^{-1}T_{j }\mathsf{H}_{j}\,e^{\mathrm{i}k_{j}\cdot x}\,,\qquad\qquad\mathsf{H}_{j} \coloneqq\mathsf{H}(x,\kappa_{j})\,. \tag{4.13}\]
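To see this, note that on the support of the holomorphic delta functions \(s\lambda_{\alpha}=\kappa_{j\,\alpha}\), so on the line \(X\) the exponent localizes to
\[s[\mu\tilde{\kappa}_{j}]=x^{\alpha\dot{\alpha}}(s\lambda_{\alpha})\tilde{\kappa}_{j\,\dot{\alpha}}=k_{j}\cdot x\,,\]
while the prefactors \(\lambda_{\alpha}\lambda_{\beta}\) localize to \(\kappa_{j\,\alpha}\kappa_{j\,\beta}/s^{2}\); for \(n=-2\) the leftover powers of \(s\) cancel against the measure \(\mathrm{d}s/s^{n-1}\), leaving (4.13).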
It is now easily checked that the spin-1 equation (4.14b) below follows using (2.27) under the integral sign. We will see that this only gives the perturbation \(f_{j\,\alpha\beta}\) of the curvature, whilst the construction of the corresponding gauge field perturbation \(a_{\alpha\dot{\alpha}}^{j}\) is not so straightforward, as we now describe.
Background perturbations. The linearized equations of motion for a general perturbation \(a_{\alpha\dot{\alpha}}\) of a self-dual background gauge field read
\[\nabla_{\dot{\alpha}(\alpha}a^{\dot{\alpha}}_{\beta)} = f_{\alpha\beta}\,, \tag{4.14a}\] \[\nabla_{\alpha\dot{\alpha}}f^{\alpha\beta} = 0\,. \tag{4.14b}\]
In a general background, it is no longer consistent to decompose a linear field on the background into those whose perturbation of the curvature is self-dual and anti-self-dual, as (4.14b) would have an extra term from the background ASD curvature [108]. However, on a self-dual background, it is still consistent to require that \(f_{\alpha\beta}=0\), in which case the perturbation preserves the self-duality condition. Such solutions are still naturally identified with positive-helicity gluons. Conversely, non-trivial solutions to (4.14b) are interpreted as negative-helicity gluons around the SD background, but the corresponding potential that solves (4.14a) will generally lead to a non-trivial self-dual curvature perturbation too: even if it is imposed to be zero asymptotically, it will develop a non-zero value as the field is evolved through the space-time. This follows because the MHV tree amplitude can be understood as being generated by the self-dual field at \(\mathscr{I}^{+}\) associated with a potential crossing space-time on an SD background whose data at \(\mathscr{I}^{-}\) is purely ASD, see [108, 109] for details.
Positive-helicity gluons. In the following, we focus on MHV form factors, which contain arbitrarily many positive-helicity external states. The Penrose transform relates these self-dual gluon perturbations to the cohomology classes of weight \(0\) whose representative \(a_{j}\) with colour \(T_{j}\in\mathfrak{g}\) and null momentum \(k^{\alpha\dot{\alpha}}_{j}=\kappa^{\alpha}_{j}\tilde{\kappa}^{\dot{\alpha}}_{j}\) is as above
\[a_{j}(Z)=T_{j}\int_{\mathbb{C}^{*}}\frac{\mathrm{d}s}{s}\bar{\delta}^{2}( \kappa_{j}-s\lambda)e^{\mathrm{i}s[\mu\bar{\kappa}_{j}]}\,. \tag{4.15}\]
These will be used around a non-trivial background as well: in the radiative framework, the construction provides a perturbation that is asymptotic to a positive-helicity plane wave at \(\mathscr{I}_{\mathbb{C}}\). The corresponding space-time perturbation can be reconstructed by perturbing the formulae above. Considering just one of the perturbations, we perturb \(\mathsf{a}\to\mathsf{a}+\epsilon_{j}a_{j}\) so that \(\mathsf{H}\to\mathsf{H}+\epsilon_{j}\,\delta_{j}\mathsf{H}\). The perturbation of (2.24) to first order in \(\epsilon_{j}\) yields
\[\bar{\partial}\big{|}_{X}\left(\mathsf{H}^{-1}\delta_{j}\mathsf{H}\right)= \mathsf{H}^{-1}a_{j}\mathsf{H}\,, \tag{4.16}\]
where we used the definition of \(\mathsf{H}\) to eliminate the background twistor connection. Using the Green's function (4.2) and integrating against the delta function then gives
\[\mathsf{H}^{-1}\delta_{j}\mathsf{H}=\frac{\langle\iota\lambda\rangle}{\langle\iota j\rangle\langle\lambda j\rangle}\mathsf{H}_{j}^{-1}T_{j}\mathsf{H}_{j}e^{\mathrm{i}k_{j}\cdot x}\,, \tag{4.17}\]
where insertions of \(\kappa_{j}\) into angle-brackets are denoted just by \(j\). The variations of the \(J\)- and \(K\)-matrices can be obtained via (2.29b) and (2.29a) to give
\[\delta_{j}K = -\frac{\mathrm{i}}{\langle\iota j\rangle^{2}}\mathsf{H}_{j}^{-1}T_{j}\mathsf{H}_{j}e^{\mathrm{i}k_{j}\cdot x}\,, \tag{4.18a}\] \[J^{-1}\delta_{j}J = -\frac{1}{\langle\iota j\rangle\langle oj\rangle}\mathsf{H}_{j}^{-1}T_{j}\mathsf{H}_{j}e^{\mathrm{i}k_{j}\cdot x}\,. \tag{4.18b}\]
These then yield the space-time perturbation
\[a_{j\,\alpha\dot{\alpha}}(x)=\frac{\iota_{\alpha}}{\langle\iota j\rangle}\mathsf{H}_{j}^{-1}\left(\tilde{\kappa}_{j\,\dot{\alpha}}T_{j}+[g_{j\dot{\alpha}}(x),T_{j}]\right)\mathsf{H}_{j}e^{\mathrm{i}k_{j}\cdot x}\,, \tag{4.19}\]
where we have used (2.27) to define \(g_{\dot{\alpha}}(x,\lambda)\) by
\[\nabla_{\alpha\dot{\alpha}}\mathsf{H}^{-1}=\lambda_{\alpha}\mathsf{H}^{-1}g_{ \dot{\alpha}}\,, \tag{4.20}\]
and the subscript \(j\) denotes evaluation at \(\lambda=\kappa_{j}\). Similar formulae for the variation of the curvature can be obtained. Here the usual gauge-dependent undotted spinor present in the vector polarization of a spin-1 momentum eigenstate is taken to be \(\iota_{\alpha}\). Note in particular that the reconstruction of the space-time gauge field, rather than of its curvature, relies heavily on the existence of (perturbed) \(J\)- and \(K\)-matrices; in other words, it is possible only for positive-helicity perturbations that preserve the self-duality condition. For negative-helicity fields, we can at best construct the linearized field strength (4.13).
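As a simple check of (4.19), around the trivial background we have \(\mathsf{H}=1\) and \(g_{\dot{\alpha}}=0\), so that
\[a_{j\,\alpha\dot{\alpha}}=\frac{\iota_{\alpha}\tilde{\kappa}_{j\,\dot{\alpha}}}{\langle\iota j\rangle}\,T_{j}\,e^{\mathrm{i}k_{j}\cdot x}\,,\]
which is the familiar positive-helicity polarization with reference spinor \(\iota_{\alpha}\).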
Perturbations corresponding to positive-helicity gluons also modify the propagator (2.30). For a general perturbation \(\delta a\), that is, one not necessarily given by a momentum eigenstate, the variation \(\delta\mathsf{U}_{X}\) follows again from (4.17)
\[\delta\mathsf{U}_{X}(\lambda,\lambda^{\prime})=-\int_{X}\mathrm{D}\lambda^{ \prime\prime}\,\mathsf{U}_{X}(\lambda,\lambda^{\prime\prime})\;\delta a|_{X} \,\mathsf{U}_{X}(\lambda^{\prime\prime},\lambda^{\prime})\,. \tag{4.21}\]
This variation can be iterated \(n\) times to obtain the colour-ordered \(n\)th perturbation of the propagator on \(X\) as
\[\mathsf{U}_{X}(\lambda,\lambda^{\prime})=\frac{1}{2\pi\mathrm{i}}\sum_{n=0}^{\infty}\left(\frac{-1}{2\pi\mathrm{i}}\right)^{n}\mathsf{H}(x,\lambda)\int_{X^{n}}\mathrm{D}\lambda_{1}\ldots\mathrm{D}\lambda_{n}\frac{\mathsf{H}_{1}^{-1}\delta a_{1}\mathsf{H}_{1}\ldots\mathsf{H}_{n}^{-1}\delta a_{n}\mathsf{H}_{n}}{\left\langle\lambda\lambda_{1}\right\rangle\left\langle\lambda_{1}\lambda_{2}\right\rangle\ldots\left\langle\lambda_{n}\lambda^{\prime}\right\rangle}\mathsf{H}^{-1}(x,\lambda^{\prime})\,, \tag{4.22}\]
where \(\delta a_{j}\coloneqq\left.\delta a\right|_{X}(x,\lambda_{j})\). The frames in (4.22) are the frames for the background \(\mathsf{a}\), and as before each term is evaluated on the line \(X\). If the perturbations are taken to be momentum eigenstates, the integrations can be directly performed against the delta functions.
\(\mathcal{N}=4\) super-momentum eigenstates. The Penrose transform provides momentum eigenstates for different helicities as well. In the following, we will consider super form factors in \(\mathcal{N}=4\) where we super-symmetrize the external states (but not the local operator), so we arrange the possible external states on space-time in terms of Nair's super-field [6], i.e. we consider the external state
\[\Phi_{j}=g_{j}^{+}+\psi_{j\,a}^{+}\eta_{j}^{a}+\frac{1}{2}\phi_{j\,ab}\eta_{j} ^{a}\eta_{j}^{b}+\frac{1}{3!}\varepsilon_{abcd}\psi_{j}^{-}{}^{a}\eta_{j}^{b} \eta_{j}^{c}\eta_{j}^{d}+g_{j}^{-}\eta_{j}^{1}\eta_{j}^{2}\eta_{j}^{3}\eta_{j} ^{4}\,, \tag{4.23}\]
with super-momentum \(k_{j\,\alpha A}\coloneqq\kappa_{j\,\alpha}\tilde{\kappa}_{j\,A}=(\kappa_{j\, \alpha}\tilde{\kappa}_{j\,\dot{\alpha}},\kappa_{j\,\alpha}\eta_{j\,a})\). The associated twistor representative around the flat background is
\[\underline{a}_{j}(Z)=T_{j}\int_{\mathbb{C}^{*}}\frac{\mathrm{d}s}{s}\bar{\delta}^{2}(\kappa_{j}-s\lambda)e^{\mathrm{i}s[\mu\tilde{\kappa}_{j}]+\mathrm{i}s\{\chi\eta_{j}\}}\,, \tag{4.24}\]
and since we are considering horizontal background fields on twistor space, such a representative can be used around these backgrounds as well. The corresponding linear perturbation on chiral superspace is
\[\underline{a}_{j\,\alpha A}(x,\theta)=\frac{\iota_{\alpha}}{\langle\iota j\rangle}\underline{\mathsf{H}}_{j}^{-1}(\tilde{\kappa}_{j\,A}T_{j}+[\underline{g}_{j\,A}(x,\theta),T_{j}])\underline{\mathsf{H}}_{j}\,e^{\mathrm{i}k_{j}\cdot x+\mathrm{i}\kappa_{j\,\beta}\eta_{j\,a}\theta^{a\beta}}\,, \tag{4.25}\]
where \(\underline{g}_{A}\) is the natural supersymmetrization of \(g_{\dot{\alpha}}\). Notice in particular that super-momentum eigenstates will perturb the propagator on the line \(X\) as well, the variation being given by the supersymmetrization of (4.22).
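For orientation, the Grassmann variables \(\eta_{j}^{a}\) in (4.23) book-keep the helicity states in the usual way: with the convention \(\int\mathrm{d}^{4}\eta_{j}\,\eta_{j}^{1}\eta_{j}^{2}\eta_{j}^{3}\eta_{j}^{4}=1\), the projection
\[\int\mathrm{d}^{4}\eta_{j}\ \Phi_{j}=g_{j}^{-}\]
extracts the negative-helicity gluon from Nair's super-field, while setting \(\eta_{j}=0\) returns \(g_{j}^{+}\); intermediate powers of \(\eta_{j}\) select the fermions and scalars.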
## 5 MHV (super) form factors
Given a local operator \(\mathscr{O}(x)\), its form factor \(\mathscr{F}_{\mathscr{O}}=\mathscr{F}_{\mathscr{O}}(1^{h_{1}},\ldots,n^{h_{n}};q)\) in the presence of \(n\) external gluons is defined as the Fourier transform of the matrix element of \(\mathscr{O}(x)\) between the vacuum and the \(n\)-gluon multiparticle state
\[\mathscr{F}_{\mathscr{O}}(1,\ldots,n;q)\coloneqq\int_{\mathbb{M}}\mathrm{d}^{4}x\,e^{-\mathrm{i}q\cdot x}\langle 1^{h_{1}},\ldots,n^{h_{n}}|\mathscr{O}(x)|0\rangle\,, \tag{5.1}\]
where we have implicitly taken our gluons to be outgoing plane waves. Since we focus on MHV form factors, the helicities are almost all positive, the number of negative-helicity gluons being equal to the number of \(B\) fields appearing in \(\mathscr{O}\).
### MHV form factors and Cartan backgrounds
If \(\mathscr{O}\) is a composite operator depending only on \(\tilde{F}_{\dot{\alpha}\dot{\beta}}\) but not on \(B_{\alpha\beta}\), it's straightforward to show that its tree-level MHV form factor can be readily computed in self-dual Yang-Mills [23] by simply putting it on-shell [110]. More generally, in the MHV sector, we can also handle form factors involving \(B_{\alpha\beta}\) by treating these ASD insertions as perturbations away from the self-dual sector of \(S_{\mathrm{SDYM}}\) below. The generating functional for such a form factor is the path integral with action
\[S_{\mathrm{SDYM}}+\int_{\mathbb{M}}\mathrm{d}^{4}x\,\mathscr{J}\mathscr{O}\,, \tag{5.2}\]
\(S_{\mathrm{SDYM}}\) being the action for self-dual Yang-Mills [23]
\[S_{\mathrm{SDYM}}=\int_{\mathbb{M}}\mathrm{d}^{4}x\,\operatorname{tr}B_{ \alpha\beta}F^{\alpha\beta}\,, \tag{5.3}\]
and \(\mathscr{J}\) being a source for \(\mathscr{O}\). At tree-level, the generating functional reduces to (the exponential of) the on-shell action in the presence of the source, but even for non-trivial \(\mathscr{J}\) the existence of the \(J\)- and \(K\)-matrices is not affected, as long as \(\mathscr{O}\) is a polynomial in \(\tilde{F}_{\dot{\alpha}\dot{\beta}}\) and its derivatives. The argument for the generating functional still holds for operators involving the \(B\) field, because in the MHV sector, the number of negative-helicity gluons is then precisely the number of \(B_{\alpha\beta}\)'s appearing in the operator under study, so for non-exceptional kinematical configurations we can treat each \(B_{\alpha\beta}\) as a linear perturbation away from self-duality. Similar considerations hold for form factors in the self-dual sector of \(\mathcal{N}=4\) SYM. From the perspective of twisted holography [32], this procedure can be
interpreted as an explicit realization [81] of the correspondence between local operators in four dimensions and conformal blocks of a two-dimensional chiral algebra living on the celestial sphere.
If the form factor is computed in the MHV sector, the set of external positive-helicity gluons defines a self-dual background obtained as the coherent state whose data is a sum of plane waves at \(\mathscr{I}_{\mathbb{C}}^{+}\); thus the on-shell expression of \(\mathscr{O}\) around a general self-dual radiative background is the generating functional for the MHV form factor of \(\mathscr{O}\). In practice, \(\mathscr{F}_{\mathscr{O}}(1^{+},\ldots,n^{+})\) can be computed by considering the bulk gauge field that reduces to \(\epsilon_{1}a_{1}+\ldots+\epsilon_{n}a_{n}\) at \(\mathscr{I}_{\mathbb{C}}\), where \(\{\epsilon_{j}\}\) are formal parameters, and extracting the term in \(\mathscr{O}\) proportional to \(\epsilon_{1}\ldots\epsilon_{n}\); see also [111, 112] for a related approach. The twistor theory developed in the previous sections reduces the task of finding such coherent states from the non-linear equations of motion on \(\mathbb{M}\) with boundary condition at \(\mathscr{I}_{\mathbb{C}}^{+}\) to a linear problem on \(\mathbb{PT}\), leading to the construction of all-multiplicity formulae for the form factor from integrability rather than perturbation theory. The same strategy can be used for form factors around _any_ self-dual radiative background, as long as we include the twistor data for the radiative background together with the momentum eigenstate representatives, and treat the former non-perturbatively.
For the most part, we restrict ourselves to backgrounds valued in a Cartan subalgebra \(\mathfrak{h}\subseteq\mathfrak{g}\). Although our methods naturally yield concrete formulae for more general backgrounds, this restriction leads to simpler formulae in which the background is encoded into abelian factors obtainable by quadratures, i.e., direct integral formulae. These formulae still decompose an observable into colour-ordered components, with the additional information of a set of charges \(e^{i}_{j}\) of the positive-helicity gluons relative to the background. These are determined by the relation \([t^{i},T_{j}]=e^{i}_{j}T_{j}\), where \(\{t^{i}\}\) is a basis of \(\mathfrak{h}\) and \(T_{j}\) the colour of the \(j\)-th gluon. If we compute observables around non-Cartan backgrounds, we need to introduce non-abelian \(\mathsf{H}\) factors associated with the background; although determined as above, these are no longer expressible by quadratures and further interleave the colour-ordered expressions.
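As a concrete illustration (ours, with a hypothetical choice of colour data): take \(\mathfrak{g}=\mathfrak{sl}_{N}\) with the \(t^{i}\) diagonal, and let the \(j\)-th gluon carry the elementary root generator \(T_{j}=E_{mn}\). Then

\[[t^{i},E_{mn}]=\big((t^{i})_{mm}-(t^{i})_{nn}\big)E_{mn}\,,\qquad\text{so that}\qquad e^{i}_{j}=(t^{i})_{mm}-(t^{i})_{nn}\,,\]

i.e. the charges are simply differences of eigenvalues of the Cartan generators, and they vanish for gluons whose colour commutes with the background.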
When the background is valued in the Cartan subalgebra, we can express the holomorphic frame \(\mathsf{H}=e^{-g}\) explicitly in terms of the Cartan-valued function \(g\)[2, 77]
\[g(x,\lambda)=\frac{1}{2\pi\mathrm{i}}\int_{X}\frac{\mathrm{D}\lambda^{\prime}}{\langle\lambda\lambda^{\prime}\rangle}\frac{\langle\iota\lambda\rangle}{\langle\iota\lambda^{\prime}\rangle}\ \mathsf{a}|_{X}\,, \tag{5.4}\]
and further use it to provide integral formulae for the background coupled fields. For example, the \(J\)- and \(K\)-matrices are given by
\[\log J = -\frac{1}{2\pi\mathrm{i}}\int_{X}\frac{\mathrm{D}\lambda^{\prime}}{\langle\iota\lambda^{\prime}\rangle\langle o\lambda^{\prime}\rangle}\ \mathsf{a}|_{X}\,, \tag{5.5a}\] \[K = \frac{1}{2\pi}\int_{X}\frac{\mathrm{D}\lambda^{\prime}}{\langle\iota\lambda^{\prime}\rangle^{2}}\ \mathsf{a}|_{X}\,. \tag{5.5b}\]
Similarly, one can further simplify the field strength for linear perturbations around a Cartan-valued background. Negative-helicity gluons have a linearized ASD field strength
\[b_{j\,\alpha\beta}(x)=\kappa_{j\alpha}\kappa_{j\beta}T_{j}\mathrm{e}^{\mathrm{i}k_{j}\cdot x+e_{j}g(x,\kappa_{j})}\,, \tag{5.6}\]
whilst positive-helicity gluons have linearized potential
\[a_{j\,\alpha\dot{\alpha}}(x)=\frac{\iota_{\alpha}}{\langle\iota j\rangle}(\tilde{\kappa}_{j\dot{\alpha}}+e_{j}g_{\dot{\alpha}}(x,\kappa_{j}))T_{j}e^{\mathrm{i}k_{j}\cdot x+e_{j}g(x,\kappa_{j})}\,. \tag{5.7}\]
In particular, there is a factorization between the colour and kinematical degrees of freedom (as opposed to the more general structure in (4.19)): the background dresses the dotted component of the momentum as
\[\kappa_{j\,\alpha}\tilde{K}_{j\,\dot{\alpha}}(x)=\kappa_{j\,\alpha}(\tilde{ \kappa}_{j\,\dot{\alpha}}+e_{j}g_{\dot{\alpha}}(x,\kappa_{j}))\,, \tag{5.8}\]
while leaving invariant the undotted component. This is expected as a consequence of self-duality. The corresponding linearized SD field strength for a positive-helicity gluon is
\[\tilde{f}_{j\,\dot{\alpha}\dot{\beta}}(x)=\tilde{K}_{j\,\dot{\alpha}}(x)\tilde{K}_{j\,\dot{\beta}}(x)T_{j}e^{\mathrm{i}k_{j}\cdot x+e_{j}g(x,\kappa_{j})}-\frac{\mathrm{i}\,e_{j}}{\langle\iota j\rangle}\iota^{\alpha}\partial_{\alpha\dot{\alpha}}g_{\dot{\beta}}(x,\kappa_{j})T_{j}e^{\mathrm{i}k_{j}\cdot x+e_{j}g(x,\kappa_{j})}\,. \tag{5.9}\]
### Form factors for powers of \(B\) and \(\tilde{F}\)
We first consider the form factor for \(\,\mathrm{tr}\,B^{2}\). At \(q=0\), this is the interaction term in the Chalmers-Siegel action that extends the SDYM action to the full YM action. Its generating functional is
\[\int_{\mathbb{M}}\mathrm{d}^{4}x\,e^{\mathrm{i}(k_{i}+k_{j}-q)\cdot x+e_{i}g( x,\kappa_{i})+e_{j}g(x,\kappa_{j})}\langle ij\rangle^{4}\,\mathrm{tr}\,( \mathsf{U}_{X}(\kappa_{i},\kappa_{j})T_{i}\mathsf{U}_{X}(\kappa_{j},\kappa_{ i})T_{j})\,, \tag{5.10}\]
where we assumed that the negative-helicity gluons have momenta \(k_{i},k_{j}\) and colours \(T_{i},T_{j}\). Since the addition of \(\,\mathrm{tr}\,B^{2}\) to \(S_{\mathrm{SDYM}}\) yields an action perturbatively equivalent to the ordinary full Yang-Mills action, the \(q\to 0\) limit of the generating functional is interpreted as the amplitude for the helicity flip of a single negative-helicity gluon traversing an SD background [113]. If the background connection on twistor space is further taken to be of the form \(\mathfrak{a}+\epsilon_{1}a_{1}+\ldots+\epsilon_{n-2}a_{n-2}\), where the \(a_{i}\) are twistor representatives for positive-helicity momentum eigenstates, the 2-point amplitude expands into the \(n\)-point MHV amplitude around the background defined by \(\mathfrak{a}\) [77]; in particular, the Parke-Taylor denominator arises from the perturbation (4.22) of the propagator on the line \(X\), the integrals over \(X\) being saturated by the holomorphic \(\delta\) functions of the momentum eigenstates. More generally, the same expansion allows us to obtain the \(\,\mathrm{tr}\,B^{2}\) form factor away from \(q=0\)
\[\mathscr{F}_{\,\mathrm{tr}\,B^{2}}(1^{+},\ldots,i^{-},\ldots,j^{-},\ldots n;q )=\frac{\langle ij\rangle^{4}}{\langle 12\rangle\ldots\langle n1\rangle}\int_{ \mathbb{M}}\mathrm{d}^{4}x\,e^{\mathrm{i}(Q-q)\cdot x+\sum_{j}e_{j}g(x,\kappa_ {j})}\,. \tag{5.11}\]
The generalization to the form factor of an arbitrary power \(\,\mathrm{tr}\,B^{k}\coloneqq\,\mathrm{tr}\,B_{\alpha_{1}}^{\phantom{\alpha_ {1}}\alpha_{2}}\ldots B_{\alpha_{k}}^{\phantom{\alpha_{k}}\alpha_{1}}\) is straightforward
\[\mathscr{F}_{\,\mathrm{tr}\,B^{k}}(1^{+},\ldots,i_{1}^{-}\ldots,i_{k}^{-}, \ldots,n^{+};q)=\frac{(\langle i_{1}i_{2}\rangle\ldots\langle i_{k}i_{1} \rangle)^{2}}{\langle 12\rangle\ldots\langle n1\rangle}\int_{\mathbb{M}}\mathrm{d}^{4}x \,e^{\mathrm{i}(Q-q)\cdot x+\sum_{j}e_{j}g(x,\kappa_{j})}\,. \tag{5.12}\]
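As a quick consistency check (ours): for \(k=2\), antisymmetry of the spinor bracket gives

\[(\langle i_{1}i_{2}\rangle\langle i_{2}i_{1}\rangle)^{2}=\big(-\langle i_{1}i_{2}\rangle^{2}\big)^{2}=\langle i_{1}i_{2}\rangle^{4}\,,\]

so (5.12) indeed reduces to (5.11) upon setting \((i_{1},i_{2})=(i,j)\).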
For generic operators containing \(\tilde{F}_{\dot{\alpha}\dot{\beta}}\) as well, the resulting formulae can still be considerably involved, so in the present section we first consider the form factor for \(\,\mathrm{tr}\,\tilde{F}_{\dot{\alpha}\dot{\beta}}\tilde{F}^{\dot{\alpha}\dot{ \beta}}\). Around the flat background, the tree-level colour-ordered MHV form factor is [85]
\[\mathscr{F}_{\,\mathrm{tr}\,\tilde{F}^{2}}(1^{+},\ldots,n^{+};q)=\frac{(q^{2}) ^{2}}{\langle 12\rangle\ldots\langle n1\rangle}\delta^{4}(Q-q)\,, \tag{5.13}\]
where \(Q=k_{1}+\ldots+k_{n}\) is the sum of the external gluon momenta. If one interprets the form factor as the amplitude for a massive complex scalar chirally coupled to the SD field strength, this beautiful formula can be proved by Berends-Giele recursion [85]; alternatively, the parity-conjugate form factor \(\mathscr{F}_{\,\mathrm{tr}\,B^{2}}(1^{-},\ldots,n^{-};q)\) can be computed as the "maximally-non-MHV" form factor using the MHV formalism [44]. It's remarkable that such a compact formula exists at all for a maximally googly form factor. We now show that the simplicity of this form factor is a consequence of the existence of the \(K\)-matrix. Expressing the SD field strength in terms of the \(K\)-matrix and integrating by parts twice, it is straightforward to rewrite the Fourier transform of \(\,\mathrm{tr}\,\tilde{F}^{2}\) as
\[\int_{\mathbb{M}}\mathrm{d}^{4}x\,e^{-\mathrm{i}q\cdot x}\iota_{\alpha}q^{ \alpha\dot{\alpha}}\iota_{\beta}q^{\beta\dot{\beta}}\,\mathrm{tr}\,\mathrm{d} _{\dot{\alpha}}K\,\mathrm{d}_{\dot{\beta}}K\,, \tag{5.14}\]
and the lifting of this expression to twistor space reads
\[\int_{\mathbb{M}}\mathrm{d}^{4}x\,e^{-\mathrm{i}q\cdot x}\iota_{\alpha}q^{ \alpha\dot{\alpha}}\iota_{\beta}q^{\beta\dot{\beta}}\quad\int_{X^{2}}\frac{ \mathrm{D}\lambda_{1}\mathrm{D}\lambda_{2}\,\langle\lambda_{1}\lambda_{2} \rangle^{2}}{\langle\iota\lambda_{1}\rangle\langle\iota\lambda_{2}\rangle}\, \mathrm{tr}\,\left(\left.\frac{\partial\mathfrak{a}_{1}}{\partial\mu_{1}^{ \dot{\alpha}}}\right|_{X}\,\mathsf{U}_{12}\,\left.\frac{\partial\mathfrak{a}_ {2}}{\partial\mu_{2}^{\dot{\beta}}}\right|_{X}\,\mathsf{U}_{21}\right)\,, \tag{5.15}\]
where we defined \(\mathsf{U}_{ij}\coloneqq\mathsf{U}_{X}(\lambda_{i},\lambda_{j})\) for the propagator. Since \(\,\mathrm{tr}\,\tilde{F}^{2}\) coincides on-shell with the topological term \(\,\mathrm{tr}\,F\wedge F\), one naively expects to write \(\,\mathrm{tr}\,\tilde{F}^{2}\) as the divergence of the Chern-Simons current, and indeed one can check that the latter simply takes the form
\[\mathcal{J}^{\alpha\dot{\alpha}}=\iota^{\alpha}\,\mathrm{tr}\,(\mathrm{d}_{ \dot{\beta}}K\,\mathrm{d}^{\dot{\alpha}}\mathrm{d}^{\dot{\beta}}K)\,. \tag{5.16}\]
From this relation, it's clear that expressing the gauge field via the \(K\)-matrix allows one to extract a further total derivative from \(\mathcal{J}^{\alpha\dot{\alpha}}\). Notice also that (5.15) seemingly depends on the gauge spinor \(\iota_{\alpha}\), but it is actually gauge invariant because it coincides with the generating functional computed with (4.6); this latter expression, although gauge invariant, is more involved and leads to less compact formulae for the form factors.
We now evaluate the generating functional (5.15) on a perturbed background, \(\mathfrak{a}\to\mathfrak{a}+a\). The perturbation \(a\) is assumed to be the sum of \(n\) momentum eigenstates (4.15), and the term in (5.15) containing precisely \(n\) distinct momentum eigenstates is the form factor. Expanding the twistor propagators with the aid of (4.22) and decomposing the form factor into colour-ordered terms, we obtain the expression
\[\mathscr{F}= \frac{1}{\langle 12\rangle\ldots\langle n1\rangle}\int_{\mathbb{M}}\mathrm{d}^{4}x\,e^{\mathrm{i}(Q-q)\cdot x+\sum_{j}e_{j}g(x,\kappa_{j})}\iota_{\alpha}q^{\alpha\dot{\alpha}}\iota_{\beta}q^{\beta\dot{\beta}}\sum_{i,j}\Biggl\{\frac{\langle ij\rangle^{2}}{\langle\iota i\rangle\langle\iota j\rangle}\tilde{\kappa}_{i\dot{\alpha}}\tilde{\kappa}_{j\dot{\beta}} \tag{5.17}\] \[-2\int_{X}\frac{\mathrm{D}\lambda}{2\pi\mathrm{i}}\frac{\langle i-1,i\rangle\langle\lambda j\rangle^{2}\tilde{\kappa}_{j\dot{\beta}}}{\langle i-1,\lambda\rangle\langle\lambda i\rangle\langle\iota\lambda\rangle\langle\iota j\rangle}\,\left.\frac{\partial\mathfrak{a}}{\partial\mu^{\dot{\alpha}}}\right|_{X}\] \[+\int_{X^{2}}\frac{\mathrm{D}\lambda\mathrm{D}\lambda^{\prime}}{(2\pi\mathrm{i})^{2}}\frac{\langle i-1,i\rangle\langle j-1,j\rangle\langle\lambda\lambda^{\prime}\rangle^{2}}{\langle i-1,\lambda\rangle\langle\lambda i\rangle\langle j-1,\lambda^{\prime}\rangle\langle\lambda^{\prime}j\rangle\langle\iota\lambda\rangle\langle\iota\lambda^{\prime}\rangle}\,\left.\frac{\partial\mathfrak{a}}{\partial\mu^{\dot{\alpha}}}\right|_{X}\,\left.\frac{\partial\mathfrak{a}}{\partial\mu^{\prime}{}^{\dot{\beta}}}\right|_{X}\Biggr\}\,.\]
In principle, the background twistor connection can give derivative contributions to the form factor, namely the second and third lines in (5.17). However, one can use the integral
\[\int_{X}\frac{\mathrm{D}\lambda}{2\pi\mathrm{i}}\frac{\lambda_{\alpha}\lambda_{\beta}}{\langle\iota\lambda\rangle\langle\ell-1,\lambda\rangle\langle\lambda\ell\rangle}\left.\frac{\partial\mathbf{a}}{\partial\mu^{\dot{\alpha}}}\right|_{X}=-\mathrm{i}\left(\frac{\kappa_{\ell-1\,\alpha}\kappa_{\ell-1\,\beta}}{\langle\ell-1,\iota\rangle\langle\ell-1,\ell\rangle}g_{\dot{\alpha}}(x,\kappa_{\ell-1})+\frac{\kappa_{\ell\,\alpha}\kappa_{\ell\,\beta}}{\langle\ell\iota\rangle\langle\ell,\ell-1\rangle}g_{\dot{\alpha}}(x,\kappa_{\ell})+\frac{\iota_{\alpha}\iota_{\beta}}{\langle\iota,\ell-1\rangle\langle\iota\ell\rangle}g_{\dot{\alpha}}(x,\iota)\right)\,, \tag{5.18}\]
to realize that the terms containing derivatives of the background actually give a vanishing contribution to the form factor. The final expression for the colour-ordered form factor around a Cartan-valued self-dual radiative background is
\[\mathscr{F}_{\,\mathrm{tr}\,\tilde{F}^{2}}(1^{+},\ldots,n^{+};q)=\frac{(q\cdot Q)^{2}}{\langle 12\rangle\ldots\langle n1\rangle}\int_{\mathbb{M}}\mathrm{d}^{4}x\,e^{\mathrm{i}(Q-q)\cdot x+\sum_{j}e_{j}g(x,\kappa_{j})}\,, \tag{5.19}\]
see Appendix B for more details on the derivation. As previously anticipated, the dependence on the gauge spinor \(\iota_{\alpha}\) dropped out and the result is fully gauge-invariant. Around a non-trivial background, we expect translations to be broken, and therefore the "momentum-conserving" \(\delta\)-function in (5.13) is replaced by the residual integral over space-time in (5.19). This integral cannot be performed analytically for a generic background, but for specific, highly symmetric examples, it is possible to further simplify it. For example, three of the four integrals can be evaluated around a self-dual plane wave background [71, 73, 114], and we recover three momentum-conserving \(\delta\)-functions on the directions along which the gauge field is constant. Furthermore, we note that for the form factor around the trivial background, \(q^{2}=q\cdot Q\) on the support of the momentum-conserving \(\delta\) function, so that the form factor around a non-trivial background can be obtained simply by replacing the \(\delta\) function with the space-time integral in (5.19) while leaving the prefactor containing the spinor brackets unaffected.
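To make the flat-background limit fully explicit (our unpacking): setting \(g=0\) in (5.19), the space-time integral produces \(\delta^{4}(Q-q)\) up to normalization, and on its support

\[(q\cdot Q)^{2}\big|_{Q=q}=(q^{2})^{2}\,,\]

so that (5.19) collapses to (5.13), as it must.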
A similar analysis can be set up for form factors of other polynomials in \(\tilde{F}_{\dot{\alpha}\dot{\beta}}\) and its derivatives. Let us now consider the cubic operator
\[\mathrm{tr}\,\tilde{F}^{3}\coloneqq\,\mathrm{tr}\,\tilde{F}_{\dot{\alpha}}^{\ \dot{\beta}}\tilde{F}_{\dot{\beta}}^{\ \dot{\gamma}}\tilde{F}_{\dot{\gamma}}^{\ \dot{\alpha}}\,. \tag{5.20}\]
Note that this is the unique cubic operator not involving derivatives of the SD field strength, up to a sign. The lifting of this operator to twistor space cannot be simplified using the \(K\)-matrix anymore, but it can be straightforwardly obtained using (4.6), and again the generating functional is the Fourier transform of this lifting. Since the expression of this lifting is rather involved, we prefer to display it in Appendix B. The perturbative expansion proceeds as before: at arbitrary multiplicity, the tree-level, colour-ordered MHV
form factor around a self-dual, Cartan-valued, radiative background is
\[\begin{split}\mathscr{F}_{\,\mathrm{tr}\,\tilde{F}^{3}}=&\frac{1}{\langle 12\rangle\ldots\langle n1\rangle}\int_{\mathbb{M}}\mathrm{d}^{4}x\,e^{\mathrm{i}(Q-q)\cdot x+\sum_{j}e_{j}g(x,\kappa_{j})}\times\\ &\times\left(\sum_{i,j,k}\langle ij\rangle\langle jk\rangle\langle ki\rangle[ij][jk][ki]\right.\\ &+3\sum_{i,j,k,\ell}\langle jk\rangle\langle k\ell\rangle\langle\ell i\rangle[k\ell]([\ell i][jk]+[ik][\ell j])\\ &+3\sum_{i,j,k,\ell,m}\langle jk\rangle\langle\ell m\rangle\langle mi\rangle\Big{(}[mi]([jk][\ell m]+[j\ell][km])+(i\leftrightarrow j)\Big{)}\\ &+\sum_{i,j,k,\ell,m,n}\langle jk\rangle\langle\ell m\rangle\langle ni\rangle\Big{(}[jk]([ni][\ell m]+[mi][\ell n])+(k\leftrightarrow\ell)\\ &+(i\leftrightarrow j)+(i\leftrightarrow j,k\leftrightarrow\ell)\Big{)}\Bigg{)}\,.\end{split} \tag{5.21}\]
In particular, the colour-ordered formulae obtained with our generating functional match the first few expressions around the trivial background, namely the minimal form factor
\[\mathscr{F}_{\,\mathrm{tr}\,\tilde{F}^{3}}(1^{+},2^{+},3^{+};q)=[12][23][31]\, \delta^{4}(Q-q)\,, \tag{5.22}\]
and the 4-point form factor [49, 56]
\[\mathscr{F}_{\,\mathrm{tr}\,\tilde{F}^{3}}(1^{+},2^{+},3^{+},4^{+};q)=\frac{[ 12][23][34][41]}{\langle 12\rangle[21]}\left(1+\frac{[31][4|q|3\rangle}{ \langle 23\rangle[32][41]}\right)\delta^{4}(Q-q)+\mathrm{cyclic}\,. \tag{5.23}\]
### Generic MHV form factors
The most remarkable feature of the expressions (5.12), (5.19), and (5.21) is that the form factor around a non-trivial background is obtained by a simple dressing of the form factor around the trivial background, namely one only has to replace the \(\delta\) function with the space-time integral in the last line of (5.21), while keeping the kinematical prefactor intact. This property is a generic feature of any form factor of a composite operator, as the lifting of any polynomial in \(\tilde{F}_{\dot{\alpha}\dot{\beta}}\) and its derivatives4 will be a sum of products of integrals over \(X\) of terms of the form \(\lambda_{i}^{\alpha_{1}}\ldots\lambda_{i}^{\alpha_{s}}\partial_{\mu_{i}^{\dot{\alpha}_{1}}}\ldots\partial_{\mu_{i}^{\dot{\alpha}_{s}}}\mathfrak{a}\), intertwined by propagators \(\mathsf{U}_{X}(\lambda_{i},\lambda_{j})\) between adjacent positions on the sphere \(X\) and with some contraction of the spinor indices. Once we expand such an expression in terms of momentum eigenstates, the background connection can contribute to the form factor in two distinct ways: it can either be present only in the holomorphic frames \(\mathsf{H}(x,\lambda_{i})\), or it can potentially give a contribution when present in a \(\mu^{\dot{\alpha}}\) derivative. The first type of contribution gives a factor of \(\exp(e_{j}g(x,\kappa_{j}))\) when the background acts by conjugation on the \(j\)th external gluon. Conversely, the second type of contribution is proportional to (possibly a spacetime derivative of)
Footnote 4: The possible presence of \(B_{\alpha\beta}\) fields does not bring in any additional issue, as we are working in the MHV sector.
\[\frac{1}{2\pi\mathrm{i}}\int_{X}\frac{\mathrm{D}\lambda}{\langle j-1,\lambda \rangle\langle\lambda j\rangle}\lambda_{\alpha}\left.\frac{\partial\mathfrak{ a}}{\partial\mu^{\dot{\alpha}}}\right|_{X}\,, \tag{5.24}\]
when we consider a colour-ordered form factor. Such a term must be inserted in every possible position inside the perturbative expansion, and for a colour-ordered form factor this means that we must sum over \(j\). Moreover, each of these contributions comes with a partial Parke-Taylor denominator \(1/(\langle 12\rangle\ldots\langle\widehat{j-1,j}\rangle\ldots\langle n1\rangle)\) from which the factor \(1/\langle j-1,j\rangle\) is removed. The integral (5.24) can be evaluated for generic external momenta by considering \(\kappa_{j-1\,\alpha}\) and \(\kappa_{j\,\alpha}\) as a basis of undotted spinors, and it's equal to
\[\frac{1}{2\pi\mathrm{i}}\int_{X}\frac{\mathrm{D}\lambda}{\langle j-1,\lambda \rangle\langle\lambda j\rangle}\lambda_{\alpha}\left.\frac{\partial\mathsf{a}} {\partial\mu^{\dot{\alpha}}}\right|_{X}=\mathrm{i}\frac{\kappa_{j-1,\alpha}g_ {\dot{\alpha}}(x,\kappa_{j-1})-\kappa_{j\,\alpha}g_{\dot{\alpha}}(x,\kappa_{j} )}{\langle j-1,j\rangle}\,. \tag{5.25}\]
The denominator \(\langle j-1,j\rangle\) is precisely the missing factor needed to reconstruct the Parke-Taylor denominator, so that the residual sum over \(j\) is telescopic
\[\mathrm{i}\sum_{j=1}^{n}(\kappa_{j-1,\alpha}g_{\dot{\alpha}}(x,\kappa_{j-1})- \kappa_{j\,\alpha}g_{\dot{\alpha}}(x,\kappa_{j}))=0\,. \tag{5.26}\]
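Explicitly (our unpacking): writing \(f_{j}\coloneqq\kappa_{j\,\alpha}g_{\dot{\alpha}}(x,\kappa_{j})\) with the cyclic identification \(f_{0}\equiv f_{n}\), the sum collapses pairwise,

\[\sum_{j=1}^{n}\big(f_{j-1}-f_{j}\big)=f_{0}-f_{n}=0\,,\]

since every \(f_{j}\) appears once with each sign.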
Then the only non-vanishing contribution to the form factor around a non-trivial background comes from terms where the background is present only in the holomorphic frames, and this means that the desired form factor coincides with the one around the trivial background, once we replace the \(\delta\) function with the integral
\[\int_{\mathbb{M}}\mathrm{d}^{4}x\,e^{\mathrm{i}(Q-q)\cdot x+\sum_{j}e_{j}g(x, \kappa_{j})}\,. \tag{5.27}\]
### MHV super form factors in \(\mathcal{N}=4\) SYM
The results from the previous section can be extended to super form factors in \(\mathcal{N}=4\) super-Yang-Mills as well, starting with the space-time expression of a composite operator, lifting it to twistor space, and expanding around the desired background using the on-shell states (4.24). As in the pure Yang-Mills case, any MHV super form factor around a Cartan-valued gluonic background is obtained by replacing the momentum-conserving \(\delta\) function with the by-now usual background-dependent integral. The simplest case is the super form factor for \(\frac{1}{2}\operatorname{tr}\phi_{ab}\phi_{ab}\), which can be computed starting from the supersymmetrized \(K\)-matrix, that is, using the lifting formula
\[\frac{1}{2}\operatorname{tr}\phi_{ab}\phi_{ab}=\frac{1}{2}\int\mathrm{d}^{8}\theta\,\iota_{\alpha}\iota_{\beta}\prod_{\begin{subarray}{c}\gamma\neq\alpha,\beta\\ c\neq a,b\end{subarray}}\theta^{\gamma c}\int_{X^{2}}\frac{\mathrm{D}\lambda_{1}\,\mathrm{D}\lambda_{2}\,\langle\lambda_{1}\lambda_{2}\rangle^{2}}{\langle\iota\lambda_{1}\rangle\langle\iota\lambda_{2}\rangle}\operatorname{tr}\left(\left.\frac{\partial\mathsf{a}_{1}}{\partial\chi_{1}^{a}}\right|_{X}\mathsf{U}_{12}\,\left.\frac{\partial\mathsf{a}_{2}}{\partial\chi_{2}^{b}}\right|_{X}\mathsf{U}_{21}\right)\,, \tag{5.28}\]
the fermionic integrals being necessary to extract the lowest component of \(\frac{1}{2}\operatorname{tr}\underline{\phi}_{ab}\underline{\phi}_{ab}\). The corresponding MHV super form factor around a non-trivial background is given by
\[\mathscr{S}\mathscr{F}_{\frac{1}{2}\operatorname{tr}\phi^{2}}(1,\ldots,n;q)= \frac{1}{4}\frac{(\mathcal{Q}^{2})^{2}}{\langle 12\rangle\ldots\langle n1 \rangle}\int_{\mathbb{M}}\mathrm{d}^{4}x\,e^{\mathrm{i}(Q-q)\cdot x+\sum_{j}e _{j}g(x,\kappa_{j})}\,, \tag{5.29}\]
where \(\mathcal{Q}_{\alpha a}=\kappa_{1\,\alpha}\eta_{1\,a}+\ldots+\kappa_{n\,\alpha} \eta_{n\,a}\), as we show in Appendix B. In the flat-background limit, we recover the standard expression for the super form factor [44]. For generic operators around purely gluonic backgrounds, the same argument of Section 5.3 holds and
the corresponding form factor can be obtained by dressing the form factor around the trivial background with (5.27). Conversely, for backgrounds including fermions and scalars, the grading in the fermionic coordinates prevents having results as simple as in the purely gluonic case,5 but we can still obtain reasonably compact formulae. For example, one can consider fermionic space-time backgrounds: introducing the function
Footnote 5: Presumably, one could obtain interesting results by considering super form factors of supersymmetrized operators, i.e. Fourier transforms over the entire chiral superspace of matrix elements of a local, composite operator of superfields between the vacuum and an on-shell superstate.
\[g_{a}^{(f)}(x,\lambda)=\frac{1}{2\pi{\rm i}}\int_{X}\frac{{\rm D}\lambda^{\prime}}{\langle\lambda\lambda^{\prime}\rangle}\ {\mathfrak{a}}_{a}|_{X}\,, \tag{5.30}\]
the super form factor in the presence of a mixed gluonic and fermionic background is
\[\mathscr{S}\mathscr{F}_{\frac{1}{2}\,{\rm tr}\,\phi^{2}}(1,\ldots,n;q)=\frac{1}{4\langle 12\rangle\ldots\langle n1\rangle}\int_{\mathbb{M}}{\rm d}^{4}x\,(\mathcal{Q}_{a}^{\alpha}\tilde{\mathcal{Q}}_{\alpha a})^{2}e^{{\rm i}(Q-q)\cdot x+\sum_{j}e_{j}g(x,\kappa_{j})}\,, \tag{5.31}\]
where \(\tilde{\mathcal{Q}}_{\alpha a}=\kappa_{1\,\alpha}\tilde{\eta}_{1\,a}+\ldots+ \kappa_{n\,\alpha}\tilde{\eta}_{n\,a}\) is the dressed super-momentum defined in terms of the fermionic background by
\[\kappa_{i\,\alpha}\tilde{\eta}_{i\,a}(x)=\kappa_{i\,\alpha}(\eta_{i\,a}+e_{i}g_{a}^{(f)}(x,\kappa_{i}))\,, \tag{5.32}\]
in complete analogy with (5.8).
## 6 Discussion
We have extended the twistor framework for constructing form factors by exploiting the representation of the Ward correspondence at null infinity [39; 40] and using asymptotic \(\mathscr{I}\)-data for background fields to give formulae on non-trivial self-dual radiative backgrounds. We have further extended the \(\mathscr{I}\) framework to incorporate supersymmetry. This presentation links directly into the celestial and twisted holography programmes. In particular, we gave a novel proof of the MHV form factor for \(\,{\rm tr}\,\tilde{F}^{2}\), extending it to the formula (5.19) for the same form factor, but now evaluated on a general Cartan-valued, self-dual, radiative background. The framework is set up so that, in principle, we can compute the tree-level MHV form factor for an arbitrary composite operator of the field strength and the MHV super form factor for an arbitrary composite operator in \(\mathcal{N}=4\) SYM, but for general operators we don't expect to have such simple expressions, as exemplified by the form factor of \(\,{\rm tr}\,\tilde{F}^{3}\). It's nevertheless remarkable that around any self-dual background, tree-level MHV form factors can be obtained by a simple dressing via a single, residual space-time integral encoding the details of the background, without any modification of the rational prefactor depending on the spinor brackets. This is to be contrasted with the expected complexity of the result, which should naively consist of \(n-2\) space-time integrals for an observable at \(n\) points.
### N\({}^{k}\)MHV observables
The main restriction in this work was to tree-level MHV form factors. To go to higher MHV degrees or loop form factors on non-trivial backgrounds, we must incorporate twistor space propagators on such non-trivial backgrounds: recall that a single insertion raises the MHV degree by one at tree-level and each integration reduces the MHV degree by two. The main issue is the absence of a known, compact expression for the propagator around a radiative background: at the abstract level, twistor space propagators have been studied for many years, see for example [115]. More recently, in the context of twistor actions, expressions for the twistor space propagators have been found in [27; 28; 29] in a gauge that induces the axial gauge used in the MHV formalism [10; 26] when represented in momentum space. This propagator was used for form factors and correlators in [55; 56; 57; 58; 59; 60; 61; 62]. A twistor space propagator \(\Delta_{\mathfrak{a}}(Z,Z^{\prime})\) on a background defined by a Lie algebra-valued \((0,1)\)-form \(\mathfrak{a}\) must satisfy the defining relation
\[(\bar{\partial}_{0}+\mathfrak{a})\Delta_{\mathfrak{a}}(Z,Z^{\prime})=\bar{\delta}^{3}(Z,Z^{\prime})\,. \tag{6.1}\]
This can be solved in Euclidean signature by starting from the MHV propagator \(\Delta_{0}(Z,Z^{\prime})=\bar{\delta}^{2|4}(Z,Z^{\prime},Z_{*})\) around the trivial background as in [27; 28; 29], where \(Z_{*}\) is the chosen reference twistor, and conjugating with the holomorphic frame \(\mathsf{H}(Z)\) for the line joining \((Z,Z^{\prime})\). This will give the expression
\[\Delta_{\mathfrak{a}}(Z,Z^{\prime})=\mathsf{H}(Z,Z_{*})\bar{\delta}^{2|4}(Z,Z^{\prime},Z_{*})\mathsf{H}(Z^{\prime},Z_{*})^{-1}\,, \tag{6.2}\]
where \(\mathsf{H}(Z,Z_{*})\) satisfies \((\bar{\partial}_{0}+\mathfrak{a})\mathsf{H}(Z,Z_{*})=0\) on the line joining \(Z\) to \(Z_{*}\). This then satisfies the defining relation above as a consequence of (3.23) on the support of the delta functions. The fact that \(\Delta_{\mathfrak{a}}(Z,Z^{\prime})\) has four delta functions in its definition should then lead to easy integrations at higher MHV degrees, and even to formulae for loop integrands [116], although carrying out the loop integrals is possible but subtle, see [117]. With these, one should be able to compute both amplitudes and form factors at higher MHV degrees on a background in pretty much the same way as in [28].
### The one-loop all plus amplitude
This framework can be used to give new insights into loop amplitudes. The simplest case is the all-plus one-loop amplitude in pure Yang-Mills, for which we can justify old generating-function formulae, extend them to a non-trivial background, and understand the more recent dual-conformal invariant formulae of [86; 87].
The vanishing of the tree-level amplitude ensures that the one-loop amplitude around the trivial background is finite and a rational function of the spinor brackets. There are a number of versions of the all-multiplicity formula for this [118; 119; 120], for example, ignoring constant prefactors
\[\mathcal{A}^{\text{1-loop}}=\sum_{i<j<s<t}\frac{\langle ij\rangle[js]\langle st \rangle[ti]}{\langle 12\rangle\ldots\langle n1\rangle}\,. \tag{6.3}\]
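As a quick sanity check (ours): at \(n=4\) the sum contains the single term \((i,j,s,t)=(1,2,3,4)\), and momentum conservation \(\sum_{i}\kappa_{i}\tilde{\kappa}_{i}=0\) collapses it to the familiar four-point expression,

\[\frac{\langle 12\rangle[23]\langle 34\rangle[41]}{\langle 12\rangle\langle 23\rangle\langle 34\rangle\langle 41\rangle}=\frac{[23][41]}{\langle 23\rangle\langle 41\rangle}=\frac{[12][34]}{\langle 12\rangle\langle 34\rangle}\,,\]

where the last equality follows from \(\langle 12\rangle[23]=\langle 14\rangle[34]\) and \(\langle 34\rangle[41]=-\langle 23\rangle[12]\).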
The following generating functional, originally due to one of us, was briefly quoted without explanation in [121] as
\[\int_{\mathbb{M}}\mathrm{d}^{4}x\int_{X^{2}}\mathrm{D}\lambda_{1}\mathrm{D}\lambda_{2}\operatorname{tr}\left(\partial_{\alpha}^{\dot{\alpha}}\mathsf{U}_{21}\partial_{\dot{\beta}}^{\alpha}\;\mathsf{a}_{1}|_{X}\,\partial_{\beta}^{\dot{\beta}}\mathsf{U}_{12}\partial_{\dot{\alpha}}^{\beta}\;\mathsf{a}_{2}|_{X}\right). \tag{6.4}\]
We can express the derivative of the propagator as its variation under a translation, using (4.22):
\[\partial_{\alpha\dot{\alpha}}\mathsf{U}_{X}(\lambda_{i},\lambda_{j})=-\int_{X}\mathrm{D}\lambda\mathsf{U}_{X}(\lambda_{i},\lambda)\partial_{\alpha\dot{\alpha}}\;\mathsf{a}|_{X}\,\mathsf{U}_{X}(\lambda,\lambda_{j})\,. \tag{6.5}\]
With this, the generating functional becomes
\[\int_{\mathbb{M}}\mathrm{d}^{4}x\int_{X^{4}}\prod_{i=1}^{4}\mathrm{D}\lambda_{i}\,\langle\lambda_{1}\lambda_{2}\rangle\langle\lambda_{3}\lambda_{4}\rangle\operatorname{tr}\left(\partial_{\dot{\alpha}}\mathsf{a}_{1}\,\mathsf{U}_{12}\,\partial^{\dot{\beta}}\mathsf{a}_{2}\,\mathsf{U}_{23}\,\partial_{\dot{\beta}}\mathsf{a}_{3}\,\mathsf{U}_{34}\,\partial^{\dot{\alpha}}\mathsf{a}_{4}\,\mathsf{U}_{41}\right). \tag{6.6}\]
This exhibits (6.4) more clearly as a background-coupled 4-vertex, generating amplitudes with at least four external legs. As recently observed in [81], it gives (6.3) at any multiplicity when expanded around the trivial background, since the angle brackets (respectively, the \(\mu^{\dot{\alpha}}\) derivatives) give the correct angle brackets (square brackets) in the numerator, whilst the expansion of the propagators as in (4.22) reproduces the Parke-Taylor denominator. The same expansion around a Cartan-valued background yields
\[\mathcal{A}_{\text{background}}^{\text{1-loop}}=\sum_{i<j<s<t}\frac{\langle ij\rangle[js]\langle st\rangle[ti]}{\langle 12\rangle\ldots\langle n1\rangle}\int_{\mathbb{M}}\mathrm{d}^{4}x\,\exp\left(\sum_{j=1}^{n}(\mathrm{i}k_{j}\cdot x+e_{j}g(x,\kappa_{j}))\right)\,, \tag{6.7}\]
in agreement with our results on form factors, as well as with the previously known results on gluon amplitudes [77].
In a different direction, the two-point generating function (6.4) neatly ties into region-momenta formulae and, in particular, the observation of [86] that the one-loop all-plus rational amplitude can be derived from the maximally supersymmetric MHV one-loop integrand in region-momentum space, with the region momentum for the loop placed at infinity. We can see this in a slightly different way here. We first introduce the region momenta \(y_{i}\) by
\[y_{ij}=y_{i}-y_{j}=\sum_{i\leq s<j}k_{s}\,. \tag{6.8}\]
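For instance (our illustration), at \(n=4\) the definition gives

\[y_{12}=k_{1}\,,\qquad y_{13}=k_{1}+k_{2}\,,\qquad y_{14}=k_{1}+k_{2}+k_{3}\,,\]

so each \(y_{ij}\) is a sum of cyclically consecutive external momenta, with \(y_{i\,i}=0\).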
We can then expand \(\partial_{\alpha\dot{\alpha}}\mathsf{U}_{ij}\) in momentum eigenstates, using (6.5) to obtain, for a given cyclic ordering, region momenta appearing at the \((j-i)\)-th order in the form
\[\partial^{\alpha\dot{\alpha}}\mathsf{U}_{ij}=\frac{y_{i\,j+1}^{\alpha\dot{\alpha}}}{\langle ii+1\rangle\ldots\langle j-1j\rangle}\,,\qquad\partial^{\alpha\dot{\alpha}}\partial^{\beta\dot{\beta}}\mathsf{U}_{ij}=\frac{y_{i\,j+1}^{\alpha\dot{\alpha}}y_{i\,j+1}^{\beta\dot{\beta}}}{\langle ii+1\rangle\ldots\langle j-1j\rangle}\,. \tag{6.9}\]
Using the first, we see that our generating function (6.4) expands to give
\[\sum_{i<j}\frac{\langle i|y_{i\,j+1}|j]\langle j|y_{j+1\,i}|i]}{\langle 12\rangle\langle 23\rangle\ldots\langle n1\rangle}\,, \tag{6.10}\]
and, using, for example, §6 of [122], this can be recognised to be the sum over 2-mass-easy and 1-mass boxes [123] written in terms of region momenta, with the loop insertion point \(x_{0}\), represented by the line \((A,B)\) in [122], placed at infinity. Of course, this formula could be seen directly by performing the sums over \(j\) and \(t\) in (6.3) to give region momenta.
An alternative version of the one-loop amplitude was observed in [86]
\[\mathcal{A}^{\text{1-loop}}=\sum_{i<j}\frac{\langle 1|y_{1i}y_{ij}|1\rangle^{2}}{\langle 12\rangle\ldots\langle n1\rangle}\frac{\langle i-1,i\rangle\langle j-1,j\rangle}{\langle 1,i-1\rangle\langle 1i\rangle\langle 1,j-1\rangle\langle 1j\rangle}\,. \tag{6.11}\]
As explained in [87], this is the so-called _Kermit_ formula for the supersymmetric MHV loop integrand, with the loop-integrand region momentum placed at infinity. This is the form obtained from the all-loop BCFW recursion of [14] and also the MHV-diagram version of [122]; in the former, leg 1 is the direction of the BCFW shift, whereas, in the latter, it can be taken to be an arbitrary reference twistor associated to the reference spinor of the MHV formalism. In [122], a detailed argument was given, based on the calculations in [124], to relate this to the more familiar 2-mass-easy boxes of [120]. At the level of the loop integrand (here again with the loop insertion point placed at infinity) this follows from the algebraic relation
\[\begin{split}\mathcal{A}^{\text{1-loop}}&=\sum_{i<j}\frac{1}{\langle 12\rangle\ldots\langle n1\rangle}\left(\frac{\langle 1|y_{1j}y_{ji}|i\rangle\langle 1|y_{1i}y_{ij}|j\rangle}{\langle 1i\rangle\langle 1j\rangle}-\frac{\langle 1|y_{1j}y_{j,i-1}|i-1\rangle\langle 1|y_{1i}y_{ij}|j\rangle}{\langle 1,i-1\rangle\langle 1j\rangle}\right.\\ &\left.-\frac{\langle 1|y_{1j}y_{ji}|i\rangle\langle 1|y_{1i}y_{i,j-1}|j-1\rangle}{\langle 1i\rangle\langle 1,j-1\rangle}+\frac{\langle 1|y_{1j}y_{j,i-1}|i-1\rangle\langle 1|y_{1i}y_{i,j-1}|j-1\rangle}{\langle 1,i-1\rangle\langle 1,j-1\rangle}\right)\end{split} \tag{6.12}\]
so it should be possible to write a two-point generating functional for this form of the amplitude, along the lines of (6.4). However, this generating functional is unlikely to be local on space-time, as the summands for this version of the one-loop amplitude contain spurious poles (the poles cancel in the sum though, as they should [86]).
An open question is how to construct these generating functionals as local expressions in the self-dual Yang-Mills potentials. The central puzzle is that (6.4) contains four derivatives of the twistor connection \(\mathsf{a}\), so it corresponds to a local space-time expression containing two derivatives on the space-time gauge field. Gauge invariance and self-duality restrict the possible terms to the integral of \(\,\mathrm{tr}\,\tilde{F}^{2}\), but this integral vanishes, as one can infer from the \(q\to 0\) limit of the form factor we presented in this work.6 A more conservative approach towards a first-principle derivation of (6.4) could be a computation of a one-loop determinant around the background gauge field, but at the moment only the _variation_ of the determinant seems to be easy to compute; it's however worth mentioning that a one-loop determinant is generically non-local, but its variation becomes local precisely around self-dual backgrounds [125; 126]. It would also be nice to reproduce the chiral algebra computations of [32] directly from a space-time perspective, without referring to the 2d conformal blocks; indeed, the Green-Schwarz mechanism for the anomaly cancellation on twistor space [127; 128] suggests that the all-plus one-loop amplitude could
be computed using tree diagrams only, as shown in [129] for the parity-conserving terms of the amplitude. We hope to return to these issues in future work.
Acknowledgements:We thank Atul Sharma and Arthur Lipstein for useful discussions and Atul Sharma for providing feedback on the draft. GB is supported by a joint Clarendon Fund and Merton College Mathematics Scholarship. LJM is grateful to the IHES and ENS Paris for hospitality while this work was progressing and to the STFC for support under grant ST/T000864/1.
## Appendix A \(\mathcal{N}=4\) constraint equations on chiral superspace
It is well known that there exists an equivalence between the field equations for the superfields and constraint equations for the super-connections. The precise statement depends on the superspace one is considering: the \(d=10\), \(\mathcal{N}=1\) SYM equations of motion are equivalent to the constraint equation [101; 102]
\[\{\underline{\nabla}_{\mathcal{A}},\underline{\nabla}_{\mathcal{B}}\}=2\gamma^{M}_{\mathcal{A}\mathcal{B}}\underline{\nabla}_{M}\,, \tag{A.1}\]
where \((x^{M},\theta^{\mathcal{A}})\) are coordinates on \(\mathbb{C}^{10|16}\). When compactified to 4 dimensions, this equivalence becomes an equivalence between the constraint equations and the \(\mathcal{N}=4\) equations of motion on non-chiral superspace \(\mathbb{C}^{4|16}\) with coordinates \((x^{\alpha\dot{\alpha}},\theta^{\alpha a},\bar{\theta}^{\dot{\alpha}}_{a})\). If one further wants to reduce to chiral superspace, additional conditions are required [104; 30]: in this Appendix, we show the following result
**Theorem 3**: _There is an equivalence between_
1. _the constraint equations on chiral superspace_ \[[\underline{\nabla}_{A(\alpha},\underline{\nabla}_{\beta)B}\} = 0\,,\] (A.2a) \[\underline{F}_{\alpha\beta} = \lambda\underline{B}_{\alpha\beta}\,,\] (A.2b) _where_ \(\underline{F}_{\alpha\beta}\) _is the ASD component of the supercurvature_ \([\underline{\nabla}_{\alpha\dot{\alpha}},\underline{\nabla}_{\beta\dot{\beta}}]\)_,_ \(\underline{B}_{\alpha\beta}\) _will be determined below by the constraint equations (A.2a), and_ \(\lambda\) _is the 't Hooft coupling,_
2. _super-fields_ \((\underline{A}_{\alpha\dot{\alpha}},\underline{\tilde{\psi}}_{a\dot{\alpha}},\underline{\phi}_{ab},\underline{\psi}^{\alpha}_{a},\underline{B}_{\alpha\beta})\) _on chiral superspace satisfying_ \[\underline{\nabla}_{a\alpha}\underline{\tilde{F}}_{\dot{\alpha}\dot{\beta}} = \underline{\nabla}_{\alpha(\dot{\alpha}}\underline{\tilde{\psi}}_{\dot{\beta})a}\,,\] (A.3a) \[\underline{\nabla}_{a\alpha}\underline{\tilde{\psi}}_{\dot{\alpha}b} = 2\underline{\nabla}_{\alpha\dot{\alpha}}\underline{\phi}_{ab}\,,\] (A.3b) \[\underline{\nabla}_{a\alpha}\underline{\phi}_{bc} = \epsilon_{abcd}\underline{\psi}^{d}_{\alpha}\,,\] (A.3c) \[\underline{\nabla}_{a\alpha}\underline{\psi}^{b}_{\beta} = -\varepsilon_{\alpha\beta}[\underline{\phi}_{ac},\underline{\phi}^{bc}]+\delta^{b}_{a}\underline{B}_{\alpha\beta}\,,\] (A.3d) \[\underline{\nabla}_{a\alpha}\underline{B}_{\beta\gamma} = -2\varepsilon_{\alpha(\beta}[\underline{\psi}^{b}_{\gamma)},\underline{\phi}_{ab}]\,,\] (A.3e) \[\underline{F}_{\alpha\beta} = \lambda\underline{B}_{\alpha\beta}\,,\] (A.3f)
_which in turn imply the_ \(\mathcal{N}=4\) _super-field equations of motion_ \[\underline{\nabla}^{\beta}_{\dot{\alpha}}\underline{B}_{\alpha\beta} = -\{\underline{\tilde{\psi}}_{a\dot{\alpha}},\underline{\psi}^{a}_{\alpha}\}-\frac{1}{2}[\underline{\phi}_{ab},\underline{\nabla}_{\alpha\dot{\alpha}}\underline{\phi}^{ab}]\,,\] (A.4a) \[\underline{\nabla}^{\dot{\alpha}}_{\alpha}\underline{\tilde{\psi}}_{a\dot{\alpha}} = 2\lambda[\underline{\psi}^{b}_{\alpha},\underline{\phi}_{ab}]\,,\] (A.4b) \[\underline{\Box}\,\underline{\phi}_{ab} = \{\underline{\tilde{\psi}}_{\dot{\alpha}[b},\underline{\tilde{\psi}}^{\dot{\alpha}}_{a]}\}+\lambda\epsilon_{abcd}\{\underline{\psi}^{c}_{\alpha},\underline{\psi}^{d\alpha}\}+2\lambda[\underline{\phi}_{c[a},[\underline{\phi}^{cd},\underline{\phi}_{b]d}]]\,,\] (A.4c) \[\underline{\nabla}^{\alpha}_{\dot{\alpha}}\underline{\psi}^{a}_{\alpha} = [\underline{\tilde{\psi}}_{b\dot{\alpha}},\underline{\phi}^{ab}]\,,\] (A.4d) \[\underline{F}_{\alpha\beta} = \lambda\underline{B}_{\alpha\beta}\,,\] (A.4e)
3. _component fields_ \((A_{\alpha\dot{\alpha}},\tilde{\psi}_{a\dot{\alpha}},\phi_{ab},\psi^{\alpha}_{a},B_{\alpha\beta})\) _on space-time satisfying the_ \(\mathcal{N}=4\) _equations of motion (that is, the_ \(\theta^{\alpha a}=0\) _truncation of the super-field equations of motion), supplemented by the recursion relations (A.22a), (A.22b), (A.22c), (A.22d), (A.22e), (A.22f), (A.22g), and (A.22h)._
Our result is therefore a deformation away from self-duality of the results in [104] for \(\mathcal{N}=4\): it reduces to that construction in the \(\lambda=0\) limit, and it is more self-contained than the result of [30].
For the \(1.\Rightarrow 2.\) implication, we begin with the construction of the various super-fields appearing in the equations of motion, slightly varying the normalizations from the main body of the paper. The super-connection is taken to be \(\underline{\nabla}_{\alpha A}=\partial_{\alpha A}+\underline{A}_{\alpha A}\). We define the supercurvature components as implied by the constraint equations
\[[\underline{\nabla}_{\alpha\dot{\alpha}},\underline{\nabla}_{\beta\dot{\beta}}] \coloneqq \varepsilon_{\alpha\beta}\underline{\tilde{F}}_{\dot{\alpha}\dot{\beta}}+\varepsilon_{\dot{\alpha}\dot{\beta}}\underline{F}_{\alpha\beta}\,, \tag{A.5a}\] \[[\underline{\nabla}_{a\alpha},\underline{\nabla}_{\beta\dot{\alpha}}] \coloneqq \varepsilon_{\alpha\beta}\underline{\tilde{\psi}}_{\dot{\alpha}a}\,, \tag{A.5b}\] \[\{\underline{\nabla}_{a\alpha},\underline{\nabla}_{b\beta}\} \coloneqq 2\varepsilon_{\alpha\beta}\underline{\phi}_{ab}\,, \tag{A.5c}\]
as well as
\[\underline{\phi}^{ab} \coloneqq \frac{1}{2}\epsilon^{abcd}\underline{\phi}_{cd}\,. \tag{A.6}\]
The remaining super-fields are defined by
\[\underline{\psi}^{a}_{\alpha} \coloneqq -\frac{1}{3!}\epsilon^{abcd}\underline{\nabla}_{\alpha b}\underline{\phi}_{cd}\,, \tag{A.7a}\] \[\underline{B}_{\alpha\beta} \coloneqq \frac{1}{4}\underline{\nabla}_{a(\alpha}\underline{\psi}^{a}_{\beta)}\,. \tag{A.7b}\]
The definitions for \(\underline{\psi}^{a}_{\alpha}\) and \(\underline{B}_{\alpha\beta}\) can be interpreted as arising from Jacobi identities for the super-connection. For example, the identity for \(\underline{\nabla}_{\alpha a}\), \(\underline{\nabla}_{\beta b}\), and \(\underline{\nabla}_{\gamma c}\) implies
\[\underline{\nabla}_{\alpha a}\underline{\phi}_{bc}=\underline{\nabla}_{\alpha[a}\underline{\phi}_{bc]}\,. \tag{A.8}\]
Similarly, the constraint equation (A.5c) implies that the super-field
\[\underline{\nabla}_{a(\alpha}\underline{\nabla}_{\beta)b}\underline{\phi}_{cd}\,, \tag{A.9}\]
is totally skew in the \(\,\mathrm{SU}(4)\) indices. We complete our definitions by introducing the 't Hooft coupling \(\lambda\) and by requiring
\[\underline{F}_{\alpha\beta}=\lambda\underline{B}_{\alpha\beta}\,, \tag{A.10}\]
for the ASD part of the bosonic supercurvature.
With these definitions, we can derive the \(\mathcal{N}=4\) equations of motion as follows: we first consider the equations of motion for \(\underline{\psi}^{a}_{\alpha}\) and \(\underline{B}_{\alpha\beta}\) and notice that the Jacobi identity for \(\underline{\nabla}_{\alpha\dot{\alpha}}\), \(\underline{\nabla}_{\beta b}\), and \(\underline{\nabla}_{\gamma c}\) can be written as
\[2\underline{\nabla}_{\alpha\dot{\alpha}}\underline{\phi}_{ab}=\underline{\nabla}_{\alpha a}\underline{\tilde{\psi}}_{\dot{\alpha}b}\,, \tag{A.11}\]
while the skew part of \(\underline{\nabla}_{\alpha a}\underline{\tilde{\psi}}_{b\dot{\alpha}}\) in \(a\), \(b\) vanishes. Using (A.5b), (A.5c) and (A.11), we reproduce (A.4d)
\[\underline{\nabla}^{\alpha}_{\dot{\alpha}}\underline{\psi}^{a}_{\alpha}=[\underline{\tilde{\psi}}_{\dot{\alpha}b},\underline{\phi}^{ab}]\,. \tag{A.12}\]
In the same way, we can use (A.5c) to obtain for the fermionic derivative acting on \(\underline{\psi}^{a}_{\alpha}\)
\[\underline{\nabla}_{a\alpha}\underline{\psi}^{b}_{\beta}=-\varepsilon_{\alpha\beta}[\underline{\phi}_{ac},\underline{\phi}^{bc}]+\delta^{b}_{a}\underline{B}_{\alpha\beta}\,. \tag{A.13}\]
From (A.5b), (A.11), and (A.13) one can straightforwardly show that
\[\underline{\nabla}^{\beta}_{\dot{\alpha}}\underline{B}_{\alpha\beta}=-\{\underline{\tilde{\psi}}_{a\dot{\alpha}},\underline{\psi}^{a}_{\alpha}\}-\frac{1}{2}[\underline{\phi}_{ab},\underline{\nabla}_{\alpha\dot{\alpha}}\underline{\phi}^{ab}]\,, \tag{A.14}\]
thus reproducing (A.4a). The fermionic derivative of \(\underline{B}_{\alpha\beta}\) can be obtained via (A.13) and reads
\[\underline{\nabla}_{a\alpha}\underline{B}_{\beta\gamma}=-2\varepsilon_{\alpha(\beta}[\underline{\psi}^{b}_{\gamma)},\underline{\phi}_{ab}]\,. \tag{A.15}\]
We can finally derive the equations of motion for the positive-helicity spinors and the scalars. Starting with the spinors, the Jacobi identity for \(\underline{\nabla}_{\alpha a}\), \(\underline{\nabla}_{\beta\dot{\beta}}\) and \(\underline{\nabla}_{\gamma\dot{\gamma}}\) can be written as
\[\underline{\nabla}_{a\alpha}\underline{F}_{\beta\gamma} = -\epsilon_{\alpha(\beta}\underline{\nabla}^{\dot{\alpha}}_{\gamma)}\underline{\tilde{\psi}}_{\dot{\alpha}a}\,, \tag{A.16a}\] \[\underline{\nabla}_{a\alpha}\underline{\tilde{F}}_{\dot{\alpha}\dot{\beta}} = \underline{\nabla}_{\alpha(\dot{\alpha}}\underline{\tilde{\psi}}_{\dot{\beta})a}\,, \tag{A.16b}\]
so that (A.10) and (A.15) directly give (A.4b)
\[\underline{\nabla}^{\dot{\alpha}}_{\alpha}\underline{\tilde{\psi}}_{a\dot{\alpha}}=2\lambda[\underline{\psi}^{b}_{\alpha},\underline{\phi}_{ab}]\,. \tag{A.17}\]
Similarly, acting with \(\underline{\nabla}^{\alpha\dot{\alpha}}\) on (A.11) and using (A.5b) we obtain
\[\underline{\Box}\,\underline{\phi}_{ab}=\{\underline{\tilde{\psi}}_{\dot{\alpha}[b},\underline{\tilde{\psi}}^{\dot{\alpha}}_{a]}\}+\frac{1}{2}\underline{\nabla}_{\alpha a}\underline{\nabla}^{\alpha\dot{\alpha}}\underline{\tilde{\psi}}_{\dot{\alpha}b}\,, \tag{A.18}\]
so that (A.13) gives
\[\underline{\Box}\,\underline{\phi}_{ab}=\{\underline{\tilde{\psi}}_{\dot{\alpha}[b},\underline{\tilde{\psi}}^{\dot{\alpha}}_{a]}\}+\lambda\epsilon_{abcd}\{\underline{\psi}^{c}_{\alpha},\underline{\psi}^{d\alpha}\}+2\lambda[\underline{\phi}_{c[a},[\underline{\phi}^{cd},\underline{\phi}_{b]d}]]\,. \tag{A.19}\]
To prove the remaining implications, we adopt essentially the strategy developed in [102; 104], to which we refer for more details. We partially fix the gauge on chiral superspace by going to the radial gauge
\[\mathcal{D}\underline{A}_{\alpha a}=0\,, \tag{A.20}\]
where \(\mathcal{D}\) is the Euler vector field along the fermionic directions
\[\mathcal{D}\coloneqq\theta^{a\alpha}\partial_{a\alpha}\,. \tag{A.21}\]
The residual gauge invariance corresponds to gauge transformations on chiral superspace that are independent of the fermionic variables, i.e. to ordinary gauge transformations on space-time. Moreover, in this gauge the equations (A.3a), (A.3b), (A.3c), (A.3d), and (A.3e) readily imply the recursion relations
\[\mathcal{D}\underline{\tilde{F}}_{\dot{\alpha}\dot{\beta}} = \theta^{a\alpha}\underline{\nabla}_{\alpha(\dot{\alpha}}\underline{\tilde{\psi}}_{\dot{\beta})a}\,, \tag{A.22a}\] \[\mathcal{D}\underline{\tilde{\psi}}_{a\dot{\alpha}} = -2\theta^{b\alpha}\underline{\nabla}_{\alpha\dot{\alpha}}\underline{\phi}_{ab}\,, \tag{A.22b}\] \[\mathcal{D}\underline{\phi}_{ab} = \epsilon_{abcd}\theta^{c}_{\alpha}\underline{\psi}^{d\alpha}\,, \tag{A.22c}\] \[\mathcal{D}\underline{\psi}^{a}_{\alpha} = \theta^{a\beta}\underline{B}_{\alpha\beta}-\theta^{b}_{\alpha}[\underline{\phi}_{bc},\underline{\phi}^{ac}]\,, \tag{A.22d}\] \[\mathcal{D}\underline{B}_{\alpha\beta} = -2\theta^{a}_{(\alpha}[\underline{\psi}^{b}_{\beta)},\underline{\phi}_{ab}]\,, \tag{A.22e}\] \[\mathcal{D}\underline{F}_{\alpha\beta} = \lambda\mathcal{D}\underline{B}_{\alpha\beta}\,, \tag{A.22f}\] \[\mathcal{D}\underline{A}_{\alpha\dot{\alpha}} = \theta^{a}_{\alpha}\underline{\tilde{\psi}}_{a\dot{\alpha}}\,, \tag{A.22g}\] \[(1+\mathcal{D})\underline{A}_{a\alpha} = 2\theta^{b}_{\alpha}\underline{\phi}_{ba}\,. \tag{A.22h}\]
These relations define the super-fields uniquely in terms of their lowest component fields, since the RHSs are all linear in the fermionic variables and since a homogeneous polynomial in the fermionic variables is an eigenstate of \(\mathcal{D}\), the eigenvalue being the degree of the polynomial. In particular, in this gauge the lowest terms in the \(\theta\) expansion are
\[\underline{\tilde{F}}_{\dot{\alpha}\dot{\beta}} = \tilde{F}_{\dot{\alpha}\dot{\beta}}+\theta^{a\alpha}\nabla_{\alpha(\dot{\alpha}}\tilde{\psi}_{\dot{\beta})a} \tag{A.23a}\] \[+\frac{1}{2}\theta^{a\alpha}\theta^{b\beta}(\nabla_{\alpha(\dot{\alpha}}\nabla_{|\beta|\dot{\beta})}\phi_{ab}+\varepsilon_{\beta\alpha}\{\tilde{\psi}_{b(\dot{\alpha}},\tilde{\psi}_{\dot{\beta})a}\})+\mathcal{O}(\theta^{3})\,,\] \[\underline{\tilde{\psi}}_{a\dot{\alpha}} = \tilde{\psi}_{a\dot{\alpha}}-2\theta^{b\alpha}\nabla_{\alpha\dot{\alpha}}\phi_{ab}-\theta^{b\beta}\theta^{c\gamma}(\epsilon_{abcd}\nabla_{\beta\dot{\alpha}}\psi^{d\beta}+\varepsilon_{\gamma\beta}[\tilde{\psi}_{c\dot{\alpha}},\phi_{ab}])+\mathcal{O}(\theta^{3})\,, \tag{A.23b}\] \[\underline{\phi}_{ab} = \phi_{ab}+\epsilon_{abcd}\theta^{c}_{\alpha}\psi^{d\alpha}+\frac{1}{2}\epsilon_{abcd}\theta^{c}_{\alpha}\theta^{e\beta}(\delta^{d}_{e}B^{\alpha}_{\beta}-\delta^{\alpha}_{\beta}[\phi_{ef},\phi^{df}])+\mathcal{O}(\theta^{3})\,, \tag{A.23c}\] \[\underline{\psi}^{a}_{\alpha} = \psi^{a}_{\alpha}+\theta^{b\beta}(\delta^{a}_{b}B_{\alpha\beta}-\varepsilon_{\beta\alpha}[\phi_{bc},\phi^{ac}])-\frac{1}{2}\theta^{a\beta}\theta^{b}_{(\alpha}[\psi^{c}_{\beta)},\phi_{bc}] \tag{A.23d}\] \[-\frac{1}{2}\theta^{b}_{\alpha}\theta^{f}_{\beta}(2[\phi_{bc},\delta^{[a}_{f}\psi^{c]\beta}]-\epsilon_{bcfe}[\phi^{ac},\psi^{e\beta}])+\mathcal{O}(\theta^{3})\,,\] \[\underline{B}_{\alpha\beta} = B_{\alpha\beta}-2\theta^{a}_{(\alpha}[\psi^{b}_{\beta)},\phi_{ab}] \tag{A.23e}\] \[-\theta^{a}_{(\alpha}\theta^{c\gamma}(\delta^{b}_{c}[B_{\beta)\gamma},\phi_{ab}]-\varepsilon_{\gamma\beta)}[[\phi_{cd},\phi^{bc}],\phi_{ab}]-\{\psi^{b}_{\beta)},\psi^{d}_{\gamma}\})+\mathcal{O}(\theta^{3})\,,\] \[\underline{A}_{\alpha\dot{\alpha}} = A_{\alpha\dot{\alpha}}+\theta^{a}_{\alpha}\tilde{\psi}_{a\dot{\alpha}}-\theta^{a}_{\alpha}\theta^{b\beta}\nabla_{\beta\dot{\alpha}}\phi_{ab}+\mathcal{O}(\theta^{3})\,, \tag{A.23f}\] \[\underline{A}_{a\alpha} = \theta^{b}_{\alpha}\phi_{ba}+\frac{1}{3}\theta^{b}_{\alpha}\theta^{c}_{\beta}\epsilon_{abcd}\psi^{d\beta}+\mathcal{O}(\theta^{3})\,. \tag{A.23g}\]
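The recursive structure behind these expansions can be made explicit (our unpacking): decomposing a super-field into homogeneous pieces, \(\underline{\Phi}=\sum_{k\geq 0}\underline{\Phi}^{(k)}\) with \(\mathcal{D}\underline{\Phi}^{(k)}=k\,\underline{\Phi}^{(k)}\), any of the relations (A.22) of the form \((c+\mathcal{D})\underline{\Phi}=\theta\cdot R[\text{fields}]\) with \(c\geq 0\) fixes each order algebraically,

\[(c+k)\,\underline{\Phi}^{(k)}=\big(\theta\cdot R\big)^{(k)}\,,\]

where the RHS only involves components of degree \(k-1\); for \(c=0\) the degree-zero component is the free space-time field, while for \(c=1\) (as in (A.22h)) even the lowest component is determined.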
Conversely, if we assume that we have super-fields defined by the recursion relations and the lowest-component fields, and if we assume that the \(\mathcal{N}=4\) equations of motion hold for the lowest-component fields, then the super-field relations (A.3a), (A.3b), (A.3c), (A.3d),
(A.3e), (A.3f) can be obtained by induction on the fermionic degree via the recursion relations
\[(1+\mathcal{D})(\underline{\nabla}_{a\alpha}\underline{\tilde{F}}_{\dot{\alpha}\dot{\beta}}-\underline{\nabla}_{\alpha(\dot{\alpha}}\underline{\tilde{\psi}}_{\dot{\beta})a}) = -\theta^{b\beta}\underline{\nabla}_{a\alpha}\underline{\nabla}_{\beta(\dot{\alpha}}\underline{\tilde{\psi}}_{\dot{\beta})b}+2\theta^{b}_{\alpha}[\underline{\phi}_{ba},\underline{\tilde{F}}_{\dot{\alpha}\dot{\beta}}] \tag{A.24a}\] \[-\underline{\nabla}_{\alpha(\dot{\alpha}}(\theta^{b\beta}\underline{\nabla}_{\dot{\beta})\beta}\underline{\phi}_{ab})+\theta^{b}_{\alpha}\{\underline{\tilde{\psi}}_{b(\dot{\alpha}},\underline{\tilde{\psi}}_{\dot{\beta})a}\}\,,\] \[(1+\mathcal{D})(\underline{\nabla}_{a\alpha}\underline{\tilde{\psi}}_{\dot{\alpha}b}-2\underline{\nabla}_{\alpha\dot{\alpha}}\underline{\phi}_{ab}) = 2\theta^{c\beta}\,\underline{\nabla}_{a\alpha}\underline{\nabla}_{\beta\dot{\alpha}}\underline{\phi}_{cb}+2\theta^{c}_{\alpha}[\underline{\phi}_{ca},\underline{\tilde{\psi}}_{\dot{\alpha}b}] \tag{A.24b}\] \[-2\epsilon_{abcd}\theta^{c}_{\beta}\underline{\nabla}_{\alpha\dot{\alpha}}\underline{\psi}^{d\beta}+2\theta^{c}_{\alpha}[\underline{\tilde{\psi}}_{c\dot{\alpha}},\underline{\phi}_{ab}]\,,\] \[(1+\mathcal{D})(\underline{\nabla}_{a\alpha}\underline{\phi}_{bc}-\epsilon_{abcd}\underline{\psi}^{d}_{\alpha}) = -\epsilon_{bcde}\theta^{d}_{\beta}\underline{\nabla}_{a\alpha}\underline{\psi}^{e\beta}+2\theta^{d}_{\alpha}[\underline{\phi}_{da},\underline{\phi}_{bc}] \tag{A.24c}\] \[-\epsilon_{abcd}(\theta^{d\beta}\underline{B}_{\alpha\beta}-\theta^{e}_{\alpha}[\underline{\phi}_{ef},\underline{\phi}^{df}])\,,\] \[(1+\mathcal{D})(\underline{\nabla}_{a\alpha}\underline{\psi}_{\beta}^{b}+\varepsilon_{\alpha\beta}[\underline{\phi}_{ac},\underline{\phi}^{bc}]) = \delta^{b}_{a}(1+\mathcal{D})\underline{B}_{\alpha\beta}-\theta^{b\gamma}\underline{\nabla}_{a\alpha}\underline{B}_{\beta\gamma} \tag{A.24d}\] \[+\theta^{c}_{\beta}\underline{\nabla}_{a\alpha}[\underline{\phi}_{cd},\underline{\phi}^{bd}]+2\theta^{c}_{\alpha}[\underline{\phi}_{ca},\underline{\psi}_{\beta}^{b}]\] \[+\varepsilon_{\alpha\beta}(\epsilon_{acde}\theta^{d}_{\gamma}[\underline{\psi}^{e\gamma},\underline{\phi}^{bc}]+\theta^{b}_{\gamma}[\underline{\phi}_{ac},\underline{\psi}^{c\gamma}])\] \[+2\delta^{b}_{a}\theta^{c}_{\alpha}[\underline{\psi}_{\beta}^{d},\underline{\phi}_{cd}]\,,\] \[(1+\mathcal{D})(\underline{\nabla}_{a\alpha}\underline{B}_{\beta\gamma}+2\varepsilon_{\alpha(\beta}[\underline{\psi}^{b}_{\gamma)},\underline{\phi}_{ab}]) = 2\theta^{b}_{(\beta}\underline{\nabla}_{|a\alpha|}[\underline{\psi}^{c}_{\gamma)},\underline{\phi}_{bc}]+2\theta^{b}_{\alpha}[\underline{\phi}_{ba},\underline{B}_{\beta\gamma}] \tag{A.24e}\] \[+2\varepsilon_{\alpha(\beta}[\theta^{b\delta}\underline{B}_{\gamma)\delta}-\theta^{c}_{\gamma)}[\underline{\phi}_{cd},\underline{\phi}^{bd}],\underline{\phi}_{ab}]\] \[+2\epsilon_{abcd}\varepsilon_{\alpha(\beta}[\underline{\psi}^{b}_{\gamma)},\theta^{c}_{\delta}\underline{\psi}^{d\delta}]\,,\]
which follow directly from (A.22a), (A.22b), (A.22c), (A.22d), (A.22e), (A.22g), and (A.22h).
Similarly, the constraint equations (A.2a) follow by induction on the fermionic degree from the recursion relations
\[(1+\mathcal{D})([\underline{\nabla}_{a\alpha},\underline{\nabla}_{\beta\dot{\alpha}}]-\varepsilon_{\alpha\beta}\underline{\tilde{\psi}}_{a\dot{\alpha}}) = -\theta^{b}_{\beta}\underline{\nabla}_{a\alpha}\underline{\tilde{\psi}}_{b\dot{\alpha}}+2\theta^{b}_{\alpha}\underline{\nabla}_{\beta\dot{\alpha}}\underline{\phi}_{ba} \tag{A.25a}\] \[+2\varepsilon_{\alpha\beta}\theta^{b\gamma}\underline{\nabla}_{\gamma\dot{\alpha}}\underline{\phi}_{ab}\,,\] \[(2+\mathcal{D})(\{\underline{\nabla}_{a\alpha},\underline{\nabla}_{b\beta}\}-2\varepsilon_{\alpha\beta}\underline{\phi}_{ab}) = -2\theta^{c}_{\beta}\underline{\nabla}_{a\alpha}\underline{\phi}_{cb}-2\theta^{c}_{\alpha}\underline{\nabla}_{b\beta}\underline{\phi}_{ca} \tag{A.25b}\] \[-2\varepsilon_{\alpha\beta}\epsilon_{abcd}\theta^{c}_{\gamma}\underline{\psi}^{d\gamma}\,,\]
thus establishing the equivalence.
## Appendix B Computational details
In this brief appendix, we show in more detail how to derive (5.19) from the generating functional (5.15)
\[\int_{\mathbb{M}}\mathrm{d}^{4}x\,e^{-\mathrm{i}q\cdot x}\iota_{\alpha}q^{ \alpha\dot{\alpha}}\iota_{\beta}q^{\beta\beta}\int_{X^{2}}\frac{\mathrm{D} \lambda_{1}\mathrm{D}\lambda_{2}}{\langle\iota\lambda_{1}\rangle\langle\iota \lambda_{2}\rangle}\langle\lambda_{1}\lambda_{2}\rangle^{2}\,\mathrm{tr}\,\left( \left.\frac{\partial\mathbf{a}}{\partial\mu_{1}^{\dot{\alpha}}}\right|_{X} \mathsf{U}_{X}(\lambda_{1},\lambda_{2})\left.\frac{\partial\mathbf{a}}{\partial \mu_{2}^{\dot{\beta}}}\right|_{X}\mathsf{U}_{X}(\lambda_{2},\lambda_{1})\right)\,,\] (B.1)
here rewritten for the sake of clarity. Its expansion at \(n\) points evaluated on momentum eigenstates reads
\[\begin{split}&\left(\frac{-1}{2\pi{\rm i}}\right)^{n-2}\int_{\mathbb{ M}}\mathrm{d}^{4}x\,e^{{\rm i}(Q-q)\cdot x}\iota_{\alpha}q^{\alpha\dot{\alpha}} \iota_{\beta}q^{\beta\dot{\beta}}\sum_{i,j}\Biggl{\{}\frac{{\rm tr}\,\hat{T}_{ p_{1}}\dots\hat{T}_{p_{n}}}{\langle p_{1}p_{2}\rangle\dots\langle p_{n}p_{1} \rangle}\frac{\langle ij\rangle^{2}}{\langle i\rangle\langle\iota j\rangle \langle\iota j\rangle}\tilde{\kappa}_{i\,\dot{\alpha}}\tilde{\kappa}_{j\,\dot{ \beta}}\\ &-2\int_{X}\frac{{\rm D}\lambda}{2\pi{\rm i}}\frac{{\rm tr}\,\hat{ T}_{p_{1}}\dots\hat{T}_{p_{i-1}}\mathsf{H}^{-1}(x,\lambda)\,\partial_{\dot{ \alpha}}\mathsf{a}\,\mathsf{H}(x,\lambda)\hat{T}_{p_{i}}\dots\hat{T}_{p_{n}}}{ \langle p_{1}p_{2}\rangle\dots\langle p_{i-1}\lambda\rangle\langle\lambda p_{i }\rangle\dots\langle p_{n}p_{1}\rangle}\frac{\langle\lambda j\rangle^{2}}{ \langle\iota\lambda\rangle\langle\iota j\rangle}\tilde{\kappa}_{j\,\dot{\beta}} \\ &+\int_{X^{2}}\frac{{\rm D}\lambda{\rm D}\lambda^{\prime}}{(2\pi{ \rm i})^{2}}\frac{1}{\langle p_{1}p_{2}\rangle\dots\langle p_{i-1}\lambda \rangle\langle\lambda p_{i}\rangle\dots\langle p_{j-1}\lambda^{\prime} \rangle\langle\lambda^{\prime}p_{j}\rangle\dots\langle p_{n}p_{1}\rangle}\frac {\langle\lambda\lambda^{\prime}\rangle^{2}}{\langle\iota\lambda\rangle\langle \iota\lambda^{\prime}\rangle}\times\\ &\times\,{\rm tr}\,\hat{T}_{p_{1}}\dots\hat{T}_{p_{i-1}}\mathsf{H} ^{-1}(x,\lambda)\,\partial_{\dot{\alpha}}\mathsf{a}\,\mathsf{H}(x,\lambda)\hat {T}_{p_{i}}\dots\hat{T}_{p_{j-1}}\mathsf{H}^{-1}(x,\lambda^{\prime})\, \partial_{\dot{\beta}}\mathsf{a}\,\mathsf{H}(x,\lambda^{\prime})\hat{T}_{p_{j} }\dots\hat{T}_{p_{n}}\\ &+{\rm perms.}\Biggr{\}}\,,\end{split} \tag{100}\]
where \(\hat{T}_{j}\coloneqq\mathsf{H}^{-1}(x,\kappa_{j})T_{j}\mathsf{H}(x,\kappa_{j})\), and \(+{\rm perms.}\) is a sum over the permutations of \(\{p_{1},\dots,p_{n}\}\). Around a Cartan-valued background, the conjugation on the colour factors reduces to \(\hat{T}^{\mathsf{a}_{j}}=T_{j}\exp(e_{j}g(x,\kappa_{j}))\), so that the colour-ordered form factor is
\[\begin{split}\mathscr{F}=&\frac{1}{\langle 12 \rangle\dots\langle n1\rangle}\int_{\mathbb{M}}\mathrm{d}^{4}x\,e^{{\rm i}(Q-q) \cdot x+\sum_{j}e_{j}g(x,\kappa_{j})}\iota_{\alpha}q^{\alpha\dot{\alpha}} \iota_{\beta}q^{\beta\dot{\beta}}\sum_{i,j}\Biggl{\{}\frac{\langle ij\rangle^ {2}}{\langle\iota i\rangle\langle\iota j\rangle}\tilde{\kappa}_{i\dot{\alpha} }\tilde{\kappa}_{j\dot{\beta}}\\ &-2\int_{X}\frac{{\rm D}\lambda}{2\pi{\rm i}}\frac{\langle i-1,i \rangle\langle\lambda j\rangle^{2}\tilde{\kappa}_{j\dot{\beta}}}{\langle i-1, \lambda\rangle\langle\lambda i\rangle\langle\iota\lambda\rangle\langle\iota j \rangle}\left.\frac{\partial\mathsf{a}}{\partial\mu^{\dot{\alpha}}}\right|_{X} \left.\frac{\partial\mathsf{a}}{\partial\mu^{\dot{\beta}}}\right|_{X}\left. \frac{\partial\mathsf{a}}{\partial\mu^{\dot{\beta}}}\right|_{X}\right\}.\end{split} \tag{101}\]
We now consider the integral
\[\mathcal{J}_{\alpha\beta\dot{\alpha}}(\ell)\coloneqq\int_{X}\frac{{\rm D}\lambda }{2\pi{\rm i}}\frac{\lambda_{\alpha}\lambda_{\beta}}{\langle\iota\lambda \rangle\langle\ell-1,\lambda\rangle\langle\lambda\ell\rangle}\left.\frac{ \partial\mathsf{a}}{\partial\mu^{\dot{\alpha}}}\right|_{X}\,. \tag{102}\]
Using the identity
\[\begin{split}\frac{\lambda_{\alpha}\lambda_{\beta}}{\langle\iota \lambda\rangle\langle\ell-1\lambda\rangle\langle\ell\lambda\rangle}=& \frac{\kappa_{\ell-1\,\alpha}\kappa_{\ell-1\,\beta}}{\langle\ell-1,\ell \rangle^{2}\langle\iota,\ell-1\rangle}\left(\frac{\langle\iota\ell\rangle}{ \langle\iota\lambda\rangle}-\frac{\langle\ell-1,\ell\rangle}{\langle\ell-1, \lambda\rangle}\right)\\ &+\frac{\kappa_{\ell\,\alpha}\kappa_{\ell\,\beta}}{\langle\ell-1, \ell\rangle^{2}\langle\iota\ell\rangle}\left(\frac{\langle\iota,\ell-1\rangle}{ \langle\iota\lambda\rangle}-\frac{\langle\ell,\ell-1\rangle}{\langle\ell\lambda \rangle}\right)\\ &-\frac{\kappa_{\ell-1\,\alpha}\kappa_{\ell\,\beta}+\kappa_{\ell-1 \,\beta}\kappa_{\ell\,\alpha}}{\langle\ell-1,\ell\rangle^{2}\langle\iota \lambda\rangle}\,,\end{split} \tag{103}\]
the integral can be evaluated explicitly and reads
\[\begin{split}\mathcal{J}_{\alpha\beta\dot{\alpha}}(\ell)=-{\rm i}& \left(\frac{\kappa_{\ell-1\,\alpha}\kappa_{\ell-1\,\beta}}{\langle\ell-1, \iota\rangle\langle\ell-1,\ell\rangle}G_{\dot{\alpha}}(x,\kappa_{\ell-1})+ \frac{\kappa_{\ell\,\alpha}\kappa_{\ell\,\beta}}{\langle\ell_{\ell}\rangle \langle\ell,\ell-1\rangle}G_{\dot{\alpha}}(x,\kappa_{\ell})\right.\\ &\left.+\frac{\iota_{\alpha}\iota_{\beta}}{\langle\iota,\ell-1 \rangle\langle\iota\ell\rangle}G_{\dot{\alpha}}(x,\iota)\right)\,.\end{split} \tag{104}\]
We then see that the terms with residual integrals over \(X\) and \(X^{2}\) don't contribute to the form factor in (114). For the term with a single integral over \(X\), the sum over \(i\) is
\[\sum_{i}\left(\frac{\langle i-1,j\rangle^{2}}{\langle i-1,\iota\rangle}G_{\dot{ \alpha}}(x,\kappa_{i-1})-\frac{\langle ij\rangle^{2}}{\langle i\iota\rangle}G_ {\dot{\alpha}}(x,\kappa_{i})+\frac{\langle\iota j\rangle^{2}\langle i-1,i \rangle}{\langle\iota,i-1\rangle\langle\iota i\rangle}G_{\dot{\alpha}}(x, \iota)\right)\,. \tag{115}\]
The first two terms obviously cancel in the sum. The third one is telescopic in \(i\) as well, once we complete \(\iota_{\alpha}\) to a basis \(\{\iota_{\alpha},o_{\alpha}\}\) of undotted spinors, introducing the spinor \(o_{\alpha}\) normalized such that \(\langle\iota o\rangle=1\). The third term then reduces to
\[\sum_{i}\frac{\langle i-1,i\rangle}{\langle\iota,i-1\rangle\langle\iota i \rangle}=\sum_{i}\left(\frac{\langle oi\rangle}{\langle\iota i\rangle}-\frac{ \langle o,i-1\rangle}{\langle\iota,i-1\rangle}\right)=0\,. \tag{116}\]
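To make the telescoping explicit: since \(\langle\iota o\rangle=1\), the Schouten identity applied to the numerator gives
\[\langle oi\rangle\langle\iota,i-1\rangle-\langle o,i-1\rangle\langle\iota i\rangle=\langle\iota o\rangle\langle i-1,i\rangle=\langle i-1,i\rangle\,,\]
so each summand is a difference of consecutive terms in \(i\), and the cyclic sum vanishes.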
Overall, the form factor reads
\[\mathscr{F}=\frac{1}{\langle 12\rangle\ldots\langle n1\rangle}\sum_{i,j} \frac{\langle ij\rangle^{2}\langle\iota|q|i\rangle\langle\iota|q|j\rangle}{ \langle\iota i\rangle\langle\iota j\rangle}\int_{\mathbb{M}}\mathrm{d}^{4}x\,e ^{\mathrm{i}(Q-q)\cdot x+\sum_{j}e_{j}g(x,\kappa_{j})}\,. \tag{117}\]
Let \(\mathcal{S}\) denote the sum over \(i,j\)
\[\mathcal{S}\coloneqq q^{\alpha\dot{\alpha}}q^{\beta\dot{\beta}}\sum_{i,j} \frac{\langle ij\rangle^{2}}{\langle\iota i\rangle\langle\iota j\rangle}\iota _{\alpha}l_{\beta}\tilde{\kappa}_{i\dot{\alpha}}\tilde{\kappa}_{j\dot{\beta}}\,. \tag{118}\]
Using the Schouten identity and performing one of the sums, \(\mathcal{S}\) reduces to
\[\mathcal{S}=q^{\alpha\dot{\alpha}}q^{\beta\dot{\beta}}\left(Q_{\gamma\dot{ \alpha}}\sum_{i}\frac{\iota_{\beta}\tilde{\kappa}_{i\dot{\beta}}\kappa_{i \alpha}\kappa_{i}^{\gamma}}{\langle\iota i\rangle}+Q_{\gamma\dot{\beta}}\sum_ {i}\frac{\iota_{\beta}\tilde{\kappa}_{i\dot{\alpha}}\kappa_{i\alpha}\kappa_{ i}^{\gamma}}{\langle\iota i\rangle}\right)\,, \tag{119}\]
Finally, noticing the identity
\[q^{\alpha\dot{\alpha}}Q_{\gamma\dot{\alpha}}=\frac{1}{2}(q^{\alpha\dot{\alpha }}Q_{\gamma\dot{\alpha}}-q_{\gamma\dot{\alpha}}Q^{\alpha\dot{\alpha}})+\frac{ 1}{2}\delta_{\gamma}^{\alpha}q\cdot Q\,, \tag{120}\]
we can further simplify \(\mathcal{S}\) down to
\[\mathcal{S}=-(q\cdot Q)^{2}\,, \tag{121}\]
and the form factor is finally (up to an overall numerical factor)
\[\mathscr{F}=\frac{(q\cdot Q)^{2}}{\langle 12\rangle\ldots\langle n1\rangle} \int_{\mathbb{M}}\mathrm{d}^{4}x\,e^{\mathrm{i}(Q-q)\cdot x+\sum_{j}e_{j}g(x, \kappa_{j})}\,. \tag{122}\]
The same procedure can be used to obtain (109) from the lifting of \(\,\mathrm{tr}\,\tilde{F}^{3}\) to twistor space. This lifting reads
\[\begin{split}\mathrm{tr}\,\tilde{F}^{3}&=\int \mathrm{D}\lambda_{123}\,\langle\lambda_{1}\lambda_{2}\rangle\langle\lambda_{2 }\lambda_{3}\rangle\langle\lambda_{3}\lambda_{1}\rangle\,\mathrm{tr}\,\partial _{\dot{\alpha}}\partial^{\dot{\beta}}\mathsf{a}_{1}\mathsf{U}_{12}\partial_{ \dot{\beta}}\partial^{\dot{\gamma}}\mathsf{a}_{2}\mathsf{U}_{23}\partial_{ \dot{\gamma}}\partial^{\dot{\alpha}}\mathsf{a}_{3}\mathsf{U}_{31}\\ &\quad+6\int\mathrm{D}\lambda_{1234}\,\langle\lambda_{2}\lambda_{ 3}\rangle\langle\lambda_{3}\lambda_{4}\rangle\langle\lambda_{4}\lambda_{1} \rangle\,\mathrm{tr}\,\partial_{(\dot{\alpha}}\mathsf{a}_{1}\mathsf{U}_{12} \partial_{\dot{\beta})}\mathsf{a}_{2}\mathsf{U}_{23}\partial^{\dot{\beta}} \partial^{\dot{\gamma}}\mathsf{a}_{3}\mathsf{U}_{34}\partial_{\dot{\gamma}} \partial^{\dot{\alpha}}\mathsf{a}_{4}\mathsf{U}_{41}\\ &\quad-12\int\mathrm{D}\lambda_{12345}\,\langle\lambda_{2}\lambda_{ 3}\rangle\langle\lambda_{4}\lambda_{5}\rangle\langle\lambda_{5}\lambda_{1} \rangle\,\mathrm{tr}\,\partial_{(\dot{\alpha}}\mathsf{a}_{1}\mathsf{U}_{12} \partial_{\dot{\beta})}\mathsf{a}_{2}\mathsf{U}_{23}\partial^{(\dot{\beta}} \mathsf{a}_{3}\mathsf{U}_{34}\partial^{\dot{\gamma})}\mathsf{a}_{4}\mathsf{U}_{ 45}\partial_{\dot{\gamma}}\partial^{\dot{\alpha}}\mathsf{a}_{5}\mathsf{U}_{51} \\ +8\,\varepsilon^{\dot{\alpha}\dot{\delta}}\int\mathrm{D}\lambda_{12 3456}\,\langle\lambda_{2}\lambda_{3}\rangle\langle\lambda_{4}\lambda_{5} \rangle\langle\lambda_{6}\lambda_{1}\rangle\,\mathrm{tr}\,\partial_{(\dot{\alpha} }\mathsf{a}_{1}\mathsf{U}_{12}\partial_{\dot{\beta})}\mathsf{a}_{2}\mathsf{U}_{2 3}\partial^{(\dot{\beta}}\mathsf{a}_{3}\mathsf{U}_{34}\partial^{\dot{\gamma})} \mathsf{a}_{4}\mathsf{U}_{45}\partial_{(\dot{\gamma}}\mathsf{a}_{5}\mathsf{U}_{ 56}\partial_{\dot{\delta})}\mathsf{a}_{6}\mathsf{U}_{61}\,,\end{split} \tag{123}\]
In the case of super form factors in \({\cal N}=4\) SYM, the same arguments as in Section 5.3 hold around gluonic backgrounds, so we briefly comment only on the derivation of (108) from (109). We denote the background super-connection as \(\mathbbm{a}^{(\text{B})}=\mathbbm{a}+\mathbbm{a}_{a}\chi^{a}\) and a generic external state (110) as \(\mathbbm{a}_{i}\). We first notice that (109) is obtained from the super-field \(\underline{\phi}_{ab}\) using the supersymmetrized \(K\)-matrix
\[\frac{1}{2}\operatorname{tr}\phi_{ab}\phi^{ab}=\frac{1}{2}\iota^{\alpha}\iota^ {\beta}\iota^{\gamma}\iota^{\delta}\int\mathrm{d}^{8}\theta\,\prod_{\gamma,c }\theta^{\gamma c}\,\operatorname{tr}\left(\partial_{\alpha a}\partial_{\beta b }\underline{K}\,\partial_{\gamma a}\partial_{\delta b}\underline{K}\right), \tag{111}\]
and integrating by parts twice. The super form factor is obtained by taking the Fourier transform of
\[\iota^{\alpha}\iota^{\beta}\Bigg{(}\sum_{i,j}\int\frac{\mathrm{D }\lambda_{1}\ldots\mathrm{D}\lambda_{n}}{\langle\iota\lambda_{i}\rangle\langle \iota\lambda_{j}\rangle}\frac{\langle\lambda_{i}\lambda_{j}\rangle^{2}}{ \langle\lambda_{1}\lambda_{2}\rangle\ldots\langle\lambda_{n}\lambda_{1}\rangle }\\ \operatorname{tr}\left(\underline{\mathsf{H}}_{1}^{-1}\underline{a }_{1}\underline{\mathsf{H}}_{1}\ldots\underline{\mathsf{H}}_{i}^{-1}\partial _{a}\underline{a}_{i}\underline{\mathsf{H}}_{i}\ldots\underline{\mathsf{H}}_{ j}^{-1}\partial_{b}\underline{a}_{j}\underline{\mathsf{H}}_{j}\ldots\underline{ \mathsf{H}}_{n}^{-1}\underline{a}_{n}\underline{\mathsf{H}}_{n}\right)\Bigg{)} \Bigg{|}_{\theta^{\alpha a}\theta^{\beta b}}\,, \tag{112}\]
where we are extracting the \(\theta^{\alpha a}\theta^{\beta b}\) component, evaluating this expression on the on-shell state (110) and summing over permutations. Around a Cartan-valued background, the holomorphic frame is \(\underline{\mathsf{H}}=\exp(-\underline{g})\), with
\[\underline{g}(x,\theta,\lambda)=\frac{1}{2\pi\mathrm{i}}\int_{X}\frac{\mathrm{ D}\lambda^{\prime}}{\langle\lambda\lambda^{\prime}\rangle}\frac{\langle \iota\lambda\rangle}{\langle\iota\lambda^{\prime}\rangle}\,\mathbbm{a}^{( \text{B})}\Big{|}_{X}=g(x,\lambda)+\theta^{\alpha a}\mathcal{G}_{\alpha a}(x, \lambda)\,, \tag{113}\]
where
\[\mathcal{G}_{\alpha a}(x,\lambda)=\frac{1}{2\pi\mathrm{i}}\int_{X}\frac{ \mathrm{D}\lambda^{\prime}}{\langle\lambda\lambda^{\prime}\rangle}\frac{ \langle\iota\lambda\rangle}{\langle\iota\lambda^{\prime}\rangle}\lambda^{ \prime}_{\alpha}\,\mathbbm{a}_{a}|_{X}. \tag{114}\]
Note that this function is related to (107) as
\[\iota^{\alpha}\mathcal{G}_{\alpha a}(x,\lambda)=\langle\iota\lambda\rangle g _{a}^{(f)}(x,\lambda)\,, \tag{115}\]
so that the colour-ordered super form factor is finally given by
\[\frac{1}{2}\frac{1}{\langle 12\rangle\ldots\langle n1\rangle}\int_{ \mathbb{M}}\mathrm{d}^{4}x\,e^{\mathrm{i}(Q-q)\cdot x+\sum_{j}e_{j}g(x,\kappa_ {j})}\\ \sum_{i,j,k,\ell}\frac{\langle ij\rangle^{2}}{\langle\iota i \rangle\langle\iota j\rangle}\eta_{i\,a}\eta_{j\,b}\iota^{\alpha}\iota^{ \beta}(\kappa_{k\,\alpha}\eta_{k\,a}+e_{k}\mathcal{G}_{\alpha a}(x,\kappa_{k}) )(\kappa_{\ell\,\beta}\eta_{\ell\,b}+e_{\ell}\mathcal{G}_{\beta b}(x,\kappa_{ \ell}))\,. \tag{116}\]
With a computation completely analogous to the one carried out for the form factor for \(\operatorname{tr}\tilde{F}^{2}\), the last sums can be shown to be independent of \(o^{\alpha}\) and they coincide precisely with \((\mathcal{Q}_{a}^{\alpha}\mathcal{\dot{Q}}_{\alpha a})^{2}\).
|
2304.10317 | Adaptive Consensus Optimization Method for GANs | We propose a second order gradient based method with ADAM and RMSprop for the
training of generative adversarial networks. The proposed method is fastest to
obtain similar accuracy when compared to prominent second order methods. Unlike
state-of-the-art recent methods, it does not require solving a linear system,
or it does not require additional mixed second derivative terms. We derive the
fixed point iteration corresponding to proposed method, and show that the
proposed method is convergent. The proposed method produces better or
comparable inception scores, and comparable quality of images compared to other
recently proposed state-of-the-art second order methods. Compared to first
order methods such as ADAM, it produces significantly better inception scores.
The proposed method is compared and validated on popular datasets such as FFHQ,
LSUN, CIFAR10, MNIST, and Fashion MNIST for image generation
tasks\footnote{Accepted in IJCNN 2023}. Codes:
\url{https://github.com/misterpawan/acom} | Sachin Kumar Danisetty, Santhosh Reddy Mylaram, Pawan Kumar | 2023-04-20T13:50:42Z | http://arxiv.org/abs/2304.10317v1 | # Adaptive Consensus Optimization Method for GANs
###### Abstract
We propose a second order gradient based method with ADAM and RMSprop for the training of generative adversarial networks. The proposed method is fastest to obtain similar accuracy when compared to prominent second order methods. Unlike state-of-the-art recent methods, it does not require solving a linear system, or it does not require additional mixed second derivative terms. We derive the fixed point iteration corresponding to proposed method, and show that the proposed method is convergent. The proposed method produces better or comparable inception scores, and comparable quality of images compared to other recently proposed state-of-the-art second order methods. Compared to first order methods such as ADAM, it produces significantly better inception scores. The proposed method is compared and validated on popular datasets such as FFHQ, LSUN, CIFAR10, MNIST, and Fashion MNIST for image generation tasks1. Codes: [https://github.com/misterpawan/acom](https://github.com/misterpawan/acom)
Footnote 1: Accepted in IJCNN 2023
## I Introduction and Related Work.
Recently, generative modeling has received much attention with the advent of diffusion-based models [15] for text-to-image generation; however, sampling from generative adversarial networks (GANs) [17] remains orders of magnitude faster. Moreover, with bigger architectures as in [22], the image quality from GANs remains as good as that from diffusion models. We consider the problem of solving the following min-max problem
\[\min_{x}\max_{y}f(x,y), \tag{1}\]
where \(f:\mathbb{R}^{m}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\). This can be seen as a two-player game, where one agent tries to maximize its objective while the other agent tries to minimize its objective.
In this work, we are interested in such optimization problems stemming from generative adversarial networks. Such problems also arise in adversarial training [3] and multi-agent reinforcement learning [42]. This is an active area of research, and recent solvers often involve second order derivatives in some way; using second order derivatives is found to increase the robustness and the quality of the generated images. The min-max problem (1) above can be seen either as a simultaneous or as a sequential min-max problem. If it is seen as a simultaneous min-max problem, then the solutions correspond to local Nash equilibria [21, 47, 48]. However, in the current literature, there is no well-established consensus on which of these views to adopt when designing solvers for GANs. As pointed out in [21], it is understood that GANs correspond to sequential min-max: the generator observes the discriminator's action and then optimizes, followed by the discriminator, rather than both generator and discriminator optimizing simultaneously. These two views, simultaneous and sequential min-max, lead to a variety of second order gradient methods.
A class of methods adopts the min-max interpretation; here it is understood that if the discriminator is optimal, then the generator loss approaches the Jensen-Shannon divergence between the real and generated distributions. This view leads to a class of methods that use a variety of divergences or metrics with improved theoretical properties. As pointed out in [48], the min-max interpretation has two major problems: "Without regularity constraints, the discriminator can always be perfect" and "Imposing regularity constraints needs a measure of similarity of images." They claim that imposing regularity is equivalent to forcing the discriminator to map images to similar images. However, it is not easy to construct such a map that mimics the similarity of images as perceived by humans. Furthermore, in [33], the authors did not find significant differences in the performance of GANs with various choices of divergence measures. In [35], it was shown that simultaneous gradient descent (SimGD) on both players leads to additional stable points compared to the case when gradient descent is done sequentially, i.e., by keeping one of the players fixed at a time. Moreover, these additional stable points do not correspond to local Nash equilibria. Considering this concern of unusual additional stable points, recent approaches such as [36] and [9] suggest modifications that lead only to local Nash equilibria. Another class of methods, the so-called metric-agnostic GANs, stems from the original GAN [17]. In the original GAN, the loss function is given as follows:
\[\min_{\mathcal{G}}\max_{\mathcal{D}}\frac{1}{2}\mathbb{E}_{x\sim P_{\text{data }}}[\log\mathcal{D}(x)]+\frac{1}{2}\mathbb{E}_{x\sim P_{G}}[\log(1-\mathcal{ D}(x))],\]
where \(\mathcal{G}\) is the distribution generated by the generator, \(\mathcal{D}\) is the classifier provided by the discriminator, and \(P_{\text{data}}\) is the target. In [4, 5], WGAN was proposed with the following loss function
\[\min_{\mathcal{G}}\max_{\mathcal{D}}\mathbb{E}_{x\sim P_{\text{data }}}[\mathcal{D}(x)]-\mathbb{E}_{x\sim P_{\mathcal{G}}}[\mathcal{D}(x)]+ \mathcal{F}(\nabla\mathcal{D}),\]
where \(\mathcal{F}(\nabla\mathcal{D})\) is infinity if \(\sup_{x}\|\nabla\mathcal{D}(x)\|>1\) and zero elsewhere. Shortly after, WGAN-GP was proposed in [19], where the inequality constraint is replaced by \(\mathbb{E}[(\|\nabla\mathcal{D}\|-1)^{2}]\). These methods depend on the choice of norm used to measure the gradient \(\nabla\mathcal{D}\). There are quite a few other variants, with other measures or norms for \(\nabla\mathcal{D}\) proposed in Banach-GAN [2], Sobolev-GAN [40], and Besov-GAN [50]. These are the so-called metric-informed GANs.
For solving problem (1), several solvers have been proposed in the past. A straightforward approach is gradient descent-ascent (GDA); in this case, the two players treat their own objectives separately as minimization problems without any regard to the other player's interest. It is well documented that this approach may lead to cycling behavior [47] (see the short sketch after this paragraph). Hence, GDA is not a suitable solver for competitive optimization as seen in GANs. In [17], a two-scale update rule is proposed; methods that use follow-the-regularized-leader are proposed in [8]; a predictive approach is shown in [52]; and a solver based on opponent learning awareness is proposed in [16]. Similarly, sophisticated heuristics based on one agent predicting the other agent's next move were proposed in [45, 37, 14]. In [48, 47], the authors proposed a new method, namely CGD, for the numerical solution of (1). Compared to some of the methods mentioned before, the CGD method avoids divergence or oscillations, which are typical of some of the methods based on alternating gradient descent. In their later paper [48], they claim that the ACGD method [48] provides implicit competitive regularization [6, 20, 21, 7, 41]. In [39], it is shown that unregularized GAN training is neither locally nor globally convergent. Some of the above methods can be seen in the framework of preconditioned gradient methods used in other applications [12, 13, 10, 25, 26, 27, 28, 29, 30, 31, 32, 33, 43, 34, 46].
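As a minimal illustration of the cycling behavior of simultaneous GDA mentioned above (the bilinear objective \(f(x,y)=xy\), the step size, and the variable names below are our own illustrative choices, not taken from the cited works):

```python
import numpy as np

# Simultaneous GDA on the bilinear game f(x, y) = x*y:
# x descends on f while y ascends on f.
x, y, lr = 1.0, 1.0, 0.1
dist = []
for _ in range(200):
    gx, gy = y, x                      # df/dx = y, df/dy = x
    x, y = x - lr * gx, y + lr * gy    # simultaneous update
    dist.append(np.hypot(x, y))

# Each step multiplies the distance to the equilibrium (0, 0) by
# sqrt(1 + lr^2) > 1, so the iterates spiral outward instead of converging.
print(dist[0], dist[-1])
```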
Table I shows the update procedures for various algorithms. We notice that, unlike the others, an approximate version of CGD involves solving a linear system. We also notice that, except for GDA, all methods make use of second order terms.
Contributions. In this paper, we propose a new simple update rule for solving (1). The image quality obtained by the proposed method is among the best of the second order methods; moreover, it is the fastest to train among all the second order methods we compared with. The proposed update rule for the gradients is integrated with RMSprop and ADAM for adaptive learning rates. In particular, we observe that the mixed derivative terms used in existing methods do not seem to be necessary on practical datasets such as MNIST, Fashion MNIST, CIFAR10, FFHQ, and LSUN. The mixed derivative terms in some of the existing methods are motivated by the view that the solution corresponds to a local Nash equilibrium; however, we do not find them very useful in practice. Moreover, we also show a theoretical guarantee for the convergence of the proposed method.
We summarize the main contributions of the paper as follows:
* We propose a new second order method, called adaptive consensus optimization (ACOM), integrated with the adaptive learning rates of RMSprop or ADAM. We show that the proposed method is the fastest to train among second order methods and achieves inception scores as good as the existing state-of-the-art. Extensive experiments on five popular datasets are shown.
* We show a complete convergence analysis of our method with RMSprop and ADAM. We identify the fixed point iteration and show that the necessary condition for convergence of the proposed method is satisfied, ensuring at least linear convergence. Although analyses were done for ConOpt alone in [38] and for CGD alone in [47], a unified, full analysis of the update rule combined with momentum-based methods such as ADAM or RMSprop is shown for the first time.
The remaining sections are organized as follows. In Section 2, we briefly describe GANs and smooth two-player games. In Section 3, we describe the proposed method ACOM and its convergence. Finally, in Section 4, we show numerical experiments on five popular datasets: MNIST, Fashion MNIST, CIFAR10, LSUN, and FFHQ.
## II GAN and smooth two-player games.
We wish to find a Nash equilibrium of the two-player game associated with training a GAN. We call a point \(\bar{p}=(\bar{x},\bar{y})\) a Nash equilibrium if the following two conditions hold
\[\bar{x}\in\text{arg max}_{x}f(x,\bar{y})\quad\text{and}\quad\bar{y}\in\text{ arg max}_{y}g(\bar{x},y)\]
in some local neighborhood of \((\bar{x},\bar{y})\). For a differentiable two-player game, the associated vector field is given by \(V(x,y)=\begin{bmatrix}D_{x}f(x,y)\\ D_{y}g(x,y)\end{bmatrix},\) where
\[D_{x}f=\nabla_{x}f,\;D_{y}g=\nabla_{y}g.\]
For a zero-sum game, we have \(f=-g\), and the derivative of the vector field is
\[V^{\prime}(x,y)=\begin{bmatrix}D_{xx}^{2}f(x,y)&D_{xy}^{2}f(x,y)\\ -D_{yx}^{2}f(x,y)&-D_{yy}^{2}f(x,y)\end{bmatrix},\]
where
\[D_{xx}^{2}f=\nabla_{xx}^{2}f,\;D_{xy}^{2}f=\nabla_{xy}^{2}f,\;D_{yy}^{2}f= \nabla_{yy}^{2}f.\]
**Lemma II.1**.: _For zero-sum games, \(V^{\prime}(p)\) is negative semi-definite if and only if \(D_{xx}^{2}f(x,y)\) is negative semi-definite and \(D_{yy}^{2}f(x,y)\) is positive semi-definite._
Proof.: See [38].
**Corollary II.2**.: _For zero-sum games, \(V^{\prime}(p)\) is negative semi-definite for any local Nash-equilibrium \(\bar{p}\). Conversely, if \(\bar{p}\) is a stationary point of \(V(p)\) and \(V^{\prime}(\bar{p})\) is negative-definite, then \(\bar{p}\) is a local Nash-equilibrium._

Proof.: See [38].

\begin{table}
\begin{tabular}{l|l} Update rule & Name \\ \hline \(\Delta x=-\nabla_{x}f\) & GDA \\ \(\Delta x=-\nabla_{x}f-\gamma D_{xy}^{2}f\nabla_{y}f\) & SGA [9] \\ \(\Delta x=-\nabla_{x}f-\gamma D_{xy}^{2}f\nabla_{y}f-\gamma D_{xx}^{2}f\nabla_{x}f\) & ConOpt [38] \\ \(\Delta x=-\nabla_{x}f-\gamma D_{xy}^{2}f\nabla_{y}f+\gamma D_{xx}^{2}f\nabla_{x}f\) & OGDA [14] \\ \(\Delta x=-(Id+\eta^{2}D_{xy}^{2}fD_{yx}^{2}f)^{-1}(\nabla_{x}f-\gamma D_{xy}^{2}f\nabla_{y}f)\) & CGD [47] \\ \(\Delta x=-\nabla_{x}f-D_{xx}^{2}f\Delta x\) & ACOM \\ \end{tabular}
\end{table} TABLE I: Various update rules for min-max optimization problems.
### _Results for Fixed Point Iteration._
To analyze the convergence properties of our proposed method, we begin with the classical theorem for convergence of fixed point iterations:
**Proposition II.3**.: _Let \(F:\Omega\rightarrow\Omega\) be a continuously differentiable function on an open subset \(\Omega\) of \(\mathbb{R}^{n}\) and let \(\bar{p}\in\Omega\) be such that_
1. \(F(\bar{p})=\bar{p}\)_, and_
2. _The absolute values of the eigenvalues of the Jacobian_ \(F^{\prime}(\bar{p})\) _are all smaller than 1._
_Then there is an open neighborhood \(U\) of \(\bar{p}\) so that for all \(p_{0}\in U\), the iterates \(F^{(k)}\left(p_{0}\right)\) converge to \(\bar{p}\). The rate of convergence is at least linear. More precisely, the error \(\left|F^{(k)}\left(p_{0}\right)-\bar{p}\right|\) is in \(\mathcal{O}\left(\left|\lambda_{\max}\right|^{k}\right)\) for \(k\rightarrow\infty,\) where \(\lambda_{\max}\) is the eigenvalue of \(F^{\prime}(\bar{p})\) with the largest absolute value._
Proof.: See [11], proposition 4.4.1.
**Lemma II.4**.: _Assume that \(A\in\mathbb{R}^{n\times n}\) only has eigenvalues with negative real part and let \(h>0\). Then the eigenvalues of the matrix \(I+hA\) lie in the unit ball if and only if, for every eigenvalue \(\lambda\) of \(A\),_
\[h<\frac{1}{|\Re(\lambda)|}\frac{2}{1+\left(\frac{\Im(\lambda)}{\Re(\lambda)} \right)^{2}}.\]
Proof.: See Lemma 4 of [38].
For the choice of \(F(p)=p+hG(p)\) for some \(h>0,\) the Jacobian is given by \(F^{\prime}(p)=I+hG^{\prime}(p).\) Hence, applying Lemma II.4 above with \(A=G^{\prime}(p),\) we obtain convergence via Proposition II.3.
## III ACOM: Adaptive Consensus Optimization Method.
The proposed method is derived as follows. Consider the first-order approximation of \(f\) in the variable \(x\),
\[\nabla_{x}f(x,y)\Delta x=f(x+\Delta x,y)-f(x,y). \tag{2}\]
Taking the partial derivative with respect to \(x\), we have
\[\nabla_{xx}^{2}f(x,y)\Delta x=\nabla_{x}f(x+\Delta x,y)-\nabla_{x}f(x,y),\]
which leads to the following update to the gradient at the new point \(x+\Delta x:\)
\[\nabla_{x}f(x+\Delta x,y)=\nabla_{x}f(x,y)+\nabla_{xx}^{2}f(x,y)\Delta x. \tag{3}\]
Similarly, new update to the gradient with respect to variable \(y\) would be
\[\nabla_{y}f(x,y+\Delta y)=\nabla_{y}f(x,y)+\nabla_{yy}^{2}f(x,y)\Delta y. \tag{4}\]
That is, the updates (3) and (4) can be seen as first-order Taylor expansions of \(\nabla_{x}f(x+\Delta x,y)\) and \(\nabla_{y}f(x,y+\Delta y)\) around \(x\) and \(y\), respectively. As we will see in the numerical experiments, these simple update rules are as effective as ACGD [48] in obtaining high inception scores, while being much faster than ACGD. The full algorithm is shown in Algorithm 1. In lines 5 and 12, the gradient update rules described above are used. As we notice, we supply the updated gradients to the ADAM method, which subsequently uses them to compute the first and second momentum terms. As mentioned before, compared to CGD or ACGD, ACOM does not require an expensive linear system solve; moreover, it does not use the mixed derivative terms \(D_{xy}^{2}f\) used in ConOpt [38].
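To make the update concrete, the following is a minimal PyTorch sketch of the corrected gradient in (3); the helper name acom_gradients and the surrounding bookkeeping are our own illustrative choices, and the released implementation may differ in its details:

```python
import torch

def acom_gradients(loss, params, prev_steps):
    """Return g + H @ prev_step for each parameter, as in Eq. (3), where H is
    the second derivative of the loss. The Hessian-vector product is computed
    by double backpropagation, so H itself is never materialized."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # <grads, prev_steps> is a scalar; differentiating it once more w.r.t.
    # params yields the Hessian-vector product H @ prev_steps.
    dot = sum((g * s).sum() for g, s in zip(grads, prev_steps))
    hvps = torch.autograd.grad(dot, params)
    return [(g + h).detach() for g, h in zip(grads, hvps)]

# The corrected gradients are then handed to ADAM in place of the raw ones:
# for p, g in zip(params, acom_gradients(loss, params, prev_steps)):
#     p.grad = g
# adam.step()
```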
### _Convergence of ACOM._
Our convergence proofs follow the framework of [38], and we refer the reader there for theoretical comparisons with other methods.
#### III-A1 Convergence of ACOM with RMSprop
In ConOpt [38], the fixed point update rules were written for the second order update procedure of ConOpt, and a similar analysis for the momentum-based method was done separately. In the following, we do a combined analysis of our second order update rule ACOM with RMSprop. To the best of our knowledge, all methods in the past were analyzed separately; a combined analysis with momentum has not been shown before. The iterative update function for ACOM with RMSprop, with \(p=(x,y,v_{x},v_{y})\), is given as follows
\[F(x,y,v_{x},v_{y})=\begin{bmatrix}x+\frac{h(D_{x}f+D_{xx}^{2}f\Delta x)}{ \sqrt{v_{x}+\epsilon}}\\ y+\frac{h(D_{y}g+D_{yy}^{2}g\Delta y)}{\sqrt{v_{y}+\epsilon}}\\ (1-\beta_{1})v_{x}+\beta_{1}(D_{x}f+D_{xx}^{2}f\Delta x)^{2}\\ (1-\beta_{2})v_{y}+\beta_{2}(D_{y}g+D_{yy}^{2}g\Delta y)^{2}\end{bmatrix},\]
where \(v_{x},v_{y}\) are the second order momenta of the gradients of \(x,y\), respectively, \(0\leq\beta_{1},\beta_{2}\leq 1\), and \(h>0\). The Jacobian of this update function is \(F^{\prime}(x,y,v_{x},v_{y})=\begin{bmatrix}P&Q\\ R&S\end{bmatrix},\) where
\[P=\begin{bmatrix}1+\frac{h(D_{xx}^{2}f+D_{xxx}f\Delta x)}{\sqrt{v_{x}+\epsilon}}&\frac{h(D_{xy}^{2}f+D_{xxy}f\Delta x)}{\sqrt{v_{x}+\epsilon}}\\ \frac{h(D_{xy}^{2}g+D_{xyy}g\Delta y)}{\sqrt{v_{y}+\epsilon}}&1+\frac{h(D_{yy}^{2}g+D_{yyy}g\Delta y)}{\sqrt{v_{y}+\epsilon}}\end{bmatrix},\]
\[Q=\begin{bmatrix}-\frac{h(D_{x}f+D_{xx}^{2}f\Delta x)}{2(v_{x}+\epsilon)^{\frac{3}{2}}}&0\\ 0&-\frac{h(D_{y}g+D_{yy}^{2}g\Delta y)}{2(v_{y}+\epsilon)^{\frac{3}{2}}}\end{bmatrix},\]
\[R=\begin{bmatrix}2\beta_{1}(D_{x}f+D_{xx}^{2}f\Delta x)(D_{xx}^{2}f+D_{xxx}f\Delta x)&2\beta_{1}(D_{x}f+D_{xx}^{2}f\Delta x)(D_{xy}^{2}f+D_{xxy}f\Delta x)\\ 2\beta_{2}(D_{y}g+D_{yy}^{2}g\Delta y)(D_{xy}^{2}g+D_{xyy}g\Delta y)&2\beta_{2}(D_{y}g+D_{yy}^{2}g\Delta y)(D_{yy}^{2}g+D_{yyy}g\Delta y)\end{bmatrix},\]
\[S=\begin{bmatrix}1-\beta_{1}&0\\ 0&1-\beta_{2}\end{bmatrix}.\]
At any fixed point \((\bar{x},\bar{y},\bar{v}_{x},\bar{v}_{y})\), we have \(\Delta x=0\), \(\Delta y=0\), \(\bar{v}_{x}=0\), \(\bar{v}_{y}=0\), and \(D_{x}f=D_{y}g=0\).
We have \(F^{\prime}(\bar{x},\bar{y},\bar{v}_{x},\bar{v}_{y})=I+hA\), where
\[A=\begin{bmatrix}\frac{D_{xx}^{2}f}{\sqrt{\epsilon}}&\frac{D_{xy}^{2}f}{\sqrt{ \epsilon}}&0&0\\ \frac{D_{xy}^{2}g}{\sqrt{\epsilon}}&\frac{D_{yy}^{2}g}{\sqrt{\epsilon}}&0&0\\ 0&0&-\frac{\beta_{1}}{h}&0\\ 0&0&0&-\frac{\beta_{2}}{h}\end{bmatrix}.\]
For a zero-sum game, \(f=-g\), the eigenvalues of \(A\) are the eigenvalues of \(\frac{1}{\sqrt{\epsilon}}V^{\prime}\), together with \(-\frac{\beta_{1}}{h}\) and \(-\frac{\beta_{2}}{h}\). By the assumption that \(V^{\prime}\) is negative definite, all eigenvalues of the matrix \(A\) have negative real part. If \(h\) satisfies the bound in Lemma II.4, then the eigenvalues of the matrix \(F^{\prime}(p)=I+hA\) lie in the unit ball; hence, by Proposition II.3, the fixed point iteration \(F\) is locally convergent towards a local Nash equilibrium \((\bar{x},\bar{y},\bar{v}_{x},\bar{v}_{y})\). For the iterative method to converge, according to Lemma II.1, the eigenvalues of \(D_{xx}^{2}f\) must be less than or equal to zero, and those of \(D_{yy}^{2}f\) must be greater than or equal to zero, as also verified empirically in Figure 6. We summarize with the following remark.
_Remark III.1_.: With \(V^{\prime}\) negative semi-definite (which holds for the zero-sum game), we have just shown that \(A\) is negative semi-definite; hence, for a choice of \(h\) satisfying Lemma II.4, that lemma shows that the eigenvalues of \(F^{\prime}=I+hA\) lie in the unit ball. Hence item 2 of Proposition II.3 is satisfied, leading to convergence with at least a linear rate.
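As a quick numerical sanity check of this remark (the toy objective \(f(x,y)=-x^{2}+xy+y^{2}\) and all constants below are our own illustrative choices), note that \(D_{xx}^{2}f=-2\leq 0\) and \(D_{yy}^{2}f=2\geq 0\), so Lemma II.1 applies:

```python
import numpy as np

fxx, fxy, fyy = -2.0, 1.0, 2.0                 # f(x, y) = -x^2 + xy + y^2, g = -f
eps, beta1, beta2, h = 1e-8, 0.9, 0.999, 5e-5

# A from the RMSprop fixed-point analysis: V'/sqrt(eps) and -beta_i/h blocks.
Vp = np.array([[fxx, fxy], [-fxy, -fyy]]) / np.sqrt(eps)
A = np.block([[Vp, np.zeros((2, 2))],
              [np.zeros((2, 2)), np.diag([-beta1 / h, -beta2 / h])]])

lam = np.linalg.eigvals(A)
assert np.all(lam.real < 0)                    # negative real parts, as required
# Step-size bound of Lemma II.4, taken over all eigenvalues of A.
bound = np.min(2.0 / (np.abs(lam.real) * (1.0 + (lam.imag / lam.real) ** 2)))
rho = np.max(np.abs(np.linalg.eigvals(np.eye(4) + h * A)))
print(h < bound, rho)                          # True, and rho < 1: local convergence
```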
#### III-A2 Convergence of ACOM with ADAM
Similarly to the RMSprop case, the iterative update function for ACOM with ADAM is given by \(F(x,y,m_{x},m_{y},v_{x},v_{y})=\)
\[\begin{bmatrix}x+\frac{hm_{x}}{\sqrt{\epsilon}x+\epsilon}\\ y+\frac{hm_{y}}{\sqrt{v_{y}+\epsilon}}\\ (1-\beta_{1})m_{x}+\beta_{1}(D_{x}f+D_{xx}^{2}f\Delta x)\\ (1-\beta_{1})m_{y}+\beta_{1}(D_{y}g+D_{yy}^{2}g\Delta y)\\ (1-\beta_{2})v_{x}+\beta_{2}(D_{x}f+D_{xx}^{2}f\Delta x)^{2}\\ (1-\beta_{2})v_{y}+\beta_{2}(D_{y}g+D_{yy}^{2}g\Delta y)^{2}\end{bmatrix},\]
where \(m_{x},m_{y}\) are the first order momenta of the gradients, \(v_{x},v_{y}\) are the second order momenta of the gradients of \(x,y\), respectively, \(0\leq\beta_{1},\beta_{2}\leq 1\), and \(h>0\). The Jacobian of this update function is \(F^{\prime}(x,y,m_{x},m_{y},v_{x},v_{y})=\)
\[\begin{bmatrix}P_{1}&P_{2}&P_{3}\\ Q_{1}&Q_{2}&Q_{3}\\ R_{1}&R_{2}&R_{3}\end{bmatrix},\]
where,
\[P_{1}=\begin{bmatrix}1&0\\ 0&1\end{bmatrix},\quad P_{2}=\begin{bmatrix}\frac{h}{\sqrt{v_{x}+\epsilon}}&0 \\ 0&\frac{h}{\sqrt{v_{y}+\epsilon}}\end{bmatrix},\] \[P_{3}=\begin{bmatrix}-\frac{hm_{x}}{2(v_{x}+\epsilon)^{\frac{3} {2}}}&0\\ 0&-\frac{hm_{y}}{2(v_{y}+\epsilon)^{\frac{3}{2}}}\end{bmatrix},\]
\[Q_{1}=\begin{bmatrix}\beta_{1}(D_{xx}^{2}f+D_{xxx}f\Delta x)&\beta_{1}(D_{xy}^{2}f+D_{xxy}f\Delta x)\\ \beta_{1}(D_{xy}^{2}g+D_{xyy}g\Delta y)&\beta_{1}(D_{yy}^{2}g+D_{yyy}g\Delta y)\end{bmatrix},\]
\[Q_{2}=\begin{bmatrix}1-\beta_{1}&0\\ 0&1-\beta_{1}\end{bmatrix},\quad Q_{3}=\begin{bmatrix}0&0\\ 0&0\end{bmatrix},\]
\[R_{1}=\begin{bmatrix}2\beta_{2}(D_{x}f+D_{xx}^{2}f\Delta x)(D_{xx}^{2}f+D_{xxx}f\Delta x)&2\beta_{2}(D_{x}f+D_{xx}^{2}f\Delta x)(D_{xy}^{2}f+D_{xxy}f\Delta x)\\ 2\beta_{2}(D_{y}g+D_{yy}^{2}g\Delta y)(D_{xy}^{2}g+D_{xyy}g\Delta y)&2\beta_{2}(D_{y}g+D_{yy}^{2}g\Delta y)(D_{yy}^{2}g+D_{yyy}g\Delta y)\end{bmatrix},\]
\[R_{2}=\begin{bmatrix}0&0\\ 0&0\end{bmatrix},\quad R_{3}=\begin{bmatrix}1-\beta_{2}&0\\ 0&1-\beta_{2}\end{bmatrix}.\]
Again, at any fixed point \((\bar{x},\bar{y},\bar{m}_{x},\bar{m}_{y},\bar{v}_{x},\bar{v}_{y})\), we have \(\Delta x=0\), \(\Delta y=0\), \(\bar{m}_{x}=0\), \(\bar{m}_{y}=0\), \(\bar{v}_{x}=0\), \(\bar{v}_{y}=0\), and \(D_{x}f=D_{y}g=0\).
Writing \(F^{\prime}(\bar{x},\bar{y},\bar{m}_{x},\bar{m}_{y},\bar{v}_{x},\bar{v}_{y})=I+hA\), where,
\[A=\begin{bmatrix}0&0&\frac{1}{\sqrt{\epsilon}}&0&0&0\\ 0&0&0&\frac{1}{\sqrt{\epsilon}}&0&0\\ \frac{\beta_{1}}{h}D_{xx}^{2}f&\frac{\beta_{1}}{h}D_{xy}^{2}f&-\frac{\beta_{1} }{h}&0&0&0\\ \frac{\beta_{1}}{h}D_{xy}^{2}g&\frac{\beta_{1}}{h}D_{yy}^{2}g&0&-\frac{\beta_{1 }}{h}&0&0\\ 0&0&0&0&\frac{-\beta_{2}}{h}&0\\ 0&0&0&0&0&-\frac{\beta_{2}}{h}\end{bmatrix}. \tag{5}\]
Unlike for RMSprop, here the condition that the real parts of the eigenvalues of \(A\) are negative is not easy to show without further assumptions. We leave this as future work.
### _Comparison of Computational Complexity._
Assume \(x\in\mathbb{R}^{m},y\in\mathbb{R}^{n}\). Due to the additional term \(D_{xy}^{2}\) involved in both ConOpt and ACGD, there is, compared to our method ACOM, an additional computational cost of the order \(O(m^{2}n^{2})\), both for constructing \(D_{xy}^{2}\) and for the required matrix-vector operation (for example, see steps 2 and 3 of Algorithm 2 in [38]). On the other hand, for ACGD, there is an additional cost of order \(O(m^{3}n^{3})\) for solving the linear system with a matrix of order \(mn\times mn\) if a direct method is used, and of order \(O(mn)\) if an iterative method such as CG, as in [48], is used. Also, the updates for SGA are more costly due to additional operations (see Algorithm 1 in [9]). Empirically verifying the time complexity on the smaller dataset, we observe in Figure 1(a) that ACGD is the slowest. The remaining methods SGA, OMD, and ConOpt are also costlier than our method. Although ConOpt looks close, on the larger CIFAR10 dataset (Figure 1(b)) ConOpt is twice as slow as ACOM. First order methods such as ADAM or SGD are faster per iteration, but qualitatively they never achieve good inception scores. As seen in the inception scores for CIFAR10, ACGD does not achieve high inception scores early on; hence, it offers no additional advantage compared to ACOM.
## IV Numerical Experiments.
### _Experimental setup and machine used._
Codes: [https://github.com/misterpawan/acom](https://github.com/misterpawan/acom). All experiments were performed on Intel Xeon E5-2640 v4 processors providing 40 virtual cores and 128GB of RAM, attached to one NVIDIA GeForce GTX 1080 Ti GPU providing 14336 CUDA cores and 44 GB of GDDR5X VRAM. All the code was written in Python 3.9.1 using PyTorch (torch-1.7.1). For evaluating models, we used the inception score, which is calculated using the
inception_v3 module from torchvision-0.8.2. The loss function used is BCEWithLogitsLoss. The hyperparameters used for ACOM are mentioned in Algorithm 1, for ACGD in Algorithm 1 of [49], and for ADAM: \(\beta_{1}=0.9,\beta_{2}=0.999,\alpha=1e^{-3}\). We need not explicitly calculate \(D_{xx}^{2}f\) (or \(D_{yy}^{2}f\)) and multiply it with \(\Delta x\) (or \(\Delta y\)) in step 4 (or step 11) of Algorithm 1, because the whole term \(D_{xx}^{2}f\Delta x\) is calculated using PyTorch's autograd module.
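For reference, a common way to compute the inception score with the torchvision model is sketched below; this follows the standard definition, but the exact evaluation script behind the reported numbers may differ (e.g., in preprocessing, batching, and the number of splits):

```python
import torch
from torchvision.models import inception_v3

@torch.no_grad()
def inception_score(images, splits=10):
    """images: a (N, 3, 299, 299) tensor preprocessed for inception_v3."""
    net = inception_v3(pretrained=True, transform_input=False).eval()
    probs = torch.softmax(net(images), dim=1)       # class posteriors p(y|x)
    scores = []
    for chunk in probs.chunk(splits):
        marginal = chunk.mean(dim=0, keepdim=True)  # estimate of p(y) in the split
        kl = (chunk * (chunk.log() - marginal.log())).sum(dim=1).mean()
        scores.append(kl.exp())                     # exp(E_x KL(p(y|x) || p(y)))
    return torch.stack(scores).mean().item()
```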
### _GAN architecture and loss function used._
We use the DC-GAN architecture [44], as shown in Table II and Table III. There are various other GAN architectures, and an extensive comparison with all of them is beyond the scope of this work. The loss function used is BCEWithLogitsLoss. The latent variable \(z\) is randomly sampled from the standard normal distribution \(\mathcal{N}(0,1)\), followed by three convolution layers with ReLU activations and batch normalizations, as sketched below.
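For concreteness, a minimal PyTorch sketch consistent with the layer shapes of Table II is shown below; the \(1\times 1\rightarrow 32\times 32\) shape progression implies transposed convolutions, and the released code may organize this differently:

```python
import torch.nn as nn

# Generator following Table II: z in R^100 (as a 100 x 1 x 1 tensor) is mapped
# to a 3 x 32 x 32 image through 4x4 kernels with the listed strides and pads.
generator = nn.Sequential(
    nn.ConvTranspose2d(100, 1024, 4, 1, 0), nn.BatchNorm2d(1024), nn.ReLU(True),
    nn.ConvTranspose2d(1024, 512, 4, 2, 1), nn.BatchNorm2d(512), nn.ReLU(True),
    nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 3, 4, 2, 1), nn.Tanh(),
)
```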
### _Results for MNIST and Fashion MNIST._
In Figures 2 and 3, we show the generated images for the MNIST [1] and Fashion MNIST [51] datasets. We observe that the images generated by ACOM are comparable to the real data and to those generated by ACGD. We remark here that ConOpt performed poorly on these two datasets; this could be due to batch normalization, as mentioned in the original paper [38]. These datasets were not tested in the ConOpt paper. Similarly, for the Fashion MNIST dataset, the images generated by our method are comparable to real samples. For both of these datasets, our method was the fastest to train. Our proposed method is also effective with RMSprop: in Figure 7, we show generated images for ACOM with RMSprop; we observe that the generated images are of similar quality for both ADAM and RMSprop.
### _Results for CIFAR10, LSUN and FFHQ_
To compare our method on the standard CIFAR10 dataset [24], in the subfigures of Figure 4 we show samples generated by ACOM, ADAM, ACGD, and ConOpt. The images generated by our method ACOM are comparable to real samples. Since a standard metric for CIFAR10 is the inception score, in Figure 6(c) we plot the inception scores for these methods; we find that our method achieves a high inception score much earlier than both ACGD and ConOpt for the same GAN architecture mentioned above. For reference, we have also plotted the discriminator and generator losses in Figures 6(a) and 6(b), respectively. However, we must mention that first order methods such as ADAM never achieved inception scores comparable to those of the second order methods; this observation was also made in previous works such as [38, 48], where other first order methods were also compared. Hence, the existing literature and our comparison suggest that qualitative improvements are seen with second order methods. Also, as shown before, our method is the fastest to train and requires the least memory among second order methods. In Figures 5 and 9 (more samples only for our method), we compare the generated images for the LSUN [53] bedroom dataset with \(32\times 32\) images; we find that the generated images are close to the state-of-the-art. Lastly, in Figures 6(c) and 8 (more samples only for our method), we show images generated from the FFHQ dataset; this dataset offers a significant variety in terms of ethnicity, age, viewpoint, image background, and lighting
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Module & Kernel & Stride & Pad & Shape \\ \hline Input & N/A & N/A & N/A & \(z\in\mathbb{R}^{100}\sim\mathcal{N}(0,1)\) \\ Conv, BN, ReLU & \(4\times 4\) & 1 & 0 & \(100\to 1024\) \\ Conv, BN, ReLU & \(4\times 4\) & 2 & 1 & \(1024\to 512\) \\ Conv, BN, ReLU & \(4\times 4\) & 2 & 1 & \(512\to 256\) \\ Conv, Tanh & \(4\times 4\) & 2 & 1 & \(256\to 3\) \\ \hline \end{tabular}
\end{table} TABLE II: Generator architecture for CIFAR10 experiments.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Module & Kernel & Stride & Pad & Shape \\ \hline Input & N/A & N/A & N/A & \(x\in\mathbb{R}^{3\times 32\times 32}\) \\ Conv, LeakyReLU & \(4\times 4\) & 2 & 1 & \(3\to 256\) \\ Conv, BN, LeakyReLU & \(4\times 4\) & 2 & 1 & \(256\to 512\) \\ Conv, BN, LeakyReLU & \(4\times 4\) & 2 & 1 & \(512\to 1024\) \\ Conv, Sigmoid & \(4\times 4\) & 1 & 0 & \(1024\to 1\) \\ \hline \end{tabular}
\end{table} TABLE III: Discriminator architecture for CIFAR10 experiments
for face images. To make training feasible on our machines, we downsampled the original images from \(128\times 128\) (thumbnail images) to \(64\times 64\). We find that the generated images are close to realistic, and we see good diversity in them. More generated images from ACOM are shown in Figures 8, 9 and 10.
## V Conclusion.

We proposed ACOM, a simple second order update rule for training GANs. In our method, the updated gradient is passed to the ADAM or RMSprop method for the first and second order momentum calculations. Contrary to other recent second order methods, our method involves neither mixed derivatives (as in ConOpt) nor solving a costly linear system (as in ACGD). When comparing the well-known inception score on the standard CIFAR10 dataset, our inception scores are among the best. Our experiments suggest that the mixed derivative terms in such solvers may only be useful for artificial toy examples; on practical datasets (five state-of-the-art datasets), such terms are unnecessary in practice (as well as in theory, as proved), and using them leads to slow training. We showed a rigorous convergence analysis of the proposed method seen as a fixed point iteration, which, to the best of our knowledge, is the only complete analysis of the full algorithm (second order update with momentum); such an analysis is not available for other existing second order methods. In the future, we would like to see how the proposed method behaves for other types of GAN architectures and losses.
## Acknowledgement
Supported by Qualcomm Faculty Award and MAPG grant.
Fig. 4: Generated Images for CIFAR10 [24].
Fig. 5: Images Generated for LSUN Bedroom [53].
Fig. 6: Loss and Inception Scores.
Fig. 7: Left three: Generated images of ACOM with RMSprop. Right: Eigenvalue plot. |
2306.11670 | GIO: Gradient Information Optimization for Training Dataset Selection | It is often advantageous to train models on a subset of the available train
examples, because the examples are of variable quality or because one would
like to train with fewer examples, without sacrificing performance. We present
Gradient Information Optimization (GIO), a scalable, task-agnostic approach to
this data selection problem that requires only a small set of (unlabeled)
examples representing a target distribution. GIO begins from a natural,
information-theoretic objective that is intractable in practice. Our
contribution is in showing that it can be made highly scalable through a simple
relaxation of the objective and a highly efficient implementation. In
experiments with machine translation, spelling correction, and image
recognition, we show that GIO delivers outstanding results with very small
train sets. These findings are robust to different representation models and
hyperparameters for GIO itself. GIO is task- and domain-agnostic and can be
applied out-of-the-box to new datasets and domains. We open source a
pip-installable implementation of the algorithm as "pip install grad-info-opt". | Dante Everaert, Christopher Potts | 2023-06-20T16:43:38Z | http://arxiv.org/abs/2306.11670v3 | # GIO: Gradient Information Optimization for Training Dataset Selection
###### Abstract
It is often advantageous to train models on a subset of the available train examples, because the examples are of variable quality or because one would like to train with fewer examples, without sacrificing performance. We present Gradient Information Optimization (Gio), a scalable, task-agnostic approach to this data selection problem that requires only a small set of (unlabeled) examples representing a target distribution. Gio begins from a natural, information-theoretic objective that is intractable in practice. Our contribution is in showing that it can be made highly scalable through a simple relaxation of the objective and a highly efficient implementation. In experiments with machine translation, spelling correction, and image recognition, we show that Gio delivers outstanding results with very small train sets. These findings are robust to different representation models and hyperparameters for Gio itself. Gio is task- and domain-agnostic and can be applied out-of-the-box to new datasets and domains.
## 1 Introduction
In situations in which one has a very large train set available, it is often advantageous to train systems on a subset of the data. In the simplest case, the train set may be so large as to run up against resource constraints, and the question arises whether performance goals can be reached with less effort (e.g. [32]). It can also be the case that the train examples are known to be of variable quality, say, because they were harvested from diverse websites [21], annotated by crowdworkers [15], or created by a synthetic data generation process [6]. In this case, the goal is to identify a reliable subset of examples.
This is the data selection problem that we address in the current paper. The end goal is to select a subset of the available train examples that leads to models that are at least as performant as (and perhaps even better than) those trained on all the examples. To achieve this goal, we propose Gradient Information Optimization (Gio), a highly scalable, task-agnostic approach to data selection that is based in information theory. Our method assumes access to a (potentially small) set of examples \(X\) that represent the desired data distribution and a (presumably very large) set of potential train examples \(G\). Our method derives a set \(V\subseteq G\) that has as much information content as possible about the target distribution \(X\). The method begins from the natural intuition that we want \(V\) to minimize the average KL divergence from \(X\), and the novelty of the approach lies in making this computationally tractable by relying on properties of the derivative of the KL divergence and implementing the method extremely efficiently. Crucially, our method works in any continuous representation space, is task- and domain-agnostic, and requires no labels on examples.
We motivate Gio with a diverse set of experiments. We first explore machine translation using the WMT14 dataset and Transformer-based models. In this case, \(G\) is the WMT14 dataset and \(X\) is the dev set. These experiments show that, using Gio, we can surpass the performance of a model trained on the full WMT14 corpus with only a fraction of the examples in \(G\), which represents very
large efficiency gains. We then turn to spelling correction. In this case, the set \(G\) is generated by a noisy synthetic process and the target distribution \(X\) is a set of actual spelling errors. Here, we are using \(\textsc{Gio}\) to home in on realistic train examples. Our results show that we can do this extremely effectively. Finally, we apply \(\textsc{Gio}\) to an image recognition task (FashionMNIST) and show again that our method can reduce the size of the train sets chosen without large drops in performance, this time operating with representations of images. In this case, we trust the train set \(G\) to represent the space accurately, and our goal is simply to select a useful subset of \(G\). Thus, in this case \(X=G\). Lastly, we discuss expanding \(\textsc{Gio}\), open-source a Python package1 to run \(\textsc{Gio}\), and report on a wide range of robustness experiments and empirical analyses of how and why the method works in practice.
Footnote 1: pip install grad-info-opt, also [https://github.com/daeveraert/gradient-information-optimization](https://github.com/daeveraert/gradient-information-optimization)
## 2 Related Work
**Active learning.** Active learning methods (e.g. 28; 8; 18) can be cast as data selection methods in our sense. In active learning, one iteratively chooses new unlabeled training examples to label, with the goal of efficiently creating a powerful train set. By contrast, \(\textsc{Gio}\) makes no use of labels and is oriented towards the goal of identifying a subset of existing cases to use for training.
**Heuristic.**\(\textsc{Gio}\) is closer to recent methods in which one uses a large language model to generate a large number of candidate texts and then extracts a subset of them based on a specific criterion. For example, Brown et al. (2020) develop a heuristic method to filter CommonCrawl based on a trained classifier's probability that datapoints are high quality. Similarly, Wenzek et al. (2019) develop a pipeline to clean CommonCrawl based principally on the perplexity of an LM trained on high-quality text, and Xie et al. (2023) develop a sampling technique based on approximate n-gram counts.
Like \(\textsc{Gio}\), these heuristic methods aim to select a subset of data that is higher quality and more relevant. However, they are either highly tailored to their particular tasks or they require very large numbers of examples (to develop classifiers or construct target probabilities). By contrast, \(\textsc{Gio}\) is task- and domain-agnostic, it can be applied plug-and-play to a new task and dataset, and it requires comparatively few gold examples \(X\) to serve as the target distribution.
**Similarity Search.** Methods using vector similarity search can also be used for data selection at scale (e.g. 14; 2; 27). The technique would index \(G\) and \(X\) and retrieve the top-k datapoints from \(G\) for each point in \(X\). Like our method, similarity search works in a continuous space. However, there are several nontrivial issues with using similarity search. First, similarity search does not consider the _distribution_ of \(X\), just the points themselves. Therefore, it can be prone to selecting suboptimal points; we review such a case in detail in Section 3.4. Second, similarity search does not have a natural stopping criterion and requires the data size to be chosen before running the algorithm, as a hyperparameter. Is 10% of the data enough? 20%? We don't know a priori. And if the data in \(G\) is far away from \(X\), similarity search will still choose it up to the desired data size. \(\textsc{Gio}\) solves the first issue by considering the distribution of \(X\) rather than the points themselves. In addition, it provides the KL divergence as a natural stopping criterion. As a result, it doesn't require the data size to be chosen arbitrarily beforehand, and it will not add points that are too far from \(X\) to add any information.
Recently, Yao et al. (2022) use information retrieval as a method for data selection, with strong results. Specifically, they use a target dataset \(X\) and the BM25 ranking function (Zhou et al., 2019) to query a general corpus \(G\) and select the top-k most relevant datapoints for each point in \(X\). They show that this method is able to select high-quality train sets. Additionally, BM25 can somewhat mitigate the issue with similarity search, as it provides a score for each pair, where 0 indicates no similarity and the point can be discarded. Like \(\textsc{Gio}\), BM25 is an information-theoretic method to select data. However, BM25 operates on a bag-of-words model, which can make it challenging when the target set is small, and, like similarity search, it requires the data size to be chosen arbitrarily beforehand. Further, this method only applies to text tasks, whereas \(\textsc{Gio}\) applies to any task with a continuous representation.
Overall, previous work in data selection is typically tailored to a specific domain like NLP (e.g. 36) or image recognition (e.g. 8), and makes assumptions about the data available, for example, that the target set \(X\) is large enough to construct an LM (e.g. 36), or that it has labels (for active learning) (e.g. 28). In addition, many of these methods use discrete approximations and heuristics (e.g. 38; 39). In this work, we provide a general, theoretically-motivated data selection method that works with large or small \(X\) and can be applied out-of-the-box to any domain (image, text, etc) without needing labels.
```
Quantize \(X,D,G\) using K-means and pick the cluster centroids \(X_{c},D_{c},G_{c}\) as the new points
while Not Stopping Criterion do
    Gradient-Descend to find \(\mathbf{v}_{opt}\):
        \(\mathbf{v}_{1}\leftarrow\) previous \(\mathbf{v}_{opt}\), \(\tilde{\mathbf{x}}\) or random    \(\triangleright\) We explore different techniques in our experiments
        Perform \(\mathbf{v}_{k+1}\leftarrow\mathbf{v}_{k}-\gamma\cdot\frac{\partial}{\partial\mathbf{v}_{k}}\hat{D}_{\textit{KL}}(P_{X_{c}}\parallel P_{D_{c}\cup\{\mathbf{v}_{k}\}})\) until converged to \(\mathbf{v}_{opt}\)
    Update \(D_{c}\):
        \(\mathbf{v}_{b}\leftarrow\operatorname*{argmin}_{\mathbf{v}_{i}\in G_{c}}||\mathbf{v}_{i}-\mathbf{v}_{opt}||\)    \(\triangleright\) The closest point in \(G_{c}\) to \(\mathbf{v}_{opt}\)
        \(D_{c}\gets D_{c}+\{\mathbf{v}_{b}\}\)
        Remove \(\mathbf{v}_{b}\) from \(G_{c}\)
end while
Explode: Select points from full \(D\) and \(G\) which belong to the chosen centroids' (\(D_{c}\cup V_{c}\)) clusters
```
**Algorithm 1** Gradient Information Optimization
## 3 Gradient Information Optimization: Method
We formulate data selection as maximizing information content and outline the natural algorithm for this objective, which is infeasible. We then introduce optimizations which enable the algorithm to work at scale, and conduct tests to show the algorithm is consistent and robust to different scenarios.
### 3.1 Abstract Formulation of the Data Selection Problem
We assume that all examples are represented in continuous space. We have a set of train examples \(G\) and a target ideal state \(X\). We allow also that there may be existing train examples \(D\) that we definitely want to include in our train set, though \(D\) can be empty. Our goal is to identify a subset \(V\) of \(G\) such that the set \(D\cup V\) contains the most information about \(X\).
In this setting, it is natural to take an information-theoretic approach. Let \(p_{X}(\mathbf{x})\) be the distribution of target \(X\), and let \(p_{D\cup V}(\mathbf{x})\) be the distribution of data-selected data \(D\cup V\). The information content of \(D\cup V\) about \(X\) is the negative KL divergence from \(p_{X}(\mathbf{x})\) to \(p_{D\cup V}(\mathbf{x})\)[19]. In this context, the general objective of data selection is as follows:
\[\text{Choose data }V\subseteq G\text{ such that }\int_{\Omega}p_{X}(\mathbf{x}) \log\frac{p_{X}(\mathbf{x})}{p_{D\cup V}(\mathbf{x})}d\mathbf{x}\text{ is minimized} \tag{1}\]
The implication is that a data selection method which gives the minimum KL divergence will also give the best performance (assuming we are correct that \(X\) represents the task to be solved).
### Naive Approach
A natural approach is to hill-climb on the KL divergence objective (1). Given existing data \(D\) and points \(\mathbf{v}_{1},\ldots,\mathbf{v}_{k}\) of \(G\), we recompute the distribution \(p_{D\cup\{\mathbf{v}_{i}\}}(\mathbf{x})\) for each \(\mathbf{v}_{i}\), pick the one that gives the minimum KL divergence, and add it to our selected set \(D\):
\[D\gets D+\operatorname*{argmin}_{\mathbf{v}_{i}\in G}\int_{\Omega}p_{X}( \mathbf{x})\log\frac{p_{X}(\mathbf{x})}{p_{D\cup\{\mathbf{v}_{i}\}}(\mathbf{x })}d\mathbf{x} \tag{2}\]
Unfortunately, this algorithm is intractable in practice. We need to construct a new distribution \(p_{D\cup\{\mathbf{v}_{i}\}}(\mathbf{x})\) and compute the KL divergence for every \(\mathbf{v}_{i}\in G\) at each step. Therefore, the complexity at each iteration is \(\mathcal{O}(|G|\cdot C)\), where \(C\) is the cost of computing the KL divergence. For a dataset of only 1M points and 0.1 s per KL computation, it would take 70 days to complete the algorithm. The method is also prone to adding the same point multiple times.
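For clarity, a minimal sketch of this naive hill-climb follows; `kl_divergence` is a placeholder for any estimator of \(\hat{D}_{KL}(P_X\parallel P_D)\), and the inner loop over all of \(G\) is exactly the \(\mathcal{O}(|G|\cdot C)\) cost discussed above.

```python
import numpy as np

def naive_step(X, D, G, kl_divergence):
    """One greedy step of Eq. (2): try every candidate, keep the best."""
    best_idx, best_kl = None, np.inf
    for i, v in enumerate(G):              # one full KL estimate per candidate
        kl = kl_divergence(X, D + [v])
        if kl < best_kl:
            best_idx, best_kl = i, kl
    D.append(G.pop(best_idx))              # greedily commit the winning point
    return best_kl
```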
### Gradient Information Optimization
Gio addresses the shortcomings of (2) with a combination of mathematical and implementational optimizations. The method is described in Algorithm 1.
First, instead of calculating divergence for each point, we use the derivative of the KL divergence to find the optimal point. We rewrite \(p_{D\cup\{\mathbf{v}_{i}\}}(\mathbf{x})=g(\mathbf{x},\mathbf{v}_{i})\), a function of only \(\mathbf{x}\) and \(\mathbf{v}_{i}\) since \(D\) is not changing, and thus the optimization term in each iteration becomes:
\[\operatorname*{argmin}_{\mathbf{v}_{i}\in G}\int_{\Omega}p_{X}(\mathbf{x})\log \frac{p_{X}(\mathbf{x})}{g(\mathbf{x},\mathbf{v}_{i})}d\mathbf{x} \tag{3}\]
We can relax the constraint that \(\mathbf{v}_{i}\in G\) to the space of all possible \(\mathbf{v}\) and solve this integral minimization for the optimal \(\mathbf{v}_{opt}\). Since \(p_{X}\) is unchanging and the integral implicitly removes \(\mathbf{x}\) as a variable, the integral defines a functional \(F\left[g(\mathbf{v})\right].\) Therefore, we partially differentiate with respect to \(\mathbf{v}\) and do gradient descent with the partials \(\nabla_{\mathbf{v}_{k}}F[g]\) to solve for \(\mathbf{v}_{opt}\). All together, this becomes:
\[\mathbf{v}_{k+1}\leftarrow\mathbf{v}_{k}-\gamma\cdot\frac{\partial}{ \partial\mathbf{v}_{k}}\left(\int_{\Omega}p(\mathbf{x})\log\frac{p(\mathbf{x} )}{g(\mathbf{x},\mathbf{v}_{k})}d\mathbf{x}\right) \tag{4}\]
Once we have \(\mathbf{v}_{opt}\), we find the nearest \(\mathbf{v}_{i}\in G\) and add that to \(D\), as the closest \(\mathbf{v}_{i}\in G\) is the solution to the original (3). For that to be true, we assume \(G\) is locally dense for the extrema of the integral in (4); see Appendix A.2 for details.
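A sketch of this derivative trick in PyTorch is shown below; `kl_of` stands in for a differentiable KL estimate such as Eq. (5), and the fixed learning rate, step count, and lack of a convergence test are simplifications on our part.

```python
import torch

def find_v_opt(kl_of, v_init, lr=0.1, steps=100):
    """Gradient-descend Eq. (4) starting from v_init to approximate v_opt."""
    v = v_init.clone().requires_grad_(True)
    for _ in range(steps):
        loss = kl_of(v)                    # differentiable KL estimate
        loss.backward()
        with torch.no_grad():
            v -= lr * v.grad               # the update of Eq. (4)
        v.grad.zero_()
    return v.detach()

def snap_to_candidates(v_opt, G):
    """Return the index of the closest v_i in G, solving the original (3)."""
    dists = torch.cdist(v_opt[None, :], G)  # (1, |G|) Euclidean distances
    return torch.argmin(dists).item()
```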
The complexity at each iteration, for \(S\) gradient descent steps, is \(\mathcal{O}(S\cdot C)\) which does not increase with \(G\). Therefore, when \(|G|>S\), as is common in practice, the derivative trick is faster than the naive algorithm. We time both algorithms in Section 3.4 and show the derivative trick is 80% faster.
Second, even at its most efficient, an algorithm that adds point-by-point becomes intractable. Therefore, we use a quantization-explosion process. First, we cluster the data with K-means [1] and pick the centroids \(\boldsymbol{\mu}_{i}\) as our new data points. Second, we perform the algorithm using the cluster centroids \(\boldsymbol{\mu}_{i}\) instead of the original data. Finally, after having our chosen cluster centroids, we explode back out into the original data based on cluster membership. Figure 1 provides an overview of this process.
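A minimal sketch of the quantization and explosion steps, assuming scikit-learn's K-means and illustrative data (the chosen centroid ids stand in for the output of the selection loop):

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize(points, k, seed=0):
    """Cluster the data and return centroids plus each point's cluster id."""
    km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(points)
    return km.cluster_centers_, km.labels_

def explode(points, labels, chosen_centroid_ids):
    """Map selected centroids back to all original points in their clusters."""
    mask = np.isin(labels, list(chosen_centroid_ids))
    return points[mask]

G = np.random.randn(400, 2)                # toy stand-in for the general data
centroids, labels = quantize(G, k=50)
chosen = {0, 3, 7}                         # hypothetical GIO-selected centroids
selected_points = explode(G, labels, chosen)
```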
Third, to compute the KL Divergence in high-dimensional spaces, we use the k-Nearest Neighbors approximation for continuous KL divergence proposed by Wang et al. [34], and modify it to be an average across all points to bypass 0 gradient problems (details and proof of modification are in the Appendix A.1). Let \(|D|=m\), \(|X|=n\) and \(d\) be the dimensionality:
\[\hat{D}_{\text{KL}}(P_{X}\parallel P_{D})=\frac{1}{m}\sum_{k=1}^{m}\frac{1}{n }\left[\sum_{i=1}^{n}d\cdot\log\nu_{k}(i)-d\cdot\log\rho_{l}(i)\right]+\frac {1}{m}\sum_{k=1}^{m}\log\frac{l\cdot m}{k(n-1)} \tag{5}\]
where \(\nu_{k}(i)\) is the distance from point \(X_{i}\) to the \(k\)th nearest point in \(D\) and \(\rho_{l}(i)\) is the distance from point \(X_{i}\) to the \(l\)th nearest \(X_{j\neq i}\). We use automatic differentiation to compute the derivative.
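The sketch below implements this averaged estimator in NumPy for readability (the differentiable version used with automatic differentiation would be the same computation in a framework such as PyTorch); the small `eps` guard against \(\log 0\) and the default \(l=1\) are our implementation choices.

```python
import numpy as np

def knn_kl(X, D, l=1, eps=1e-12):
    """Averaged kNN estimate of D_KL(P_X || P_D) following Eq. (5)."""
    n, d = X.shape
    m = D.shape[0]
    # nu[i, k-1]: distance from X_i to its k-th nearest point in D
    nu = np.sort(np.linalg.norm(X[:, None, :] - D[None, :, :], axis=-1), axis=1)
    # rho[i]: distance from X_i to its l-th nearest other point in X
    # (column 0 of each sorted row is the zero distance from X_i to itself)
    rho = np.sort(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1), axis=1)[:, l]
    ks = np.arange(1, m + 1)
    log_term = np.mean(np.log(l * m / (ks * (n - 1))))   # second sum in Eq. (5)
    main = np.mean(d * np.log(nu + eps) - d * np.log(rho + eps)[:, None])
    return main + log_term
```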
We can stop when the KL divergence increases (strict), or reset \(G\) and allow the algorithm to pick again, among a variety of criteria. We explore several in our experiments and list additional criteria in Appendix B.2. Unlike data selection methods that make the data size a hyperparameter [e.g. 41], Gio provides a natural stopping criterion (KL divergence). Finally, initializing \(D\) from a uniform start rather than from an empty set leads to the same optimal points but a smoother convergence; see Appendix A.3.
**Limitations.** We derived Gio from the natural information-theoretic objective; however, we can use any arbitrary statistical distance in the Gio framework. For example, in situations where \(G\) is close to \(X\) with the exception of a large gap somewhere, the statistical distance \(\max|p_{X}(\mathbf{x})-p_{D}(\mathbf{x})|\) may be better suited. We also use gradient descent to iteratively find \(\mathbf{v}_{opt}\), but we know the space is non-convex. Therefore, replacing gradient descent with a method like particle swarm optimization [16] and using batch-wise selection may lead to better selected data. Finally, in practice it is important to ensure that \(X\) reasonably represents the space a model might be used on. A narrow \(X\) could make a model trained on Gio-selected data perform poorly when confronted with inputs that lie outside \(X\). Methods like starting from a subset of training data, which we explore, or adding uniform points to \(X\) to encourage generalization, should be explored. We leave these improvements to future work.

Figure 1: Visualization of the Quantization-Explosion Process. From left to right: original data (400 points), representative K-means centroids (50 points) of the original data (Quantization), selected centroids after data selection, original data represented by the selected centroids (Explosion)
### Analytic Checks
**Gio is self-consistent.** We define self-consistency as follows: if both \(G\) and \(X\) come from the same distribution, i.e., \(p_{G}(\mathbf{x})=p_{X}(\mathbf{x})\), a good data selection method should choose all of \(G\). We show \(\textsc{Gio}\) is self-consistent with the following setup: let \(X\) be 100 points from a 2D normal distribution centered at \((3,4)\) and let \(G\) be another 100 points from the same distribution (Figure 2, first graph). We run \(\textsc{Gio}\) on this setup; \(\textsc{Gio}\) selects 96% of \(G\) before termination, showing \(\textsc{Gio}\) is self-consistent (Figure 2, second graph).
**Gio is negative-consistent.** We define negative consistency as follows: if \(G\) is very far from \(X\), i.e. \(d(p_{X}(\mathbf{x}),p_{G}(\mathbf{x}))\gg 0\), a good data selection method should not choose any of \(G\). Most data selection methods that rely on choosing a desired data size as a stopping criteria (e.g. 41, 38, similarity search) are not negative consistent; they will select data regardless of how close or far the data may be from \(X\), up to the desired data size. We show \(\textsc{Gio}\) is negative-consistent with the following setup: let \(X\) be the same as above, but this time let \(G\) be 100 points centered far away at \((300,400)\). We run \(\textsc{Gio}\) on this setup; \(\textsc{Gio}\) terminates without adding any points from \(G\), showing it is negative-consistent.
**Quantization in \(\textsc{Gio}\) is consistent with the original space.** Quantizing the space with K-means should not change the distribution of data. We show the quantized space is consistent with the original space with the following setup: let \(X\) be 400 points from a 2D normal distribution centered at \((3,4)\). We quantize \(X\) using K-means with K=50, and compute the KL divergence \(\hat{D}_{\textit{KL}}(X\parallel X_{\textit{quant}})\), which should be close to \(0\) if the distributions are close. The KL divergence is \(0.44\), showing the quantization in \(\textsc{Gio}\) is consistent with the original space.
**The derivative trick is 80% faster.** We benchmark the wall-clock time between the naive hill-climb method and \(\textsc{Gio}\) with the derivative trick, with 100 points in \(X\) and 2000 points in \(G\) spread uniformly, and run the algorithm for 100 iterations. The regular hill-climb method takes 1369s, whereas the derivative trick takes 257s, which is an 80% speedup.
**GIO selects based on a distribution; similarity search does not.** We demonstrate an additional important pitfall of similarity search (beyond it not being negative-consistent) with the following test. Let \(X\) be a circle of 2D points encapsulating a region of space, and let \(G\) be 2000 uniformly distributed points (Figure 2, third and fourth graphs). As we covered in related work, the ideal points are the points within the region encapsulated by \(X\), and they should be chosen in preference to points outside. Figure 2 shows the points selected by similarity search (fourth graph); in order to get the data in the middle of the circle, it also picks data outside the circle. Figure 2 also shows the Gio-selected points (third graph); by considering the _distribution_ of all of \(X\) rather than the simple Euclidean location of each point in \(X\), Gio selects mostly points which are within the circle, as desired.

Figure 2: The leftmost graph shows \(X\) and \(G\), which come from the same distribution. The second graph shows that Gio recovers nearly all of \(G\) (self-consistency). The right two graphs compare Gio with similarity search. Points within the circle formed by \(X\) are more ideal than points outside. By considering the distribution, Gio selects nearly all points inside before terminating (third graph). By comparison, in order to pick points within the circle, similarity search also picks a range of points outside the circle, which is suboptimal (fourth graph).
## 4 Experiments
We perform four sets of experiments to validate Gio. First, we replicate the setup of Vaswani et al. [33] on WMT14 [3] and show that using Gio-selected data can surpass the performance of the full model with only a fraction of the data. Next, we demonstrate that Gio is robust to different choices of embedding models and quantization. Third, we use a spelling correction task to show that Gio selects the highest-quality data from a pool of mixed-quality synthetic data, and set a new state-of-the-art. Finally, we show Gio can reduce the training set size of the FashionMNIST image task without a big drop in performance. We show that Gio achieves the lowest KL divergence compared to alternatives, and that this correlates with model performance. Details of each experiment are in Appendix C.
### Machine Translation Experiments
Our first set of experiments seeks to show that Gio can pick data from a general corpus to meet or exceed the performance of a model trained on the full corpus.
**Data and Methods** We use Transformer Big from Vaswani et al. [33], trained for 300k iterations with the same hyperparameters. We use the same processed WMT14 training data. We report the BLEU score [25] on the WMT14 test set.3
Footnote 3: From Vaswani et al. [33]
We apply Gio to select a subset of data from the WMT14 train set using the inputs only (as our method makes no use of labels). \(G\) is the training data we can select from. For the target state \(X\), we collect the dev sets for WMT08-WMT13, extract 3K pairs to report BLEU on as a held-out dev set, and use the remaining \(\approx\)12K pairs as \(X\). For the initial state \(D\), we consider starting from an empty set, a 25% random subset of the train data, and a 50% random subset of the train data, and we report results for each setting. We use the MPNet-Base-V2 model [31] to embed the input sentences in a continuous vector space and use K=1500 for quantization. We compare this embedding model and quantization amount to other settings in our robustness experiments (Section 4.2). As our stopping criterion, we stop when the KL divergence increases. We also deduplicate the data pairs before training.
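As an illustration of the embedding step assumed above, the snippet below encodes sentences with a sentence-transformers MPNet checkpoint; the checkpoint name follows the sentence-transformers hub convention, and normalizing the embeddings is our choice rather than a detail stated in the text.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")
sentences = ["Resumption of the session", "The debate is closed."]
embeddings = model.encode(sentences, normalize_embeddings=True)
print(embeddings.shape)   # (2, 768): one continuous vector per input sentence
```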
| Init. % | System | EN-FR Train Size | EN-FR Dev Test | EN-FR WMT14 | EN-FR \(\hat{D}_{KL}\) | EN-DE Train Size | EN-DE Dev Test | EN-DE WMT14 | EN-DE \(\hat{D}_{KL}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | Ours | 5.6M | **34.2** | **41.2** | _156_ | 701K | 22.1 | 24.3 | _148_ |
| 0 | BM25 | 5.6M | 33.9 | 41.0 | 172 | 701K | **22.6** | **24.9** | 175 |
| 0 | Random | 5.6M | 33.1 | 40.0 | 194 | 701K | 21.9 | 24.0 | 183 |
| 25 | Ours | 14M | _34.8_ | **42.2** | **166** | 1.7M | **23.9** | **27.0** | **159** |
| 25 | BM25 | 14M | 34.6 | 42.0 | 179 | 1.7M | 22.9 | 26.3 | 178 |
| 25 | Random | 14M | 34.3 | 41.4 | 195 | 1.7M | 23.0 | 26.7 | 182 |
| 50 | Ours | 21M | _34.7_ | _42.3_ | **172** | 2.5M | **24.2** | **27.9** | **164** |
| 50 | BM25 | 21M | 34.3 | 42.1 | 185 | 2.5M | 23.7 | **27.9** | 178 |
| 50 | Random | 21M | 34.1 | 41.7 | 195 | 2.5M | 24.0 | 27.3 | 181 |
| 100 | Full³ | 35M | - | 41.8 | 188 | 4M | - | _28.2_ | 180 |

Table 1: Machine translation results. Training data sizes and BLEU scores of models trained on the full data, Gio-selected data, BM25-selected data, and random subsets for various initialization states. **Bold** is the best score in each initialization state, and _italic_ is the best score overall. Gio outperforms a model trained with the full EN-FR data with only 40% of the data, outperforms the random baseline in all evaluations, and outperforms BM25 in 10/12 evaluations. It achieves 99% of the performance in EN-DE with only 60% of the data.
We compare Gio to a random subset of data of the same size. In addition, we compare against Yao et al.'s [41] recent data selection approach of using BM25 retrieval. To keep the setup equal, we also initialize the BM25 method from a 0%, 25% and 50% random subset and run that algorithm to have the same size as the Gio-selected data.
**Results** We find that Gio outperforms the random baseline at every initialization. A data selection method should always outperform a randomly-selected subset of the same size. Table 1 shows the BLEU score on both dev and WMT14 test sets and demonstrates Gio always outperforms a randomly-selected subset. The BM25 method only outperforms the random baseline sometimes.
Gio outperforms the EN-FR model trained on the full data using only 40% of the data. At initializations of 25% and 50%, a model trained on Gio-selected data outperforms the full Vaswani et al. [33] model trained on all data by +0.4 and +0.6 BLEU, respectively. In addition, a model trained on the Gio-selected data at 0% initialization achieves 98.5% of the performance of the full model with only 16% of the data. It outperforms the BM25 method at all initializations. In EN-DE, it gets to 99% of the performance with 60% of the data, and to 88% of the performance with only 18% of the data.
Gio outperforms the BM25 method in 10/12 of the evaluations. Gio always matches or outperforms the BM25 method with initializations of 25% or 50%, by +0.2 BLEU on WMT14 and +0.5 BLEU on Dev Test on average, and only falls short with 0% initialization in EN-DE.
Gio has the lowest KL divergence, which correlates with model performance. The implication of the objective in (1) is that a method which results in lower KL divergence between train and target will perform the best. From Table 1, the average Spearman rank correlation coefficient between KL divergence and best performance is 0.83 and the median is 1, showing a high degree of correlation between a dataset that minimizes KL divergence and model performance, and thereby confirming the implication from the theory. Additionally, Table 1 shows that Gio leads to the lowest KL divergence.
In summary, Gio leads to the lowest KL divergence between train and target set out of all the methods, which correlates with model performance and confirms the theory in (1). Notably, a model trained with Gio-selected data outperforms a model trained on the full data in EN-FR despite using only 40% of the total data. In EN-DE, a model trained with Gio-selected data came to within 99% of the performance of the full model despite using only 60% of the data. Gio outperforms the random baseline at all initializations and outperforms the BM25 method in 10/12 evaluations. Overall, these experiments show Gio can achieve and surpass the performance of a model trained on full data and comparable baselines, by explicitly optimizing for KL Divergence.
### Robustness
Gio above relies on two approximations to work: an embedding model to generate vector representations of text, and K-means to quantize the space into representative samples of the full data. In this section, we show that Gio is robust to different choices of embedding models and different values of K. The results of these experiments are summarized in Table 2.
**Gio works with different embedding models.** Gio should be robust to different text embedding models. We change the embedding model from MPNet-Base-V2 to MiniLM-L12-v1 [35], which has a different architecture and training and produces embeddings of a different size. We then rerun the 0% initialization experiments end-to-end with the new embeddings for both EN-DE and EN-FR. Table 2 shows that using MiniLM in Gio results in a roughly similar selected data size (4.4% difference on average) and virtually identical performance (0.7% difference on average), demonstrating that Gio is robust to different embedding models.

| System | EN-FR Train Size | EN-FR Dev Test | EN-FR WMT14 | EN-FR \(\hat{D}_{KL}\) | EN-DE Train Size | EN-DE Dev Test | EN-DE WMT14 | EN-DE \(\hat{D}_{KL}\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Base (MPNet, K=1500) | 5.6M | 34.2 | 41.2 | 156 | 701K | 22.1 | 24.3 | 148 |
| MiniLM Variant | 5.7M | 34.0 | 41.1 | - | 737K | 22.3 | 24.6 | - |
| K=1000 Variant | 5.6M | 33.9 | 41.2 | 169 | 701K | 22.1 | 24.3 | 150 |
| K=3000 Variant | 5.7M | 34.2 | 41.3 | 138 | 718K | 22.3 | 24.6 | 133 |
| Average Variance from Base | 1.9% | 0.5% | 0.2% | - | 3.7% | 0.6% | 0.8% | - |

Table 2: Training data sizes and BLEU scores of the base version (K=1500 and MPNet) and the variants of K and embedding model. BLEU scores of the variants vary by only 0.4% on average from the base, indicating Gio is robust to different quantization and embedding models.
**Gio works with different choices of K.** Gio should also be robust to varying amounts of quantization. We decrease the value of K from 1500 to 1000 and increase it to 3000, and rerun the 0% initialization experiments end-to-end for both new values of K, in EN-FR and EN-DE. For K=1000, due to the coarser grain, Gio selects more data; therefore, we sample from the selected data the same amount as for K=1500 in order to maintain parity. Table 2 shows that performance is virtually identical between the different values of K (0.4% difference on average), demonstrating Gio is robust to different values of K. In general, higher values of K have lower KL divergence and slightly better performance, which is expected as the quantization is more fine-grained.
### Spelling Correction Experiments
In this section, we set up a problem with a pool of high and low quality synthetic candidate train examples and show Gio selects mostly high quality data. In addition, we set a new state of the art on the challenging BEA4660 spelling correction benchmark of Jayanthi et al. [12].
**Data and Methods** We follow the setup of Jayanthi et al. [12] and collect 15M samples from the 1 Billion Word benchmark corpus and deduplicate. To create high-quality data, we use the best noising technique ("prob") from Jayanthi et al. [12] and noise half of the data. For low-quality data, we use the "word" method with a high chance of replacement (70%) and noise the other half, and mix the two sets.
We follow Zhou et al. [42] and set up the spelling correction task as a translation task from <mistake> to <correction>. We use the BART base architecture [20] and learn a 40K-vocabulary byte-pair encoding [30] on the training data. We run the model for 50k iterations and pick the best checkpoint by validation loss. For testing, Jayanthi et al. [12] report word-level accuracy and correction rate on a challenging dataset of 4,660 ambiguous mistakes and corrections from the BEA grammar correction benchmark [5], which they provide; we test on this dataset as well.
We apply Gio to select a subset of data from the training set. \(G\) is the training data we can select from. For the target state \(X\), Jayanthi et al. [12] provide 40k real spelling mistakes and corrections from the BEA grammar correction corpus. For the initial state \(D\), we start from an empty set. For the embedding model, we use MPNet, and we use K=1500 for quantization. As our stopping criterion, we experiment with a new scheme: we first allow the algorithm to run until the KL divergence increases, then reset \(G\) and allow the algorithm to pick again from the training data until the KL divergence increases again.
As before, we compare Gio to a random subset of the same size and the BM25 selection method.
**Results** Gio selects high-quality data. A good data selection method should select mostly from the high-quality data. Table 3 shows Gio selects 73% high-quality data, compared to 55% for the BM25 method. Gio's KL divergence is lower than that of BM25 and random, indicating KL divergence is also an indicator of data quality in this setup.
Gio outperforms the random, BM25, and full models. It outperforms BM25 on accuracy, correction rate, and overall F1, and it matches the random baseline on accuracy and outperforms it on correction rate and overall F1. It outperforms the full model by +0.2 pps in accuracy and +0.1 pps in correction rate, despite using only 24% of the data. We set a new state-of-the-art in correction rate and overall F1 score on BEA4660 over the best model reported by Jayanthi et al. [12] (+2.3 pps F1 and +7.2 pps correction rate).

| System | Train Size | Accuracy | Correction Rate | F1 | % High Quality | \(\hat{D}_{KL}\) |
| --- | --- | --- | --- | --- | --- | --- |
| Ours | 3.6M | **95.9** | **99.6** | **97.7** | **73%** | **224** |
| BM25 | 3.6M | 95.5 | 99.0 | 97.2 | 55% | 264 |
| Random | 3.6M | **95.9** | 99.4 | 97.6 | 50% | 284 |
| Full | 14.7M | 95.7 | 99.5 | 97.6 | 50% | 280 |

Table 3: Spelling correction results. Training data size, accuracy/correction rate/F1 scores, % of high-quality data, and KL divergence of the full data, the Gio-selected data, BM25-selected data, and a random subset. **Bold** is the best score overall. Gio selects 73% high-quality data, outperforms all other methods, and sets a new state-of-the-art on spelling correction.
### Image Recognition
For our fourth set of experiments, we seek to show that Gio works well in domains outside of NLP. We focus on the FashionMNIST [37] image recognition problem and show that we can use Gio to dramatically reduce train set sizes without big drops in performance.
**Data and Methods** The FashionMNIST task has 10 classes of 28x28x1 images. There are 60,000 images in the training set and 10,000 images in the test set. Our task is to select a subset of no more than 25% of the total data that best approximates the training data. We then finetune the Resnet50 model [9] with the chosen data to do FashionMNIST classification for 5 epochs with Adam [17] (LR=5e-5). We split the train set into train and validation sets, and pick the best checkpoint by validation loss. We report the accuracy of the chosen checkpoint on the test set.
We apply Gio to select a subset of data from the training set. We use the training set as both \(G\) and our target set \(X\). We start \(D\) from an empty set. We use the normalized, unit-norm vector form of the images themselves in the algorithm, and use K=1000 for quantization. As our stopping criterion, we run the algorithm until we have chosen 250 clusters (250 iterations), which is ~25% of the data.
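A sketch of this image-side representation follows, using torchvision's FashionMNIST loader; flattening each image to a 784-dimensional unit-norm vector is our reading of "normalized and normed", not a detail specified in the text.

```python
import torch
from torchvision import datasets, transforms

train = datasets.FashionMNIST(root="data", train=True, download=True,
                              transform=transforms.ToTensor())
imgs = torch.stack([img for img, _ in train])   # (60000, 1, 28, 28) in [0, 1]
vecs = imgs.view(len(train), -1)                # flatten to (60000, 784)
vecs = vecs / vecs.norm(dim=1, keepdim=True)    # unit-norm rows for GIO
```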
The typical method to use a smaller set of training data is a simple random subsample. Thus, we report results on a random sample of 25% of the data as a comparison.
**Results** Gio outperforms a simple random subset by +1.1%. Gio in this setup is optimized to pick the images which add the most information about the entire training set. Table 4 shows that training on Gio-selected data dropped performance by only 2.3% from the full model, compared to a drop of 3.4% for a random subset of the same size.
## 5 Conclusion
We presented Gio, a task- and domain-agnostic data selection method that works in any continuous space with few assumptions. Gio begins with the natural objective of minimizing KL divergence between a target set and selected set, and uses the gradient of KL divergence and an efficient implementation to find and select the data points that optimize that objective at scale. Gio selected high quality data, selected data that outperformed models trained on full data and on recent data selection techniques, and was able to effectively reduce the training set size under a given resource constraint without big drops in performance. Current models consume large quantities of data, and we hope Gio can help improve the performance of models while using less data and fewer resources. Additionally, with large quantities of synthetic and scraped data of variable quality available, we hope Gio can help home in on high quality data. For example, Gio can be used to select high quality synthetic data output by large language models for a particular task. Improvements and changes to the statistics and optimization in Gio and applications of Gio to varied domains and tasks are promising directions for future work.
| System | Size Train/Valid | Accuracy | \(\hat{D}_{KL}\) |
| --- | --- | --- | --- |
| Ours | 15,000/1,700 | **92.0%** | 759 |
| Random | 15,000/1,700 | 90.9% | 740 |
| Full | 56,300/3,700 | 94.3% | 7394** |

Table 4: Image recognition results. Training data sizes and accuracy of models trained on Gio-selected data and a random subset. **Bold** is the best score between ours and random. Gio gives the best performance under the reduction in training data size. The full model is provided for comparison |
2310.13781 | How Much Consistency Is Your Accuracy Worth? | Contrast set consistency is a robustness measurement that evaluates the rate
at which a model correctly responds to all instances in a bundle of minimally
different examples relying on the same knowledge. To draw additional insights,
we propose to complement consistency with relative consistency -- the
probability that an equally accurate model would surpass the consistency of the
proposed model, given a distribution over possible consistencies. Models with
100% relative consistency have reached a consistency peak for their accuracy.
We reflect on prior work that reports consistency in contrast sets and observe
that relative consistency can alter the assessment of a model's consistency
compared to another. We anticipate that our proposed measurement and insights
will influence future studies aiming to promote consistent behavior in models. | Jacob K. Johnson, Ana Marasović | 2023-10-20T19:28:06Z | http://arxiv.org/abs/2310.13781v1 | # How Much Consistency Is Your Accuracy Worth?
###### Abstract
Contrast set consistency is a robustness measurement that evaluates the rate at which a model correctly responds to all instances in a bundle of minimally different examples relying on the same knowledge. To draw additional insights, we propose to complement consistency with _relative consistency_ -- the probability that an equally accurate model would surpass the consistency of the proposed model, given a distribution over possible consistencies. Models with 100% relative consistency have reached a consistency peak for their accuracy. We reflect on prior work that reports consistency in contrast sets and observe that relative consistency can alter the assessment of a model's consistency compared to another. We anticipate that our proposed measurement and insights will influence future studies aiming to promote consistent behavior in models.
## 1 Introduction
Annotators introduce data shortcuts that allow models to solve tasks in unintended ways (Gururangan et al., 2018). In response, it has been proposed to measure whether a model correctly responds to a bundle (or a _contrast set_) of slightly modified instances that rely on the same knowledge (Gardner et al., 2020; Kaushik et al., 2020). The rate at which a model accomplishes this is termed _consistency_. We propose an additional measurement -- _relative consistency_ -- that facilitates discussion about achievable consistency scores, enabling a more nuanced comparison.
To demonstrate why this is desired, consider the situations illustrated in Table 1. Both 1a and 1b correctly solve two bundles, i.e., they have the same consistency. 1b solves three additional instances, but in a way that does not promote consistency; 1c shows that a higher consistency can be gained with the same accuracy. In contrast, although 1a is less accurate, everything it handled was done consistently, and higher consistency cannot be achieved with the same accuracy. This analysis sheds light on an upside of 1a and a limitation of 1b that might go unnoticed if we solely compare accuracy/consistency. Let us turn to example 1d. Although it represents a model with an improved consistency relative to 1a, we could have achieved better consistency for the same accuracy (see 1e).1
Footnote 1: Because this is a toy example, relative consistency is high, though not perfect, even in less-than-ideal cases 1b and 1d.
Relative consistency (§2) measures whether the consistency of our model would likely be outperformed by an equally accurate model, relative to the distribution of possible consistencies; see Eq. (5). Specifically, it is the probability that our model's consistency is (in most cases) higher than or equal to the consistency scores that are achievable with the same accuracy. If relative consistency is 100%, then our model is the most consistent it can be given its accuracy, as a more consistent, equally accurate model exists only with near-zero probability. In practice, the goal should be to increase the "standard consistency" while also achieving 100% relative consistency.

Table 1: Tables depict a dataset of 10 examples, where each column showcases a bundle of an original instance paired with its perturbed version. ◯ denotes that the instance is correctly predicted by a model. The relative consistency is the measurement we propose to complement the standard consistency.
In light of this additional consistency metric, in §4 we revisit the findings of three publications that report consistency as a metric for their evaluations and point out some additional conclusions we might draw from these reported consistencies. Our code is available at [https://github.com/jacobkj314/relative-consistency](https://github.com/jacobkj314/relative-consistency).
## 2 Relative Consistency
We first introduce background terminology (§2.1), then derive the elements we need for defining relative consistency: (i) achievable consistency scores for a given accuracy (§2.2) and (ii) a distribution over achievable consistency scores (§2.3).
### Background
A _contrast set_ or _bundle_ is a set of minimally different instances that might admit different answers, thus testing a model across/near its decision boundary.2 For example, these two HotpotQA instances (Yang et al., 2018) represent a contrast set:
Footnote 2: Sometimes “contrast set” is used to refer to contrastive instances only (without the original ones).
* Q: Is the Marsilea or the Brabejum the genus of **more** individual species of plants? A: Marsilea
* Q: Is the Marsilea or the Brabejum the genus of **less** individual species of plants? A: Brabejum
The model is required to answer both of them correctly to be considered consistent in that bundle. Evaluation with contrast sets makes it harder for simple and inadequate models to perform highly (e.g, a model that has just learned a spurious correlation between the word "Marsilea" and "more"). Related studies construct bundles of paraphrases that have the same, not contrastive, labels (Elazar et al., 2021).
The term _consistency_ is overloaded in NLP and refers to different concepts (Li et al., 2019; Jang et al., 2022; Wang et al., 2023). In this work, we study _contrast set consistency_ defined as the proportion of bundles where a model accurately labels every instance in a bundle:
\[\mathrm{consistency}=\frac{|B\in\mathcal{B}:\forall x\in B,y_{p}(x)=y(x)|}{| \mathcal{B}|}, \tag{1}\]
where \(\mathcal{B}\) is a set of all bundles of related instances in a given dataset, \(x\) is an example, \(y_{p}(x)\) is the predicted label for \(x\), and \(y(x)\) is its gold label.
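For illustration, Eq. (1) translates directly into code; the bundle data below are toy stand-ins echoing the HotpotQA example above.

```python
def contrast_consistency(bundles):
    """Eq. (1): fraction of bundles whose instances are all predicted correctly."""
    consistent = sum(all(pred == gold for pred, gold in bundle)
                     for bundle in bundles)
    return consistent / len(bundles)

bundles = [
    [("Marsilea", "Marsilea"), ("Brabejum", "Brabejum")],  # fully consistent
    [("Marsilea", "Marsilea"), ("Marsilea", "Brabejum")],  # one instance wrong
]
print(contrast_consistency(bundles))  # 0.5
```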
### Achievable Consistency Scores
Consider a contrastive test set formed from \(n\) original instances, plus a contrastive instance derived from each original instance by varying along some pertinent dimension. There are \(2n+1\) possible accuracies \(a\) that a model could achieve on this test set, namely \(A=\{0,1,\ldots,2n-1,2n\}\).3 Similarly, there are \(n+1\) possible consistencies \(c\) that a model could achieve, namely \(C=\{0,1,\ldots,n-1,n\}\).
Footnote 3: While accuracy is typically denoted as a proportion of correct instances, reporting absolute numbers simplifies our notation. It is easy to translate a quantity \(a\) to a corresponding proportion \(\alpha\) via the identity \(a=2n\alpha\), while a consistency quantity \(c\) relates to the consistency proportion \(\gamma\) via \(c=n\gamma\).
Furthermore, for a given accuracy \(a\in A\), only a subset \(C_{a}\subseteq C\) of consistencies is achievable. Trivially, for \(a=0\), \(C_{a}=\{0\}\) (because a model cannot consistently respond to a bundle without correctly responding to at least the instances within that bundle) and for \(a=2n\), \(C_{a}=\{n\}\) (because a model that correctly responds to all instances has also consistently responded to all the bundles those instances comprise). \(C_{a}\) can then be defined in terms of \(n\) and \(a\):
\[C_{a}=\{c\in C:c_{min}^{(a)}\leq c\leq c_{max}^{(a)}\} \tag{2}\]
where \(c_{min}^{(a)}\) and \(c_{max}^{(a)}\) are defined as:
\[c_{min}^{(a)}=\begin{cases}0&\text{if }a\leq n\\ a-n&\text{if }a>n\end{cases} \tag{3}\] \[c_{max}^{(a)}=\left\lfloor\frac{a}{2}\right\rfloor \tag{4}\]
Intuitively, if \(a\leq n\) then it is possible that all bundles have one of their constituent instances incorrectly answered, in which case, \(c_{min}^{(a)}=0\). However, if \(a>n\), then at least \(a-n>0\) of bundles must be fully correctly answered. Indeed, for a bundle to be inconsistent at least one item
must be incorrectly answered, so for a given \(a\), the number of incorrect items is \(2n-a\). Thus, at most \(2n-a\) bundles can be inconsistent, and \(c_{min}^{(a)}=n-(2n-a)=n-2n+a=a-n\).
The definition of \(c_{max}^{(a)}\) follows from the observation that a maximally consistent model will consistently respond to the maximum number of bundles for which it is possible that both instances are correctly answered, and that equals \(\left\lfloor\frac{a}{2}\right\rfloor\).
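These bounds translate directly into code; the sketch below assumes two-instance bundles, matching the setting of this section.

```python
def c_min(a, n):
    # Eq. (3): if a <= n, every bundle may be only half-right;
    # beyond n correct instances, at least a - n bundles are fully correct.
    return max(0, a - n)

def c_max(a, n):
    # Eq. (4): at most floor(a / 2) bundles can have both instances correct.
    return a // 2

assert c_min(130, 100) == 30 and c_max(130, 100) == 65
```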
### Distribution of Achievable Consistencies
Given an accuracy \(a\), we construct a distribution of achievable consistencies \(c\in C_{a}\) with:
\[\mathbb{P}(c|a)=\frac{m(c,a)}{M(a)} \tag{5}\]
where \(M(a)\) is the number of ways a model can achieve accuracy \(a\) and is given by:
\[M(a)=\binom{2n}{a} \tag{6}\]
because there are \(2n\) total instances, of which any \(a\) might be the ones to which a model correctly responds.4 The quantity \(m(c,a)\) represents the number of ways a model can achieve accuracy \(a\) and consistency \(c\), and is given by:
Footnote 4: It is possible to consider consistency to be the more underlying property of a model’s behavior and compute a distribution over possible accuracies in the range \([2c,2n-n+c]\). The corresponding accuracy by consistency distributions could then be computed given the above-defined consistency by accuracy distributions.
\[m(c,a)=\binom{n}{c}\binom{n-c}{a-2c}2^{a-2c} \tag{7}\]
where:
* \(\binom{n}{c}\) corresponds to the number of ways that \(c\) consistent bundles can be selected from \(n\),
* \(\binom{n-c}{a-2c}\) corresponds to the number of ways the remaining \(a-2c\) accurate instances can be distributed across the remaining \(n-c\) bundles, giving each selected bundle only one correct instance (to avoid creating an additional consistent bundle),
* \(2^{a-2c}\) represents the number of ways that these partially correct bundles could have either instance correct.
Using this, we can calculate \(m(c,a)\) and \(M(a)\) across all values of \(c\) and \(a\) for reasonable sizes of \(n\). These distributions can be extended for bundle sizes above 2; see the formulas in Appendix B. Figure 1 (left) shows the distributions of consistency scores for a dataset with 100 bundles of 2 instances.
Note that this distribution is not uniform for different consistencies at a given accuracy. There will be some consistencies that have more ways to be achieved for a given accuracy. This is why the formula \(m(c,a)\) is crucial to the computation of relative consistency that comes next.
This formulation assumes that all instances are equally difficult which is known to not be the case in practice (Swayamdipta et al., 2020). It also disregards any inductive biases of models/datasets that could skew the distribution.
**Relative Consistency** We measure the tendency to be consistent exhibited by a model that achieved accuracy \(a\) and consistency \(c\) on a contrastive set by computing the cumulative probability distribution over achievable consistencies in \(C_{a}\) up to \(c\):

\[\mathrm{RC}(c,a)=\sum_{\begin{subarray}{c}c_{i}\in C_{a}\\ c_{i}\leq c\end{subarray}}\mathbb{P}(c_{i}|a) \tag{8}\]

Figure 1: On the left is a heatmap of the distributions of consistency at each accuracy for 100 bundles of 2 instances: each vertical slice corresponds to a separate distribution over consistencies. Fig. 2 (Appendix) shows the \(\log_{10}\) of this plot, which better highlights the long tails of these distributions. On the right are relative consistency scores given a model's accuracy and consistency, i.e., the CDF of the figure on the left. Note that for a different number of bundles, these plots would look slightly different.
Intuitively, \(\mathrm{RC}(c,a)\) indicates how likely the model's consistency is to outperform an equally accurate model relative to the distribution of achievable consistencies defined in (5). This allows us to quantify whether model consistency is below, at, or above chance, given its accuracy. In a good case, \(\mathrm{RC}\) is high, meaning that it is unlikely that an equally accurate model will have higher consistency. Alternatively, if \(\mathrm{RC}\) is low, then it is likely that an equally accurate model will have higher consistency (which is unwanted).
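Putting Eqs. (5)-(8) together, the sketch below computes \(\mathrm{RC}(c,a)\) with exact integer arithmetic for two-instance bundles; the printed checks use the example models of §3, for which the text reports roughly 93.0% and 37.1%.

```python
from math import comb

def m(c, a, n):
    """Eq. (7): ways to achieve accuracy a with exactly c consistent bundles."""
    if c < max(0, a - n) or c > min(n, a // 2):
        return 0                          # outside the achievable range C_a
    return comb(n, c) * comb(n - c, a - 2 * c) * 2 ** (a - 2 * c)

def relative_consistency(c, a, n):
    """Eq. (8): cumulative probability of consistencies up to c, given a."""
    M = comb(2 * n, a)                    # Eq. (6): ways to achieve accuracy a
    return sum(m(ci, a, n) for ci in range(c + 1)) / M

n = 100                                    # 100 bundles of 2 instances
print(relative_consistency(45, 130, n))    # model M1 in Section 3: ~0.930
print(relative_consistency(55, 150, n))    # model M2 in Section 3: ~0.371
```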
Although other measurements which contextualize consistency scores within a particular accuracy can be constructed -- such as simply scaling the consistency between \(c_{min}^{(a)}\) and \(c_{max}^{(a)}\), or reporting the fraction of fully consistent bundles among those that are at least partly correct -- these approaches lack the probabilistic interpretation underlying \(\mathrm{RC}\). §3-4 highlight circumstances in which this probabilistic interpretation is useful, and Appendix C compares the score distributions obtained via these measurements to the score distributions obtained via \(\mathrm{RC}\).
## 3 Analysis with Simulated Contrastive Set
Suppose you evaluate a model on a contrastive test set containing 100 bundles of 2 instances. The distribution of consistencies for this dataset is shown in Figure 1 (left), with the CDF of that distribution (corresponding to the \(\mathrm{RC}\) score) in Figure 1 (right).
Note that the highest-density region of the distribution moves upward as accuracy increases, and takes up only a very thin band. This means that, for a given accuracy, there is generally little room for improvement in consistency. This can be useful when discussing results: if a particular training approach yields a 5% improvement in consistency for an equally accurate model, that represents a substantial change in how the model tends to respond to inputs.
It can still happen that improving accuracy and consistency decreases relative consistency. As an example, consider comparing a model \(M_{1}\), which achieves \(a=130,c=45\) (\(65\%\) accuracy, \(45\%\) consistency) against a model \(M_{2}\) with \(a=150,c=55\) (\(75\%\) accuracy, \(55\%\) consistency). Clearly, model \(M_{2}\) is more desirable for practical uses, if we are just comparing one model to another, but if we are comparing two different training approaches, and want to know which induces a stronger tendency for consistent responses, then we would be interested to know that \(M_{1}\) has \(\mathrm{RC}=93.0\%\), while \(M_{2}\) has \(\mathrm{RC}=37.1\%\). This insight, that one model is below chance consistency, while another is well above, is made possible by the probabilistic interpretation of \(\mathrm{RC}\).
## 4 Meta-Analysis of Prior Work
In this section, we discuss results reported by prior works that conduct evaluation with contrast sets under the light of relative consistency.
### Gardner et al. (2020)
They construct contrast sets for several common test sets by modifying a sample of the test set instances. They train a biaffine parser (Dozat and Manning, 2017) with ELMo embeddings (Peters et al., 2018) for UD parsing (Zeldes, 2017; Silveira et al., 2014; Basili et al., 2015; Ahrenberg, 2007), and RoBERTa (Liu et al., 2019) for the reading comprehension tasks ROPES (Lin et al., 2019) and MC-TACO (Zhou et al., 2019) and the stance prediction task PERSPECTRUM (Chen et al., 2019). Table 2 shows the accuracy and consistency of these models for four of their contrast sets.5 In the rightmost column, we report the relative consistency scores that we introduce.
Footnote 5: We exclude contrast sets that do not have the bundle size of 2. They report the accuracy of the original instances and contrastive instances separately, so to obtain the accuracy in the contrast set (that we need to calculate \(\mathrm{RC}\)) we average those. In doing so, we assume that the accuracy of the full original test set is similar to the accuracy of the sample of original test set instances.
| Dataset | #Bundles | Acc | Cons | RC |
| --- | --- | --- | --- | --- |
| UD Parsing | 150 | 55.3 | 17.3 | \(\sim\)0.0 |
| PERSPECTRUM | 217 | 88.0 | 78.8 | 97.6 |
| ROPES | 974 | 40.1 | 17.6 | 97.8 |
| MC-TACO | 646 | 26.0 | 8.0 | 95.2 |

Table 2: Relative consistency scores computed for results reported in Gardner et al. (2020). In the 3rd column, we report the average of the "Original Test" (original only) and "Contrast" (contrastive only) columns in their Table 2. That is the accuracy, \(a\), we use in the calculations in §2. Models with similar consistency (UD Parsing and ROPES) have different tendencies to respond consistently, as revealed by their \(\mathrm{RC}\) scores.

**Analysis** We observe that the UD parsing and ROPES models have a similar consistency score (17.3 and 17.6). However, the UD parsing model's consistency has a near-zero chance to outperform an equally accurate model. On the other hand, the ROPES model is quite likely to do so.
Additionally, relative consistency shows that models with low consistency could nonetheless have a large tendency to respond to bundles consistently.6 We see this with the results for MC-TACO, which, despite only achieving 8.0% consistency, is more consistent than an equally accurate model in 95.2% of cases. Intuitively, this means that the above chance model has at least generalized well within the few cases to which it correctly responds.
Footnote 6: Note that high relative consistency does not guarantee that such a model will continue to respond to bundles consistently with improved accuracy.
### Dua et al. (2021)
They investigate whether training approaches that consider a full bundle of related instances together, instead of their constituent instances separately, improve consistency. Table 3 shows their reported results obtained with T5 (Raffel et al., 2020) and the relative consistency scores we compute from their results, on the contrastive version of ROPES -- a reading comprehension dataset for evaluating a model's ability to reason about "effects of the relationships in the background passage in the context of the situation".
**Analysis** We observe that the baseline model trained with maximum likelihood estimation (MLE) is already at ceiling performance in terms of its tendency to produce consistent responses (i.e., its \(\mathrm{RC}\) scores). Combining contrastive estimation (CE; Smith and Eisner, 2005), or unlikelihood training (UL; Welleck et al., 2020), with MLE not only improves the accuracy and consistency but also does so in a way that does not lower the relative consistency, which is desired. This emphasizes the effectiveness of these objectives.
### Ravichander et al. (2022)
They introduce CondaQA, a contrastive dataset for studying reading comprehension models' effectiveness in reasoning about the implications of negation expressed in a given text. Each CondaQA instance comes with three minimally varied versions: one paraphrases the negation, another modifies what is negated (scope), and the last removes the negation. Ravichander et al. (2022) use UnifiedQA-v2 (Khashabi et al., 2022) as a backbone model. We explore the factors that might influence the consistency of the large and 3B versions of this model:
* The training objective: MLE, CE, or combined \(\lambda_{1}\)MLE+\(\lambda_{2}\)CE.
* The choice of hyperparameters \(\lambda_{1}\) and \(\lambda_{2}\) (with UnifiedQA-large).
Table 4 shows the accuracy, consistency, and relative consistency we obtain for bundles where the original instance is paired with (i) its _scope_-edited version and (ii) its _affirmative_ version (without negation). In Table 5 (Appendix), we also include the results with paraphrase edits.
**Analysis** An increase in consistency does not necessarily indicate a heightened tendency to consistently respond to bundles (unless the accuracy stays the same). Compare CE with 1MLE+1CE (double underlined, in the upper part of the table). In this case, by training with MLE and CE, affirmative consistency has gone up slightly; however, the model's chance of outperforming an equally accurate model dropped from 26% to 19%. This is an example of a suboptimal way of improving consistency, and MLE+CE is not necessarily superior to the standalone CE in this case. A similar, but less pronounced, situation occurs when comparing MLE against .33MLE+1CE for scope consistency in the bottom part of the table (italicized).
Conversely, even if standard consistency has not improved, a model's tendency to consistently respond to bundles may have. For example, compare MLE with 1MLE+1CE for scope consistency in the upper part of the table (wavy underlined). In this case, scope accuracy lowered slightly but absolute scope consistency remained the same, leading to a large improvement in \(\mathrm{Scope}\)-\(\mathrm{RC}\). This may suggest that the additional CE loss resulted in the model unlearning a few individual instances without unlearning any complete bundles it had learned. Similarly, 0.33MLE+1CE scope consistency in the upper part of the table (underlined once) increased slightly, but the scope relative consistency increased notably. If we compared only consistency, we would conclude that the choice of hyperparameters \(\lambda_{1},\lambda_{2}\) is not vital, whereas they can actually affect model consistency behavior, as shown by relative consistency.

| Loss | Accuracy | Consistency | RC |
| --- | --- | --- | --- |
| MLE | 65.7 | 52.1 | 100.0 |
| +UL | 68.3 | 55.6 | 100.0 |
| +CE | 76.6 | 64.7 | 100.0 |

Table 3: A comparison of relative consistency scores computed from results reported in Dua et al. (2021) (in the "Dev EM" and "Dev C" columns of their Table 3). The number of bundles is 844. The unlikelihood (UL) and contrastive estimation (CE) objectives improved the accuracy and consistency over MLE, _without decreasing relative consistency_. This is how consistency should be improved in this case.
## 5 Conclusion
We introduce relative consistency, which complements standard contrast consistency by allowing an accuracy and consistency score pair to be examined to determine whether a higher consistency was possible with that accuracy. This facilitates the comparison of consistencies achieved by models that achieved different levels of accuracy. We show that relative consistency enriches conclusions we make about whether a model is more consistent than another, and occasionally even leads us to different takeaways.
## 6 Limitations
This mathematical model is based on a simplified version of contrastive datasets. Contrastive datasets may have more than two edits for each original instance, which will result in a different distribution. Although we provide formulas for distributions of arbitrary bundle size in Appendix B, these distributions are less intuitive and more expensive to compute, and they additionally have the drawback that, if a model achieves high pairwise \(\mathrm{RC}\) on two of the elements of the bundle, it is likely to achieve high bundle \(\mathrm{RC}\), even if the other elements of the test set do not achieve high pairwise \(\mathrm{RC}\). In general, we recommend formulating questions of consistency in terms of bundles with one instance exhibiting a feature and the other instance lacking that feature. Moreover, contrastive datasets may include extra data that is not contrastive; e.g., CondaQA has a small number of bundles with a single instance because the other instances in the bundle were filtered out for failing quality checks.
In §2.3, we state the drawbacks of the distribution (5). Namely, we do not consider that the distribution might be skewed due to varying example difficulty and other inherent properties of datasets and models.
## 7 Acknowledgements
We thank anonymous reviewers for their thoughtful and constructive comments, members of the UtahNLP group for helpful feedback, and Petar Bakic for proofreading our formulas.
|
2308.09593 | Investigation of Architectures and Receptive Fields for Appearance-based
Gaze Estimation | With the rapid development of deep learning technology in the past decade,
appearance-based gaze estimation has attracted great attention from both
computer vision and human-computer interaction research communities.
Fascinating methods were proposed with variant mechanisms including soft
attention, hard attention, two-eye asymmetry, feature disentanglement, rotation
consistency, and contrastive learning. Most of these methods take the
single-face or multi-region as input, yet the basic architecture of gaze
estimation has not been fully explored. In this paper, we reveal the fact that
tuning a few simple parameters of a ResNet architecture can outperform most of
the existing state-of-the-art methods for the gaze estimation task on three
popular datasets. With our extensive experiments, we conclude that the stride
number, input image resolution, and multi-region architecture are critical for
the gaze estimation performance while their effectiveness depends on the
quality of the input face image. We obtain the state-of-the-art performances on
three datasets with 3.64 on ETH-XGaze, 4.50 on MPIIFaceGaze, and 9.13 on
Gaze360 degrees gaze estimation error by taking ResNet-50 as the backbone. | Yunhan Wang, Xiangwei Shi, Shalini De Mello, Hyung Jin Chang, Xucong Zhang | 2023-08-18T14:41:51Z | http://arxiv.org/abs/2308.09593v1 | # Investigation of Architectures and Receptive Fields for Appearance-based Gaze Estimation
###### Abstract
With the rapid development of deep learning technology in the past decade, appearance-based gaze estimation has attracted great attention from both the computer vision and human-computer interaction research communities. Fascinating methods have been proposed with various mechanisms, including soft attention, hard attention, two-eye asymmetry, feature disentanglement, rotation consistency, and contrastive learning. Most of these methods take a single-face or multi-region input, yet the basic architecture of gaze estimation has not been fully explored. In this paper, we reveal that tuning a few simple parameters of a ResNet architecture can outperform most of the existing state-of-the-art methods for the gaze estimation task on three popular datasets. With our extensive experiments, we conclude that the stride number, input image resolution, and multi-region architecture are critical for gaze estimation performance, while their effectiveness depends on the quality of the input face image. We obtain state-of-the-art performance on three datasets, with gaze estimation errors of 3.64 degrees on ETH-XGaze, 4.50 degrees on MPIIFaceGaze, and 9.13 degrees on Gaze360, taking ResNet-50 as the backbone.
## 1 Introduction
Eye gaze can serve as a cue to model a person's cognitive process and analyze human visual attention [44]. Various eye-tracking applications have been proposed, such as diagnostic interpretation [3], human-computer interaction [35, 45], visual marketing [40], and augmented and virtual reality [1, 32]. Appearance-based gaze estimation alleviates the requirement of accurate 3D model fitting by directly regressing from the input eye/face image to the gaze target [36]. With the recent development of deep learning methods, appearance-based gaze estimation has attracted large attention in the computer vision community [11, 15].
Early works usually take a single-eye image [14, 27, 34, 48, 50] or a combination of the two eyes [19, 20] as the input. Later methods demonstrate that the multi-region approach, i.e., two eye patches plus a single face region, is more effective than eye-region-only methods [25]. Taking a single-face image can further improve gaze estimation performance, and enlarging the input image resolution plays an important role [49]. Various methods have been proposed for gaze estimation, such as generative models [37], pictorial representations [30], unsupervised learning [42], hard attention [47], two-eye asymmetry [12], coarse-to-fine estimation [8], weak supervision [23], rotation consistency [2], self-adversarial training [7], and vision transformers [9]. Some of the previous works imply that the input image resolution could be critical for the final gaze estimation performance [4, 43, 49], and whether to take the eye, face, or multi-region as the input is still a mystery. It is not clear which method we should choose when dealing with real-world settings given the specific devices, applications, and environments. In addition, while these deep learning approaches have outperformed traditional methods across datasets, there still exists a considerable gap towards a "perfect gaze estimator": the general gaze estimation error is around four to five degrees on the high-resolution, controlled-laboratory ETH-XGaze dataset [43], four degrees on the real-world laptop-setting MPIIFaceGaze dataset [49], two centimeters on the cellphone-screen GazeCapture [25], and ten degrees on the challenging outdoor-setting Gaze360 [21].
In this paper, we improve the gaze estimation performance on multiple datasets by examining the basic strategy of taking high and low-resolution images as input, changing the stride of the first convolutional layer, taking a single-face image as input, and taking multi-region as input. Taking single-face or multi-region as input is a popular choice for the current gaze estimation methods, yet there is no conclusion about which one we should pick. The input image resolution and stride
both affect the receptive field of the neural networks and, thus, have an impact on the final gaze estimation performance.
Our main findings are:
* Decreasing the stride of a CNN's first convolutional layer effectively improves performance on high-resolution datasets.
* Increasing input image resolution effectively improves performance on high-resolution datasets.
* Multi-region architecture (left eye, right eye, and face images each with a CNN backbone) performs well on high-resolution datasets, while not on low-resolution datasets.
## 2 Related work
There are two main categories of gaze estimation methods: model-based and appearance-based [17]. Model-based methods employ geometric eye models and detect geometric features to estimate gaze [6, 33, 51]. However, the accuracy of model-based methods can degrade in in-the-wild settings [48], and these methods sometimes require a time-consuming process of collecting subject-specific parameters, such as cornea radii, cornea center, and kappa angles [15].
Appearance-based gaze estimation attempts to regress gaze direction from eye or face images. Most of the recent appearance-based approaches adopt a CNN-based architecture. Zhang _et al_. proposed the in-the-wild MPIIGaze dataset and demonstrated the exceptional performance of CNN-based models in this setting [48]. Krafka _et al_. proposed the large-scale GazeCapture dataset and the multi-region multi-branch iTracker framework that takes left eye, right eye, face, and face grid location as input to separate branches of a CNN to estimate gaze [25]. Zhang _et al_. implemented a model that takes only full-face images as input [49]. Park _et al_. proposed a pictorial gaze estimation model that first regresses the image to an intermediate gazemap and then estimates gaze from that [31]. Kellnhofer _et al_. provided an LSTM-based model with pinball loss and the Gaze360 dataset in unconstrained indoor and outdoor environments with a wide range of head poses [21]. Zhang _et al_. developed the large-scale high-resolution ETH-XGaze dataset in a constrained environment with extreme head poses and gaze variations [43]. Furthermore, various gaze estimation approaches have been proposed, such as few-shot learning [29], Bayesian learning [38], unsupervised learning [42], weakly-supervised learning [24], and contrastive learning [39]. Recent research has demonstrated that methods based on Vision Transformers [5, 13, 26] achieve exceptional performance compared to previous CNN-based methods. Cheng _et al_. proposed the first transformer-based architecture for gaze estimation [10].
## 3 Background
The gaze estimation task is strongly tied to the eye region, since eyeball rotation is the only factor that determines the gaze direction. Unfortunately, accurately estimating eyeball rotation is difficult, so it becomes crucial to include the rest of the face region for appearance-based gaze estimation methods. However, the balance between the eye region and the rest of the face region has not been fully explored before.
In this section, we describe the strategy of stride and resolution selection for gaze estimation and a multi-region multi-branch architecture.
### Receptive field
In deep convolutional neural networks (CNNs), a basic concept is the receptive field, also known as the field of view, which refers to the region of the input that influences a unit in a specific layer of the network. Unlike fully connected networks, where each unit's value depends on the entire input, a unit in convolutional networks is influenced by a localized region of the input corresponding to the unit's receptive field [28]. In various tasks, particularly in dense prediction tasks such as semantic image segmentation, stereo, and optical flow estimation, where predictions are made for individual pixels in the input image, it is crucial to ensure that each output pixel has a substantial receptive field. This ensures that no vital information is overlooked during the prediction process.
In general, there are multiple ways to change the receptive field inside CNNs. We take two ways of them, stride and input image size, for gaze estimation.
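To make the receptive-field arithmetic concrete, the following minimal sketch (not from the paper; the recurrence itself is standard) computes the receptive field size and the cumulative stride ("jump") of a stack of convolution/pooling layers:

```python
def receptive_field(layers):
    """Receptive field size (in input pixels) and cumulative stride ("jump")
    of the last feature map, for a stack of (kernel_size, stride) layers.

    Uses the standard recurrence: r <- r + (k - 1) * j, then j <- j * s.
    """
    r, j = 1, 1
    for k, s in layers:
        r += (k - 1) * j
        j *= s
    return r, j

# ResNet-50 stem (7x7 conv + 3x3 max-pool) with the default stride 2
# versus stride 1 in the first convolution:
print(receptive_field([(7, 2), (3, 2)]))  # (11, 4)
print(receptive_field([(7, 1), (3, 2)]))  # (9, 2): denser feature maps
```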
### Stride
As a parameter of the neural network's filter, the stride determines the amount of shifting applied to the input image or feature map. Thus, a smaller stride results in a larger receptive field. By changing the stride of the first convolutional layer of a gaze estimation network, we enlarge the receptive field for the units in subsequent layers, up to the units of the bottom layer. Furthermore, we investigate changing the stride of the sliding patch of a transformer-like model, PoolFormer [41], where the self-attention layers in the transformer are replaced with pooling layers.
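As an illustration of the kind of change involved (the paper does not publish code here, so this is only a hypothetical sketch using torchvision), the stride of the first convolutional layer of a ResNet-50 can be reduced as follows:

```python
import torch.nn as nn
import torchvision

# Gaze estimation regresses two angles (pitch, yaw), hence num_classes=2.
model = torchvision.models.resnet50(num_classes=2)
assert isinstance(model.conv1, nn.Conv2d)
# The 7x7 stem convolution uses stride 2 by default; switching it to
# stride 1 doubles the spatial resolution of all subsequent feature maps.
model.conv1.stride = (1, 1)
```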
### Input image resolution
Enlarging the input image resolution can consequently increase the receptive fields of CNNs. A simple way to enlarge the input image is interpolation of the existing pixel values, which does not add any information to the input image. Instead, we enlarge the input image during the data normalization process [46] in the image warping step. The total amount of information contained in the input image is limited by the raw image resolution, i.e., there is no benefit to enlarging the input image beyond the size of the face in the original raw image. Enlarging the input image brings heavy computation costs due to more convolution operations and longer feature vectors before the output layer. In this study, we only experiment with two image resolutions, \(224\times 224\) and \(448\times 448\) pixels.
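A rough sketch of how the resolution change might enter the warping step (the paper's normalization code is not shown here, so the warp matrix, the 224-pixel base resolution, and the function shape are illustrative assumptions):

```python
import cv2
import numpy as np

def normalized_face(raw_image, warp_mat, out_size=448):
    """Resample the raw image directly to the target resolution.

    warp_mat is assumed to be the 3x3 perspective matrix from data
    normalization that maps the raw image to a 224x224 normalized face
    patch; scaling it resamples the *raw* pixels onto a denser grid
    instead of interpolating an already-warped 224x224 patch.
    """
    s = out_size / 224.0
    scale = np.diag([s, s, 1.0])
    return cv2.warpPerspective(raw_image, scale @ warp_mat, (out_size, out_size))
```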
### Gaze estimation model
Theoretically, eyeball rotation is the only factor to determine the gaze direction and the eye region is the only required input for the gaze estimation model. Nonetheless, taking the full face instead of the eye as input improves the gaze estimation performance empirically [49]. Since the eye region should be the most critical part of the gaze estimation task compared to the rest of the face, the eye region should have a larger receptive field than the rest of the face. The multi-region method reflects such an intuition by cropping and enlarging the two eye regions as parallel input for the gaze estimator [25].
In this paper, we explore the potential of full-face and multi-region gaze estimation. For the multi-region method, the model delegates a network branch to each of the left eye, right eye, and full face, and adopts a fully connected layer to regress the outputs of the three branches to the gaze direction. ResNet-50 serves as the backbone model for each branch. This model follows a similar design to iTracker [25], yet it does not share model weights between the eye regions and excludes the branch for the face grid location. We further study the necessity of using different backbones for different regions.
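A minimal PyTorch sketch of this three-branch design follows; the 2048-dimensional ResNet-50 feature and the single linear regression head are assumptions consistent with the description above, not the authors' released code:

```python
import torch
import torch.nn as nn
import torchvision

class MultiRegionGazeNet(nn.Module):
    """Three branches (face, left eye, right eye), each a ResNet-50; the
    pooled 2048-d features are concatenated and regressed to a 2-d gaze."""
    def __init__(self, share_eye_nets=False):
        super().__init__()
        def backbone():
            net = torchvision.models.resnet50()
            net.fc = nn.Identity()  # expose the 2048-d pooled feature
            return net
        self.face_net = backbone()
        self.left_eye_net = backbone()
        self.right_eye_net = self.left_eye_net if share_eye_nets else backbone()
        self.head = nn.Linear(3 * 2048, 2)  # regress to (pitch, yaw)

    def forward(self, face, left_eye, right_eye):
        feats = torch.cat([self.face_net(face),
                           self.left_eye_net(left_eye),
                           self.right_eye_net(right_eye)], dim=1)
        return self.head(feats)
```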
## 4 Experiments
We experimented with manipulating strides, input resolutions, and model architectures across three datasets, ETH-XGaze [43], MPIIFaceGaze [49], and Gaze360 [21]. We also tested using self-attention in a transformer-like model and variant architectures designed for multi-region CNN.
### Settings
We applied the data normalization method introduced in [46] to cancel out the geometric variability caused by various head poses and distances to the camera, by converting input images and gaze ground truth to a normalized space. For training CNN-based models, we implemented ResNet-50 [18] as the CNN backbone. We use the Adam optimizer [22] with the initial learning rate set to 0.0001. For different model architectures, the batch sizes were set according to the limitation of GPU memory. The model was trained for 30 epochs and we divide the learning rate by 10 every 10 epochs. We pick the results of the 30th epoch for ETH-XGaze and the 25th epoch for MPIIFaceGaze, following the settings of previous papers. The results on Gaze360 are selected based on validation performance. We did not observe significant performance differences between the 30th and 25th epochs. For training transformer-based models, we adopted the ideas introduced in [10] to train the model for 50 epochs with an initial learning rate of 0.0005. The learning rate decay factor is set to 0.5 for every 10 epochs. We implemented a gradual learning rate warm-up procedure [16] in the first three epochs. The batch size was set to 100; if this did not fit the model on an NVIDIA A40 GPU, we set it to the largest number that could.
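The CNN optimization schedule described above corresponds to the following sketch (`train_one_epoch` is a placeholder for the actual training loop, which is not specified in the paper):

```python
import torch
import torchvision

def train_one_epoch(model, optimizer):
    ...  # placeholder: iterate over batches, compute loss, step the optimizer

model = torchvision.models.resnet50(num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Divide the learning rate by 10 every 10 epochs, for 30 epochs in total.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
for epoch in range(30):
    train_one_epoch(model, optimizer)
    scheduler.step()
```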
### Datasets
**ETH-XGaze**[43] contains 1.1 million images and the raw image resolution is \(6000\times 4000\). The images were collected in the laboratory and with extreme head pose, gaze variations, and 16 illumination conditions. For person-independent gaze estimation, 80 subjects are set for training and 15 for testing. We obtained the face and eye input images using the normalization method [46] on raw images.
**MPIIFaceGaze**[49] contains 214 thousand images and the raw image resolution is \(1280\times 720\). The images were collected in the wild under daily-life illumination. MPIIFaceGaze contains 15 subjects and the typical evaluation procedure is to conduct a cross-subject 15-fold evaluation. For the single-region training, the original input image size is \(448\times 448\) and we downsampled the images to \(224\times 224\) for later usage. For the multi-region training, we use the normalization method [46] to obtain the face and two-eye images. Note that the coordinate system of gaze labels after normalization is not aligned with the one for single-region training, but it is aligned with the one from the ETH-XGaze dataset.
**Gaze360**[21] contains 172 thousand images and the raw head image resolution spans from around \(100\times 100\) to \(500\times 500\). Gaze360 is a diverse dataset containing 238 subjects in unconstrained indoor and outdoor environments with a wide range of head poses. We performed the aforementioned data normalization method [46] on Gaze360 and obtained face and eye input images. The dataset is split into train, validation, test, and unused subsets.
### Results
To gain a deeper understanding of the effect of our design choices, we assess the performance of our approach across diverse settings and configurations.
#### 4.3.1 Stride
The default stride parameter of the first convolutional layer in the ResNet series is set to two to reduce the spatial size of the feature maps. However, this may not be optimal for regression tasks such as gaze estimation. As shown in the first and second rows of Tab. 1, by changing the stride from two to one for the input image resolution of \(224\times 224\) pixels, an \(11.1\%\) performance improvement (\(4.50^{\circ}\to 4.00^{\circ}\)) is achieved on ETH-XGaze and a \(4.2\%\) performance improvement (\(4.71^{\circ}\to 4.51^{\circ}\)) is achieved on MPIIFaceGaze. A similar trend can be observed in the \(448\times 448\) pixel setting on ETH-XGaze. However, there is no significant performance change on Gaze360 with either input image resolution, nor on MPIIFaceGaze with the \(448\times 448\) pixel input. We conclude that the performance improvements on these three datasets are aligned with the raw image resolutions, _i.e._, the higher the raw image resolution, the larger the performance improvement from decreasing the stride.
To validate the effect of stride on gaze estimation with a different architecture, we also conducted experiments with PoolFormer-24 [41], which collects input patches with different strides. Note that we only changed the stride in the first stage of the PoolFormer architecture, which slides image patches for the patch embedding. As shown in Tab. 2, PoolFormer yields a gaze error of \(4.56^{\circ}\) with the default stride of four, and decreasing the stride to one yields a drastic improvement of \(19.5\%\) (\(4.56^{\circ}\to 3.67^{\circ}\)). This shows that the stride is critical for the gaze estimation task across different architectures. The main reason could be that the model needs to extract fine-level features around the eye region. Since vision transformers are famous for their self-attention modules, we replaced the pooling layers with self-attention layers in the top two out of four stages of PoolFormer-24 to better capture global information at early stages. However, we observe an increase in gaze error in the setting with the self-attention-based module. It could be that the amount of training data is insufficient to train the self-attention modules [13].
#### 4.3.2 Input image resolution
Besides the stride, the input image resolution can also change the size of the receptive field.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline
**Input Image** & **Methods/Datasets** & **ETH-XGaze** & **MPIIFaceGaze** & **Gaze360** \\ \hline
224\(\times\)224 & Res50 (stride 2) & \(4.50^{\circ}\) & \(4.71^{\circ}\) & \(9.21^{\circ}\) \\ & Res50 (stride 1) & \(4.00^{\circ}\) & \(4.51^{\circ}\) & \(9.19^{\circ}\) \\ \hline \hline
448\(\times\)448 & Res50 (stride 2) & \(3.95^{\circ}\) & \(4.53^{\circ}\) & \(\mathbf{9.13^{\circ}}\) \\ & Res50 (stride 1) & \(\mathbf{3.76^{\circ}}\) & \(\mathbf{4.50^{\circ}}\) & \(9.56^{\circ}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of methods on datasets in \(224\times 224\) and \(448\times 448\) pixels resolutions. ETH-XGaze has the highest raw image resolution, followed by MPIIFaceGaze then Gaze360. As we change the stride from two to one, the gaze error improves by a greater scale on the dataset with higher raw image resolution.
\begin{table}
\begin{tabular}{c|c} \hline \hline
**Model/Dataset** & ETH-XGaze \\ \hline PoolFormer-24 (stride 4, with self-attention) & \(4.73^{\circ}\) \\ \hline PoolFormer-24 (stride 4) & \(4.56^{\circ}\) \\ \hline PoolFormer-24 (stride 2) & \(3.98^{\circ}\) \\ \hline PoolFormer-24 (stride 1) & \(\mathbf{3.67^{\circ}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of PoolFormer [41] with or without self-attention and with different strides on ETH-XGaze of \(224\times 224\) pixels input image resolution.
We examined ResNet-50 with two different image resolutions, \(224\times 224\) and \(448\times 448\) pixels. To achieve different image resolutions, we change the parameters during the data normalization procedure. As seen in the first and third rows of Tab. 1, by increasing the input resolution (\(224\times 224\to 448\times 448\)), ResNet-50 with a stride of two in the first convolutional layer yields improvements of \(12.2\%\) (\(4.50^{\circ}\to 3.95^{\circ}\)) on ETH-XGaze and \(3.8\%\) (\(4.71^{\circ}\to 4.53^{\circ}\)) on MPIIFaceGaze, respectively. This shows that increasing the image resolution can improve gaze estimation performance, depending on the raw image resolution of the dataset. The raw face size in the original image is around \(1000\times 1000\) pixels on ETH-XGaze and \(500\times 500\) pixels on MPIIFaceGaze. We expect even more improvement on the ETH-XGaze dataset when increasing the input image resolution to \(1000\times 1000\) pixels; however, such a model would require more energy and memory to train.
We further combine this with our finding on the stride effect by changing the stride of the first convolutional layer in ResNet-50 from two to one. Comparing the third and fourth rows of Tab. 1, decreasing the stride yields a small improvement (\(4.8\%\), \(3.95^{\circ}\to 3.76^{\circ}\)) on ETH-XGaze and no improvement on MPIIFaceGaze. Note that decreasing the stride from two to one results in worse performance on Gaze360 (\(9.13^{\circ}\to 9.56^{\circ}\)). It could be that the face crops in Gaze360 are much smaller than \(448\times 448\) pixels, so that enlarging the input image resolution introduces interpolation noise.
In general, we found that the input image resolution and stride number can improve the gaze estimation performance, while the relative improvements depend on the image resolution of the raw input frame.
#### 4.3.3 Multi-region CNN
Under the assumption that gaze estimation performance improves when the receptive field allows extracting detailed features from the eye regions, the multi-region method is expected to perform better than single-face input methods.
We conducted an experiment with a multi-region CNN model that consists of three separate ResNet-50 networks for the left eye, right eye, and face patches, respectively. To save computation time, we use an input image size of \(224\times 224\) pixels. The cropped left and right eye patches are obtained by the data normalization method [46] on the raw images. The features of the three networks are concatenated and fed into a final fully connected layer that regresses to the gaze direction.
As shown in the first row of Tab. 3, the multi-region architecture achieves \(3.88\) degrees gaze error on ETH-XGaze, which is significantly better than the single-face input method (\(4.5\) degrees). We further investigated variations of the multi-region architecture. Changing the stride of the first convolutional layers from two to one achieves a \(6.2\%\) improvement (\(3.88^{\circ}\to 3.64^{\circ}\)). This matches the conclusion of Sec. 4.3.1 that decreasing the stride can improve gaze estimation performance.
Since previous works vary in whether the network is shared between the left and right eye patches, we implemented a variant of the multi-region method in which the two eye nets are shared. It can be seen from the table that sharing the eye nets results in better performance (\(3.88^{\circ}\to 3.70^{\circ}\)) on ETH-XGaze. To explore the limitations of different strategies, we experimented with a multi-region model with shared eye nets and stride one in the first convolutional layer. However, the result on the ETH-XGaze dataset shows no improvement over the stride-two model (\(3.70^{\circ}\) vs. \(3.69^{\circ}\)). It could be that, for the cropped eye regions, the feature resolution at stride one already matches the eye region size in the raw images; therefore, changing the stride cannot yield further improvement.
A similar trend can be observed on the MPIIFaceGaze dataset, as switching from a single face to a multi-region model with input image resolution \(224\times 224\) results in a performance improvement (\(4.71^{\circ}\)
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline
**Methods/Datasets** & **ETH-XGaze** & **MPIIFaceGaze** & **Gaze360** \\ \hline No shared eye net (stride 2) & \(3.88^{\circ}\) & \(4.62^{\circ}\) & \(9.26^{\circ}\) \\ \hline No shared eye net (stride 1) & \(\mathbf{3.64^{\circ}}\) & \(4.61^{\circ}\) & \(9.26^{\circ}\) \\ \hline Shared eye nets (stride 2) & \(3.70^{\circ}\) & \(\mathbf{4.51^{\circ}}\) & \(9.28^{\circ}\) \\ \hline Shared eye nets (stride 1) & \(3.69^{\circ}\) & \(4.62^{\circ}\) & \(\mathbf{9.13^{\circ}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of variants of multi-region ResNet-50 on different datasets in \(224\times 224\) resolution. We experimented with sharing or not sharing the eye net, and different stride numbers.
vs. 4.62\({}^{\circ}\), with stride two), and sharing the eye nets further lowers the gaze error to 4.51\({}^{\circ}\). However, there is no noticeable improvement from changing the stride.
For Gaze360, we do not observe significant changes across architectures, as shown in the last column of Tab. 3. It could be that the raw face sizes in Gaze360 are relatively small; thus, fine-level feature extraction does not boost performance.
## 5 Conclusion
In this study, we have investigated the substantial potential of optimizing fundamental parameters, namely the input image resolution, stride, and input patches, to achieve exceptional performance on the gaze estimation task. Through a series of extensive experiments, we found that decreasing the stride of the first convolutional layer, increasing the input image resolution, and switching to a multi-region architecture lead to a noticeable improvement in performance when dealing with high-resolution datasets. However, while these strategies exhibit strong performance on high-resolution datasets, their effectiveness diminishes when applied to low-resolution datasets. These findings collectively highlight the importance of parameter selection, including input image resolution, stride, and the choice of architecture, in optimizing performance for the gaze estimation task across different dataset resolutions. With our optimized yet simple architecture, we achieved state-of-the-art gaze estimation performance on three popular datasets.
|
2305.06884 | Risk-limiting Financial Audits via Weighted Sampling without Replacement | We introduce the notion of a risk-limiting financial audit (RLFA): given
$N$ transactions, the goal is to estimate the total misstated monetary
fraction~($m^*$) to a given accuracy $\epsilon$, with confidence $1-\delta$. We
do this by constructing new confidence sequences (CSs) for the weighted average
of $N$ unknown values, based on samples drawn without replacement according to
a (randomized) weighted sampling scheme. Using the idea of importance weighting
to construct test martingales, we first develop a framework to construct CSs
for arbitrary sampling strategies. Next, we develop methods to improve the
quality of CSs by incorporating side information about the unknown values
associated with each item. We show that when the side information is
sufficiently predictive, it can directly drive the sampling. Addressing the
case where the accuracy is unknown a priori, we introduce a method that
incorporates side information via control variates. Crucially, our construction
is adaptive: if the side information is highly predictive of the unknown
misstated amounts, then the benefits of incorporating it are significant; but
if the side information is uncorrelated, our methods learn to ignore it. Our
methods recover state-of-the-art bounds for the special case when the weights
are equal, which has already found applications in election auditing. The
harder weighted case solves our more challenging problem of AI-assisted
financial auditing. | Shubhanshu Shekhar, Ziyu Xu, Zachary C. Lipton, Pierre J. Liang, Aaditya Ramdas | 2023-05-08T17:34:06Z | http://arxiv.org/abs/2305.06884v1 | # Risk-limiting Financial Audits via Weighted Sampling without Replacement
###### Abstract
We introduce the notion of a risk-limiting financial audit (RLFA): given \(N\) transactions, the goal is to estimate the total misstated monetary fraction (\(m^{*}\)) to a given accuracy \(\epsilon\), with confidence \(1-\delta\). We do this by constructing new confidence sequences (CSs) for the weighted average of \(N\) unknown values, based on samples drawn without replacement according to a (randomized) weighted sampling scheme. Using the idea of importance weighting to construct test martingales, we first develop a framework to construct CSs for arbitrary sampling strategies. Next, we develop methods to improve the quality of CSs by incorporating side information about the unknown values associated with each item. We show that when the side information is sufficiently predictive, it can directly drive the sampling. Addressing the case where the accuracy is unknown _a priori_, we introduce a method that incorporates side information via control variates. Crucially, our construction is adaptive: if the side information is highly predictive of the unknown misstated amounts, then the benefits of incorporating it are significant; but if the side information is uncorrelated, our methods learn to ignore it. Our methods recover state-of-the-art bounds for the special case when the weights are equal, which has already found applications in election auditing.
The harder weighted case solves our more challenging problem of AI-assisted financial auditing.
###### Contents
* 1 Introduction
* 1.1 Contributions
* 2 Betting-based CS construction
* 2.1 Powerful betting strategies
* 2.2 Logical CS
* 3 Sampling Strategies
* 4 Using possibly inaccurate side information
* 5 Experiments
* 6 Conclusion
* A Additional Background
* A.1 Related Work on Confidence Sequences (CS)
* A.2 Betting-based CS construction
* A.3 Working with minibatches
* B Proofs
* B.1 Proof of Proposition 2
* B.2 Proof of Proposition 3
* C Alternative Definitions of RLFA
* D Hoeffding and empirical-Bernstein Confidence sequences
* D.1 Hoeffding CS
* D.2 Empirical-Bernstein CS
* E Connections with Waudby-Smith and Ramdas [32, 33]
* F Experiments Comparing Different CS Constructions
* G Experiments with Housing Sales Data
## 1 Introduction
Consider the following scenario: in a given year, a company has \(N\) recorded financial transactions with reported monetary values \(M(i)\in(0,\infty)\) for each \(i\in[N]\coloneqq\{1,\ldots,N\}\). As required by law, an external auditor must attest with "reasonable assurance" whether the financial records as a whole are free from "material misstatement." For example, the company has cash receipts for sales of products, and it wants to ensure that the reported monetary value matches the true amount earned on the sales according to prescribed accounting rules, as some receipts may actually represent past sales or future deliveries. This can be done, for instance, by manually examining the entire sales process to determine the true sales amount against the amount recorded by the company. Since the task of _auditing_ each transaction can be complex and requires substantial human labor, it can be prohibitively expensive to perform a comprehensive audit of a company's records.
Suppose that the auditor has built an AI system for "automated auditing", i.e., this AI system can output predictions about the accuracy of a transaction value, based on receipts, OCR (optical character recognition), databases, etc. Such systems are in a state of active development and deployment, and the high level of industry demand is unsurprising given the remarkable predictive capabilities of modern machine learning algorithms. But there's a catch: because the system is trained and deployed on differently distributed data, its accuracy on a new set of records in a new time period is unknown _a priori_. Even if, anecdotally, the AI system seems to perform reasonably well on data collected from a variety of companies, we cannot make statistically certifiable conclusions based solely on the output of the AI system on a new company and/or in a new time period. Thus, we can think of AI systems in deployment as black boxes for which we have (reasonable) hopes of high accuracy but lack formal guarantees.
The auditor's goal is to minimize the amount of manual auditing that must be done by a person, while accurately estimating the true monetary amount of those transactions that have not been manually audited. When the AI system is accurate, we want to reduce the amount of human auditing effort required. More importantly, we want a statistically rigorous conclusion regardless of the AI system's accuracy. Hence, our method should interpolate between using predictions to reduce its uncertainty rapidly when the system is accurate, and the most efficient AI-free strategy when the system is inaccurate.
Problem setup and notation. Denote the unknown misstated fraction of the \(i\)th transaction as \(f(i)\in[0,1]\), for each \(i\in[N]\). In other words, if \(M^{*}(i)\) denotes the true value of transaction \(i\), and \(M(i)\) is the reported value, then \(f(i)=|M^{*}(i)-M(i)|/M(i)\). We can normalize the reported transaction values by the sum over all transaction values to get a weight \(\pi(i)\coloneqq M(i)/(\sum_{i=1}^{N}M(i))\) for each \(i\in[N]\), where \(\sum_{i=1}^{N}\pi(i)=1\). The auditor wishes to obtain an estimate of \(m^{*}=\sum_{i=1}^{N}\pi(i)f(i)\), the fraction of the total monetary value that is misstated, up to an accuracy \(\varepsilon\in[0,1]\). By \(S(i)\), we denote the _side information_, a
score for the \(i\)th transaction that (ideally) predicts \(f(i)\). In our setup, the side information can be generated through any method, e.g., by an AI system that automatically analyzes the documents a human auditor would use. Each transaction can be evaluated by the auditor to reveal \(M^{*}(i)\) (or equivalently, \(f(i)\)). Thus, _given an \(\varepsilon>0\), in what order should the transactions be audited to estimate \(m^{*}\) within \(\varepsilon\) additive accuracy, using the fewest number of calls to the auditor?_
If we allow for no uncertainty, i.e., we want to produce a confidence interval (CI) for \(m^{*}\) with \(100\%\) confidence, then the best strategy is to audit the transactions in decreasing order of their reported value, and stop when the remaining transactions constitute less than an \(\varepsilon\) fraction of the total. However, we can show that if we want to provide an estimate of \(m^{*}\) that is \(\varepsilon\)-accurate with probability at least \(1-\delta\), for a tolerance level \(\delta\) (e.g., \(0.01\)), there exist strategies based on randomized sampling without replacement (WoR) that allow us to stop much earlier. In other words, for each \(t\in[N]\), we adaptively construct a sampling distribution \(q_{t}\) over the remaining \(N-t+1\) unaudited transactions, and sample \(I_{t}\), the index of the \(t\)th transaction to audit, according to \(q_{t}\). We then obtain \(f(I_{t})\) through manual auditing, and incorporate this new information to update our estimate of \(m^{*}\). If our residual uncertainty is sufficiently small (i.e., smaller than \(\varepsilon\)), we stop sampling. Otherwise, we continue the process by drawing the next index, \(I_{t+1}\), according to an appropriately chosen distribution \(q_{t+1}\).
Before presenting the technical details, we note that we use \((X_{t})_{t\in\mathbb{I}}\) to denote a sequence of objects indexed by a set \(\mathbb{I}\), and the \(t\)th object is \(X_{t}\). We drop the indexing subscript if it is clear from context. For any \(t\in[N]\), we use \(\mathcal{F}_{t}\coloneqq\sigma(\{I_{i}\}_{i\in[t]})\) to denote the sigma-algebra over our query selections for the first \(t\) queries.
Risk-limiting financial audit (RLFA). Formally, an \((\varepsilon,\delta)\)-_risk-limiting financial audit (RLFA)_ is a procedure that outputs an interval \(\mathcal{C}\) where \(|\mathcal{C}|\leq\varepsilon\) and \(\mathcal{C}\) contains the true misstated fraction, \(m^{*}\), with probability at least \(1-\delta\). This is a natural generalization of risk-limiting audits that are used to ensure statistically valid election auditing [26, 27, 20] to the financial setting, where each transaction is weighted by its reported monetary value (as opposed to uniform weighting for all votes in the election setting). We also consider other possible definitions of an RLFA in Appendix C. Our goal is to produce \(\mathcal{C}\) that satisfies the conditions of an RLFA with as few audits, i.e., queries of \(f\), as possible. To produce such an interval, we propose a framework for building RLFAs by constructing confidence sequences, which we introduce next.
Confidence sequences for sequential estimation. Let \(T\in[N]\) be a random stopping time, that is, a random variable for which the event \(\{T=t\}\) belongs to \(\mathcal{F}_{t}\) for each \(t\in[N]\), and let \(\mathcal{T}\) denote the universe of all such stopping times. _Confidence sequences_[19, 13] (CSs), or time-uniform confidence intervals, are sequences of intervals, \((\mathcal{C}_{t})_{t\in[N]}\), that satisfy
\[\sup_{T\in\mathcal{T}}\,\mathbb{P}\left(m^{*}\not\in\mathcal{C}_{T}\right) \leq\delta\Leftrightarrow\mathbb{P}\left(\exists t\in[N]:m^{*}\not\in\mathcal{ C}_{t}\right)\leq\delta,\]
where \(\delta\in(0,1)\) is a fixed error level. Ramdas et al. [22] showed the equivalence above, i.e., that any sequence of intervals \((\mathcal{C}_{t})\) that satisfies one side of the implication will immediately satisfy the other as well.
Using this equivalence, we can define a simple \((\varepsilon,\delta)\)-RLFA procedure: construct a CS for \(m^{*}\), denoted by \((\mathcal{C}_{t})\), and produce \(\mathcal{C}_{\tau}\) where \(\tau\) is the following stopping time:
\[\tau=\tau(\varepsilon,\delta)\coloneqq\min\{t\geq 1:|\mathcal{C}_{t}|\leq \varepsilon\}. \tag{1}\]
The width of all nontrivial CSs converges to zero as \(t\to N\), and thus the above stopping time is well-defined, and is usually smaller than \(N\).
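For concreteness, the overall procedure can be sketched as the following loop, where the `audit` callback (the manual audit revealing \(f(i)\)) and the `cs` object with `sampling_distribution`, `update`, and `interval` methods are hypothetical placeholders for any of the CS constructions and sampling strategies developed below:

```python
import numpy as np

def run_rlfa(pi, audit, cs, eps, seed=0):
    """Sample transactions WoR until the CS width drops below eps."""
    rng = np.random.default_rng(seed)
    remaining = list(range(len(pi)))
    while remaining:
        q = cs.sampling_distribution(remaining)  # predictable q_t
        i = int(rng.choice(remaining, p=q))
        remaining.remove(i)
        cs.update(i, audit(i))                   # reveal f(i), update wealth
        lo, hi = cs.interval()
        if hi - lo <= eps:
            return lo, hi                        # an (eps, delta)-RLFA output
    return cs.interval()
```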
Note that the only source of randomness in this problem is the randomized sampling strategy \((q_{t})_{t\in[N]}\), used to select transactions for manual evaluation. Hence, \((q_{t})_{t\in[N]}\) is another design choice for us to make. To summarize, our goal in this paper is to **(i)** design sampling strategies \((q_{t})\), and **(ii)** develop methods of aggregating the information so collected with any available side information, in order to construct CSs for \(m^{*}\) whose width decays rapidly to \(0\).
Among existing works in literature, the recent papers by Waudby-Smith and Ramdas [33, 32] are the most closely related to our work. In these works, the authors considered the problem of estimating the average value of \(N\) items via WoR sampling--however, they considered only uniform sampling, and estimating only
the unweighted mean of the population. Our methods work with any sampling scheme, and can estimate any weighted mean; we recover their existing results in Appendix E.
**WoR confidence intervals for a fixed sample size.** Most existing results on concentration inequalities for observations drawn via WoR sampling focus on the fixed sample size setting, starting with Hoeffding [11], who bounded the probability of deviation of the unweighted empirical mean with WoR sampling in terms of the range of the observations. In particular, Hoeffding [11] showed that for observations \(X_{I_{1}},\ldots,X_{I_{n}}\in[a,b]\) drawn uniformly WoR from \(N\) values \((X_{i})_{i\in[N]}\), we have
\[\mathbb{P}\left(\frac{\sum_{k=1}^{n}X_{I_{k}}}{n}-\frac{\sum_{i=1}^{N}X_{i}}{N }>\varepsilon\right)\leq\exp\left(-\tfrac{2n\varepsilon^{2}}{(b-a)^{2}}\right). \tag{2}\]
In WoR sampling, as the sample size \(n\) approaches \(N\), the total number of items, we expect the empirical estimate to approximate the true average very accurately. This observation, not captured by the above bound, was made formal by Serfling [24], who showed that the \(n\) in (2) can be replaced by \(\frac{n}{1-(n-1)/N}\), thus highlighting the significant improvement possible for larger \(n\) values. Ben-Hamou et al. [3] prove a Hoeffding-style concentration inequality for the unweighted sample mean around its own expectation, which is a different estimand than the weighted population mean considered in this paper. Finally, in the unweighted case, Bardenet and Maillard [2] obtained variance-adaptive Bernstein and empirical-Bernstein variants of Serfling's results, which are tighter when the variance of the observations is small. These results appear to be incomparable to those of Waudby-Smith and Ramdas [32, 33], which have found successful application to auditing elections [34]. In this paper, we develop techniques that generalize the CS constructions of Waudby-Smith et al. [34], Waudby-Smith and Ramdas [32] in order to estimate the weighted average of \(N\) quantities (instead of the simple, unweighted average) sampled via an adaptive scheme (instead of a uniform one), motivated by financial auditing applications.
### Contributions
We introduce the concept of \((\varepsilon,\delta)\)-RLFA that generalizes the notion of a risk-limiting audit introduced by Stark [26] for election auditing. Unlike risk-limiting audits, where the main concern is testing an announced result, the objective of an RLFA is to precisely estimate the misstated monetary fraction of the reported financial transactions. To accomplish this, we make the following key technical contributions:
1. _New CSs for weighted means with non-uniform sampling._ To design an \((\varepsilon,\delta)\)-RLFA procedure, we construct novel CSs for \(m^{*}\) that are based on the betting method pioneered in [33] (Sections 2 and 4), as well as Hoeffding and empirical-Bernstein CSs in Appendix D (which are looser but have a simple analytical form). Our results generalize previous methods in two ways: **(i)** they can estimate the weighted mean of \(N\) items, and **(ii)** they work with adaptive, data-dependent sampling strategies. In particular, our betting CSs (which we show empirically are the most powerful in Appendix F) are based on simultaneously playing gambling games with an aim to disprove the possibility that \(m^{*}=m\), for each \(m\in[0,1]\). Values of \(m\) for which we accumulate much wealth are eliminated from the CS. Consequently, we develop a simple, lucrative betting strategy for this setting (ApproxKelly), which translates into narrower CSs.
2. _Adaptive sampling strategies that minimize CS width._ In addition to designing CSs that are intrinsically narrow, we are also able to change the sampling distribution of the transactions at each time step, and we develop sampling strategies that minimize the CS width in concert with any valid CS construction. We propose two sampling strategies, prop-M and prop-MS, the latter of which can incorporate approximately accurate scores \((S(i))_{i\in[N]}\) to improve the sample efficiency of our CSs. This is accomplished by choosing, at each time step, the sampling distribution that maximizes the wealth accumulated by the betting strategies that underlie our CSs. We find that this is approximately equivalent to choosing the sampling distribution with minimal variance, and we show that our sampling strategies result in a noticeable improvement over uniform sampling through simulations in Section 5.
3. _Robust use of side information to tighten CSs._ Finally, in Section 4, we develop a principled way of leveraging any available side information, inspired by the idea of control variates used for variance reduction in Monte Carlo sampling. Interestingly, our method adapts to the quality of the side information--if
\((S(i))_{i\in[N]}\) and \((f(i))_{i\in[N]}\) are highly correlated, the resulting CSs are tighter, while in the case of uncorrelated \((S(i))\), we simply learn to discard the side information.
## 2 Betting-based CS construction
We derive our CSs by designing sequential tests to simultaneously check the hypotheses that \(m^{*}=m\), for all \(m\in[0,1]\). By the principle of _testing by betting_[25], this is equivalent to playing repeated gambling games aimed at disproving the null \(m^{*}=m\), for each \(m\in[0,1]\). Formally, for all \(m\in[0,1]\), we construct a process \((W_{t}(m))_{t\in[N]}\) (the wealth process), such that **(i)** if \(m=m^{*}\), then \((W_{t}(m))\) is a _test martingale_, i.e., a nonnegative martingale with initial value \(1\), and **(ii)** if \(m\neq m^{*}\), then \(W_{t}(m)\) grows at an exponential rate. Recall that a process \((W_{t})_{t\in[N]}\) adapted to \((\mathcal{F}_{t})_{t\in[N]}\) is a supermartingale iff \(\mathbb{E}[W_{t}\mid\mathcal{F}_{t-1}]\leq W_{t-1}\) for all \(t\in[N]\), and a martingale if the inequality is replaced with an equality. Assuming we can construct such a process, we define the confidence set at any time \(t\) as the set of those \(m\in[0,1]\) for which \((W_{t}(m))\) is 'small', because a nonnegative martingale is unlikely to take large values.
As mentioned earlier, this approach requires us to design sampling distributions \((q_{t})\), and a method for constructing a CS \((\mathcal{C}_{t})\) from the queried indices. We begin by formally defining a sampling strategy.
**Definition 1** (Sampling Strategy).: _A sampling strategy consists of a sequence \((q_{t})_{t\in[N]}\), where \(q_{t}\) is a probability distribution on the set \(\mathcal{N}_{t}\coloneqq[N]\setminus\{I_{1},\ldots,I_{t-1}\}\). Here \(I_{j}\) denotes the index drawn according to the predictable (i.e., \(\mathcal{F}_{j-1}\)-measurable) distribution \(q_{j}\)._
A natural baseline sampling strategy is to set \(q_{t}\) to be uniform over \(\mathcal{N}_{t}\) for all \(t\in[N]\). We will develop other, more powerful, sampling strategies that are more suited to our problem in Section 3.
We now describe how to construct the wealth process for an arbitrary sampling strategy. First, define the following:
\[Z_{t}\coloneqq f(I_{t})\tfrac{\pi(I_{t})}{q_{t}(I_{t})},\text{ and }\mu_{t}(m) \coloneqq m-\sum_{j=1}^{t-1}\pi(I_{j})f(I_{j}).\]
Note that \(\mu_{t}(m)\) is the remaining misstated fraction after accounting for the first \(t-1\) queries to \(f\) if \(m\) is truly the total misstated fraction. Now, we can define the _wealth process_:
\[W_{t}(m)=W_{t-1}(m)\times\left(1+\lambda_{t}(m)\left(Z_{t}-\mu_{t}(m)\right) \right),\]
with \(W_{0}=1\). \((\lambda_{t}(m))_{t\in[N]}\) is a predictable sequence with values in \([0,1/u_{t}(m)]\), and \(u_{t}(m)\) is the largest value in the support of \(Z_{t}-\mu_{t}(m)\), for each \(t\in[N]\). Note that this constraint on \((\lambda_{t}(m))\) ensures that \(W_{t}(m)\) is nonnegative for each \(t\in[N]\). We also let \(W_{0}(m)=1\) for all \(m\in[0,1]\). If we view the wealth process as the wealth we earn from gambling on the outcome of \(Z_{t}-\mu_{t}(m)\), then \((\lambda_{t}(m))\) represents a betting strategy, i.e., how much money to gamble each turn. Hence, we refer to \((\lambda_{t}(m))\) as a _betting strategy_.
It is easy to verify that \((W_{t}(m^{*}))\) is a nonnegative martingale for any sampling strategy \((q_{t})\) and betting strategy \((\lambda_{t}(m^{*}))\). Hence, it is unlikely to take large values, as we describe next.
**Proposition 1**.: _For any sampling and betting strategies \((q_{t})\) and \((\lambda_{t}(m^{*}))\), the following holds:_
\[\mathbb{P}\left(\exists t\geq 1:W_{t}(m^{*})\geq 1/\delta\right)\leq\delta.\]
This is a consequence of Ville's inequality, first obtained by Ville [29], which is a time-uniform version of Markov's inequality for nonnegative supermartingales. This result immediately implies that for any sampling strategy, and any betting strategy, the term \(m^{*}\) must lie in the set
\[\mathcal{C}_{t}=\{m:W_{t}(m)<1/\delta\} \tag{3}\]
with probability at least \(1-\delta\), making \((\mathcal{C}_{t})\) a \((1-\delta)\)-CS.
**Theorem 1**.: \((\mathcal{C}_{t})\) _is a \((1-\delta)\)-CS, where \(\mathcal{C}_{t}\) is defined by (3). Hence, a procedure that outputs \(\mathcal{C}_{\tau}\) is an \((\varepsilon,\delta)\)-RLFA, for any sampling strategy \((q_{t})\) and betting strategies \((\lambda_{t}(m))\) for each \(m\in[0,1]\). Recall that \(\tau\) is defined in (1) as the first time where \(|\mathcal{C}_{t}|\leq\varepsilon\)._
This methodology gives us a flexible framework for constructing different \((\mathcal{C}_{t})\) that result in different RLFAs. Now, we can turn our attention to finding betting strategies \((\lambda_{t}(m))\) that reduce the CS width quickly and minimize \(\tau\).
**Remark 1**.: _Note that the set \(\mathcal{C}_{t}\) in (3) does not admit a closed-form expression, and is computed numerically in practice by choosing \(m\) values over a sufficiently fine grid on \([0,1]\). In Appendix D, we design CSs based on nonnegative supermartingales (instead of martingales) that do admit a closed-form representation. However, this analytical tractability comes at the price of empirical performance, as we demonstrate in Appendix F._
**Remark 2**.: _Ville's inequality (Fact 1 in Appendix A.2), used for proving Proposition 1, is known to be tight for continuous-time nonnegative martingales with infinite quadratic variation, and incurs a slight looseness as we move to the case of discrete-time martingales. As a result, the martingale-based CSs constructed in this section provide nearly tight coverage guarantees that are strictly better than the supermartingale-based closed-form CSs discussed in Appendix D. This near-tightness of the error probability of our betting-based CSs implies that there exists no other CS that is uniformly tighter than ours while also controlling the error probability below \(\delta\). In other words, our CSs satisfy a notion of admissibility or Pareto-optimality._
### Powerful betting strategies
Besides validity, we also want the size of the CS to shrink rapidly. This depends on how quickly the values of \(W_{t}(m)\) for \(m\neq m^{*}\) grow with \(t\). One such criterion is to consider the _growth rate_, i.e., the expected logarithm of the outcome of each bet. We can define the _one-step growth rate_ \(D_{t}\), for each \(t\in[N]\), as follows:

\[D_{t}(m,\lambda)\coloneqq\log(1+\lambda(Z_{t}-\mu_{t}(m))).\]
We are interested in maximizing the expected logarithm of the wealth process [10, 25], since it is equivalent to minimizing the expected time for a wealth process to exceed a fixed threshold (asymptotically, as the threshold grows larger) [6]. Thus, _in the context of the auditing problem, maximizing \(\mathbb{E}[D_{t}(\lambda,m)\mid\mathcal{F}_{t-1}]\) approximately minimizes \(\mathbb{E}[\tau]\)_. The one-step growth rate is a broadly studied objective known as the "Kelly criterion" [18]. In general, finding the best sequence of bets \(\lambda_{t}(m)\) is intractable. Instead, we consider the approximation \(\log(1+x)\geq x-x^{2}\) for \(|x|\leq 1/2\), and define the best constant bet \(\lambda_{n}^{*}\) in hindsight, as
\[B_{t}(m,\lambda) \coloneqq\lambda\left(Z_{t}-\mu_{t}(m)\right)-\lambda^{2}\left(Z_ {t}-\mu_{t}(m)\right)^{2}, \tag{4}\] \[\lambda_{n}^{*} \coloneqq\operatorname*{argmax}_{\lambda\in[\pm 1/2c]}\frac{1}{n} \sum_{t=1}^{n}B_{t}(m,\lambda),\]
where \(c=\max\{|Z_{t}-\mu_{t}(m)|:t\in[n]\}\). We get the following result on \(\lambda_{n}^{*}\) for each \(n\in[N]\):
\[\lambda_{n}^{*}\propto\frac{\sum_{t=1}^{n}Z_{t}-\mu_{t}(m)}{\sum_{t=1}^{n}(Z_ {t}-\mu_{t}(m))^{2}}\coloneqq\frac{A_{n}}{V_{n}}.\]
Since \(\lambda_{n}^{*}\) depends on the \(n\)th sample itself, \(Z_{n}\), we cannot use this strategy in our CS construction. Instead, at any \(n\in[N]\), we can use a predictable approximation of this strategy, that we shall refer to as the ApproxKelly betting strategy. This strategy sets \(\lambda_{t}(m)\) as follows:
\[\lambda_{t}(m)=c_{t}\frac{A_{t-1}}{V_{t-1}},\] (ApproxKelly)
where the (predictable) factor \(c_{t}\) is selected to ensure that \(\lambda_{t}(m)\times(Z_{t}-\mu_{t}(m))\in(-1,\infty)\), i.e., to satisfy the nonnegativity constraint of \((W_{t}(m))\).
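To make the construction concrete, here is a self-contained numerical sketch of the betting CS with ApproxKelly bets, assuming uniform WoR sampling for simplicity (so \(q_t(i)=1/|\mathcal{N}_t|\)); the grid resolution and the crude nonnegativity cap standing in for \(c_t\) are implementation choices for this sketch, not part of the paper's specification:

```python
import numpy as np

def betting_cs(pi, audited_idx, audited_f, delta=0.05, grid_size=1000):
    """Return the betting CS {m : W(m) < 1/delta} after the given audits,
    under uniform WoR sampling. pi is a numpy array of weights;
    audited_idx[t] is I_t and audited_f[t] is the revealed f(I_t)."""
    N, grid = len(pi), np.linspace(0.0, 1.0, grid_size)
    W = np.ones(grid_size)
    A = np.zeros(grid_size)   # running sum of Z_t - mu_t(m)
    V = np.ones(grid_size)    # running sum of (Z_t - mu_t(m))^2
    seen = 0.0                # sum of pi(I_j) f(I_j) over past audits
    for t, (i, f_i) in enumerate(zip(audited_idx, audited_f)):
        q = 1.0 / (N - t)                  # uniform over remaining indices
        Z = f_i * pi[i] / q
        mu = grid - seen                   # mu_t(m) for every m on the grid
        c = (N - t) * pi.max() + 1.0       # crude bound on |Z_t - mu_t(m)|
        lam = np.clip(A / V, 0.0, 0.5 / c) # ApproxKelly, kept nonnegative
        W *= 1.0 + lam * (Z - mu)
        A += Z - mu
        V += (Z - mu) ** 2
        seen += pi[i] * f_i
    keep = W < 1.0 / delta
    return (grid[keep].min(), grid[keep].max()) if keep.any() else (seen, seen)
```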
**Remark 3**.: _We note there exist several other betting schemes in literature besides ApproxKelly, such as those based on alternative approximations of \(\log(1+x)\)[9, 33, 23], or the ONS strategy that relies on the exp-concavity of the \(\log\)-loss [7]. In practice, however, we did not observe significant difference in their performance, and we focus on the ApproxKelly strategy in this paper due to its conceptual simplicity._
### Logical CS
Irrespective of the choice of the sampling and betting strategies, we can construct a CS that contains \(m^{*}\) with probability 1, based on purely logical considerations. After sampling \(t\) transactions, we know that \(m^{*}\) is lower bounded by quantities derived from the misstatement fraction accumulated in the items we have already sampled. Hence, we can derive the following deterministic bounds:
\[L_{l}(t)\coloneqq\sum_{j=1}^{t}\pi(I_{j})f(I_{j})\leq m^{*},\quad\text{and} \quad U_{l}(t)\coloneqq L_{l}(t)+\sum_{i\in\mathcal{U}_{t}}\pi(i)\geq m^{*}.\]
Note that the \(L_{l}(t)\) (resp. \(U_{l}(t)\)) values are obtained by noting that all the remaining unknown \(f\) values must be larger than 0 (resp. smaller than 1). Additionally, due to the time-uniform nature of confidence sequences, we can intersect the logical CS with a 'probabilistic' CS constructed in (3), and obtain the following CS:
\[\widetilde{\mathcal{C}}_{t}\coloneqq\mathcal{C}_{t}\cap[L_{\ell}(t),U_{\ell} (t)]\cap\widetilde{\mathcal{C}}_{t-1}, \tag{5}\]
where \(\widetilde{\mathcal{C}}_{0}\coloneqq[0,1]\). Note that we may take the running intersection of a CS since it remains a CS, simply by definition. Consequently, the combined CS in (5) dominates the probabilistic CS.
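A minimal sketch of these deterministic bounds, using the same conventions as the earlier snippet (`pi` is the weight vector, and the two sequences record the audited indices and their revealed \(f\) values); its output can simply be intersected with the interval returned by the probabilistic CS:

```python
def logical_bounds(pi, audited_idx, audited_f):
    """Deterministic [L_l(t), U_l(t)]: unaudited f values are at least 0
    for the lower end and at most 1 for the upper end."""
    lower = sum(pi[i] * f for i, f in zip(audited_idx, audited_f))
    unaudited_mass = 1.0 - sum(pi[i] for i in audited_idx)
    return lower, lower + unaudited_mass
```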
## 3 Sampling Strategies
The choice of the sampling strategy, \((q_{t})\), is also critical to reducing uncertainty about \(m^{*}\) quickly. Recall that \(q_{t}\) is a probability distribution on the remaining indices \(\mathcal{N}_{t}\) for each \(t\in[N]\). To motivate the choice of our sampling strategy, we first consider the following question: _what is the randomized sampling strategy that leads to the fastest reduction in uncertainty about \(m^{*}\)?_
In general, it is difficult to characterize this strategy in closed form (beyond noting that, computationally, it is the solution of a multistage optimization problem). Thus, we consider a simplified question, that of finding the sampling strategy that maximizes the expectation of the one-step growth rate, \(D_{n}(\lambda,m)\), for each \(n\in[N]\). We seek to maximize the lower bound, \(B_{n}(\lambda,m)\), introduced in (4):
\[q_{n}^{*}\coloneqq\operatorname*{argmax}_{q\in\Delta^{\mathcal{N}_{n}}}\mathbb{ E}_{I_{n}\sim q}\left[B_{n}(\lambda,m)\right], \tag{6}\]
where \(\Delta^{\mathcal{N}_{n}}\) is the universe of distributions supported on \(\mathcal{N}_{n}\). Our next result presents a closed-form characterization of \(q_{n}^{*}\).
**Proposition 2**.: _Note that \(q_{n}^{*}=\text{argmin}_{q\in\Delta^{\mathcal{N}_{n}}}\ \mathbb{V}_{I_{n}\sim q}[Z_{n}]\), which implies that \(q_{n}^{*}(i)\propto\pi(i)f(i)\). Hence, for any valid betting strategy \((\lambda_{t})\) and sampling strategy \((q_{t})\), we have \(\mathbb{E}_{I\sim q_{t}}[B_{t}(\lambda_{t},m)]\leq\mathbb{E}_{I\sim q_{t}^{*}} [B_{t}(\lambda_{t},m)]\)._
We defer the proof to Appendix B.1, which proceeds by showing that maximizing the lower bound on the one-step growth rate is equivalent to minimizing the variance of \(Z_{n}\). It turns out that \(q_{n}^{*}(i)\propto\pi(i)f(i)\) is the minimum (in fact, zero) variance sampling distribution, and thus, \((q_{t}^{*})\) dominates any other sampling strategy w.r.t. maximizing the expected bound on the one-step growth rate.
**Remark 4**.: _The oracle strategy in Proposition 2 can be considered as a solution of an alternative question: suppose there is an oracle who knows the true values of \(f(i)\), and needs to convince an observer that the value \(m^{*}\) is within an interval of width \(\varepsilon\) with probability at least \(1-\delta\). The oracle wishes to do so by revealing as few \(f(i)\) values to the observer as possible. Clearly, any deterministic sampling strategy from the oracle will lead to skepticism from the observer (i.e., the observer will only be convinced once the \(\pi(i)\) corresponding to the unrevealed \(f(i)\) sum to \(\varepsilon\)). Hence, the sampling strategy used by the oracle must be random, and according to Proposition 2, it should draw transactions with probability \(\propto\pi(i)\times f(i)\)._
Sampling without side information. Since the \((f(i))\) values are unknown by definition of the problem, we cannot use \((q_{t}^{*})\) in practice. Instead, we consider a sampling strategy that selects an index \(i\in\mathcal{N}_{t}\) in proportion to its \(\pi(i)\) value -- we refer to this as the prop-M strategy. This strategy is also known
as "sampling proportional to size" in auditing literature [5], and is similar to the best deterministic strategy, which queries indices in descending order w.r.t. \(\pi(i)\).
\[q_{t}(i)=\frac{\pi(i)}{\sum_{j\in\mathcal{N}_{t}}\pi(j)},\] (prop-M)
for each \(i\in\mathcal{N}_{t}\). Sampling with prop-M minimizes the "worst case" support range, and max value, of \(Z_{t}\). This allows for the largest possible choice of \(\lambda_{t}\), i.e., our bet.
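A minimal sketch of drawing the next index WoR from a weight vector; with `scores=None` this implements prop-M, and passing side-information scores yields the prop-MS strategy introduced next:

```python
import numpy as np

def draw_next(remaining, pi, scores=None, rng=None):
    """Sample one unaudited index with probability proportional to pi(i)
    (prop-M) or to pi(i) * S(i) (prop-MS) over the remaining pool."""
    if rng is None:
        rng = np.random.default_rng()
    rem = np.asarray(remaining)
    w = pi[rem] if scores is None else pi[rem] * scores[rem]
    return int(rng.choice(rem, p=w / w.sum()))
```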
Using accurate side information for sampling.Proposition 2 motivates a natural sampling strategy in situations where we have access to side information (\(S(i)\)) that is known to be a high-fidelity approximation of the true (\(f(i)\)) values--draw indices proportional to \(\pi(i)\times S(i)\). We will refer to this strategy as the prop-MS strategy:
\[q_{t}(i)=\frac{\pi(i)S(i)}{\sum_{j\in\mathcal{N}_{t}}\pi(j)S(j)}.\] (prop-MS)
Under certain relative accuracy guarantees on the side information, we can characterize the performance achieved by the prop-MS strategy as compared to the optimal strategy of Proposition 2, as we state next.
**Corollary 1**.: _Assume that the side information, \((S(i))\), is an accurate prediction of \((f(i))\), i.e., there exists a known parameter \(a\in[0,1)\), such that_
\[S(i)/f(i)\in[1\pm a] \tag{7}\]
_for all \(i\in[N]\). With the prop-MS strategy for \((q_{t})\), we can ensure \(\mathbb{E}_{I_{t}\sim q_{t}}[B_{t}(\lambda_{t},m)]\geq\mathbb{E}_{I_{t}\sim q _{t}^{*}}[B_{t}(\lambda_{t},m)]\left(\frac{1}{1+a}\right)^{2}\), where \((q_{t}^{*})\) is the optimal sampling strategy of Proposition 2._
In many cases, we do not have the accuracy guarantees on the side information required by Corollary 1, and we develop an approach to properly incorporate such side information in the next section.
## 4 Using possibly inaccurate side information
Often, we do not have a uniform accuracy guarantee on \((S(i))\) as we assumed in the previous section. In such cases, we cannot continue to use the prop-MS strategy, as it requires knowledge of the range of \(f(i)/S(i)\) in order to select betting fractions that ensure nonnegativity of the process \((W_{t}(m))\). Nevertheless, we develop new techniques in this section that can exploit the side information without uniform accuracy guarantees, provided that the side information is correlated with the unknown \((f(i))\) values. In particular, the method developed in this section for incorporating the side information is orthogonal to the choice of the sampling strategy, and thus it can be combined with any sampling strategy that ensures the nonnegativity of the process \((W_{t}(m))\).
Our approach is based on the idea of control variates [1, SS V.2] that are used to reduce the variance of Monte Carlo (MC) estimates of an unknown quantity, using some correlated side information whose expected value is known. More specifically, let \(\widehat{m}\) denote an unbiased estimate of an unknown parameter \(m\), and let \(\widehat{v}\) denote another (possibly correlated to \(\widehat{m}\)) statistic with zero mean. Then, the new statistic, \(\widehat{m}_{\beta}=\widehat{m}+\beta\widehat{v}\) is also an unbiased estimate of \(m\), for all \(\beta\in\mathbb{R}\). Furthermore, it is easy to check that \(\mathbb{V}(\widehat{m}_{\beta})=\mathbb{V}(\widehat{m})+\beta^{2}\mathbb{V}( \widehat{v})+2\beta\mathrm{Cov}(\widehat{m},\widehat{v})\), which implies that the variance of this new estimate is minimized at \(\beta=\beta^{*}\coloneqq-\left(\mathrm{Cov}(\widehat{m},\widehat{v})/\mathbb{ V}(\widehat{v})\right)\). Finally, note that the variance of \(\widehat{m}_{\beta^{*}}\) cannot be larger than the variance of the original estimate \(\widehat{m}\), since \(\mathbb{V}(\widehat{m}_{\beta^{*}})\leq\mathbb{V}(\widehat{m}_{0})=\mathbb{V} (\widehat{m})\) by the definition of \(\beta^{*}\).
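As a standalone illustration of this variance reduction (a textbook example, not from the paper): estimate \(\mathbb{E}[e^{X}]\) for \(X\sim\mathrm{Uniform}[0,1]\) using the zero-mean control variate \(X-1/2\):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=100_000)
m_hat = np.exp(x)   # samples whose mean estimates E[e^X] = e - 1
v_hat = x - 0.5     # control variate with known mean 0
beta = -np.cov(m_hat, v_hat)[0, 1] / v_hat.var()
# Per-sample variance drops by more than an order of magnitude:
print(m_hat.var(), (m_hat + beta * v_hat).var())
```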
Returning to our problem, given some possibly inaccurate side information (\(S(i)\)), define the control variate (that is, an analog of the term \(\widehat{v}\)) as
\[U_{t}\coloneqq S(I_{t})-\mathbb{E}_{I^{\prime}\sim q_{t}}[S(I^{\prime})],\]
and let \((\beta_{t})\) denote a sequence of predictable terms taking values in \([-1,1]\) used to weigh the effect of \((U_{t})\). Note that, similar to \(\widehat{v}\), the term \(U_{t}\) has zero mean for each \(t\in[N]\). We now define the wealth process with control variates, denoted by \((\widetilde{W}_{t}(m))\), and its corresponding CS as follows:
\[\widetilde{W}_{n}(m) \coloneqq\prod_{t=1}^{n}\left(1+\lambda_{t}(m)(Z_{t}+\beta_{t}U_ {t}-\mu_{t}(m))\right),\] \[\mathcal{C}_{n} =\{m\in[0,1]:\widetilde{W}_{n}(m)<1/\delta\}, \tag{8}\]
where \((\lambda_{t}(m))\) is a betting strategy for each \(m\in[0,1]\).
**Theorem 2**.: _For any set of side information \((S(i))\), sequence \((\beta_{t})\), sampling strategy \((q_{t})\), and betting strategies \((\lambda_{t}(m))\), \((\mathcal{C}_{t})\) as defined in (8) is a \((1-\delta)\)-CS for \(m^{*}\). Consequently, the procedure that produces \(\mathcal{C}_{\tau}\) is an \((\varepsilon,\delta)\)-RLFA._
The discussion above suggests that a suitable choice of the parameters \((\beta_{t})\) can reduce the variance of the terms \(Z_{t}+\beta_{t}U_{t}-\mu_{t}(m)\). To see why this is desirable, recall that the approximate growth rate of the new wealth process after \(n\) steps, and its optimal value over \(\lambda\), satisfy the following:
\[\widetilde{B}_{n}(\lambda,m)\coloneqq\sum_{t=1}^{n}\left[\lambda(Z_{t}+\beta_{t}U_{t}-\mu_{t}(m))-\lambda^{2}(Z_{t}+\beta_{t}U_{t}-\mu_{t}(m))^{2}\right],\]
\[\max_{\lambda}\widetilde{B}_{n}(\lambda,m)\propto\frac{\sum_{t=1}^{n}\left(Z_{t}+\beta_{t}U_{t}-\mu_{t}(m)\right)}{\sum_{t=1}^{n}(Z_{t}+\beta_{t}U_{t}-\mu_{t}(m))^{2}}.\]
Note that by setting \(\beta_{t}=0\) for all \(t\in[n]\), we recover \(\widetilde{B}_{n}(\lambda,m)=B_{n}(\lambda,m)\), i.e., the wealth lower bound with no side information. Next, we observe that \(\sum_{t=1}^{n}\beta_{t}U_{t}\) concentrates strongly around its mean (0).
**Proposition 3**.: _For any \(\delta\in(0,1)\) and sequence \((\beta_{t})\), the following statement is simultaneously true for all \(n\in[N]\) with probability at least \(1-\delta\)_
\[\left|\frac{1}{n}\sum_{t=1}^{n}\beta_{t}U_{t}\right|=\mathcal{O}\left(\sqrt{ \frac{\log(\log n/\delta)}{n}}\right).\]
This result, proved in Appendix B.2, implies that in order to select the parameters \((\beta_{t})\), we can focus on their effect on the second-order term in the denominator. In particular, the best value of \(\beta\) for the first \(n\) observations is the one that minimizes the denominator, and can be defined as follows:
\[\beta_{n}^{*}\coloneqq\operatorname*{argmin}_{\beta\in[-1,1]}\,\sum_{t=1}^{n }\,(Z_{t}-\mu_{t}(m)+\beta U_{t})^{2}\propto-\frac{\sum_{t=1}^{n}(Z_{t}-\mu_ {t}(m))U_{t}}{\sum_{t=1}^{n}U_{t}^{2}}.\]
The numerator of \(\beta_{n}^{*}\) varies with \(\sum_{t=1}^{n}f(I_{t})S(I_{t})\)--hence, the magnitude of \(\beta_{t}\) increases with the amount of correlation between \(f(i)\) and \(S(i)\). Since \(\beta_{n}^{*}\) is not predictable (it is \(\mathcal{F}_{n}\) instead of \(\mathcal{F}_{n-1}\) measurable), we will use the following strategy of approximating \(\beta_{n}^{*}\) at each \(n\in[N]\):
\[\beta_{n}\propto-\frac{\sum_{t=1}^{n-1}(Z_{t}-\mu_{t}(m))U_{t}}{\sum_{t=1}^{n -1}U_{t}^{2}}.\]
for \(n\geq 2\) and we let \(\beta_{1}=0\). This provides a principled way of incorporating side information even when the relationship between the side information and the ground truth is unclear.
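A minimal sketch of this predictable plug-in update (the clipping enforces the assumed range \(\beta_{t}\in[-1,1]\); the zero-denominator guard and helper name are our own):

```python
import numpy as np

def beta_next(Z_hist, mu_hist, U_hist, clip=1.0):
    """Plug-in estimate of beta_n from the first n-1 rounds (computed
    separately for each candidate m, since mu_t depends on m); beta_1 = 0."""
    Z, mu, U = map(np.asarray, (Z_hist, mu_hist, U_hist))
    if U.size == 0 or np.sum(U ** 2) == 0.0:
        return 0.0
    return float(np.clip(-np.sum((Z - mu) * U) / np.sum(U ** 2), -clip, clip))
```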
**Remark 5**.: _Our work is motivated by applications where the side-information is generated by an ML model trained on historical transaction data. In practice, ML models are trained via empirical risk minimization, and we expect that models with lower risk should result in side-information with higher correlation. For some simple cases, such as least-squares linear regressors, we can obtain a precise relation between correlation and risk: \(\rho^{2}=1-MSE/S_{y}\), where \(S_{y}\) is the empirical variance of the target variable \(y\) used in training the model. Characterizing this relation for more general models is left for future work._
## 5 Experiments
We conduct simulations of our RLFA methods on a variety of scenarios for \(\pi\) and \(f\). For each simulation setup, we choose two positive integers \(N_{\text{lg}}\) and \(N_{\text{sm}}\) such that \(N_{\text{lg}}+N_{\text{sm}}=N\). We generate the weight distribution \(\pi\), consisting of \(N_{\text{lg}}\) 'large' values and \(N_{\text{sm}}\) 'small' values. The exact ranges of these values vary across experiments, but on average the ratio of 'large' to 'small' \(\pi\) values lies between \(10\) and \(10^{3}\). We then generate the \(f\) values in one of two ways: (1) \(f\propto\pi\), where indices with large \(\pi\) values take \(f\) values in \([0.4,0.5]\) and indices with small \(\pi\) values take \(f\) values in \([0.001,0.01]\), or (2) \(f\propto 1/\pi\), where the \(f\) value ranges are swapped for large and small values. The simulations in this section focus on the different sampling strategies as well as the efficacy of control variates -- we provide additional experiments comparing the betting CS with other types of CS in Appendix F.
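For illustration, data for case (1) can be generated as follows (a sketch; the exact range of the 'large' raw weights is our assumption, chosen so that the large/small ratio falls in the stated range):

```python
import numpy as np

rng = np.random.default_rng(0)
N_lg, N_sm = 40, 160                                   # e.g. N_lg / N = 0.2 with N = 200
pi = np.concatenate([rng.uniform(1.0, 2.0, N_lg),      # 'large' raw weights
                     rng.uniform(0.001, 0.01, N_sm)])  # 'small' raw weights
pi /= pi.sum()                                         # normalize to a distribution

# Case (1): f proportional to pi
f = np.concatenate([rng.uniform(0.4, 0.5, N_lg),
                    rng.uniform(0.001, 0.01, N_sm)])
m_star = np.dot(pi, f)                                 # the weighted mean to be audited
```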
No side information: uniform vs. prop-M sampling. In the first experiment, we compare the performance of the prop-M strategy with the uniform baseline. In addition, we illustrate the significance of the logical CS (introduced in Section 2.2), especially in cases where there are only a few large \(\pi\) values. From the widths of the CSs plotted in Figure 1, we can see that prop-M outperforms the uniform baseline in all four cases. The gap in performance increases when \(N_{\text{lg}}\) is small, since \(\pi\) then deviates more significantly from the uniform weighting: it consists of a few large weights with the rest close to \(0\). On the other hand, when \(N_{\text{lg}}\) is large, the weights resemble the uniform distribution, leading to the competitive performance of the uniform baseline. The logical CSs are most useful in the case of small \(N_{\text{lg}}\), especially with \(f\propto\pi\). This is because for small \(N_{\text{lg}}\), every query to an index with a large \(\pi\) value leads to a significant reduction in the uncertainty about \(m^{*}\).
Next, in Figure 2, we plot the distribution of the stopping time \(\tau\) for an RLFA with \(\varepsilon=\delta=0.05\), over \(500\) independent trials. The prop-M strategy leads to a significant reduction in the sample size requirement
Figure 1: A comparison of prop-M vs. uniform sampling distributions, and the impact of intersecting with the logical CS (Section 2.2) on the width of the CSs, when \(\delta=0.05\). The prop-M strategy produces tighter CSs, and intersecting with the logical CS further reduces the width, particularly when few transactions are large (\(N_{\text{lg}}/N=0.2\)).
Figure 2: Distribution of the number of transactions audited (\(\tau\)) for the same experiments as Figure 1, with \(\varepsilon=0.05\). We omit the uniform (without logical CS) histograms, as they are concentrated entirely at \(N\).
to obtain an \(\varepsilon\)-accurate estimate of \(m^{*}\) as compared to the uniform baseline, both with and without the logical CS. Furthermore, the distribution of \(\tau\) with the prop-M strategy often has less variability than the uniform strategy. Hence, prop-M has demonstrated itself empirically to be a better sampling strategy than simply sampling uniformly, as one would do when all the weights are equal.
Using prop-MS with accurate side information. In the second experiment, we study the benefit of incorporating accurate side information in the design of our CSs, by comparing the performance of the prop-MS strategy with that of the prop-M strategy. We generate \(S\) randomly while ensuring \(S(i)/f(i)\in[1\pm a]\) (from (7)) for some \(a\in(0,1)\). Thus, smaller values of \(a\) imply that the scores \(S(i)\) are more accurate approximations of \(f(i)\) for all \(i\in[N]\).
In Figure 3(a), we can see that the prop-MS strategy with accurate side information dominates the prop-M strategy. This is further reflected in the distribution of \(\tau\) for an RLFA with \(\varepsilon=\delta=0.05\) in Figure 3(b). Hence, in situations where we are confident in the accuracy of our side information, we should incorporate it directly into our sampling strategy to reduce the width of the CS.
Control variates from possibly inaccurate side information. Finally, we consider the case in which we do not have prior information about the accuracy of the side information. Using the prop-MS strategy directly in this scenario can lead to very conservative CSs (this is because, in the absence of tight guarantees on the range of the \(S/f\) ratio, we would have to use the worst-case range). Instead, we compare the performance of the prop-M strategy with and without the control variates described in Section 4. In this case, we set \(S(i)=c\times f(i)+(1-c)\times R_{i}\) for \(c\in(0,1)\), where \((R_{i})_{i\in[N]}\) are i.i.d. random variables distributed uniformly over \([0,1]\). The parameter \(c\) controls the level of correlation between the \(f\) and \(S\) values, with small \(c\) values indicating low correlation.
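Continuing the data-generation sketch above, this side information is simply:

```python
import numpy as np

rng = np.random.default_rng(1)
c = 0.9                          # correlation level; small c means low correlation
R = rng.uniform(size=len(f))     # i.i.d. Uniform[0, 1] noise
S = c * f + (1 - c) * R          # side information fed to the control variates
```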
We generate the data with \(N_{\text{lg}}=40\) and \(N=200\). In Figure 4, we compare the CSs and the distribution of \(\tau\) for an RLFA (with \(\varepsilon=\delta=0.05\)) for the prop-M strategy with and without control variates, when the
Figure 3: Comparison of the prop-MS vs. the prop-M strategy with accurate side information (\(S(i)\)), i.e., \(S(i)/f(i)\in[0.9,1.1]\) where \(\varepsilon=\delta=0.05\). We see that prop-MS outperforms prop-M in both CS width and sample efficiency.
side information is generated with \(c=0.9\). Due to the high correlation, there is a significant decrease in the number of samples needed to reach an accuracy of \(\varepsilon\) when using control variates.
Finally, in Figure 5, we study the variation in sample efficiency as the correlation between \(S\) and \(f\) changes (i.e., by varying \(c\)). In particular, for 9 linearly spaced \(c\) values in the range \([0.1,0.9]\), we compute \(\tau\) for an RLFA without (\(\tau_{\text{no-CV}}\)) and with control variates (\(\tau_{\text{CV}}\)) over 250 trials, and then plot the mean of their ratio, \(\tau_{\text{CV}}/\tau_{\text{no-CV}}\).
Figure 5 highlights the key advantage of our CS construction using control variates -- this method automatically adapts to the correlation between the side information and the \(f\) values. In cases where the side information is highly correlated (i.e., larger \(c\) values), the reduction in samples is large; whereas when the correlation is small, our approach automatically reduces the impact of the side information.
## 6 Conclusion
In this paper, we defined the concept of an \((\varepsilon,\delta)\)-RLFA and devised RLFA procedures from confidence sequences (CSs) for the weighted average of \(N\) terms (denoted by \(m^{*}\)), using adaptive randomized sampling WoR. First, for arbitrary randomized sampling strategies, we developed two methods of constructing CSs for \(m^{*}\) using test martingales. We then addressed the question of effectively incorporating side information, with or without guarantees on their accuracy, to improve the quality of the CSs constructed.
Our work opens up several interesting directions for future work. For instance, in Proposition 2, we characterized the sampling strategy that optimizes a lower bound on the one-step growth rate. Future work could investigate whether we can obtain a more complete characterization of the optimal policy, without relying on approximations like Proposition 2. Another interesting issue not addressed in our paper is
Figure 4: The plots above show the width of the CSs and the distribution of \(\tau\) for the \(f\propto\pi\) and the \(f\propto 1/\pi\) cases, where \(N_{\text{lg}}/N=0.2\) and \(c=0.9\).
that of considering more general types of side information available to us. As described in Section 1, we have assumed that we have access to \([0,1]\) valued side information that is supposed to be a proxy for the true (and unknown) \(f\) values. However, in practical auditing problems, the side information is usually available in terms of a collection of numeric, discrete and categorical features that are correlated with the unknown \(f\) values. Developing methods for incorporating these more realistic forms of side information into our framework for designing CSs is another important question for future work. Furthermore, another type of side information is any knowledge from a prior audit. For example, auditors may know before reviewing any data (transactions or AI-generated side-info) that for this year, some accounts are likely to have smaller or bigger \(f\) values than other accounts because of the specific performance incentives placed on the company managers by their supervisors or by the market conditions.
**Acknowledgements.** We acknowledge Ian Waudby-Smith, Justin Whitehouse, Sang Wu, Tom Yan, and our collaborators at PricewaterhouseCoopers for their insightful discussions on this work. We also acknowledge PricewaterhouseCoopers for providing financial support for this research.
|
2305.18462 | Membership Inference Attacks against Language Models via Neighbourhood
Comparison | Membership Inference attacks (MIAs) aim to predict whether a data sample was
present in the training data of a machine learning model or not, and are widely
used for assessing the privacy risks of language models. Most existing attacks
rely on the observation that models tend to assign higher probabilities to
their training samples than non-training points. However, simple thresholding
of the model score in isolation tends to lead to high false-positive rates as
it does not account for the intrinsic complexity of a sample. Recent work has
demonstrated that reference-based attacks which compare model scores to those
obtained from a reference model trained on similar data can substantially
improve the performance of MIAs. However, in order to train reference models,
attacks of this kind make the strong and arguably unrealistic assumption that
an adversary has access to samples closely resembling the original training
data. Therefore, we investigate their performance in more realistic scenarios
and find that they are highly fragile in relation to the data distribution used
to train reference models. To investigate whether this fragility provides a
layer of safety, we propose and evaluate neighbourhood attacks, which compare
model scores for a given sample to scores of synthetically generated neighbour
texts and therefore eliminate the need for access to the training data
distribution. We show that, in addition to being competitive with
reference-based attacks that have perfect knowledge about the training data
distribution, our attack clearly outperforms existing reference-free attacks as
well as reference-based attacks with imperfect knowledge, which demonstrates
the need for a reevaluation of the threat model of adversarial attacks. | Justus Mattern, Fatemehsadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, Taylor Berg-Kirkpatrick | 2023-05-29T07:06:03Z | http://arxiv.org/abs/2305.18462v2 | # Membership Inference Attacks against Language Models via Neighbourhood Comparison
###### Abstract
Membership Inference attacks (MIAs) aim to predict whether a data sample was present in the training data of a machine learning model or not, and are widely used for assessing the privacy risks of language models. Most existing attacks rely on the observation that models tend to assign higher probabilities to their training samples than non-training points. However, simple thresholding of the model score in isolation tends to lead to high false-positive rates as it does not account for the intrinsic complexity of a sample. Recent work has demonstrated that reference-based attacks which compare model scores to those obtained from a reference model trained on similar data can substantially improve the performance of MIAs. However, in order to train reference models, attacks of this kind make the strong and arguably unrealistic assumption that an adversary has access to samples closely resembling the original training data. Therefore, we investigate their performance in more realistic scenarios and find that they are highly fragile in relation to the data distribution used to train reference models. To investigate whether this fragility provides a layer of safety, we propose and evaluate neighbourhood attacks, which compare model scores for a given sample to scores of synthetically generated neighbour texts and therefore eliminate the need for access to the training data distribution. We show that, in addition to being competitive with reference-based attacks that have perfect knowledge about the training data distribution, our attack clearly outperforms existing reference-free attacks as well as reference-based attacks with imperfect knowledge, which demonstrates the need for a reevaluation of the threat model of adversarial attacks.
## 1 Introduction
The public release and deployment of machine learning models trained on potentially sensitive user data introduces a variety of privacy risks: While embedding models have been shown to leak personal attributes of their data [14], generative language models are capable of generating verbatim repetitions of their training data and therefore exposing sensitive strings such as names, phone numbers or email-addresses [13]. Another source of risk arises from membership inference attacks (MIAs) [2], which enable adversaries to classify whether a given data sample was present in a target model's training data or not. Due to their simplicity and the fact that MIAs are an important component of more sophisticated attacks such as extraction attacks [13], they have become one of the most widely used tools to evaluate data leakage and empirically study the privacy of machine learning models [15, 14].
Typically, membership inference attacks exploit models' tendency to overfit their training data and therefore exhibit lower loss values for training members [21, 2]. A highly simple and commonly used baseline attack is therefore the LOSS attack [21], which classifies samples as training members if their loss values are below a certain threshold. While attacks of this kind do generally reap high accuracies, Carlini et al. (2021) point out a significant flaw: _Good accuracies for attacks of this kind are primarily a result of their ability to identify non-members rather than training data members_, which arguably does not pose important privacy risks. This shortcoming can be attributed to the fact that certain samples, such as repetitive or very simple short sentences, are naturally assigned higher probabilities than others [17, 18], and the influence of this aspect on the obtained model score largely outweighs the influence of a model's tendency to overfit its training samples [13]. To account for this, previous work has introduced the idea of _difficulty calibration mechanisms_ (Long et al., 2018; Watson et al., 2022), which aim to quantify the intrinsic complexity of a data sample (i.e., how much of an outlier the given sample is under the probability distribution of the target model) and subsequently use this value to regularize model scores before comparing them to a threshold value.
In practice, difficulty calibration is mostly realized through _Likelihood Ratio Attacks (LiRA)_, which measure the difficulty of a target point by feeding it to _reference models_ that help provide a perspective into how likely that target point is in the given domain (Ye et al., 2022; Carlini et al., 2021; Watson et al., 2022; Mireshghallah et al., 2022a,b). In order to train such reference models, LiRAs assume that an adversary has knowledge about the distribution of the target model's training data and access to a sufficient amount of samples from it. We argue that this is a highly optimistic and in many cases unrealistic assumption: as also pointed out by Tramer et al. (2022), in applications in which we care about privacy and protecting our models from leaking data (e.g. in the medical domain), high-quality, public in-domain data may not be available, which renders reference-based attacks ineffective. Therefore, we aim to design an attack which does not require any additional data: For the design of our proposed _neighborhood attack_, we build on the intuition of using references to help us infer membership, but instead of using reference models, we use _neighboring samples_, which are textual samples crafted through data augmentations such as word replacements to be non-training members that are as similar as possible to the target point and therefore practically interchangeable with it in almost any context. With the intuition that neighbors should be assigned the same probabilities as the original sample under any plausible textual probability distribution, we then compare the model scores of all these neighboring points to that of the target point and classify its membership based on their difference. Similar to LiRAs, we hypothesize that if the model score of the target data is similar to those of the crafted neighbors, then they are all plausible points from the distribution and the target point is not a member of the training set. However, if a sample is much more likely under the target model's distribution than its neighbors, we infer that this could only be a result of overfitting, and therefore the sample must be a part of the model's training data.
We conduct extensive experiments measuring the performance of our proposed neighborhood attack, and particularly compare it to reference-based attacks with various different assumptions about knowledge of the target distribution and access to additional data. Concretely, amongst other experiments, we simulate real-world reference-based attacks by training reference models on external datasets from the same domain as the target model's training data. We find that neighbourhood attacks outperform LiRAs with more realistic assumptions about the quality of accessible data by up to 100%, and even show competitive performance when we assume that an attacker has perfect knowledge about the target distribution and access to a large amount of high-quality samples from it.
Figure 1: Overview of our attack: Given a target sample \(x\), we use a pretrained masked language model to generate highly similar neighbour sentences through word replacements. Consequently, we compare our neighbours' losses and that of the original sample under the target model by computing their difference. As our neighbours are highly similar to the target sequence, we expect their losses to be approximately equal to that of the target sequence; the target sequence's loss should only be noticeably lower if it was a sample of the model's training data. In this case, the difference should be below our threshold value \(\gamma\).
## 2 Membership Inference Attacks via Neighbourhood Comparison
In this section, we provide a detailed description of our attack, starting with the general idea of comparing neighbouring samples and following with a technical description of how to generate such neighbors.
### General Idea
We follow the commonly used setup of membership inference attacks in which the adversary has grey-box access to a machine learning model \(f_{\theta}\) trained on an unknown dataset \(\mathcal{D}_{\mathrm{train}}\), meaning that they can obtain confidence scores and therefore loss values from \(f_{\theta}\), but no additional information such as model weights or gradients. The adversary's goal is to learn an attack function \(A_{f_{\theta}}:\mathcal{X}\rightarrow\{0,1\}\), which determines for each \(x\) from the universe of textual samples \(\mathcal{X}\) whether \(x\in\mathcal{D}_{\mathrm{train}}\) or \(x\not\in\mathcal{D}_{\mathrm{train}}\). As mentioned in the previous section, the LOSS attack (Yeom et al., 2018), one of the simplest forms of membership inference attacks, classifies samples by thresholding their loss scores, so that the membership decision rule is:
\[A_{f_{\theta}}(x)=\mathbbm{1}[\mathcal{L}(f_{\theta},x)<\gamma]. \tag{1}\]
More recent attacks follow a similar setup, but perform difficulty calibration to additionally account for the intrinsic complexity of the sample \(x\) under the target distribution and adjust its loss value accordingly. Concretely, given a function \(d:\mathcal{X}\rightarrow\mathbb{R}\) assigning difficulty scores to data samples, we can extend the decision rule to
\[A_{f_{\theta}}(x)=\mathbbm{1}[\mathcal{L}(f_{\theta},x)-d(x)<\gamma]. \tag{2}\]
Likelihood Ratio Attacks (LiRAs) (Ye et al., 2022), the currently most widely used form of membership inference attacks, use a sample's loss score obtained from some reference model \(f_{\phi}\) as a difficulty score, so that \(d(x)=\mathcal{L}(f_{\phi},x)\). However, this makes the suitability of the difficulty score function dependent on the quality of reference models, and therefore on access to data from the training distribution. We circumvent this by designing a different difficulty calibration function that relies on synthetically crafted neighbors.
Formally, for a given \(x\), we aim to produce natural adjacent samples, or a set of \(n\) neighbors \(\{\tilde{x}_{1},...,\tilde{x}_{n}\}\), which slightly differ from \(x\) and are not part of the target model's training data, but are approximately equally likely to appear in the general distribution of textual data, and therefore offer a meaningful comparison. Given our set of neighbors, we calibrate the loss score of \(x\) under the target model by subtracting the average loss of its neighbors from it, resulting in a new decision rule:
\[A_{f_{\theta}}(x)=\mathbbm{1}\left[\left(\mathcal{L}(f_{\theta},x)-\sum_{i=1}^ {n}\frac{\mathcal{L}(f_{\theta},\tilde{x}_{i})}{n}\right)<\gamma\right]. \tag{3}\]
The interpretation of this decision rule is straightforward: Neighbors crafted through minimal changes that fully preserve the semantics and grammar of a given sample should in theory be interchangeable with the original sentence and therefore be assigned highly similar likelihoods under any textual probability distribution. Assuming that our neighbors were not present in the training data of the target model, we can therefore use the model score assigned to them as a proxy for what the original sample's loss should be if it was not present in the training data. The target sample's loss value being substantially lower than the neighbors' losses could therefore only be a result of overfitting, and therefore of the target sample being a training member. In this case, we expect the difference in Equation 3 to be below our threshold value \(\gamma\).
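As a minimal sketch (the helper name is ours), the decision rule of Equation 3 amounts to:

```python
import numpy as np

def is_training_member(loss_x, neighbour_losses, gamma):
    """Equation 3: flag x as a training member if its loss under the target
    model lies sufficiently far below the mean loss of its neighbours."""
    return (loss_x - np.mean(neighbour_losses)) < gamma
```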
### Obtaining Neighbour Samples
In the previous section, for a given text \(x\), we assumed access to a set of adjacent samples \(\{\tilde{x}_{1},...,\tilde{x}_{n}\}\). In this section, we describe how those samples are generated. As it is highly important to consider neighbours that are approximately equally complex, we should, beyond the semantics of \(x\), also preserve its structure and syntax; we can therefore not simply use standard textual style transfer or paraphrasing models. Instead, we opt for very simple word replacements that preserve semantics and fit the context of the original word well. For obtaining these replacements, we adopt the framework proposed by Zhou et al. (2019), who propose the use of transformer-based (Vaswani et al., 2017) masked language models (MLMs) such as BERT (Devlin et al., 2019) for lexical substitutions: Concretely, given a text \(x:=(w^{(1)},...,w^{(L)})\) consisting of \(L\) tokens, the probability \(p_{\theta}(\tilde{w}=w^{(i)}|x)\) of token \(\tilde{w}\) as the word in position \(i\) can be obtained from the MLM's probability distribution \(p(\mathcal{V}^{(i)}|x)\) over our token vocabulary \(\mathcal{V}\) at position \(i\). As we do not want the probability of the original token to influence a candidate's suitability as a replacement when comparing it to other candidates, we normalize over all probabilities except that of the original token. So, if \(\hat{w}\) was the original token at position \(i\), our suitability score for \(\tilde{w}\) as a replacement is
\[p_{\mathrm{swap}}(\hat{w}^{(i)},\tilde{w}^{(i)})=\frac{p_{\theta}(\tilde{w}=w^{(i)}|x)}{1-p_{\theta}(\hat{w}=w^{(i)}|x)}. \tag{4}\]
In practice, simply masking the token which we want to replace will lead to our model completely neglecting the meaning of the original word when predicting alternative tokens, and therefore potentially changing the semantics of the original sentence. For instance, for the given sample "The movie was great", the probability distribution for the last token obtained from "The movie was [MASK]" might assign high scores to negative words such as "bad", which are clearly not semantically suitable replacements. To counteract this, Zhou et al. (2019) propose to keep the original token in the input text, but to add strong dropout to the input embedding layer at position \(i\) before feeding it into the transformer to obtain replacement candidates for \(w^{(i)}\). We adopt this technique, and therefore obtain a procedure which allows us to generate \(n\) suitable neighbours with \(m\) word replacements using merely an off-the-shelf model that does not require any adaptation to the target domain. The pseudocode is outlined in Algorithm 1.
```
Input : Text \(x=(w^{(1)},...,w^{(L)})\), \(n\), \(m\)
Output : Neighbours \(\{\tilde{x}_{1},...,\tilde{x}_{n}\}\) with \(m\) word replacements each

for \(i\in\{1,\dots,L\}\) do
    Get embeddings \((\phi(w^{(1)}),...,\phi(w^{(L)}))\).
    Add dropout: \(\phi(w^{(i)})=\mathrm{drop}(\phi(w^{(i)}))\).
    Obtain \(p(\mathcal{V}^{(i)}|x)\) from BERT.
    Compute \(p_{\mathrm{swap}}(w^{(i)},\tilde{w}^{(i)})\) for all \(\tilde{w}\in\mathcal{V}\).
end for
For all swaps \((w^{(i_{1})},\tilde{w}^{(i_{1})}),...,(w^{(i_{m})},\tilde{w}^{(i_{m})})\) with \(i_{k}\neq i_{l}\) for \(k\neq l\),
compute the joint suitability \(\sum_{k=1}^{m}p_{\mathrm{swap}}(w^{(i_{k})},\tilde{w}^{(i_{k})})\) and return the \(n\) highest.
```
**Algorithm 1** Neighbourhood Generation
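To make the replacement step concrete, the following sketch (our own, not the authors' released code; the checkpoint choice and helper names are assumptions) computes the Equation 4 scores for a single position \(i\) with an off-the-shelf BERT:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def replacement_candidates(text, i, n=10, p_drop=0.7):
    """Top-n substitutes for the token at tokenizer position i (position 0 is
    [CLS]), scored by the p_swap of Equation 4. The original token stays in
    the input, but its input embedding is perturbed with strong dropout."""
    enc = tok(text, return_tensors="pt")
    ids = enc["input_ids"]
    with torch.no_grad():
        embeds = mlm.get_input_embeddings()(ids)   # (1, L, d) word embeddings
        embeds[0, i] = torch.nn.functional.dropout(embeds[0, i], p=p_drop, training=True)
        logits = mlm(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits
    probs = logits[0, i].softmax(dim=-1)
    p_orig = probs[ids[0, i]].item()
    probs[ids[0, i]] = 0.0                         # exclude the original token
    p_swap = probs / (1.0 - p_orig)                # Equation 4
    scores, cands = p_swap.topk(n)
    return [(tok.decode([c]), s) for c, s in zip(cands.tolist(), scores.tolist())]
```

Ranking full neighbours then proceeds as in the last step of Algorithm 1, by summing these scores over \(m\) distinct positions.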
## 3 Experimental Setup
We evaluate the performance of our attack as well as reference-free and reference-based baseline attacks against large autoregressive models trained with the classical language modeling objective. Particularly, we use the base version of GPT-2 (Radford et al., 2019) as our target model.
### Datasets
We perform experiments on three datasets: news article summaries obtained from a subset of the AG News corpus1 containing four news categories ("World", "Sports", "Business", "Science & Technology"), tweets from the Sentiment140 dataset (Go et al., 2009), and excerpts from Wikipedia articles from Wikitext-103 (Merity et al., 2017). Each dataset is divided into two disjoint subsets of equal size: one of these subsets serves as training data for the target model and therefore consists of positive examples for the membership classification task. The second subset is not used for training, but its samples are used as negative examples for the classification task. The subsets contain 60,000, 150,000 and 100,000 samples for AG News, Twitter and Wikitext, respectively, leading to total sizes of 120,000, 300,000 and 200,000 samples. For all corpora, we also keep an additional third subset that we can use to train reference models for reference-based attacks.
Footnote 1: [http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html](http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html)
### Baselines
To compare the performance of our attack, we consider various baselines: As the standard method for reference-free attacks, we choose the **LOSS Attack** proposed by Yeom et al. (2018), which classifies samples as training members or non-members based on whether their loss is above or below a certain threshold (see Equation 1). For reference-based attacks, we follow recent implementations (Mireshghallah et al., 2022a,b) and use reference data to train a single reference model of the same architecture as the target model. Subsequently, we measure whether the likelihood of a sample under the target model divided by its likelihood under the reference model crosses a certain threshold.
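Both baseline scores reduce to per-sample language-model losses; the following sketch (our own helper, using the standard Hugging Face loss) shows the quantities being thresholded:

```python
import torch

def avg_nll(model, tok, text):
    """Average token negative log-likelihood of `text` under a causal LM."""
    enc = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

# LOSS attack:      member  <=>  avg_nll(target, tok, x) < gamma
# Reference-based:  member  <=>  avg_nll(target, tok, x) - avg_nll(reference, tok, x) < gamma
# (thresholding the log-likelihood ratio is equivalent to thresholding the ratio itself)
```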
Training Data for Reference Models. As discussed in previous sections, we would like to evaluate reference-based attacks with more realistic
assumptions about access to the training data distribution. Therefore, we use multiple reference models trained on different datasets: As our **Base Reference Model**, we consider the pretrained, but not fine-tuned version of GPT-2. Given the large pretraining corpus of this model, it should serve as a good estimator of the general complexity of textual samples and has also been successfully used for previous implementations of reference-based attacks [16]. Similar to our neighbourhood attack, this reference model does not require an attacker to have any additional data or knowledge about the training data distribution.
To train more powerful, but still realistic reference models, which we henceforth refer to as **Candidate Reference Models**, we use data that is in general similar to the target model's training data, but slightly deviates with regard to topics or artifacts that are the result of the data collection procedure. Concretely, we perform this experiment for both our AG News and Twitter corpora: For the former, we use article summaries from remaining news categories present in the AG News corpus ("U.S.", "Europe", "Music Feeds", "Health", "Software and Development", "Entertainment") as well as the NewsCatcher dataset2 containing article summaries for eight categories that highly overlap with AG News ("Business", "Entertainment", "Health", "Nation", "Science", "Sports", "Technology", "World"). For Twitter, we use a depression detection dataset for mental health support from tweets 3 as well as tweet data annotated for offensive language 4. As it was highly difficult to find data for reference models, it was not always possible to match the amount of training samples of the target model. The number of samples present in each dataset can be found in Table 1.
Footnote 2: [https://github.com/kotartemiy/topic-labeled-news-dataset](https://github.com/kotartemiy/topic-labeled-news-dataset)
Footnote 3: [https://www.kaggle.com/datasets/infanouscode/mental-health-social-media](https://www.kaggle.com/datasets/infanouscode/mental-health-social-media)
Footnote 4: [https://www.kaggle.com/datasets/rmorj/hate-speech-and-offensive-language-dataset](https://www.kaggle.com/datasets/rmorj/hate-speech-and-offensive-language-dataset)
As our most powerful reference model, henceforth referred to as **Oracle Reference Model**, we use models trained on the same corpora as the target models, but on different subsets. This setup assumes that an attacker has perfect knowledge about the training data distribution of the target model and access to high-quality samples from it.
### Implementation Details
We obtain and fine-tune all pretrained models using the Huggingface transformers library [14] and PyTorch [20]. As target models, we fine-tune the pretrained 117M parameter version of GPT-2, which originally has validation perplexities of 56.8 and 200.3 on AG News and Twitter data, respectively, down to validation set perplexities of 30.0 and 84.7. In our initial implementation of our neighbourhood attack, we obtain the 100 most likely neighbour samples using a single word replacement, generated with the pretrained 110M parameter version of BERT. We apply a dropout of \(p=0.7\) to the embedding of the token we want to replace. For evaluating LiRA baselines, we train each reference model on its respective training dataset over multiple epochs, and choose the best-performing reference model w.r.t. attack performance. Following Carlini et al. (2021), we evaluate our attack's precision for predetermined low false positive rate values such as 1% or 0.01%. We implement this evaluation scheme by adjusting our threshold \(\gamma\) to meet this requirement and subsequently measuring the attack's precision for the corresponding \(\gamma\). All models were deployed on single GeForce RTX 2080 and Tesla K40 GPUs.
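The threshold-adjustment step can be made explicit as follows (a sketch with our own helper name; the scores are the calibrated quantities above, where lower means 'member'):

```python
import numpy as np

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.01):
    """Pick gamma as the target_fpr-quantile of the non-member scores, so the
    empirical FPR equals target_fpr, then report the resulting TPR."""
    gamma = np.quantile(nonmember_scores, target_fpr)
    return float(np.mean(np.asarray(member_scores) < gamma))
```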
## 4 Results
In this section, we report our main results and perform additional experiments investigating the impact of reference model performance on the success of reference-based attacks as well as several ablation studies. Following [11], we report attack performances in terms of their true positive rates (TPR) under very low false positive rates (FPR) by adjusting the threshold value \(\gamma\). Concretely, we choose 1%, 0.1% and 0.01% as our target FPR values.
| Dataset | #Samples |
| --- | --- |
| AG News (Other Categories) | 60,000 |
| NewsCatcher | 60,000 |
| AG News Oracle Data | 60,000 |
| Twitter Mental Health | 20,000 |
| Twitter Offensive Language | 25,000 |
| Twitter Oracle Data | 150,000 |
| Wikipedia Oracle Data | 100,000 |

Table 1: Number of samples in the reference model training data. Target models for News, Twitter and Wikipedia were trained on 60,000, 150,000 and 100,000 samples, respectively.
### Main Results
Our results can be found in Tables 2 and 3, with the former showing attack performance in terms of true positive rates under low false positive rates and the latter showing AUC values. As previously discovered, the LOSS attack tends to perform badly when evaluated at very low false positive rates (Carlini et al., 2021; Watson et al., 2022). Likelihood Ratio Attacks clearly outperform it, but we observe that their success is highly dependent on having access to suitable training data for reference models: attacks using the base reference model and the candidate models fall short of the performance of an attack using the oracle reference model by a large margin. Notably, they are also substantially outperformed by our Neighbour Attack, which can, particularly in low FPR ranges, even compete very well with or outperform Likelihood Ratio Attacks with an Oracle Reference Model, without relying on access to any additional data.
### Measuring the Dependence of Attack Success on Reference Model Quality
Motivated by the comparably poor performance of Likelihood Ratio Attacks with reference models trained on only slightly different datasets from the target training data, we aim to investigate the dependence of reference-based attack performance on the quality of reference models in a more controlled and systematic way. To do so, we train reference models on our oracle data over multiple epochs, and report the attack performance of Likelihood Ratio Attacks w.r.t. the reference models' validation perplexity (PPL) on a held-out test set, which is in this case the set of non-training members of the target model. Intuitively, we would expect the attack performance to peak when the validation PPL of reference models is similar to that of the target model, as the models then capture a very similar distribution and therefore offer the best comparison to the target model. In this setup, we are, however, particularly interested in the attack performance when the validation PPL does not exactly match that of the target model, given that attackers will not always be able to train perfectly performing reference models.
The results of this experiment can be found in Figure 2 for our News and Twitter dataset and in Figure 3 for Wikitext. As can be seen, the performance of reference-based attacks does indeed peak when reference models perform roughly the same as the target model. A further very interesting observation is that substantial increases in attack success only seem to emerge as the validation PPL of reference models comes very close to that of the target model and therefore only crosses the success
| | News 1% | News 0.1% | News 0.01% | Twitter 1% | Twitter 0.1% | Twitter 0.01% | Wikipedia 1% | Wikipedia 0.1% | Wikipedia 0.01% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Likelihood Ratio Attacks:** | | | | | | | | | |
| Base Reference Model | 4.24% | 0.91% | 0.16% | 5.66% | 0.98% | 0.22% | 1.21% | 0.12% | 0.01% |
| Candidate Reference Model 1 | 4.91% | 0.95% | 0.15% | 6.49% | 1.10% | 0.24% | | | |
| Candidate Reference Model 2 | 4.76% | 0.92% | 0.15% | 6.61% | 1.19% | 0.25% | | | |
| Oracle Reference Model* | 18.90% | 3.76% | 0.16% | 13.90% | 1.59% | 0.28% | 11.70% | 3.70% | 0.12% |
| **Reference-Free Attacks:** | | | | | | | | | |
| LOSS Attack | 3.50% | 0.10% | 0.01% | 2.08% | 0.11% | 0.02% | 1.06% | 0.11% | 0.01% |
| Neighbour Attack (Ours) | **8.29%** | **1.73%** | **0.29%** | **7.35%** | **1.43%** | **0.28%** | **2.32%** | **0.27%** | **0.10%** |

Table 2: True positive rates of various attacks for low false positive rates of \(1\%\), \(0.1\%\), and \(0.01\%\). Candidate Reference Model 1 refers to reference models trained on data from other AG News categories and our Twitter mental health dataset; Candidate Reference Model 2 refers to reference models trained on NewsCatcher and the offensive tweet classification dataset. *As reference attacks trained on oracle datasets represent a rather unrealistic scenario with perfect assumptions, we compare our results only against baselines with more realistic assumptions when highlighting the best results in bold.
| | News | Twitter | Wiki |
| --- | --- | --- | --- |
| **LiRA:** | | | |
| Base Reference Model | 0.76 | 0.75 | 0.54 |
| Candidate Reference 1 | 0.78 | **0.81** | |
| Candidate Reference 2 | 0.75 | 0.77 | |
| Oracle Reference* | 0.94 | 0.89 | 0.90 |
| **Other Attacks:** | | | |
| LOSS Attack | 0.64 | 0.60 | 0.52 |
| Neighbour Attack | **0.79** | 0.77 | **0.62** |

Table 3: AUC values of various attacks.
rate of neighbourhood attacks when the reference model's performance is almost the same as that of the target model. This further illustrates the fragility of reference-based attacks with respect to the choice of the reference model.
### Ablation Studies
Having extensively studied the impact of different reference model training setups for the Likelihood Ratio Attack, we now aim to explore the effect of various components of our proposed neighbourhood attack.
Number of Generated Neighbours. For our main results in Table 2, we report the performance of neighbour attacks for the 100 most likely generated neighbours as determined by BERT. In the following, we measure how varying this number affects the attack performance. While, intuitively, a higher number of neighbours might offer a more robust comparison, it is also plausible that selecting a lower number of most likely neighbours under BERT will lead to neighbours of higher quality and therefore a more meaningful comparison of loss values. Our results in Table 4 show a clear trend towards the former hypothesis: the number of neighbours does in general have a strong influence on the performance of neighbourhood attacks, and higher numbers of neighbours produce better results.
Number of Word Replacements. Besides the number of generated neighbours, we study how the number of replaced words affects the performance of our attack. While we reported results for the replacement of a single word in our main results in Table 2, there are also reasons to expect that a higher number of replacements leads to better attack performance: While keeping neighbours as similar to the original samples as possible ensures that their probability in the general distribution of textual data remains as close as possible, one could also expect that too few changes lead the target model to assign the original sample and its neighbours almost exactly the same score, and therefore make it hard to observe large differences in loss scores for training members. Our results for generating 100 neighbours with multiple word replacements are reported in Table 5. We find that replacing only one word clearly outperforms multiple replacements. Beyond this, we do not find highly meaningful differences between two and three word replacements.
| #Neighbours | 5 | 10 | 25 | 50 | 100 |
| --- | --- | --- | --- | --- | --- |
| **News:** | | | | | |
| 1% FPR | 2.98% | 4.57% | 6.65% | 8.19% | 8.29% |
| 0.1% FPR | 0.53% | 0.79% | 1.43% | 1.50% | 1.73% |
| 0.01% FPR | 0.05% | 0.07% | 0.18% | 0.23% | 0.29% |
| **Twitter:** | | | | | |
| 1% FPR | 3.93% | 4.88% | 6.21% | 6.63% | 7.35% |
| 0.1% FPR | 0.57% | 0.62% | 1.01% | 1.34% | 1.43% |
| 0.01% FPR | 0.05% | 0.07% | 0.10% | 0.23% | 0.28% |
| **Wikipedia:** | | | | | |
| 1% FPR | 1.57% | 1.81% | 2.02% | 2.17% | 2.32% |
| 0.1% FPR | 0.16% | 0.21% | 0.23% | 0.26% | 0.27% |
| 0.01% FPR | 0.05% | 0.08% | 0.09% | 0.10% | 0.10% |

Table 4: Attack performance w.r.t. the number of neighbours against which we compare the target sample.
Figure 2: Attack performance of reference-based attacks w.r.t. the validation PPL of reference models, compared to the performance of neighbourhood attacks. The perplexities of the target models were 30.0 and 84.7 for AG News and Twitter, respectively.
## 5 Defending against Neighbourhood Attacks
Due to the privacy risks that emerge from the possibility of membership inference and data extraction attacks, the research community is actively working on defenses to protect models. Beyond approaches such as confidence score perturbation (Jia et al., 2019) and specific regularization techniques (Mireshghallah et al., 2021; Chen et al., 2022) showing good empirical performance, differentially private model training is one of the most well-known defense techniques offering mathematical privacy guarantees. DP-SGD (Song et al., 2013; Bassily et al., 2014; Abadi et al., 2016) uses differential privacy (Dwork et al., 2006) to bound the influence that a single training sample can have on the resulting model; it has been shown to successfully protect models against membership inference attacks (Carlini et al., 2021) and has recently also been successfully applied to training language models (Yu et al., 2022; Li et al., 2022; Mireshghallah et al., 2021). To test the effectiveness of differential privacy as a defense against neighbourhood attacks, we follow Li et al. (2022) and train our target model GPT-2 in a differentially private manner on AG News, where our attack performed best. The results can be seen in Table 6 and clearly demonstrate the effectiveness of DP-SGD: even for comparably high epsilon values such as ten, the performance of the neighbourhood attack is substantially worse compared to the non-private model and is almost akin to random guessing for low FPR values.
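As an illustration of the defense, here is a minimal sketch of differentially private fine-tuning using the Opacus library; this is our choice for illustration (the paper follows Li et al. (2022), whose implementation differs, and layer support for a given architecture must be checked). `model`, `optimizer`, and `train_loader` are assumed to be set up beforehand:

```python
from opacus import PrivacyEngine

privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    epochs=3,                # assumed fine-tuning budget
    target_epsilon=10.0,     # e.g. the eps = 10 column of Table 6
    target_delta=1e-5,
    max_grad_norm=1.0,       # per-sample gradient clipping bound
)
# Fine-tuning then proceeds as usual: per-sample gradients are clipped
# and Gaussian noise is added, yielding an (eps, delta)-DP model.
```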
## 6 Related Work
MIAs were first proposed by Shokri et al. (2016) and continue to remain a topic of interest for the machine learning community. While many attacks, such as ours, assume access only to model confidence or loss scores (Yeom et al., 2018; Sablayrolles et al., 2019; Jayaraman et al., 2020; Watson et al., 2022), others exploit additional information such as model parameters (Leino and Fredrikson, 2020) or training loss trajectories (Liu et al., 2022). Finally, some researchers have also attempted to perform membership inference attacks given only hard labels without confidence scores (Li and Zhang, 2021; Choquette-Choo et al., 2021). Notably, the attack proposed by Choquette-Choo et al. (2021) is probably closest to our work, as it tries to obtain information about a sample's membership by flipping its predicted labels through small data augmentations such as rotations of image data. To the best of our knowledge, we are the first to apply data augmentations of this kind for text-based attacks.
Membership Inference Attacks in NLP. Specifically in NLP, membership inference attacks are an important component of language model extraction attacks (Carlini et al., 2021; Mireshghallah et al., 2022). Further studies of interest include work by Hisamoto et al. (2020), which studies membership inference attacks in machine translation, as well as work by Mireshghallah et al. (2022), which investigates Likelihood Ratio Attacks for masked language models. Specifically for language models, a large body of work also studies the related phenomenon of memorization (Kandpal et al., 2022; Carlini et al., 2022; Zhang et al., 2021), which enables membership inference and data extraction attacks in the first place.
Machine-Generated Text Detection. Due to the increasing use of tools like ChatGPT as writing assistants, the field of machine-generated text detection has become of high interest to the research community and is being studied extensively
| | \(\epsilon=5\) | \(\epsilon=10\) | \(\epsilon=\infty\) |
| --- | --- | --- | --- |
| TPR @ 1% FPR | 1.29% | 1.52% | 8.29% |
| TPR @ 0.1% FPR | 0.09% | 0.13% | 1.73% |
| TPR @ 0.01% FPR | 0.01% | 0.01% | 0.29% |

Table 6: Performance of neighbourhood attacks against models trained with DP-SGD.
| #Word Replacements | 1 | 2 | 3 |
| --- | --- | --- | --- |
| **News:** | | | |
| 1% FPR | 8.29% | 4.09% | 4.18% |
| 0.1% FPR | 1.73% | 0.85% | 0.94% |
| 0.01% FPR | 0.29% | 0.23% | 0.21% |
| **Twitter:** | | | |
| 1% FPR | 7.35% | 4.86% | 4.37% |
| 0.1% FPR | 1.43% | 0.74% | 0.72% |
| 0.01% FPR | 0.28% | 0.14% | 0.11% |
| **Wikipedia:** | | | |
| 1% FPR | 2.32% | 1.76% | 1.44% |
| 0.1% FPR | 0.27% | 0.23% | 0.17% |
| 0.01% FPR | 0.10% | 0.07% | 0.03% |

Table 5: Attack performance w.r.t. the number of words that are replaced when generating neighbours.
(Chakraborty et al., 2023; Krishna et al., 2023; Mitchell et al., 2023; Mireshghallah et al., 2023). Notably, Mitchell et al. (2023) propose DetectGPT, which works similarly to our attack as it compares the likelihood of a given sample under the target model to the likelihood of perturbed samples and hypothesizes that the likelihood of perturbations is smaller than that of texts the model has generated itself.
## 7 Conclusion and Future Work
In this paper, we have made two key contributions: First, we thoroughly investigated the assumption of access to in-domain data for reference-based membership inference attacks. In our experiments, we found that likelihood ratio attacks, the most common form of reference-based attacks, are highly fragile with respect to the quality of their reference models and therefore require attackers to have access to high-quality training data for those. Given that, specifically in privacy-sensitive settings where publicly available data is scarce, this is not always a realistic assumption, we proposed that the design of reference-free attacks simulates the behavior of attackers more accurately. Thus, we introduced neighbourhood attacks, which calibrate the loss score of a target sample using the loss scores of plausible neighbouring textual samples generated through word replacements, and therefore eliminate the need for reference models trained on in-domain data. We found that under realistic assumptions about an attacker's access to training data, our attack consistently outperforms reference-based attacks. Furthermore, when an attacker has perfect knowledge about the training data, our attack still shows competitive performance with reference-based attacks. We hereby further demonstrated the privacy risks associated with the deployment of language models, and therefore the need for effective defense mechanisms. Future work could extend our attack to other modalities, such as visual or audio data, or use our attack to improve extraction attacks against language models.
## Limitations
**The proposed attack is specific to textual data.** While many membership inference attacks are universally applicable to all modalities, as they mainly rely on loss values obtained from models, our proposed method for generating neighbours is specific to textual data. While standard augmentations such as rotations could be used to apply our method to visual data, this transfer is not as straightforward as it is for other attacks.
**Implementation of baseline attacks.** As the performance of membership inference attacks depends on the training procedure of the attacked model as well as its degree of overfitting, it is not possible to simply compare attack performance metrics from other papers to ours. Instead, we had to reimplement existing attacks to compare them to our approach. While we followed the authors' descriptions in their papers as closely as possible, we cannot guarantee that their attacks were perfectly implemented and that the comparison to our method is therefore 100% fair.
## Ethical Considerations
Membership inference attacks can be used by malicious actors to compromise the privacy of individuals whose data has been used to train models. However, studying and expanding our knowledge of such attacks is crucial in order to build a better understanding of threat models and to build better defense mechanisms that take into account the tools available to malicious actors. Due to the importance of this aspect, we have extensively highlighted existing work studying how to defend against MIAs in Section 6. As we are aware of the potential risks that arise from membership inference attacks, we will not freely publicize our code, but instead give access for research projects upon request.
With regard to the data we used, we do not see any issues, as all datasets are publicly available and have long been used in NLP research or data science competitions.
|
2306.01463 | S-duality and the universal isometries of instanton corrected q-map
spaces | Given a conical affine special K\"{a}hler (CASK) manifold together with a
compatible mutually local variation of BPS structures, one can construct a
quaternionic-K\"{a}hler (QK) manifold. We call the resulting QK manifold an
instanton corrected c-map space. Our main aim is to study the isometries of a
subclass of instanton corrected c-map spaces associated to projective special
real (PSR) manifolds with a compatible mutually local variation of BPS
structures. We call the latter subclass instanton corrected q-map spaces. In
the setting of Calabi-Yau compactifications of type IIB string theory,
instanton corrected q-map spaces are related to the hypermultiplet moduli space
metric with perturbative corrections, together with worldsheet, D(-1) and D1
instanton corrections. In the physics literature, it has been shown that the
hypermultiplet metric with such corrections must have an
$\mathrm{SL}(2,\mathbb{Z})$ acting by isometries, related to S-duality. We give
a mathematical treatment of this result, specifying under which conditions
instanton corrected q-map spaces carry an action by isometries by
$\mathrm{SL}(2,\mathbb{Z})$ or some of its subgroups. We further study the
universal isometries of instanton corrected q-map spaces, and compare them to
the universal isometries of tree-level q-map spaces. Finally, we give an
explicit example of a non-trivial instanton corrected q-map space with full
$\mathrm{SL}(2,\mathbb{Z})$ acting by isometries and admitting a quotient of
finite volume by a discrete group of isometries. | Vicente Cortés, Iván Tulli | 2023-06-02T11:40:26Z | http://arxiv.org/abs/2306.01463v1 | # S-duality and the universal isometries of instanton corrected q-map spaces
###### Abstract
Given a conical affine special Kähler (CASK) manifold together with a compatible mutually local variation of BPS structures, one can construct a quaternionic-Kähler (QK) manifold. We call the resulting QK manifold an instanton corrected c-map space. Our main aim is to study the isometries of a subclass of instanton corrected c-map spaces associated to projective special real (PSR) manifolds with a compatible mutually local variation of BPS structures. We call the latter subclass instanton corrected q-map spaces. In the setting of Calabi-Yau compactifications of type IIB string theory, instanton corrected q-map spaces are related to the hypermultiplet moduli space metric with perturbative corrections, together with worldsheet, D(-1) and D1 instanton corrections. In the physics literature, it has been shown that the hypermultiplet metric with such corrections must have an \(\mathrm{SL}(2,\mathbb{Z})\) acting by isometries, related to S-duality. We give a mathematical treatment of this result, specifying under which conditions instanton corrected q-map spaces carry an action by isometries by \(\mathrm{SL}(2,\mathbb{Z})\) or some of its subgroups. We further study the universal isometries of instanton corrected q-map spaces, and compare them to the universal isometries of tree-level q-map spaces. Finally, we give an explicit example of a non-trivial instanton corrected q-map space with full \(\mathrm{SL}(2,\mathbb{Z})\) acting by isometries and admitting a quotient of finite volume by a discrete group of isometries.
Department of Mathematics
University of Hamburg
Bundesstrasse 55, D-20146 Hamburg, Germany
###### Contents
* 1 Introduction
  * 1.1 Summary of main results
  * 1.2 Organization of topics
* 2 Instanton corrected QK metrics and their twistor description
  * 2.1 QK metrics associated to CASK manifolds with mutually local variations of BPS structures
    * 2.1.1 Associated instanton corrected HK manifold
    * 2.1.2 Associated instanton corrected QK manifold via HK/QK correspondence
    * 2.1.3 The case of a CASK domain
  * 2.2 Twistor space description and Darboux coordinates
    * 2.2.1 Darboux coordinates for c-map spaces associated to CASK domains
    * 2.2.2 The case with instanton corrections
* 3 S-duality on instanton corrected q-map spaces
  * 3.1 Setting
  * 3.2 The quantum corrected mirror map and the S-duality action
  * 3.3 Type IIB Darboux coordinates
    * 3.3.1 Preliminary lemmas
    * 3.3.2 Poisson resummation of the quantum corrections of the contact structure
    * 3.3.3 Proof of Theorem 3.9
  * 3.4 S-duality
* 4 Universal isometries of instanton corrected q-map spaces and S-duality
* 5 An example of full S-duality
* A Integral identities in terms of Bessel functions
* B Type IIA Darboux coordinates for instanton corrected c-map spaces
## 1 Introduction
The supergravity c-map assigns to a projective special Kahler (PSK) manifold \((\overline{M},g_{\overline{M}})\) of complex dimension \(n\), a quaternionic-Kahler (QK) manifold \((\overline{N},g_{\overline{N}})\) of real dimension \(4n+4\). In the context of Calabi-Yau compactifications of type IIA/B string theory on a Calabi-Yau threefold \(X\), the c-map takes the vector multiplet moduli space \(\mathcal{M}^{\rm IIA/B}_{\rm VM}(X)\) to the hypermultiplet moduli space \(\mathcal{M}^{\rm IIB/A}_{\rm HM}(X)\) with its string-tree-level metric \(g_{\rm FS}\), also known as the Ferrara-Sabharwal metric [11, 12]. Such a construction receives quantum corrections in the string coupling \(g_{s}\) of several kinds:
* Perturbative corrections: these produce the so-called 1-loop corrected Ferrara-Sabharwal metric on \(\mathcal{M}^{\rm IIA/B}_{\rm HM}(X)\)[13, 1]. In a purely mathematical setting, this construction can be understood as a way of assigning to a PSK manifold \((\overline{M},g_{\overline{M}})\) a 1-parameter family of QK manifolds \(\{(\overline{N}_{c_{\ell}},g_{\overline{N},c_{\ell}})\}_{c_{\ell}\in\mathbb{R}}\)[1], where \(c_{\ell}\in\mathbb{R}\) corresponds to the 1-loop correction.
* Non-perturbative quantum corrections: these are divided into D-instanton and NS5-instanton corrections. They have been extensively studied in the physics literature via twistor methods, see for example the reviews [1, 2] and the references therein. The inclusion of all D-instanton corrections is understood in the physics literature [1, 2], while the NS5-instanton corrections are still work in progress (see for example [1]).
When restricted to the simpler setting of mutually local D-instanton corrections, a fairly explicit local formula for the QK metric was given in the physics literature via twistor methods in [1]1. On the other hand, in [10] a mathematical treatment (based on the geometric approach of [1]) of a class of QK metrics related to the above mutually local case was given. Namely, if \((\overline{M},g_{\overline{M}})\) is a PSK manifold and \((M,g_{M},\omega_{M},\nabla,\xi)\) the associated conical affine special Kahler (CASK) manifold, then one can complement this data with a mutually local variation of BPS structures \((M,\Gamma,Z,\Omega)\) to construct a new QK metric \((\overline{N},g_{\overline{N}})\) (we suppress the choice of 1-loop parameter \(c_{\ell}\) from the notation). The general notion of variation of BPS structure can be found in [14] (see also Definition 2.2 below for the specific case to be used throughout this paper). Here \((M,\Gamma,Z,\Omega)\) is assumed to satisfy certain compatibility conditions with the CASK structure \((M,g_{M},\omega_{M},\nabla,\xi)\) (see Section 2 below), and encodes the analog of "mutually local D-instanton" corrections when compared to the string theory setting.
Footnote 1: Here by “fairly explicit” we mean that the expression is explicit except for a certain function \(\mathcal{R}\), which is implicitly defined in terms of the field variables.
On the other hand, type IIB string theory carries an \(\mathrm{SL}(2,\mathbb{Z})\)-symmetry called S-duality. In the physics literature, it has been shown that S-duality descends to an action by isometries on \(\mathcal{M}^{\rm IIB}_{\rm HM}(X)\) when one includes the appropriate amount of quantum corrections for the metric. For example:
* When one drops all corrections in the string coupling \(g_{s}\) and takes a large volume limit, one obtains the classical QK metric on \(\mathcal{M}^{\rm IIB}_{\rm HM}(X)\). This metric has been shown to have an \(\mathrm{SL}(2,\mathbb{R})\) acting by isometries [1], which comes from the \(\mathrm{SL}(2,\mathbb{Z})\) S-duality action. Furthermore, it has been shown in [1] that the \(\mathrm{SL}(2,\mathbb{R})\) (and even \(\mathrm{SL}(2,\mathbb{Z})\subset\mathrm{SL}(2,\mathbb{R})\)) is broken when one includes world-sheet instanton corrections and/or the 1-loop correction in \(g_{s}\).
* On the other hand, it has been shown in [1, 2] that when one includes world-sheet instanton corrections, the 1-loop correction, and D(-1) and D1 instanton corrections, one recovers again an \(\mathrm{SL}(2,\mathbb{Z})\) action by isometries coming from the S-duality action. As a consequence of their result, it also follows that including only perturbative world-sheet instanton corrections, the 1-loop correction in \(g_{s}\), and D(-1) instanton corrections also preserves the isometric S-duality action. For some other extensions of this result in the physics literature see for example [1, 2, 1, 1].
* The QK metric of \(\mathcal{M}^{\rm IIB}_{\rm HM}(X)\) is also expected to retain the \(\mathrm{SL}(2,\mathbb{Z})\) S-duality action by isometries when one includes all quantum corrections, but such a metric has not been constructed or understood (as far as the authors know) in the physics literature.
The classical metric obtained in the case when one drops all corrections in \(g_{s}\) and takes a large volume limit lies in a subset of c-map metrics called q-map metrics [12]. Mathematically, the q-map assigns to an \(n-1\geq 0\) dimensional projective special real (PSR) manifold a \(4n+4\) dimensional QK manifold [13, 14]. A purely differential geometric proof that q-map metrics carry an \(\operatorname{SL}(2,\mathbb{R})\) of isometries was given in [15], while more traditional supergravity arguments can be found for example in [12, 12]. On the other hand, including world-sheet instanton corrections and the 1-loop correction takes the q-map metric to the class of 1-loop corrected c-map metrics; while including D(-1) and D1 instanton corrections takes the 1-loop corrected c-map metric to the class of mutually local instanton corrected QK metrics studied in [1, 15].
Our main objective in this paper is to give a mathematical treatment of the S-duality results in [1, 2], and to study the universal isometries of instanton corrected q-map spaces, i.e. those isometries that are independent of the initial PSR manifold and of the form of the quantum corrections to which we restrict ourselves. Namely, among the class of instanton corrected c-map metrics \((\overline{N},g_{\overline{N}})\) constructed in [15] we restrict to a subclass, which we call instanton corrected q-map metrics, and show under which conditions they carry an \(\operatorname{SL}(2,\mathbb{Z})\)-action (or an action by some of its subgroups) by isometries. This \(\operatorname{SL}(2,\mathbb{Z})\)-action is furthermore related to the S-duality symmetry in the string theory setting. Furthermore, we study how the universal group of isometries of q-map spaces described in [15] gets modified for instanton corrected q-map spaces (see Section 1.1 below for more details).
### 1.1 Summary of main results
The main differences and new results compared to the works in the physics literature [1, 2], from which we take a lot of inspiration, are the following:
* If the S-duality \(\operatorname{SL}(2,\mathbb{Z})\) action defined in (3.18) restricts to an action on the domain of definition of the instanton corrected q-map space, then the twistor space argument of [1, 2] follows and the action is also by isometries. However, verifying that the domain of definition actually carries such an action seems non-trivial. In Theorem 4.7 (explained in more detail below) we collect results regarding this point. In particular we find that, even in a case where we do not have an \(\operatorname{SL}(2,\mathbb{Z})\)-invariant domain of definition, one can always find one that is invariant under either \(S\in\operatorname{SL}(2,\mathbb{Z})\) or \(T\in\operatorname{SL}(2,\mathbb{Z})\) (where \(T\) and \(S\) are the usual generators of \(\operatorname{SL}(2,\mathbb{Z})\) given in (3.83)). Furthermore, in Section 5 we give an explicit non-trivial (albeit simple) example where one can find such an \(\operatorname{SL}(2,\mathbb{Z})\)-invariant neighborhood of definition.
* Assuming that the domain of definition of the instanton corrected q-map space carries an action by the S-duality \(\operatorname{SL}(2,\mathbb{Z})\), we show that this action must be by isometries by following an argument similar but slightly different from [1]. Using their twistor description of the relevant QK manifolds, it is shown in [1] that certain "type IIA" Darboux coordinates for the contact structure of the twistor space can be Poisson resummed, and then it is shown that the later can be related via a gauge transformation to certain "type IIB" Darboux coordinates. The type IIB coordinates then make transparent that a certain lift of the \(\operatorname{SL}(2,\mathbb{Z})\) action to the twistor space acts by twistor space automorphisms, and hence that \(\operatorname{SL}(2,\mathbb{Z})\) acts by isometries on the QK metric. In our work, we find it simpler to do a direct Poisson resummation of the contact structure corresponding to the QK metric constructed in [15], and then use the resulting expression to show that the "type IIB" coordinates from [1] are Darboux coordinates for the contact structure.
* We further study certain universal isometry groups of instanton corrected q-map spaces, and compare them with what happens in the case where no quantum corrections are considered (see Section 4 and Theorem 4.7). In particular, while S-duality is a universal isometry for tree-level q-map spaces, we are not able to guarantee that the same is true for instanton corrected q-map spaces (see Remark 1.1). Furthermore, in the example of Section 5 we use the \(\operatorname{SL}(2,\mathbb{Z})\) action by isometries together with the universal isometry group to show that our example admits a quotient of finite volume by a discrete group of isometries.
In the following, we explain in more detail the setting and the aforementioned results. As previously mentioned, our main results concern a class of QK metrics called instanton corrected q-map spaces,
defined in Section 3.1. Roughly speaking, these are QK manifolds associated to a CASK manifold described by a holomorphic prepotential \((M,\mathfrak{F})\), together with a variation of mutually local BPS structures \((M,\Gamma,Z,\Omega)\) (see [1] for the general definition of variation of BPS structures) of the following form:
* The holomorphic prepotential \(\mathfrak{F}:M\subset\mathbb{C}^{n+1}\to\mathbb{C}\) has the form \[\mathfrak{F}(Z^{i})=-\frac{1}{6}k_{abc}\frac{Z^{a}Z^{b}Z^{c}}{Z^{0}}+\chi\frac {(Z^{0})^{2}\zeta(3)}{2(2\pi\mathrm{i})^{3}}-\frac{(Z^{0})^{2}}{(2\pi\mathrm{i })^{3}}\sum_{\tilde{\gamma}=q_{a}\gamma^{a}\in\Lambda^{+}}n_{\tilde{\gamma}} \mathrm{Li}_{3}(e^{2\pi\mathrm{i}q_{a}Z^{a}/Z^{0}}),\] (1.1) where \(i=0,...,n\); \(a,b,c=1,...,n\); \(k_{abc}\in\mathbb{R}\) are symmetric in the indices, \(\chi\in\mathbb{Z}\), \(n_{\tilde{\gamma}}\in\mathbb{Z}\), \(\mathrm{Li}_{n}(x)\) denote the n-th polylogarithms, \(\zeta(x)\) is the Riemann zeta function, and \(\Lambda^{+}:=\mathrm{span}_{\mathbb{Z}_{\geq 0}}\{\gamma^{a}\}_{a=1}^{n}-\{0\}\) is a commutative semigroup freely generated by \(n\) elements \(\{\gamma^{a}\}_{a=1}^{n}\). This choice of \(\mathfrak{F}\) is motivated by Calabi-Yau compactifications of string theory. Namely, if \(X\) denotes a Calabi-Yau threefold, and if \(k_{abc}\) are taken to be the triple intersection numbers of \(X\), \(\chi=\chi(X)\) the Euler characteristic, and for \(\hat{\gamma}\in H_{2}^{+}(X,\mathbb{Z})\) we have that \(n_{\tilde{\gamma}}=n_{\tilde{\gamma}}^{(0)}\) are the genus zero Gopakumar-Vafa invariants, then \(\mathfrak{F}\) denotes the prepotential specifying the PSK geometry of \(\mathcal{M}_{\mathrm{YM}}^{\mathrm{IIA}}(X)\) with all worldsheet corrections. Applying the 1-loop corrected c-map, one obtains from \(\mathcal{M}_{\mathrm{YM}}^{\mathrm{IIA}}(X)\) the 1-loop corrected metric on \(\mathcal{M}_{\mathrm{HM}}^{\mathrm{IIB}}(X)\).
* The charge lattice \(\Gamma\) and the central charge \(Z\) of \((M,\Gamma,Z,\Omega)\) are canonically determined by \(\mathfrak{F}\) (see Section 2.1.3), while the BPS indices \(\Omega(\gamma)\) are also determined by \(\mathfrak{F}\) as follows: with respect to a canonical Darboux basis \((\widetilde{\gamma}_{i},\gamma^{i})_{i=0}^{n}\) of \(\Gamma\) with respect to its symplectic pairing, we have \[\begin{cases}\Omega(q_{0}\gamma^{0})=-\chi,\quad q_{0}\in\mathbb{Z}-\{0\}\\ \Omega(q_{0}\gamma^{0}\pm q_{a}\gamma^{a})=\Omega(\pm q_{a}\gamma^{a})=n_{q_{ a}\gamma^{a}}\quad\text{for $q_{a}\gamma^{a}\in\Lambda^{+}$, $q_{0}\in\mathbb{Z}$}\\ \Omega(\gamma)=0\quad\text{else}.\end{cases}\] (1.2) The prescription (1.2) has previously appeared in the physics literature (see for example [1, Equation 4.5]), and is the data required to add the D(-1) and D1 instanton corrections to \(\mathcal{M}_{\mathrm{HM}}^{\mathrm{IIB}}(X)\) in an "S-duality compatible"-way. Furthermore, in the case of a non-compact Calabi-Yau threefold \(X\) without compact divisors the same structure for the BPS indices is expected. Indeed, in [11] the appropriate \(\Omega(\gamma)\) determining generalized Donaldson-Thomas invariants are constructed. It is then shown that \(\Omega(q_{0}\gamma^{0})=-\chi(X)\)[11, Section 6.3], and conjectured that \(\Omega(q_{0}\gamma^{0}\pm\hat{\gamma})=n_{\tilde{\gamma}}^{(0)}\)[11, Conjecture 6.20].
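The prescription (1.2) is purely combinatorial, and it can be helpful to see it spelled out algorithmically. The following minimal Python sketch evaluates \(\Omega(\gamma)\) for a charge \(\gamma=q_{0}\gamma^{0}+q_{a}\gamma^{a}\) encoded as an integer tuple \((q_{0},q_{1},\ldots,q_{n})\); the table `n_gv` of genus zero invariants and the sample values below are hypothetical and purely illustrative.

```python
# A minimal sketch of the prescription (1.2). A charge gamma is encoded as
# the integer tuple (q_0, q_1, ..., q_n); n_gv maps tuples (q_1, ..., q_n)
# with q_a*gamma^a in Lambda^+ to n_{q_a gamma^a} (hypothetical sample data).

def bps_index(q, chi, n_gv):
    """Return Omega(gamma) according to (1.2)."""
    q0, q_hat = q[0], tuple(q[1:])
    if all(qa == 0 for qa in q_hat):
        # gamma = q_0*gamma^0: Omega = -chi for q_0 != 0 (D(-1)-type charge).
        return -chi if q0 != 0 else 0
    if all(qa >= 0 for qa in q_hat) or all(qa <= 0 for qa in q_hat):
        # q_hat or -q_hat lies in Lambda^+; the value is independent of q_0.
        return n_gv.get(tuple(abs(qa) for qa in q_hat), 0)
    return 0  # mixed signs: gamma lies outside Supp(Omega)

n_gv = {(1, 0): 3, (0, 2): -6}                   # hypothetical invariants
print(bps_index((5, 0, 0), chi=4, n_gv=n_gv))    # -4  (pure D(-1) charge)
print(bps_index((7, -1, 0), chi=4, n_gv=n_gv))   # 3   (D1-type, any q_0)
print(bps_index((0, 1, -2), chi=4, n_gv=n_gv))   # 0   (mixed signs)
```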
To the above data \((M,\mathfrak{F})\) and \((M,\Gamma,Z,\Omega)\) we can apply the construction of [10] and obtain a QK manifold \((\overline{N},g_{\overline{N}})\), which we call an instanton corrected q-map space. \((\overline{N},g_{\overline{N}})\) depends on a choice of projective special real (PSR) manifold \((\mathcal{H},g_{\mathcal{H}})\) (determining the first term in \(\mathfrak{F}\)), the choice of \(\chi\in\mathbb{Z}\) and \(n_{\hat{\gamma}}\in\mathbb{Z}\), and the choice of 1-loop parameter \(c_{\ell}\in\mathbb{R}\) (see Section 3.1). Our main results concern the isometries of a lift \((\widetilde{N},g_{\overline{N}})\to(\overline{N},g_{\overline{N}})\) on which we have no periodic directions (see Definition 2.14 for a more precise statement). In order to state the main results, we consider the following subgroups of the Heisenberg group \(\mathrm{Heis}_{2n+3}(\mathbb{R})\) (endowed with standard global coordinates \((\eta^{i},\widetilde{\eta}_{i},\kappa)\), \(i=0,\ldots,n\)):
\[\begin{split} H_{2n+2}&:=\{(\eta^{i},\widetilde{ \eta}_{i},\kappa)\in\mathrm{Heis}_{2n+3}(\mathbb{R})\mid\quad\eta^{0}=0\}\\ H_{2n+2,D}&:=\{(\eta^{i},\widetilde{\eta}_{i}, \kappa)\in H_{2n+2}\mid\quad\eta^{a}\in\mathbb{Z}\text{ for $a=1,...,n$}\}\,.\end{split} \tag{1.3}\]
The following theorem collects our main results:
**Theorem 4.7:** Consider an instanton corrected q-map space \((\widetilde{N},g_{\overline{N}})\) of dimension \(4n+4\) as defined in Section 3.1 (in particular \(\widetilde{N}\) here is the maximal domain of definition). Furthermore, let \(T,S\in\mathrm{SL}(2,\mathbb{Z})\) be as in (3.83), where \(\mathrm{SL}(2,\mathbb{Z})\) acts on the ambient manifold \(\overline{\mathcal{N}}_{\mathrm{IIB}}^{\mathrm{cl}}\supset\overline{\mathcal{N} }_{\mathrm{IIB}}\cong\overline{\mathcal{N}}_{\mathrm{IIA}}\supset\widetilde{N}\) as described in (3.18). Then:
* \((\widetilde{N},g_{\overline{N}})\) has a group acting by isometries of the form \[\langle T\rangle\ltimes(\mathbb{Z}^{n}\ltimes H_{2n+2,D})\] (1.4) where \(\langle T\rangle\cong\mathbb{Z}\) denotes the subgroup of \(\mathrm{SL}(2,\mathbb{Z})\) generated by \(T\).
* Assume that we take the one-loop parameter to be \(c_{\ell}=\frac{\chi}{192\pi}\). Then we can always find a non-empty open subset \(\widetilde{N}_{S}\subset\widetilde{N}\) where \((\widetilde{N}_{S},g_{\overline{N}})\) has a group acting by isometries of the form \[\langle S\rangle\ltimes(\mathbb{Z}^{n}\ltimes H_{2n+2,D}),\] (1.5) where \(\langle S\rangle\cong\mathbb{Z}/4\mathbb{Z}\) is the subgroup generated by \(S\). Furthermore, if \(\widetilde{N}_{\mathrm{SL}(2,\mathbb{Z})}\subset\widetilde{N}\) is an open subset, which is \(\mathrm{SL}(2,\mathbb{Z})\)-invariant under the S-duality action (3.18), then \(\mathrm{SL}(2,\mathbb{Z})\) acts by isometries on \((\widetilde{N}_{\mathrm{SL}(2,\mathbb{Z})},g_{\overline{N}})\). In particular, if \(\widetilde{N}\) is already invariant under \(\mathrm{SL}(2,\mathbb{Z})\) then (1.4) can be enhanced to \[\mathrm{SL}(2,\mathbb{Z})\ltimes(\mathbb{Z}^{n}\ltimes H_{2n+2,D})\,.\] (1.6)
* Finally, if \(n_{\hat{\gamma}}=0\) for all \(\hat{\gamma}\in\Lambda^{+}\), then in the previous statements we can replace \(\mathbb{Z}^{n}\) and \(H_{2n+2,D}\) by \(\mathbb{R}^{n}\) and \(H_{2n+2}\). If furthermore we take \(\chi=c_{\ell}=0\) and \(n_{\hat{\gamma}}=0\) for all \(\hat{\gamma}\in\Lambda^{+}\), then we return to the tree-level q-map space case, where there is a connected \(3n+6\) dimensional Lie group \(G\) acting by isometries on \((\widetilde{N},g_{\overline{N}})\), see [22, Theorem 3.17]. The group \(G\) in particular contains the S-duality action by \(\mathrm{SL}(2,\mathbb{R})\), an action by \(\mathbb{R}^{n}\ltimes H_{2n+2}\), and a dilation action by \(\mathbb{R}_{>0}\).
**Remark 1.1**.:
* Note that from Theorem 4.7 one finds that \(\langle T\rangle\ltimes(\mathbb{Z}^{n}\ltimes H_{2n+2,D})\) is a universal group of isometries, in the sense that it is always an isometry group for any instanton corrected q-map space (provided one takes the maximal domain of definition \(\widetilde{N}\) of the metric \(g_{\overline{N}}\)). On the other hand, even in the case of \(c_{\ell}=\frac{\chi}{192\pi}\), Theorem 4.7 does not guarantee in general that \(\mathrm{SL}(2,\mathbb{Z})\) is an isometry group for \((\widetilde{N},g_{\overline{N}})\), but rather one must first check that \(\widetilde{N}\) (or an open subset) carries an action by S-duality. In particular, Theorem 4.7 does not let us conclude that \(\mathrm{SL}(2,\mathbb{Z})\) is a universal group of isometries. This should be contrasted to the tree-level q-map space case, where \(\mathrm{SL}(2,\mathbb{R})\) is known to always act by isometries.
* We remark that the action of \(S\in\mathrm{SL}(2,\mathbb{Z})\) is perhaps the most interesting and non-trivial within \(\mathrm{SL}(2,\mathbb{Z})\), and corresponds to interchanging weak coupling and strong coupling in the type IIB string theory setting. On the other hand, the action by \(T\in\mathrm{SL}(2,\mathbb{Z})\) generates the discrete Heisenberg isometry that is missing from \(H_{2n+2,D}\).
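To make the last point concrete, here is a small numerical check, assuming the standard choice of generators \(T=\left(\begin{smallmatrix}1&1\\ 0&1\end{smallmatrix}\right)\) and \(S=\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\) (the normalization of (3.83) is an assumption here): \(S\) generates a cyclic group of order \(4\), matching \(\langle S\rangle\cong\mathbb{Z}/4\mathbb{Z}\) in Theorem 4.7.

```python
# A small consistency check, assuming the standard SL(2,Z) generators
# T = [[1, 1], [0, 1]] and S = [[0, -1], [1, 0]]; the normalization of
# (3.83) is an assumption here.
import numpy as np

S = np.array([[0, -1], [1, 0]])
T = np.array([[1, 1], [0, 1]])

assert np.array_equal(S @ S, -np.eye(2, dtype=int))                        # S^2 = -Id
assert np.array_equal(np.linalg.matrix_power(S, 4), np.eye(2, dtype=int))  # S^4 = Id
assert round(np.linalg.det(T @ S)) == 1                                    # stays in SL(2)
print("<S> is cyclic of order 4, consistent with Theorem 4.7")
```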
In Section 5 we give an explicit example where we can achieve full \(\mathrm{SL}(2,\mathbb{Z})\) acting by isometries. More precisely, we consider the case where \(\mathfrak{F}\) is given simply by
\[\mathfrak{F}=-\frac{1}{6}\frac{(Z^{1})^{3}}{Z^{0}}+\chi\frac{(Z^{0})^{2}\zeta( 3)}{2(2\pi\mathrm{i})^{3}},\quad\chi>0\,, \tag{1.7}\]
with variation of BPS structures having BPS indices of the form
\[\Omega(\gamma)=\begin{cases}\Omega(q_{0}\gamma^{0})=-\chi,\quad q_{0}\neq 0\\ \Omega(\gamma)=0\quad\text{else}.\end{cases} \tag{1.8}\]
From this data one obtains an 8-dimensional instanton corrected q-map space. It satisfies the following:
**Corollaries 5.3 and 5.5**: Let \(\widetilde{N}\) be defined by (5.25) and take \(c_{\ell}=\frac{\chi}{192\pi}\). Then the instanton corrected q-map metric \(g_{\overline{N}}\) associated to (1.7) and (1.8) is defined and positive definite on \(\widetilde{N}\). Furthermore, it carries an effective action by isometries by a group of the form \(\mathrm{SL}(2,\mathbb{Z})\ltimes(\mathbb{R}\ltimes H_{4})\).
**Theorem 5.7**: _Let \((\widetilde{N},g_{\overline{N}})\) be as in Corollary 5.3. Then:_
* _There is a free and properly discontinuous action by isometries of a discrete group of the form_ \(\mathrm{SL}(2,\mathbb{Z})^{\prime}\ltimes\Lambda\)_, where_ \(\Lambda\subset\mathbb{R}\ltimes H_{4}\) _is a lattice,_ \(\mathrm{SL}(2,\mathbb{Z})^{\prime}\subset\mathrm{SL}(2,\mathbb{Z})\) _is a finite index subgroup and the QK manifold_ \((\widetilde{N}/(\mathrm{SL}(2,\mathbb{Z})^{\prime}\ltimes\Lambda),g_{ \overline{N}})\) _has finite volume._
* _Furthermore, there is a submanifold with boundary_ \(\hat{N}\subset\widetilde{N}\) _where_ \(\mathrm{SL}(2,\mathbb{Z})^{\prime}\ltimes\Lambda\) _acts and the quotient_ \((\hat{N}/(\mathrm{SL}(2,\mathbb{Z})^{\prime}\ltimes\Lambda),g_{\overline{N}})\) _gives a complete QK manifold with boundary_2 _and of finite volume. The manifold with boundary_ \(\hat{N}\) _is of the form_ \(\hat{N}=\hat{N}^{\prime}\cup\partial\hat{N}\)_, where_ \(\hat{N}^{\prime}\) _is defined in the second point of Remark_ 5.6_._ Footnote 2: Recall that a Riemannian manifold with boundary is complete if it is complete as a metric space with the induced distance function.
We note that the example \((\widetilde{N},g_{\overline{N}})\) of Corollary 5.3 is incomplete (see Remark 5.6). We do not know if the metric and the \(\operatorname{SL}(2,\mathbb{Z})\)-action can be extended to a complete manifold (without boundary). At the end of Remark 5.8 we comment on some expectations for a related example associated to the resolved conifold.
Finally, we make a short comment related to the swampland program in physics. Among the geometric properties expected from the moduli space \(\mathcal{M}\) of a low energy effective theory consistent with quantum gravity are that \(\mathcal{M}\) should be non-compact, have finite volume, and be geodesically complete (see [13] and [14, Section 4.7]). In particular, applied to type IIA/B string theory, they imply that \(\mathcal{M}_{\mathrm{HM}}^{\mathrm{IIA/B}}(X)\) must be a non-compact complete QK manifold of finite volume (after including all quantum corrections). On the other hand, the example from Theorem 5.7 produces a non-compact QK manifold of finite volume, with "partial completeness" in the sense that it has a complete end, and a boundary where the metric is geodesically incomplete. It would be interesting to see if a suitable extension of the example of Theorem 5.7 would produce a QK manifold with the required geometric properties expected by the swampland conjectures.
### 1.2 Organization of topics
The topics are organized as follows:
* In Section 2 we review the construction of instanton corrected c-map metrics from [14], and also discuss their twistor description. In particular, the instanton corrected c-map spaces from [14] are in the image of the HK/QK correspondence, and we want to recall a description of the QK twistor space in terms of the HK data, as done in [14, Section 4.3].
* In Section 3 we start by specifying the class of instanton corrected q-map spaces within the class of instanton corrected c-map metrics. Following closely the work in the physics literature of [1, 2], we study when an instanton corrected q-map space carries an \(\operatorname{SL}(2,\mathbb{Z})\)-action by isometries, or at least an action by some of its subgroups, such as \(\langle S\rangle\subset\operatorname{SL}(2,\mathbb{Z})\).
* In Section 4 we study certain universal isometries of instanton corrected q-map spaces and how they are related to the \(\operatorname{SL}(2,\mathbb{Z})\) S-duality symmetry. We collect the main results from Section 3 and Section 4 in Theorem 4.7.
* In Section 5, we give an explicit example of an instanton corrected q-map space where the full S-duality action by isometries is realized. Furthermore, we show that it admits a quotient of finite volume by a discrete group of isometries.
* Finally, in Appendix A we collect some useful integral identities involving Bessel functions, and in Appendix B we include, for completeness, a rather long computation that is not strictly needed for the main points of the paper.
**Acknowledgements:** this work was supported by the Deutsche Forschungsgemeinschaft (German Research Foundation) under Germany's Excellence Strategy - EXC 2121 "Quantum Universe" - 390833306. As with our previous related joint works [14, 14], the idea for this work originated from discussions within our Swampland seminar, which is part of the Cluster of Excellence Quantum Universe. The authors would like to thank Murad Alim, Jörg Teschner and Timo Weigand for their contributions in the aforementioned discussions.
## 2 Instanton corrected QK metrics and their twistor description
The main aims of this section are the following:
* On one hand, we want to recall the main results of [14], concerning the construction of QK metrics associated to certain CASK manifolds with mutually local variation of BPS structures. In the setting of Calabi-Yau compactifications of string theory, these metrics are related to the type IIA hypermultiplet metric with mutually local D-instanton corrections, studied in the physics literature in [1].
* On the other hand, we want to recall certain general facts about the twistor space of QK metrics in the image of the HK/QK correspondence. This part will be mostly based on [12, Section 4]. In particular, the QK metrics from the previous point lie in the image of the HK/QK correspondence, and we want to write down an explicit expression for the holomorphic contact structure of its twistor space in terms of the HK data (see (2.34) and (2.35)). These formulas will be used throughout the rest of this work, and in particular in Section 3, where we study the isometries of instanton corrected q-map spaces.
* Finally, we write down certain "type IIA" Darboux coordinates for the holomorphic contact structure, see (2.53). These have been previously written down in the physics literature [1], under slightly different conventions. Using the explicit formula for the contact structure obtained in the previous point, we will give in Appendix B a direct proof of the fact that they are Darboux coordinates. This particular result is not needed for Section 3, where certain "type IIB" Darboux coordinates are found, but we include it for completeness.
### 2.1 QK metrics associated to CASK manifolds with mutually local variations of BPS structures
We briefly recall the main ingredients in the construction of [12].
**Definition 2.1**.: An integral conical affine special Kahler (CASK) manifold is a tuple \((M,g_{M},\omega_{M},\nabla,\xi,\Gamma)\) where:
* \((M,g_{M},\omega_{M},\nabla)\) is an affine special Kahler (ASK) manifold. Namely, \((M,g_{M},\omega_{M})\) is pseudo-Kahler, with the complex structure \(J\) determined by the metric \(g_{M}\) and the Kahler form \(\omega_{M}\) by \(g_{M}(J-,-)=\omega_{M}(-,-)\); \(\nabla\) is a torsion-free flat connection with \(\nabla\omega_{M}=0\); and if \(d_{\nabla}:\Omega^{k}(M,TM)\to\Omega^{k+1}(M,TM)\) denotes the extension of \(\nabla:\Omega^{0}(M,TM)\to\Omega^{1}(M,TM)\) to higher degree forms, then \(d_{\nabla}J=0\), where we think of \(J\) as an element of \(\Omega^{1}(M,TM)\).
* \(\Gamma\subset TM\) is a sub-bundle of \(\nabla\)-flat lattices with \(\Gamma\otimes_{\mathbb{Z}}\mathbb{R}=TM\). Around any \(p\in M\), we can find a local trivialization \((\widetilde{\gamma}_{i},\gamma^{i})\) of \(\Gamma\) by Darboux frames with respect to \(\omega_{M}\). We denote \(\langle-,-\rangle:=\omega_{M}(-,-)|_{\Gamma\times\Gamma}\), and our conventions are that \(\langle\widetilde{\gamma}_{i},\gamma^{j}\rangle=\delta_{i}^{j}\).
* \(\xi\) is a vector field on \(M\) such that \(\nabla\xi=D\xi=\mathrm{Id}_{TM}\), where \(D\) denotes the Levi-Civita connection of \(g_{M}\), and \(\mathrm{Id}_{TM}\) is the identity endomorphism of \(TM\). Furthermore, we assume that \(g_{M}\) is positive definite on \(\mathcal{D}:=\mathrm{span}\{\xi,J\xi\}\) and negative definite on \(\mathcal{D}^{\perp}\).
On the other hand, the data corresponding to the mutually local instanton corrections in the string theory setting was specified in terms of the notion of a mutually local variation of BPS structures (see for example [1] for the more general notion of a variation of BPS structures).
**Definition 2.2**.: A variation of mutually-local BPS structures over the complex manifold \(M^{\prime}\) is a tuple \((M^{\prime},\Gamma^{\prime},Z,\Omega)\) where
* \(\Gamma^{\prime}\to M^{\prime}\) is a local system of lattices with a skew-pairing \(\langle-,-\rangle:\Gamma^{\prime}\times\Gamma^{\prime}\to\mathbb{Z}\).
* \(Z\) is a holomorphic section of \((\Gamma^{\prime})^{*}\otimes\mathbb{C}\to M^{\prime}\), where \((\Gamma^{\prime})^{*}\) denotes the dual local system of \(\Gamma^{\prime}\). If \(\gamma\) is a local section of \(\Gamma^{\prime}\), then we denote by \(Z_{\gamma}:=Z(\gamma)\) the corresponding local holomorphic function on \(M^{\prime}\).
* \(\Omega:\Gamma^{\prime}-\{0\}\to\mathbb{Z}\) is a map of sets satisfying \(\Omega(\gamma)=\Omega(-\gamma)\) and the following properties
* Mutual-locality: if we define \(\mathrm{Supp}(\Omega):=\{\gamma\in\Gamma^{\prime}-\{0\}\ \mid\ \Omega(\gamma)\neq 0\}\), then \(\gamma_{1},\gamma_{2}\in\Gamma^{\prime}_{p}\cap\mathrm{Supp}(\Omega)\) implies that \(\langle\gamma_{1},\gamma_{2}\rangle=0\).
* Support property: for any compact set \(K\subset M^{\prime}\) and a choice of covariantly constant norm \(|\cdot|\) on \(\Gamma^{\prime}|_{K}\otimes_{\mathbb{Z}}\mathbb{R}\), there is a constant \(C>0\) such that for all \(\gamma\in\Gamma^{\prime}|_{K}\cap\mathrm{Supp}(\Omega)\) \[|Z_{\gamma}|>C|\gamma|\,.\] (2.1)
* Convergence property: for any \(R>0\), the series \[\sum_{\gamma\in\Gamma^{\prime}|_{p}}\Omega(\gamma)e^{-R|Z_{\gamma}|}\] (2.2) converges normally over compact subsets of \(M^{\prime}\).
* The numbers \(\Omega(\gamma)\), called BPS indices, are monodromy invariant. Namely if \(\gamma\) has monodromy \(\gamma\to A\gamma\) around a loop, then \(\Omega(\gamma)=\Omega(A\gamma)\).
Given an integral CASK manifold \((M,g_{M},\omega_{M},\nabla,\xi,\Gamma)\), we will only consider mutually local variations of BPS structures \((M^{\prime},\Gamma^{\prime},Z,\Omega)\) where \((M,\Gamma)=(M^{\prime},\Gamma^{\prime})\), \(\langle-,-\rangle=\omega_{M}(-,-)|_{\Gamma\times\Gamma}\), and where \(Z\) is the canonical central charge associated to the integral CASK manifold [14, Proposition 2.15]. The latter is determined as follows: if \(\xi^{1,0}=\frac{1}{2}(\xi-{\rm i}J\xi)\), then
\[Z:=2\omega_{M}(\xi^{1,0},-)|_{\Gamma}\,. \tag{2.3}\]
In particular, given a local Darboux frame \((\widetilde{\gamma}_{i},\gamma^{i})\) of \(\Gamma\), the locally defined functions \(\{Z_{\widetilde{\gamma}_{i}}\}_{i=0}^{n}\) and \(\{Z_{\gamma^{i}}\}_{i=0}^{n}\) give conjugate systems of holomorphic special coordinates for the CASK geometry, where \(n+1=\dim_{\mathbb{C}}(M)\).
#### 2.1.1 Associated instanton corrected HK manifold
To the data \((M,g_{M},\omega_{M},\nabla,\xi,\Gamma)\) and \((M,\Gamma,Z,\Omega)\) one can associate an "instanton-corrected" hyperkahler (HK) geometry [14, Section 3]. This HK manifold can be thought of as a deformation of the canonical HK manifold associated to \((M,g_{M},\omega_{M},\nabla,\xi,\Gamma)\) via the rigid c-map (also known as the associated "semi-flat" HK manifold) [13, 12, 14]. In the physics literature, see [11], such instanton corrected HK metrics were studied in the context of \(S^{1}\)-compactifications of 4d \(\mathcal{N}=2\) SUSY gauge theories. There the description of the HK geometry is in terms of its associated twistor space, which is in turn described in terms of a twistor family of holomorphic Darboux coordinates satisfying "TBA"-like integral equations.
In order to describe the instanton corrected HK manifold, we first let \(N:=T^{*}M/\Gamma^{*}\). This can be canonically identified with
\[N\cong\{\zeta:\Gamma\to\mathbb{R}/\mathbb{Z}\mid\zeta_{\gamma+\gamma^{\prime} }=\zeta_{\gamma}+\zeta_{\gamma^{\prime}}\}\,. \tag{2.4}\]
In particular, slightly abusing notation and denoting by \(\zeta\) the evaluation map on \(N\), and given a local Darboux frame \((\widetilde{\gamma}_{i},\gamma^{i})\) of \(\Gamma\), we obtain local coordinates on \(N\) by \((Z_{\gamma^{i}},\zeta_{\widetilde{\gamma}_{i}},\zeta_{\gamma^{i}})\) (or \((Z_{\widetilde{\gamma}_{i}},\zeta_{\widetilde{\gamma}_{i}},\zeta_{\gamma^{i}})\)). Note that \(Z_{\gamma^{i}}\) and \(Z_{\widetilde{\gamma}_{i}}\) are (pull-backs of local) holomorphic functions on the base manifold \(M\) while \(\zeta_{\widetilde{\gamma}_{i}}\) and \(\zeta_{\gamma^{i}}\) are "fiber coordinates" taking values in the circle.
In the following, we will also denote by \(\langle-,-\rangle\) the pairing on \(\Gamma^{*}\) induced by the isomorphism \(\gamma\mapsto\langle\gamma,-\rangle\). With this definition, the dual of a Darboux basis of \(\Gamma\) is a Darboux basis of \(\Gamma^{*}\). We will also denote by \(\langle-,-\rangle\) the \(\mathbb{C}\)-linear extension of the pairing to \(\Gamma^{*}\otimes\mathbb{C}\).
Finally, if \(K_{i}:\mathbb{R}_{>0}\to\mathbb{R}\) denotes the \(i\)-th modified Bessel function of the second kind, and \(\gamma\) is a local section of \(\Gamma\) with \(\gamma\in\text{Supp}(\Omega)\), we define the following local function (resp. 1-form) on \(N\):
\[V_{\gamma}^{\text{inst}}:=\frac{1}{2\pi}\sum_{n>0}e^{2\pi{\rm i}n\zeta_{\gamma}}K_{0}(2\pi n|Z_{\gamma}|),\quad A_{\gamma}^{\text{inst}}:=-\frac{1}{4\pi}\sum_{n>0}e^{2\pi{\rm i}n\zeta_{\gamma}}|Z_{\gamma}|K_{1}(2\pi n|Z_{\gamma}|)\Big{(}\frac{{\rm d}Z_{\gamma}}{Z_{\gamma}}-\frac{{\rm d}\overline{Z}_{\gamma}}{\overline{Z}_{\gamma}}\Big{)}\,. \tag{2.5}\]
Due to the convergence property and support property of variations of BPS structures, these expressions are well-defined local smooth functions (resp. 1-forms) on \(N\) (see [14, Lemma 3.9]).
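To get a feeling for the size of these corrections, the following Python sketch evaluates the sum defining \(V_{\gamma}^{\mathrm{inst}}\) in (2.5) for a single charge with \(\Omega(\gamma)=1\); the value \(\zeta_{\gamma}=0.3\) and the truncation at \(n_{\max}=200\) are arbitrary choices made for the illustration. The output exhibits the expected exponential suppression as \(|Z_{\gamma}|\) grows.

```python
# A numerical sketch of V_gamma^inst from (2.5) for a single charge with
# Omega(gamma) = 1; zeta_gamma = 0.3 and n_max = 200 are arbitrary choices.
import numpy as np
from scipy.special import kv   # kv(0, x) = K_0(x)

def V_inst(abs_Z, zeta, n_max=200):
    n = np.arange(1, n_max + 1)
    return np.sum(np.exp(2j * np.pi * n * zeta) * kv(0, 2 * np.pi * n * abs_Z)) / (2 * np.pi)

for abs_Z in (0.05, 0.5, 5.0):
    # decays roughly like e^{-2 pi |Z_gamma|}, cf. the convergence property
    print(abs_Z, abs(V_inst(abs_Z, zeta=0.3)))
```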
Finally, we will need the following compatibility condition between the data \((M,g_{M},\omega_{M},\nabla,\xi,\Gamma)\) and \((M,\Gamma,Z,\Omega)\):
**Definition 2.3**.: Let \(\pi:N\to M\) be the canonical projection. We will say that \((M,g_{M},\omega_{M},\nabla,\xi,\Gamma)\) and \((M,\Gamma,Z,\Omega)\) are compatible if the tensor
\[T:=\pi^{*}g_{M}-\sum_{\gamma}\Omega(\gamma)V_{\gamma}^{\text{inst}}\pi^{*}|{ \rm d}Z_{\gamma}|^{2} \tag{2.6}\]
on \(N\) is horizontally non-degenerate.
We then have the following:
**Theorem 2.4**.: [14, Theorem 3.13] Let \((M,g_{M},\omega_{M},\nabla,\xi,\Gamma)\) and \((M,\Gamma,Z,\Omega)\) be as before. Furthermore, let \(\omega_{i}\in\Omega^{2}(N)\) for \(i=1,2,3\) be defined by
\[\omega_{1}+\mathrm{i}\omega_{2}:=-2\pi\left(\langle\mathrm{d}Z\wedge\mathrm{d} \zeta\rangle+\sum_{\gamma}\Omega(\gamma)\left(\mathrm{d}Z_{\gamma}\wedge A^{ \mathrm{inst}}_{\gamma}+\mathrm{i}V^{\mathrm{inst}}_{\gamma}\mathrm{d}\zeta_{ \gamma}\wedge\mathrm{d}Z_{\gamma}\right)\right) \tag{2.7}\]
\[\omega_{3}:=2\pi\left(\frac{1}{4}\langle\mathrm{d}Z\wedge\mathrm{d}\overline{Z}\rangle-\frac{1}{2}\langle\mathrm{d}\zeta\wedge\mathrm{d}\zeta\rangle-\sum_{\gamma}\Omega(\gamma)\left(\frac{\mathrm{i}}{2}V^{\mathrm{inst}}_{\gamma}\mathrm{d}Z_{\gamma}\wedge\mathrm{d}\overline{Z}_{\gamma}+\mathrm{d}\zeta_{\gamma}\wedge A^{\mathrm{inst}}_{\gamma}\right)\right)\,. \tag{2.8}\]
Then the triple of real \(2\)-forms \((\omega_{1},\omega_{2},\omega_{3})\) corresponds to the Kahler forms of a pseudo-HK structure3 on \(N\) if and only if \((M,g_{M},\omega_{M},\nabla,\xi,\Gamma)\) and \((M,\Gamma,Z,\Omega)\) are compatible.
Footnote 3: Our terminology is such that the signature of the metric is not assumed to be constant, in case \(N\) has several components.
**Definition 2.5**.: We denote the resulting instanton corrected HK manifold from the previous theorem by \((N,g_{N},\omega_{1},\omega_{2},\omega_{3})\).
**Remark 2.6**.:
* Compared to [14, Section 3], we have rescaled the above \(2\)-forms \(\omega_{i}\) by a factor of \(2\pi\), and rescaled \(\zeta\) by \(2\pi\) (i.e. \(\zeta:\Gamma\to\mathbb{R}/\mathbb{Z}\) instead of \(\zeta:\Gamma\to\mathbb{R}/2\pi\mathbb{Z}\)). Furthermore, we have changed by a sign the convention of how the BPS indices \(\Omega\) enter into the formulas (2.7), (2.8) (i.e. the above formulas would correspond in [14] to the HK metric associated to the mutually local variation of BPS structures \((M,\Gamma,Z,-\Omega)\)). We do this change of conventions in order to simplify the formulas taken from [14, Section 4] below and also to be able to compare more easily with the physics literature in Section 3 below.
* In the expressions (2.7) and (2.8) we are combining the wedge \(\wedge\) with the pairing \(\langle-,-\rangle\) on \(\Gamma^{*}\otimes\mathbb{C}\). For example, with respect to a Darboux frame \((\widetilde{\gamma}_{i},\gamma^{i})\) of \(\Gamma\), we have \(\langle\mathrm{d}Z\wedge\mathrm{d}\overline{Z}\rangle=\mathrm{d}Z_{\widetilde{\gamma}_{i}}\wedge\mathrm{d}\overline{Z}_{\gamma^{i}}-\mathrm{d}Z_{\gamma^{i}}\wedge\mathrm{d}\overline{Z}_{\widetilde{\gamma}_{i}}\) and \(\langle\mathrm{d}\zeta\wedge\mathrm{d}\zeta\rangle=\mathrm{d}\zeta_{\widetilde{\gamma}_{i}}\wedge\mathrm{d}\zeta_{\gamma^{i}}-\mathrm{d}\zeta_{\gamma^{i}}\wedge\mathrm{d}\zeta_{\widetilde{\gamma}_{i}}=2\mathrm{d}\zeta_{\widetilde{\gamma}_{i}}\wedge\mathrm{d}\zeta_{\gamma^{i}}\). Furthermore, the expressions in (2.7) and (2.8) are actually global and well-defined due to the monodromy invariance of \(\Omega(\gamma)\), and the support and convergence property of the variations of BPS structures.
* \((N,g_{N},\omega_{1},\omega_{2},\omega_{3})\) carries an infinitesimal rotating circle action [14, Proposition 3.20]. Namely, there is a vector field \(V\) on \(N\) such that \[\mathcal{L}_{V}(\omega_{1}+\mathrm{i}\omega_{2})=2\mathrm{i}(\omega_{1}+ \mathrm{i}\omega_{2}),\quad\mathcal{L}_{V}\omega_{3}=0\,.\] (2.9) Note that due to the factor \(2\) in (2.9) the vector field \(V\) is twice the vector field denoted \(V\) in [14].
* Under the mild assumption on the flow of \(\xi\) that it generates a free-action on \(M\) of the (multiplicative) monoid \(\mathbb{R}_{\geq 1}\), we can guarantee that \(g_{N}\) has signature \((4,4n)\) where \(n+1=\dim_{\mathbb{C}}(M)\)[14, Proposition 3.21].
* If one sets \(\Omega(\gamma)=0\) for all \(\gamma\in\Gamma\), then \((N,g_{N},\omega_{1},\omega_{2},\omega_{3})\) reduces to the semi-flat HK manifold obtained via the rigid c-map.
#### 2.1.2 Associated instanton corrected QK manifold via HK/QK correspondence
The instanton corrected QK manifold \((\overline{N},g_{\overline{N}})\) associated to the data of \((M,g_{M},\omega_{M},\nabla,\xi,\Gamma)\) and \((M,\Gamma,Z,\Omega)\) was constructed in [14] by applying the HK/QK correspondence to \((N,g_{N},\omega_{1},\omega_{2},\omega_{3})\). In order to do this, one needs the following additional data:
* A hyperholomorphic principal \(S^{1}\)-bundle \((\pi_{N}:P\to N,\eta)\), where \(\pi_{N}:P\to N\) is constructed via [14, Proposition 4.2], and \(\eta\) is a connection on \(P\) having curvature \[\mathrm{d}\eta=\pi_{N}^{*}(\omega_{3}-\frac{1}{2}\mathrm{d}\iota_{V}g_{N})\,.\] (2.10) The connection \(\eta\) is given by [14, Corollary 4.5] \[\eta:=\Theta+\pi_{N}^{*}\Big{(}\frac{\pi\mathrm{i}}{2}\pi_{M}^{*}(\overline{ \partial}r^{2}-\partial r^{2})-\sum_{\gamma}2\pi\Omega(\gamma)\eta_{\gamma}^{ \mathrm{inst}}-\frac{1}{2}\iota_{V}g_{N}\Big{)}\] (2.11)
where \(r^{2}:=g_{M}(\xi,\xi)\); \(V\) is the rotating vector field satisfying (2.9); \(\Theta\) is another connection on \(P\) having curvature \(-\pi\langle\mathrm{d}\zeta\wedge\mathrm{d}\zeta\rangle=2\pi\mathrm{d}\zeta_{\gamma^{i}}\wedge\mathrm{d}\zeta_{\widetilde{\gamma}_{i}}\), and \[\eta^{\mathrm{inst}}_{\gamma}:=\frac{\mathrm{i}}{8\pi^{2}}\sum_{n>0}\frac{e^{2\pi\mathrm{i}n\zeta_{\gamma}}}{n}|Z_{\gamma}|K_{1}(2\pi n|Z_{\gamma}|)\Big{(}\frac{\mathrm{d}Z_{\gamma}}{Z_{\gamma}}-\frac{\mathrm{d}\overline{Z}_{\gamma}}{\overline{Z}_{\gamma}}\Big{)}\,.\] (2.12) If \(\sigma\) denotes a local coordinate for the \(S^{1}\)-fiber, then one can write \[\Theta=\pi\left(\mathrm{d}\sigma-\pi_{N}^{*}\langle\zeta,\mathrm{d}\zeta\rangle\right)\,.\] (2.13)
* We need to furthermore specify a Hamiltonian for \(\omega_{3}\) with respect to the rotating vector field \(V\) satisfying (2.9), which in this case is given by [14, Lemma 4.7] \[f=2\pi(r^{2}-8c_{\ell}-\sum_{\gamma}\Omega(\gamma)\iota_{V}\eta^{\mathrm{ inst}}_{\gamma}),\quad c_{\ell}\in\mathbb{R},\] (2.14) together with the lift \(V^{P}\) of \(V\) to \(P\) given by \[V^{P}:=\widetilde{V}+f_{3}\partial_{\sigma},\quad f_{3}:=f-\frac{1}{2}g_{N}(V,V)\] (2.15) where \(\widetilde{V}\) denotes the horizontal lift with respect to \(\eta\) and \(\partial_{\sigma}\) is the vertical vector field of \(P\) generating the \(S^{1}\)-action.
* Finally, we consider the open subset \(N^{\prime}\subset N\) given by \[N^{\prime}=\{p\in N\quad|\quad f(p)\neq 0,\quad f_{3}(p)\neq 0,\quad g_{N}(V_{p},V_{p})\neq 0\},\] (2.16) and the \(1\)-forms on \(P\) given by \[\theta^{P}_{0}=-\frac{1}{2}\pi_{N}^{*}\mathrm{d}f,\quad\theta^{P}_{3}:=\eta+ \frac{1}{2}\pi_{N}^{*}\iota_{V}g_{N},\quad\theta^{P}_{1}:=\frac{1}{2}\pi_{N}^{ *}\iota_{V}\omega_{2},\quad\theta^{P}_{2}:=-\frac{1}{2}\pi_{N}^{*}\iota_{V} \omega_{1}\,,\] (2.17)
We then have
**Theorem 2.7**.: [14, Theorem 4.10] Let \((M,g_{M},\omega_{M},\nabla,\xi,\Gamma)\) be an integral CASK manifold and \((M,\Gamma,Z,\Omega)\) a compatible mutually local variation of BPS structures. Furthermore, let \((N,g_{N},\omega_{1},\omega_{2},\omega_{3})\), \((\pi_{N}:P\to N,\eta)\), \(f\), \(f_{3}\), \(\theta^{P}_{i}\), \(V^{P}\) and \(N^{\prime}\) be the associated data defined in the previous points. Given any submanifold \(\overline{N}\subset P|_{N^{\prime}}\) transverse to \(V^{P}\), the symmetric \(2\)-tensor
\[g_{\overline{N}}:=-\frac{1}{f}\left(\frac{2}{f_{3}}\eta^{2}+\pi_{N}^{*}g_{N}- \frac{2}{f}\sum_{i=0}^{3}(\theta^{P}_{i})^{2}\right)\Bigg{|}_{\overline{N}} \tag{2.18}\]
defines a pseudo-QK metric on \(\overline{N}\). Furthermore, if \((N,g_{N},\omega_{1},\omega_{2},\omega_{3})\) has signature \((4,4n)\), then \(g_{\overline{N}}\) is positive definite on \(\overline{N}_{+}=\overline{N}\cap\{f>0,f_{3}<0\}\).
**Remark 2.8**.: Recall that we can guarantee that \((N,g_{N},\omega_{1},\omega_{2},\omega_{3})\) has signature \((4,4n)\) if the flow of \(\xi\) generates a free-action on \(M\) of the monoid \(\mathbb{R}_{\geq 1}\) (see Section 2.1.1).
#### 2.1.3 The case of a CASK domain
We now specialize the previous construction to the case of a CASK domain. This will be the case of interest in the following Section 3.
**Definition 2.9**.: A CASK domain is a tuple \((M,\mathfrak{F})\) where
* \(M\subset\mathbb{C}^{n+1}-\{0\}\) is a \(\mathbb{C}^{\times}\)-invariant domain. We denote the canonical holomorphic coordinates by \(Z^{i}\), \(i=0,1,...,n\). To avoid inessential coordinate changes later on, we will assume for simplicity that \(Z^{0}\) does not vanish on \(M\).
* \(\mathfrak{F}:M\to\mathbb{C}\) is a holomorphic function, homogeneous of degree \(2\) with respect to the natural \(\mathbb{C}^{\times}\)-action on \(M\).
* The matrix \[\mathrm{Im}\left(\tau_{ij}\right),\quad\tau_{ij}:=\frac{\partial^{2}\mathfrak{F}}{ \partial Z^{i}\partial Z^{j}}\] (2.19) has signature \((n,1)\), and \(\mathrm{Im}(\tau_{ij})Z^{i}\overline{Z}^{j}<0\).
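The open conditions in the last point are straightforward to test in examples. The following sympy sketch does so for a one-modulus prepotential of the shape (1.7) (with \(\chi=2\) chosen purely for illustration) at the sample point \((Z^{0},Z^{1})=(1,\mathrm{i})\): it confirms that \(\mathrm{Im}(\tau_{ij})\) has signature \((1,1)\) there and that \(\mathrm{Im}(\tau_{ij})Z^{i}\overline{Z}^{j}<0\).

```python
# A sketch checking the conditions of Definition 2.9 for a prepotential of
# the shape (1.7), with chi = 2 and the sample point (Z^0, Z^1) = (1, i)
# chosen purely for illustration.
import sympy as sp

Z0, Z1 = sp.symbols('Z0 Z1')
chi = 2
F = (-sp.Rational(1, 6) * Z1**3 / Z0
     + chi * Z0**2 * sp.zeta(3) / (2 * (2 * sp.pi * sp.I)**3))

V = (Z0, Z1)
tau = sp.Matrix(2, 2, lambda i, j: sp.diff(F, V[i], V[j]))  # tau_ij = d^2F/dZ^i dZ^j

im_tau = tau.subs({Z0: 1, Z1: sp.I}).evalf().applyfunc(sp.im)
print(im_tau)                 # approx diag(0.343, -1): signature (1, 1)

Z = sp.Matrix([1, sp.I])
print((Z.H * im_tau * Z)[0])  # approx -0.657 < 0, as required
```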
A CASK domain \((M,\mathfrak{F})\) induces in the usual way a CASK manifold \((M,g_{M},\omega_{M},\nabla,\xi)\)[1]. With our conventions on the signature of the CASK manifold, if \(Z_{i}:=\frac{\partial\mathfrak{F}}{\partial Z^{i}}\), then \(\{Z^{i}\}\) and \(\{-Z_{i}\}\) are a global system of conjugate conical special holomorphic coordinates. If \(x^{i}=\mathrm{Re}(Z^{i})\) and \(y_{i}:=\mathrm{Re}(Z_{i})\), then \(\nabla\) is defined such that \(\mathrm{d}x^{i}\) and \(\mathrm{d}y_{i}\) are flat. Furthermore
\[g_{M}=-\mathrm{Im}(\tau_{ij})\mathrm{d}Z^{i}\mathrm{d}\overline{Z}^{j},\quad \omega_{M}=-\frac{\mathrm{i}}{2}\mathrm{Im}(\tau_{ij})\mathrm{d}Z^{i}\wedge \mathrm{d}\overline{Z}^{j}=\mathrm{d}x^{i}\wedge\mathrm{d}y_{i},\quad\xi=Z^{ i}\partial_{Z^{i}}+\overline{Z}^{i}\partial_{\overline{Z}^{i}}\,. \tag{2.20}\]
Given a CASK domain \((M,\mathfrak{F})\) we can induce a canonical integral structure on the CASK manifold by defining \(\Gamma\to M\) to be \(\Gamma=\mathrm{span}_{\mathbb{Z}}\{\partial_{x^{i}},\partial_{y_{i}}\}\). In the following, we will assume that:
* \((M,g_{M},\omega_{M},\nabla,\xi,\Gamma)\) is an integral CASK manifold induced by a CASK domain \((M,\mathfrak{F})\) with the canonical integral structure. In this case, we will sometimes use the notation \((\partial_{x^{i}},\partial_{y_{i}})=(\widetilde{\gamma}_{i},\gamma^{i})\).
* Given a mutually local variation of BPS structures \((M,\Gamma,Z,\Omega)\) with \((M,\Gamma)\) as in the previous point, we assume that \(Z\) is the canonical central charge, and that \(\mathrm{Supp}(\Omega)\subset\mathrm{span}_{\mathbb{Z}}\{\partial_{y_{i}}\}\). In particular, the canonical central charge satisfies in this case \[Z=Z_{\widetilde{\gamma}_{i}}\mathrm{d}x^{i}+Z_{\gamma^{i}}\mathrm{d}y_{i}=-Z_ {i}\mathrm{d}x^{i}+Z^{i}\mathrm{d}y_{i}\,.\] (2.21)
In order to construct the associated QK metric, we need to choose \(\overline{N}\subset P|_{N^{\prime}}\) transverse to \(V^{P}\). Denoting by \(\pi_{M}:=\pi\circ\pi_{N}\) the composition of the projections \(\pi_{N}:P\to N\), and \(\pi:N\to M\), we have that \(\pi_{M}(p)=(Z^{0},...,Z^{n})\). We define \(\overline{N}\) by
\[\overline{N}:=\{p\in P|_{N^{\prime}}\quad|\quad\mathrm{Arg}(Z^{0})=0,\quad \text{ where }\pi_{M}(p)=(Z^{0},...,Z^{n})\}\,. \tag{2.22}\]
**Definition 2.10**.: Throughout the paper, we will use coordinates \((\rho,z^{a},\zeta^{i},\widetilde{\zeta}_{i},\sigma)\) on (2.22) defined as follows:
* \(16\pi\rho:=2\pi r^{2}-16\pi c_{\ell}\) where \(r^{2}=g_{M}(\xi,\xi)=-\mathrm{Im}(\tau_{ij})Z^{i}\overline{Z}^{j}\) and \(c_{\ell}\in\mathbb{R}\). Note that \(r^{2}\) is a Kahler potential for the affine special Kahler metric \(g_{M}\).
* \(z^{a}:=Z^{a}/Z^{0}\) for \(a=1,...,n\). These are global holomorphic coordinates on the projective special Kahler (PSK) manifold \((\overline{M},g_{\overline{M}})\) induced by the CASK domain. In particular, we have a projection \(\overline{\pi_{M}}:M\to\overline{M}\).
* \((\zeta^{i},\widetilde{\zeta}_{i})\) are given by \(\zeta^{i}:=-\zeta_{\partial_{y_{i}}}=-\zeta_{\gamma^{i}}\) and \(\widetilde{\zeta}_{i}:=\zeta_{\partial_{x^{i}}}=\zeta_{\widetilde{\gamma}_{i}}\), where the latter are the evaluation map on \(N\) contracted with \(\partial_{x^{i}}\) and \(\partial_{y_{i}}\).
* \(\sigma\) is a local coordinate for the \(S^{1}\)-fiber of \(\pi_{N}:P\to N\) satisfying (2.13).
**Remark 2.11**.: In the string theory setting, the coordinates \((\rho,z^{a},\zeta^{i},\widetilde{\zeta}_{i},\sigma)\) can be identified with certain fields from the type IIA hypermultiplet. Namely, \(\rho\) is the 4d-dilaton, \(z^{a}\) are coordinates for the complex moduli of the Calabi-Yau, \((\zeta^{i},\widetilde{\zeta}_{i})\) are the RR-axions, and \(\sigma\) is the NS-axion. Furthermore, the constant \(c_{\ell}\) appearing in the definition of \(\rho\) is identified with the 1-loop correction to the tree-level Ferrara-Sabharwal metric.
In [14, Theorem 5.4] an explicit expression of the resulting QK metric \((\overline{N},g_{\overline{N}})\) in these coordinates is given (with slightly different conventions). In order to write down the formula we introduce the following notation:
* We denote by \(\widetilde{Z}_{\gamma}:=Z_{\gamma}/Z^{0}\) the normalized central charge. In particular, we have \(\widetilde{Z}_{\gamma^{i}}=z^{i}\) with \(z^{0}=1\). If furthermore we let \(\mathcal{K}=-\log(-2\mathrm{Im}(\tau_{ij})z^{i}\overline{z}^{j})\), then \(\mathcal{K}\) is a global Kahler potential for the projective special Kahler manifold \((\overline{M},g_{\overline{M}})\) induced by the CASK domain. Note that \[r^{2}=|Z^{0}|^{2}\frac{e^{-\mathcal{K}}}{2}.\] (2.23)
* We denote \(N_{ij}=-2{\rm Im}(\tau_{ij})\). If \(\gamma\in{\rm Supp}(\Omega)\) we write \(\gamma=q_{i}(\gamma)\gamma^{i}\) (recall that we assume that \({\rm Supp}(\Omega)\subset{\rm span}_{\mathbb{Z}}\{\gamma^{i}\}\)), and define \[N_{ij}^{\rm inst}:=-2\sum_{\gamma}\Omega(\gamma)V_{\gamma}^{\rm inst}q_{i}(\gamma)q_{j}(\gamma)\,.\] (2.24)
* We let \[W_{i}:={\rm d}\zeta_{\widetilde{\gamma}_{i}}+\tau_{ij}{\rm d}\zeta_{\gamma^{j }},\ \ \ \ \ W_{i}^{\rm inst}:=-\sum_{\gamma}\Omega(\gamma)q_{i}(\gamma)(A_{ \gamma}^{\rm inst}-{\rm i}V_{\gamma}^{\rm inst}{\rm d}\zeta_{\gamma}),\] (2.25) and \[\eta^{\rm inst}:=-\sum_{\gamma}\Omega(\gamma)\eta_{\gamma}^{\rm inst}-\frac{ 1}{2}\iota_{V}\left(\frac{g_{N}}{2\pi}-\pi_{M}^{*}g_{M}\right)\,.\] (2.26)
* We split \(f=16\pi\rho+f^{\rm inst}=16\pi(\rho+\rho^{\rm inst})\) in (2.14) using that \(16\pi\rho=2\pi r^{2}-16\pi c_{\ell}\), and where \(f^{\rm inst}=16\pi\rho^{\rm inst}\) contains the terms with the BPS indices \(\Omega(\gamma)\), namely \[f^{\rm inst}=-2\pi\sum_{\gamma}\Omega(\gamma)\iota_{V}\eta_{\gamma}^{\rm inst }\,.\] (2.27) Finally, we denote \(f_{3}^{\rm inst}=16\pi\rho_{3}^{\rm inst}=2\pi\iota_{V}\eta^{\rm inst}\). We remark that in the case where \(\Omega(\gamma)=0\) for all \(\gamma\) we have \(f^{\rm inst}=f_{3}^{\rm inst}=0\), and similarly for all the other quantities with an \({}^{\rm inst}\) superscript.
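For later use we also record an elementary consequence of (2.23) and the definition \(16\pi\rho=2\pi r^{2}-16\pi c_{\ell}\) of \(\rho\): one has \(r^{2}=8(\rho+c_{\ell})\), and hence

\[|Z^{0}|=\sqrt{2r^{2}e^{\mathcal{K}}}=4\sqrt{\rho+c_{\ell}}\,e^{\mathcal{K}/2},\qquad|Z_{\gamma}|=|Z^{0}||\widetilde{Z}_{\gamma}|=4\sqrt{\rho+c_{\ell}}\,e^{\mathcal{K}/2}|\widetilde{Z}_{\gamma}|\,.\]

This is the relation behind the exponential suppression of the instanton terms as \(\rho\to\infty\) noted in Remark 2.13 below.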
We then have
**Theorem 2.12**.: [CT22a, Theorem 5.4, Proposition 5.6] Let \((M,\mathfrak{F})\) and \((M,\Gamma,Z,\Omega)\) be as before. By possibly restricting \(M\), we assume that \(M\) is the maximal open subset where \((M,g_{M},\omega_{M},\nabla,\xi,\Gamma)\) and \((M,\Gamma,Z,\Omega)\) are compatible. Furthermore, let \(\overline{N}\) be as (2.22). Then in the coordinates \((\rho,z^{a},\zeta^{i},\widetilde{\zeta}_{i},\sigma)\) from Definition 2.10 the instanton corrected QK metric \((\overline{N},g_{\overline{N}})\) associated to \((M,g_{M},\omega_{M},\nabla,\xi,\Gamma)\) and \((M,\Gamma,Z,\Omega)\) has the form:
\[g_{\overline{N}}= \frac{\rho+c_{\ell}}{\rho+\rho^{\rm inst}}\Big{(}g_{\overline{M} }+2e^{\mathcal{K}}\sum_{\gamma}\Omega(\gamma)V_{\gamma}^{\rm inst}\Big{|}{ \rm d}\widetilde{Z}_{\gamma}+\widetilde{Z}_{\gamma}\Big{(}\frac{{\rm d}\rho}{ 2(\rho+c_{\ell})}+\frac{{\rm d}\mathcal{K}}{2}\Big{)}\Big{|}^{2}\Big{)}\] \[+\frac{1}{2(\rho+\rho^{\rm inst})^{2}}\Big{(}\frac{\rho+2c_{\ell} -\rho^{\rm inst}}{2(\rho+c_{\ell})}{\rm d}\rho^{2}+2{\rm d}\rho{\rm d}\rho^{ \rm inst}|_{\overline{N}}+({\rm d}\rho^{\rm inst})^{2}|_{\overline{N}}\Big{)}\] \[+\frac{\rho+c_{\ell}+\rho_{-}^{\rm inst}}{64(\rho+\rho^{\rm inst} )^{2}(\rho+2c_{\ell}-\rho_{3}^{\rm inst})}\Big{(}{\rm d}\sigma-\langle\zeta,{ \rm d}\zeta\rangle-4c_{\ell}{\rm d}^{\rm c}\mathcal{K}+\eta_{+}^{\rm inst}|_{ \overline{N}}+\frac{\rho_{+}^{\rm inst}-c_{\ell}}{\rho+c_{\ell}+\rho_{-}^{\rm inst }}\eta_{-}^{\rm inst}|_{\overline{N}}\Big{)}^{2}\] \[-\frac{1}{4(\rho+\rho^{\rm inst})}(W_{i}+W_{i}^{\rm inst}|_{ \overline{N}})(N+N^{\rm inst})^{ij}(\overline{W}_{j}+\overline{W}_{j}^{\rm inst }|_{\overline{N}})\] \[+\frac{(\rho+c_{\ell})e^{\mathcal{K}}}{2(\rho+\rho^{\rm inst})^{ 2}}\Big{|}z^{i}(W_{i}+W_{i}^{\rm inst}|_{\overline{N}})-\frac{{\rm i}}{2}\sum_ {\gamma}\Omega(\gamma)A_{\gamma}^{\rm inst}(V)\Big{(}{\rm d}\widetilde{Z}_{ \gamma}+\widetilde{Z}_{\gamma}\Big{(}\frac{{\rm d}\rho}{2(\rho+c_{\ell})}+ \frac{{\rm d}\mathcal{K}}{2}\Big{)}\Big{)}\Big{|}^{2}\] \[+\frac{\rho+c_{\ell}+\rho_{-}^{\rm inst}}{\rho+\rho^{\rm inst} }\Big{(}\frac{{\rm d}^{\rm c}\mathcal{K}}{2}+\frac{1}{8(\rho+c_{\ell}+\rho_{-}^ {\rm inst})}\eta_{-}^{\rm inst}|_{\overline{N}}\Big{)}^{2}-\frac{\rho+c_{\ell}} {\rho+\rho^{\rm inst}}\Big{(}\frac{{\rm d}^{\rm c}\mathcal{K}}{2}\Big{)}^{2} \tag{2.28}\]
where \({\rm d}^{\rm c}:={\rm i}(\overline{\partial}-\partial)\), \(\eta_{\pm}^{\rm inst}\) are given by
\[\eta_{\pm}^{\rm inst}:=\Big{(}\eta^{\rm inst}-4\rho_{3}^{\rm inst}\widetilde{ \eta}\Big{)}\pm\Big{(}\sum_{\gamma}\Omega(\gamma)\eta_{\gamma}^{\rm inst}-4 \rho^{\rm inst}\widetilde{\eta}\Big{)}\,,\ \ \ \widetilde{\eta}:={\rm d}^{\rm c}\log(r)\,, \tag{2.29}\]
and
\[\rho_{\pm}^{\rm inst}:=(\rho^{\rm inst}\pm\rho_{3}^{\rm inst})/2\,. \tag{2.30}\]
Furthermore, the open subset \(\overline{N}_{+}=\overline{N}\cap\{f>0,f_{3}<0\}\) is non-empty, and \(g_{\overline{N}}\) is positive-definite on \(\overline{N}_{+}\).
**Remark 2.13**.:
* In Theorem 2.12 we have slightly relaxed the restriction of \(M\) compared to [14, Theorem 5.4]. In [14] we assume that we can restrict to an \(M\) invariant under the action of the monoid \(\mathbb{R}_{\geq 1}\times S^{1}\) to make the CASK structure compatible with the BPS structure. This ensures that, no matter the point of \(\overline{M}\), the metric is defined for \(\rho>K\) for some sufficiently big uniform \(K\). Under our weakened assumption the constant \(K\) might depend on the point \(z^{a}\in\overline{M}\).
* When \(\Omega(\gamma)=0\) for all \(\gamma\in\Gamma\) the expression reduces to the 1-loop corrected Ferrara-Sabharwal metric: \[\begin{split} g_{\overline{N}}=&\frac{\rho+c_{\ell}}{\rho}g_{\overline{M}}+\frac{\rho+2c_{\ell}}{4\rho^{2}(\rho+c_{\ell})}\mathrm{d}\rho^{2}+\frac{\rho+c_{\ell}}{64\rho^{2}(\rho+2c_{\ell})}\Big{(}\mathrm{d}\sigma-\langle\zeta,\mathrm{d}\zeta\rangle-4c_{\ell}\mathrm{d}^{\mathrm{c}}\mathcal{K}\Big{)}^{2}\\ &-\frac{1}{4\rho}\left(N^{ij}-\frac{2(\rho+c_{\ell})e^{\mathcal{K}}}{\rho}z^{i}\overline{z}^{j}\right)W_{i}\overline{W}_{j}\,.\end{split}\] (2.31) In particular \((\overline{N},g_{\overline{N}})\) can be thought of as a deformation of the 1-loop corrected metric.
* Since the instanton corrections of the HK geometry are exponentially suppressed as \(|Z_{\gamma}|\to\infty\) for \(\gamma\in\mathrm{Supp}(\Omega)\), it is easy to check that the possibly restricted \(M\) from above satisfying the required conditions is never empty. Furthermore, on \(\overline{N}\) the instanton corrections of the QK geometry are exponentially suppressed as \(\rho\to\infty\) (due to the relation \(|Z_{\gamma}|=|Z^{0}||\widetilde{Z}_{\gamma}|=4\sqrt{\rho+c_{\ell}}e^{\mathcal{K }/2}|\widetilde{Z}_{\gamma}|\)), so we can ensure that \((\rho,z^{a},\zeta^{i},\widetilde{\zeta}_{i},\sigma)\in\overline{N}_{+}\) by taking \(\rho\) sufficiently big.
* The function \(f=16\pi\rho+f^{\rm inst}\) can be thought of, in the string theory setting, as the D-instanton corrected 4d dilaton (up to different conventions in the normalization) [1].
* When comparing (2.28) to [14, Equation 5.5], we note that here we are using different conventions for the normalization of \(g_{N}\); the rotating vector field \(V\); the functions \(N_{ij}\), \(N^{\mathrm{inst}}_{ij}\), \(e^{-\mathcal{K}}\), \(\eta^{\mathrm{inst}}_{\pm}\); the signature of \(\mathrm{Im}(\tau_{ij})\); and the coordinates \((\rho,z^{a},\zeta^{i},\widetilde{\zeta}_{i},\sigma)\). Compared to [14], \(g_{N}\) is scaled by \(2\pi\); \(V\), \(N_{ij}\), \(N^{\mathrm{inst}}_{ij}\) and \(e^{-\mathcal{K}}\) are scaled by \(2\); \(\eta^{\mathrm{inst}}_{\pm}\) are scaled by \(\pi^{-1}\); and the signature of \(\mathrm{Im}(\tau_{ij})\) is opposite. Furthermore, the coordinates and 1-loop constant of [14] are related to the ones in Definition 2.10 by performing the scaling \[\rho\to 16\pi\rho,\quad c_{\ell}\to 16\pi c_{\ell},\quad\sigma\to\pi\sigma,\quad\zeta^{i}\to-2\pi\zeta^{i},\quad\widetilde{\zeta}_{i}\to 2\pi\widetilde{\zeta}_{i}\,.\] (2.32) Finally, as mentioned in Remark 2.6, the sign with which the \(\Omega(\gamma)\) enter the formula in (2.28) is opposite to [14, Equation 5.5]. These changes of convention make the formulas of the subsequent sections simpler and more easily comparable to the physics literature.
In what follows, it will be useful to consider the following lift of \((\overline{N},g_{\overline{N}})\):
**Definition 2.14**.: We will denote by \((\widetilde{N},g_{\overline{N}})\) the QK manifold obtained by lifting the QK metric \((\overline{N},g_{\overline{N}})\) obtained in Theorem 2.12 to the open subset \(\widetilde{N}\subset\mathbb{R}_{>0}\times\overline{M}\times\mathbb{R}^{2n+2}\times\mathbb{R}\) obtained by considering \((\zeta^{i},\widetilde{\zeta}_{i},\sigma)\in\mathbb{R}^{2n+2}\times\mathbb{R}\) as (non-periodic) global coordinates of \(\mathbb{R}^{2n+2}\times\mathbb{R}\). We will call such a space an instanton corrected c-map space.
### 2.2 Twistor space description and Darboux coordinates
Let \((\overline{N},g_{\overline{N}},Q)\) denote a QK manifold, where \(Q\to\overline{N}\) denotes the associated quaternionic structure. Namely, a parallel subbundle \(Q\subset\mathrm{End}(T\overline{N})\) admitting local trivializations \((J_{1},J_{2},J_{3})\) by skew endomorphisms satisfying the quaternion relations. The structure of a QK manifold \((\overline{N},g_{\overline{N}},Q)\) can be encoded in a holomorphic object, known as the twistor space \((\mathcal{Z},\mathcal{I},\lambda,\tau)\)[15, 16]. Here \(\mathcal{Z}\to\overline{N}\) is a sphere subbundle of \(Q\) defined by
\[\mathcal{Z}_{p}:=\{J\in Q_{p}\mid J^{2}=-1\},\quad p\in\overline{N}; \tag{2.33}\]
\(\mathcal{I}\) is a canonical holomorphic structure on \(\mathcal{Z}\); \(\lambda\in\Omega^{1}(\mathcal{Z},\mathcal{L})\) defines a holomorphic contact structure on \(\mathcal{Z}\), where \(\mathcal{L}\to\mathcal{Z}\) is a certain holomorphic line bundle; and \(\tau\) is a real structure on \(\mathcal{Z}\) (i.e. an antiholomorphic involution).
In what follows, we consider \((\overline{N},g_{\overline{N}})\) constructed in Section 2.1.3, and its lift \((\widetilde{N},g_{\overline{N}})\). We start by recalling the following from the discussion in [14, Section 4]:
**Proposition 2.15**.: Let \((\widetilde{N},g_{\overline{N}})\) be the lift of an instanton corrected QK manifold associated to a CASK domain \((M,\mathfrak{F})\) and mutually local variation of BPS structures \((M,\Gamma,Z,\Omega)\). Then \(\mathcal{Z}\cong\widetilde{N}\times\mathbb{C}P^{1}\) (non-holomorphically), and there is a holomorphic coordinate \(t\) on the \(\mathbb{C}P^{1}\) factor and a holomorphic section \(s\) of the holomorphic line bundle \(\mathcal{L}\to\mathcal{Z}\) vanishing at \(t=0,\infty\), such that
\[\lambda=\left(f\frac{\mathrm{d}t}{t}+t^{-1}\theta^{P}_{+}|_{\overline{N}}-2\mathrm{i}\theta^{P}_{3}|_{\overline{N}}+t\theta^{P}_{-}|_{\overline{N}}\right)\cdot s \tag{2.34}\]
where \(\theta^{P}_{\pm}:=\theta^{P}_{1}\pm\mathrm{i}\theta^{P}_{2}\), and \(\theta^{P}_{i}\) for \(i=1,2,3\) are defined as in (2.17), and \(f\) is defined as in (2.14).
Proof.: This follows from the discussion in [14, Section 4.3], where the twistor space of a QK manifold \((\overline{N},g_{\overline{N}})\) obtained via HK/QK correspondence is described in terms of the "HK data" given by \((N,g_{N},\omega_{1},\omega_{2},\omega_{3})\), \((\pi_{N}:P\to N,\eta)\), \(f\), and the associated HK cone. In particular, whenever the QK manifold admits a global chart of coordinates, which is the case of \((\widetilde{N},g_{\overline{N}})\) obtained in Section 2.1.3, it follows that \(\mathcal{Z}\cong\widetilde{N}\times\mathbb{C}P^{1}\) non-holomorphically. The formula (2.34) follows from [14, Section 4.3.1]. The lift to \(\widetilde{N}\to\overline{N}\) is understood.
In what follows, we will be concerned with describing Darboux coordinates for the contact structure \(\lambda\) expressed as (2.34), in the case where \((\widetilde{N},g_{\overline{N}})\) is the instanton corrected QK metric obtained in Section 2.1.3. For this it will be important to have an explicit expression for \(f\), \(\theta^{P}_{+}|_{\overline{N}}\) and \(\theta^{P}_{3}|_{\overline{N}}\) (\(\theta^{P}_{-}|_{\overline{N}}\) can be obtained from \(\theta^{P}_{-}=\overline{\theta^{P}_{+}}\)).
**Lemma 2.16**.: Consider the CASK domain \((M,\mathfrak{F})\) and mutually local variation of BPS structures \((M,\Gamma,Z,\Omega)\) as in Section 2.1.3. Then \(f\), \(\theta^{P}_{+}|_{\overline{N}}\) and \(\theta^{P}_{3}|_{\overline{N}}\) from (2.34) have the following formulas with respect to the coordinates \((\rho,z^{a},\zeta^{i},\widetilde{\zeta}_{i},\sigma)\):
\[\begin{split} f &=16\pi\rho+\frac{2R}{\pi}\sum_{\gamma}\Omega(\gamma)\sum_{n>0}\frac{e^{-2\pi\mathrm{i}n\zeta_{\gamma}}}{n}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\\ \theta^{P}_{+}|_{\overline{N}} &=-4\pi R\langle\widetilde{Z},\mathrm{d}\zeta\rangle+2\mathrm{i}R\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)\mathrm{d}\zeta_{\gamma}\\ &\qquad+2R^{2}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{\mathrm{d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}+\frac{\mathrm{d}\overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}+\frac{\mathrm{d}\rho}{(\rho+c_{\ell})}+\mathrm{d}\mathcal{K}\right)\\ \theta^{P}_{3}|_{\overline{N}} &=\pi\mathrm{d}\sigma-\pi\langle\zeta,\mathrm{d}\zeta\rangle-4\pi(\rho+c_{\ell})\mathrm{d}^{c}\mathcal{K}-\frac{\mathrm{i}R}{2\pi}\sum_{\gamma}\Omega(\gamma)\sum_{n>0}\frac{e^{-2\pi\mathrm{i}n\zeta_{\gamma}}}{n}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{\mathrm{d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}-\frac{\mathrm{d}\overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}\right)\end{split} \tag{2.35}\]
where \(R:=2\sqrt{\rho+c_{\ell}}e^{\mathcal{K}/2}\), \(\widetilde{Z}_{\gamma}=q_{0}+q_{a}z^{a}\) and \(\zeta_{\gamma}=-q_{i}\zeta^{i}\) for \(\gamma=q_{i}\gamma^{i}\), and in the last formula we have used \(\mathrm{d}^{c}=\mathrm{i}(\overline{\partial}-\partial)\).
Proof.: To obtain the above identities we will use that
\[|Z^{0}|^{2}=16(\rho+c_{\ell})e^{\mathcal{K}}=4R^{2}\,, \tag{2.36}\]
together with \(\Omega(\gamma)=\Omega(-\gamma)\) and the fact that the rotating vector field is given globally in the case of a CASK domain by
\[V=2\mathrm{i}Z^{i}\partial_{Z^{i}}-2\mathrm{i}\overline{Z}^{i}\partial_{\overline{Z}^{i}}\,. \tag{2.37}\]
To obtain the formula for \(f\) we just use (2.14) together with (2.36) and a relabeling of the sum variable \(\gamma\to-\gamma\). To obtain the formula for \(\theta^{P}_{+}|_{\overline{N}}\) we use the formulas (2.7) for \(\omega_{1}+\mathrm{i}\omega_{2}\), the definitions for \(\theta^{P}_{1}\) and \(\theta^{P}_{2}\) in (2.17), a relabeling \(\gamma\to-\gamma\) of the sums over \(\gamma\), the CASK relation \(Z_{\widetilde{\gamma}_{i}}=-Z_{i}=-\tau_{ij}Z^{j}=-\tau_{ij}Z_{\gamma^{j}}\), and the fact that
\[\left(\frac{\mathrm{d}Z_{\gamma}}{Z_{\gamma}}-\frac{\mathrm{d}\overline{Z}_{ \gamma}}{\overline{Z}_{\gamma}}\right)\Bigg{|}_{\overline{N}}=\left(\frac{ \mathrm{d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}-\frac{\mathrm{d} \overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}\right), \quad\mathrm{d}Z_{\gamma}|_{\overline{N}}=|Z^{0}|\widetilde{Z}_{\gamma}\left( \frac{\mathrm{d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}+\frac{\mathrm{d} \rho}{2(\rho+c_{\ell})}+\frac{\mathrm{d}\mathcal{K}}{2}\right)\,. \tag{2.38}\]
Finally, the equality for \(\theta^{P}_{3}|_{\overline{N}}\) follows from the formulas (2.11) for \(\eta\), (2.17) for \(\theta^{P}_{3}\), the first equation from (2.38), and again a relabeling \(\gamma\to-\gamma\) in the sum over \(\gamma\). We also use the fact that
\[\mathrm{d}^{c}r^{2}|_{\overline{N}}=-r^{2}\mathrm{d}^{c}\mathcal{K}\,, \tag{2.39}\]
which follows from the second relation in (2.38).
**Remark 2.17**.: We remark that if we denote \(F_{i}:=\partial_{Z^{i}}\mathfrak{F}/Z^{0}=Z_{i}/Z^{0}\), then in the expression (2.35) we have
\[\langle\widetilde{Z},\mathrm{d}\zeta\rangle=\widetilde{Z}_{\widetilde{\gamma} _{i}}\mathrm{d}\zeta_{\gamma^{i}}-\widetilde{Z}_{\gamma^{i}}\mathrm{d}\zeta_{ \widetilde{\gamma}_{i}}=(-F_{i})\mathrm{d}(-\zeta^{i})-z^{i}\mathrm{d} \widetilde{\zeta}_{i}=F_{i}\mathrm{d}\zeta^{i}-z^{i}\mathrm{d}\widetilde{ \zeta}_{i}\,. \tag{2.40}\]
Similarly, we have
\[\langle\zeta,\mathrm{d}\zeta\rangle=\zeta_{\widetilde{\gamma}_{i}}\mathrm{d} \zeta_{\gamma^{i}}-\zeta_{\gamma^{i}}\mathrm{d}\zeta_{\widetilde{\gamma}_{i}} =-\widetilde{\zeta}_{i}\mathrm{d}\zeta^{i}+\zeta^{i}\mathrm{d} \widetilde{\zeta}_{i}\,. \tag{2.41}\]
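Finally, the same conventions give the pairing of \(\widetilde{Z}\) with its conjugate; we spell this out here since it enters the proof of Proposition 2.18 below:
\[\langle\widetilde{Z},\overline{\widetilde{Z}}\rangle=\widetilde{Z}_{\widetilde{\gamma}_{i}}\overline{\widetilde{Z}}_{\gamma^{i}}-\widetilde{Z}_{\gamma^{i}}\overline{\widetilde{Z}}_{\widetilde{\gamma}_{i}}=z^{i}\overline{F}_{i}-F_{i}\overline{z}^{i}=-2\mathrm{i}\,\mathrm{Im}(\tau_{ij})z^{i}\overline{z}^{j}=\mathrm{i}e^{-\mathcal{K}}\,,\]
where the third equality uses the CASK relation \(F_{i}=\tau_{ij}z^{j}\) together with the symmetry \(\tau_{ij}=\tau_{ji}\).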
#### 2.2.1 Darboux coordinates for c-map spaces associated to CASK domains
In this section we focus on the easier case of a c-map space associated to a CASK domain. The following coordinates have been previously obtained in the physics literature in [13] in the \(c_{\ell}=0\) case, and in the 1-loop corrected case in [1] via slightly different methods.
**Proposition 2.18**.: Consider the QK manifold \((\widetilde{N},g_{\overline{N}})\) obtained from a CASK domain \((M,\mathfrak{F})\) via the c-map (i.e. by taking \(\Omega(\gamma)=0\) for all \(\gamma\) in our previous constructions). If \(F_{i}:=Z_{i}/Z^{0}=\partial_{Z^{i}}\mathfrak{F}/Z^{0}\), \(t\) denotes the twistor fiber coordinate from Proposition 2.15 and \(R=2\sqrt{\rho+c_{\ell}}e^{\mathcal{K}/2}\), then the functions on the twistor space given by
\[\begin{split}\xi^{i}&=\zeta^{i}-\mathrm{i}R(t^{-1} z^{i}+t\overline{z}^{i})\\ \widetilde{\xi}_{i}&=\widetilde{\zeta}_{i}-\mathrm{i }R(t^{-1}F_{i}+t\overline{F}_{i})\\ \alpha&=\sigma-\mathrm{i}R(t^{-1}\langle\widetilde{Z},\zeta\rangle+t\langle\overline{Z},\zeta\rangle)-8\mathrm{i}c_{\ell}\log(t)\,, \end{split} \tag{2.42}\]
define Darboux coordinates for the holomorphic contact structure \(\lambda\), in the sense that
\[\lambda=-2\pi\mathrm{i}(\mathrm{d}\alpha+\widetilde{\xi}_{i}\mathrm{d}\xi^{ i}-\xi^{i}\mathrm{d}\widetilde{\xi}_{i})\cdot s\,. \tag{2.43}\]
Proof.: By Proposition 2.15, it is enough to check that
\[-2\pi\mathrm{i}(\mathrm{d}\alpha+\widetilde{\xi}_{i}\mathrm{d}\xi^{i}-\xi^{i}\mathrm{d}\widetilde{\xi}_{i})=f\frac{\mathrm{d}t}{t}+t^{-1}\theta^{P}_{+}|_{\overline{N}}-2\mathrm{i}\theta^{P}_{3}|_{\overline{N}}+t\theta^{P}_{-}|_{\overline{N}} \tag{2.44}\]
where \(f\), \(\theta^{P}_{+}=\overline{\theta^{P}_{-}}\) and \(\theta^{P}_{3}\) are obtained by setting \(\Omega(\gamma)=0\) for all \(\gamma\) in (2.35). That is:
\[f=16\pi\rho,\qquad\theta^{P}_{3}|_{\overline{N}}=\pi\mathrm{d}\sigma-\pi\langle\zeta,\mathrm{d}\zeta\rangle-4\pi(\rho+c_{\ell})\mathrm{d}^{c}\mathcal{K},\qquad\theta^{P}_{+}|_{\overline{N}}=-4\pi R\langle\widetilde{Z},\mathrm{d}\zeta\rangle\,. \tag{2.45}\]
We now compute
\[\begin{split}\widetilde{\xi}_{i}\mathrm{d}\xi^{i}-\xi^{i}\mathrm{d}\widetilde{\xi}_{i}=&-\langle\zeta,\mathrm{d}\zeta\rangle+\mathrm{i}(t^{-1}\langle\widetilde{Z},\zeta\rangle+t\langle\overline{\widetilde{Z}},\zeta\rangle)\mathrm{d}R+\mathrm{i}R(-t^{-2}\langle\widetilde{Z},\zeta\rangle+\langle\overline{\widetilde{Z}},\zeta\rangle)\mathrm{d}t+8\mathrm{i}(\rho+c_{\ell})\frac{\mathrm{d}t}{t}\\ &+\mathrm{i}R(t^{-1}\langle\mathrm{d}\widetilde{Z},\zeta\rangle+t\langle\mathrm{d}\overline{\widetilde{Z}},\zeta\rangle)-\mathrm{i}R(t^{-1}\langle\widetilde{Z},\mathrm{d}\zeta\rangle+t\langle\overline{\widetilde{Z}},\mathrm{d}\zeta\rangle)-4(\rho+c_{\ell})\mathrm{d}^{c}\mathcal{K}\,,\end{split} \tag{2.46}\]
where for the terms \(8\mathrm{i}(\rho+c_{\ell})\frac{\mathrm{d}t}{t}\) and \(-4(\rho+c_{\ell})\mathrm{d}^{c}\mathcal{K}\) we have used that the CASK relation \(F_{i}=\tau_{ij}z^{j}\) implies (cf. Remark 2.17)
\[\langle\widetilde{Z},\overline{\widetilde{Z}}\rangle=\mathrm{i}e^{-\mathcal{K}}=\frac{4\mathrm{i}(\rho+c_{\ell})}{R^{2}}\,, \tag{2.47}\]
and by using the relation \(\mathrm{d}F_{i}=\tau_{ij}\mathrm{d}z^{j}\)
\[\begin{split}-4(\rho+c_{\ell})e^{\mathcal{K}}F_{i}\mathrm{d}\overline{z}^{i}&-4(\rho+c_{\ell})e^{\mathcal{K}}\overline{F}_{i}\mathrm{d}z^{i}+4(\rho+c_{\ell})e^{\mathcal{K}}\mathrm{d}\overline{F}_{i}z^{i}+4(\rho+c_{\ell})e^{\mathcal{K}}\mathrm{d}F_{i}\overline{z}^{i}\\ &=-4(\rho+c_{\ell})e^{\mathcal{K}}(-2\mathrm{Im}(\tau_{ij}))\Big{(}\mathrm{i}\overline{z}^{j}\mathrm{d}z^{i}-\mathrm{i}z^{i}\mathrm{d}\overline{z}^{j}\Big{)}\\ &=-4(\rho+c_{\ell})\mathrm{d}^{c}\mathcal{K}\,.\end{split} \tag{2.48}\]
(Recall that \(\mathcal{K}=-\log K\) where \(K=-2\mathrm{Im}(\tau_{ij})z^{i}\overline{z}^{j}\)). On the other hand, we find that
\[\begin{split}\mathrm{d}\alpha=&\mathrm{d}\sigma-\mathrm{i}(t^{-1}\langle\widetilde{Z},\zeta\rangle+t\langle\overline{\widetilde{Z}},\zeta\rangle)\mathrm{d}R+\mathrm{i}R(t^{-2}\langle\widetilde{Z},\zeta\rangle-\langle\overline{\widetilde{Z}},\zeta\rangle)\mathrm{d}t\\ &-\mathrm{i}R(t^{-1}\langle\mathrm{d}\widetilde{Z},\zeta\rangle+t\langle\mathrm{d}\overline{\widetilde{Z}},\zeta\rangle)-\mathrm{i}R(t^{-1}\langle\widetilde{Z},\mathrm{d}\zeta\rangle+t\langle\overline{\widetilde{Z}},\mathrm{d}\zeta\rangle)-8\mathrm{i}c_{\ell}\frac{\mathrm{d}t}{t},\end{split} \tag{2.49}\]
so we conclude that
\[\mathrm{d}\alpha+\widetilde{\xi}_{i}\mathrm{d}\xi^{i}-\xi^{i}\mathrm{d} \widetilde{\xi}_{i} =8\mathrm{i}\rho\frac{\mathrm{d}t}{t}+\mathrm{d}\sigma-\langle \zeta,\mathrm{d}\zeta\rangle-2\mathrm{i}R(t^{-1}\langle\widetilde{Z},\mathrm{ d}\zeta\rangle+t\langle\overline{\widetilde{Z}},\mathrm{d}\zeta\rangle)-4(\rho+c_{ \ell})\mathrm{d}^{c}\mathcal{K} \tag{2.50}\]
so that
\[\begin{split}-2\pi\mathrm{i}\left(\mathrm{d}\alpha+\widetilde{ \xi}_{i}\mathrm{d}\xi^{i}-\xi^{i}\mathrm{d}\widetilde{\xi}_{i}\right)& =f\frac{\mathrm{d}t}{t}-4\pi R(t^{-1}\langle\widetilde{Z},\mathrm{ d}\zeta\rangle+t\langle\overline{\widetilde{Z}},\mathrm{d}\zeta\rangle)-2 \mathrm{i}\left(\pi\mathrm{d}\sigma-\pi\langle\zeta,\mathrm{d}\zeta\rangle-4 \pi(\rho+c_{\ell})\mathrm{d}^{c}\mathcal{K}\right)\\ &=f\frac{\mathrm{d}t}{t}+t^{-1}\theta_{+}^{P}|_{\overline{N}}-2 \mathrm{i}\theta_{3}^{P}|_{\overline{N}}+t\theta_{-}^{P}|_{\overline{N}}\end{split} \tag{2.51}\]
and the result follows.
#### 2.2.2 The case with instanton corrections
We now consider the case where the BPS indices are not all equal to \(0\). We want to write down the modifications of the coordinates (2.42), such that
\[\lambda=-2\pi\mathrm{i}\left(\mathrm{d}\alpha+\widetilde{\xi}_{i}\mathrm{d} \xi^{i}-\xi^{i}\mathrm{d}\widetilde{\xi}_{i}\right)\cdot s=\left(f\frac{ \mathrm{d}t}{t}+t^{-1}\theta_{+}^{P}|_{\overline{N}}-2\mathrm{i}\theta_{3}^{P} |_{\overline{N}}+t\theta_{-}^{P}|_{\overline{N}}\right)\cdot s \tag{2.52}\]
where \(f\) and \(\theta_{\alpha}^{P}\) are as in (2.35).
The expressions for the coordinates below have been previously found in the physics literature (using slightly different arguments and conventions), for example in [1, 2].
**Theorem 2.19**.: Consider the instanton corrected QK manifold \((\widetilde{N},g_{\overline{N}})\) associated to a CASK domain and \((M,\Gamma,Z,\Omega)\), as before. Then the functions on the twistor space \(\mathcal{Z}\) given by
\[\begin{split}\xi^{i}&=\zeta^{i}-\mathrm{i}R(t^{-1}z^ {i}+t\overline{z}^{i})\\ \widetilde{\xi}_{i}&=\widetilde{\zeta}_{i}-\mathrm{i}R (t^{-1}F_{i}+t\overline{F}_{i})-\frac{1}{8\pi^{2}}\sum_{\gamma}\Omega(\gamma )n_{i}(\gamma)\int_{l_{\gamma}}\frac{\mathrm{d}\zeta}{\zeta}\frac{t+\zeta}{t- \zeta}\log(1-\exp(2\pi\mathrm{i}\xi_{\gamma}(\zeta)))\\ \alpha&=\sigma-\mathrm{i}R(t^{-1}\langle\widetilde{Z},\zeta\rangle+t\langle\overline{\widetilde{Z}},\zeta\rangle)-8\mathrm{i}c_{ \ell}\log(t)+\frac{1}{4\pi^{2}\mathrm{i}}\left(t^{-1}\mathcal{W}+t\overline{ \mathcal{W}}-\frac{1}{2\pi}\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}}\frac{ \mathrm{d}\zeta}{\zeta}\frac{t+\zeta}{t-\zeta}\mathrm{L}(\exp 2\pi\mathrm{i}\xi_{\gamma}( \zeta))\right)\,,\end{split} \tag{2.53}\]
where the integration contours are given by \(l_{\gamma}:=\mathbb{R}_{<0}\widetilde{Z}_{\gamma}\) oriented from \(0\) to \(\infty\), \(\xi_{\gamma}(\zeta)=q_{i}\xi^{i}(\zeta)\) for \(\gamma=q_{i}\gamma^{i}\) and where we abbreviate \(\xi^{i}(\zeta)=\xi^{i}(R,z^{i},\zeta^{i},t=\zeta)\),
\[\mathcal{W}:=R\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0} \frac{e^{-2\pi\mathrm{i}n\zeta_{\gamma}}}{n}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma }|)\,, \tag{2.54}\]
and
\[\mathrm{L}(\exp(2\pi\mathrm{i}\xi_{\gamma}))=\mathrm{Li}_{2}(\exp(2\pi\mathrm{ i}\xi_{\gamma}))+\pi\mathrm{i}\xi_{\gamma}\log(1-\exp(2\pi\mathrm{i}\xi_{\gamma})); \tag{2.55}\]
define Darboux coordinates for the contact structure \(\lambda\) in the sense that
\[\lambda=-2\pi\mathrm{i}(\mathrm{d}\alpha+\widetilde{\xi}_{i}\mathrm{d}\xi^{i} -\xi^{i}\mathrm{d}\widetilde{\xi}_{i})\cdot s\,. \tag{2.56}\]
Proof.: An explicit proof is given in Appendix B. This result is not needed for the following sections and is only included for completeness. See also the work in the physics literature [2].
**Remark 2.20**.:
* The function \(\operatorname{Li}_{2}(x)\) appearing in (2.55) is the dilogarithm function, while the function \(\operatorname{L}(x)=\operatorname{Li}_{2}(x)+\frac{\log(x)}{2}\log(1-x)\) is the Rogers dilogarithm; for \(x=\exp(2\pi\mathrm{i}\xi_{\gamma})\) and the branch with \(\frac{1}{2}\log(x)=\pi\mathrm{i}\xi_{\gamma}\), the latter reduces to the expression (2.55).
* We remark that along the chosen \(l_{\gamma}\) contours, the integrands are exponentially decreasing near \(0\) and \(\infty\). In particular, one can deform the contour \(l_{\gamma}\) within the half plane centered at \(l_{\gamma}\) without changing the value of the integral, as long as it does not collide with another ray of the form \(l_{\gamma^{\prime}}\) for \(\gamma^{\prime}\in\operatorname{Supp}(\Omega)\).
## 3 S-duality on instanton corrected q-map spaces
The structure of the section is as follows:
* In Section 3.1 we define instanton corrected q-map spaces \((\overline{N},g_{\overline{N}})\) in terms of the construction of [17] reviewed in Section 2.1, applied to certain special pairs of compatible initial data \((M,\mathfrak{F})\) and \((M,\Gamma,Z,\Omega)\). These spaces will be the main objects of study in the rest of the section.
* In Section 3.2 we define the quantum corrected mirror map, previously defined in [1], which relates the type IIA variables \((\rho,z^{a},\zeta^{i},\widetilde{\zeta}_{i},\sigma)\) to the type IIB variables \((\tau_{1}+\mathrm{i}\tau_{2},b^{a}+\mathrm{i}t^{a},c^{a},c_{a},c_{0},\psi)\). In terms of the type IIB variables, we define the corresponding \(\operatorname{SL}(2,\mathbb{Z})\)-action.
* In Section 3.3 we show that the holomorphic contact structure of the twistor space associated to \((\overline{N},g_{\overline{N}})\) admits Darboux coordinates of the form studied in [1]. We show this directly by Poisson resumming the contact form (2.34), which in turn is expressed in terms of the Bessel function expressions (2.35) specialized to the particular form of the BPS indices below (3.12).
* In Section 3.4 we give conditions that guarantee that either the S-duality \(\operatorname{SL}(2,\mathbb{Z})\)-transformations or the subgroup \(\langle S\rangle\subset\operatorname{SL}(2,\mathbb{Z})\) acts by isometries on an instanton corrected q-map space.
### Setting
Let us first recall the notion of a PSK manifold in the image of the r-map, and the associated CASK domain.
**Definition 3.1**.: A projective special real (PSR) manifold is a Riemannian manifold \((\mathcal{H},g_{\mathcal{H}})\) where \(\mathcal{H}\subset\mathbb{R}^{n}\) is a hypersurface, for which there is a cubic polynomial \(h:\mathbb{R}^{n}\to\mathbb{R}\) such that \(\mathcal{H}\subset\{h(t)=1\}\) and \(g_{\mathcal{H}}=-\partial^{2}h|_{T\mathcal{H}\times T\mathcal{H}}\).
If we denote the canonical coordinates of \(\mathbb{R}^{n}\) by \(t^{a}\), then we can write
\[h(t)=\frac{1}{6}k_{abc}t^{a}t^{b}t^{c}, \tag{3.1}\]
with \(k_{abc}\in\mathbb{R}\) symmetric in the indices. Now let \(U:=\mathbb{R}_{>0}\cdot\mathcal{H}\subset\mathbb{R}^{n}-\{0\}\) and \(\overline{M}^{\mathrm{cl}}:=\mathbb{R}^{n}+\mathrm{i}U\subset\mathbb{C}^{n}\) with canonical holomorphic coordinates \(z^{a}:=b^{a}+\mathrm{i}t^{a}\). On \(\overline{M}^{\mathrm{cl}}\) we have a PSK metric by defining
\[g_{\overline{M}^{\mathrm{cl}}}:=\frac{\partial^{2}\mathcal{K}}{\partial z^{a }\partial\overline{z}^{b}}\mathrm{d}z^{a}\mathrm{d}\overline{z}^{b}\quad \quad\mathcal{K}=-\log(8h(t))\,. \tag{3.2}\]
The PSK manifold \((\overline{M}^{\mathrm{cl}},g_{\overline{M}^{\mathrm{cl}}})\) is associated to the CASK domain \((M^{\mathrm{cl}},\mathfrak{F}^{\mathrm{cl}})\) of signature \((n,1)\) given by
\[M^{\mathrm{cl}}:=\{(Z^{0},...,Z^{n})=Z^{0}\cdot(1,z)\in\mathbb{C}^{n+1}\mid Z^ {0}\in\mathbb{C}^{\times},z\in\overline{M}^{\mathrm{cl}}\},\quad\mathfrak{F }^{\mathrm{cl}}:=-\frac{1}{6}k_{abc}\frac{Z^{a}Z^{b}Z^{c}}{Z^{0}}\,. \tag{3.3}\]
Notice in particular the relation \(Z^{a}/Z^{0}=z^{a}=b^{a}+\mathrm{i}t^{a}\), which we use repeatedly below.
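As a simple illustration of this construction (named the r-map in Definition 3.2 below), consider \(n=1\) with \(h(t)=t^{3}/6\), i.e. \(k_{111}=1\). Then \(\mathcal{H}=\{t=6^{1/3}\}\) is a point, \(U=\mathbb{R}_{>0}\), and on \(\overline{M}^{\mathrm{cl}}=\{z=b+\mathrm{i}t\mid t>0\}\) one computes \(\mathcal{K}=-\log(\tfrac{4}{3}t^{3})\) and
\[g_{\overline{M}^{\mathrm{cl}}}=\frac{\partial^{2}\mathcal{K}}{\partial z\partial\overline{z}}\,\mathrm{d}z\mathrm{d}\overline{z}=\frac{3}{4t^{2}}\,\mathrm{d}z\mathrm{d}\overline{z}\,,\]
a hyperbolic metric on the upper half-plane, with associated CASK prepotential \(\mathfrak{F}^{\mathrm{cl}}=-\frac{1}{6}(Z^{1})^{3}/Z^{0}\).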
**Definition 3.2**.: The construction given above that associates the PSK manifold \((\overline{M}^{\mathrm{cl}},g_{\overline{M}^{\mathrm{cl}}})\) to the PSR manifold \((\mathcal{H},g_{\mathcal{H}})\) is called the r-map. Furthermore, the QK metric obtained by applying the c-map (resp. tree-level c-map) to a PSK manifold in the image of the r-map is called a q-map space (resp. tree-level q-map space).
We now want to consider a tuple \((M,\mathfrak{F})\) of the following special type:
* We start with a PSR manifold \((\mathcal{H},g_{\mathcal{H}})\) and consider the associated CASK domain \((M^{\mathrm{cl}},\mathfrak{F}^{\mathrm{cl}})\). We further let \(\Lambda^{+}:=\mathrm{span}_{\mathbb{Z}_{\geq 0}}\{\gamma^{a}\}_{a=1}^{n}-\{0\}\) be a commutative semi-group generated by \(\{\gamma^{a}\}_{a=1}^{n}\), where \(n=\dim(\mathcal{H})+1\). We want to consider a new CASK domain with a holomorphic prepotential of the form \[\mathfrak{F}(Z^{i})=\mathfrak{F}^{\mathrm{cl}}(Z^{i})+\mathfrak{F}^{\mathrm{w. s.}}(Z^{i})\] (3.4) where \(\mathfrak{F}^{\mathrm{cl}}\) is as before, and \[\mathfrak{F}^{\mathrm{w.s.}}:=\chi\frac{(Z^{0})^{2}\zeta(3)}{2(2\pi\mathrm{i}) ^{3}}-\frac{(Z^{0})^{2}}{(2\pi\mathrm{i})^{3}}\sum_{\hat{\gamma}=q_{a}\gamma^ {a}\in\Lambda^{+}}n_{\hat{\gamma}}\mathrm{Li}_{3}(e^{2\pi\mathrm{i}q_{a}Z^{a }/Z^{0}})\;.\] (3.5) In the above expression \(\chi\in\mathbb{Z}\), \(n_{\hat{\gamma}}\in\mathbb{Z}\), \(\mathrm{Li}_{n}(z)\) denotes the n-th polylogarithm, and \(\zeta(x)\) denotes the Riemann zeta function.
* We assume that the \(n_{\hat{\gamma}}\) are such that the \(\mathbb{C}^{\times}\)-invariant subset of \(\mathbb{C}^{n+1}\) defined by \[M^{q}:=M^{\mathrm{cl}}\cap\{(Z^{0},...,Z^{n})\in\mathbb{C}^{n+1}\mid q_{a}t^{a}>0\text{ for all }\hat{\gamma}=q_{a}\gamma^{a}\in\Lambda^{+}\text{ with }n_{\hat{\gamma}}\neq 0\}\] (3.6) is open (this condition is automatic if the commutative semigroup generated by \(\{\hat{\gamma}\in\Lambda^{+}\mid n_{\hat{\gamma}}\neq 0\}\) is finitely generated). We further assume that the growth of the \(n_{\hat{\gamma}}\) with \(\hat{\gamma}\) is such that for any \(\epsilon>0\) the series \[\sum_{\hat{\gamma}=q_{a}\gamma^{a}\in\Lambda^{+}}|n_{\hat{\gamma}}|e^{-\epsilon q_{a}t^{a}}\] (3.7) has compact normal convergence on \(M^{q}\), so that \(\mathfrak{F}\) defines a holomorphic function on \(M^{q}\). In particular, the condition \(q_{a}t^{a}>0\) ensures that \(\mathrm{Li}_{3}(e^{2\pi\mathrm{i}q_{a}Z^{a}/Z^{0}})\) is well defined and can be expressed as \[\mathrm{Li}_{3}(e^{2\pi\mathrm{i}q_{a}Z^{a}/Z^{0}})=\sum_{k>0}\frac{e^{2\pi\mathrm{i}kq_{a}Z^{a}/Z^{0}}}{k^{3}}\,.\] (3.8)
* We denote by \(M\subset M^{q}\) the maximal open subset of \(M^{q}\) where \(\mathrm{Im}(\tau_{ij})=\mathrm{Im}(\partial_{i}\partial_{j}\mathfrak{F})\) has signature \((n,1)\) and \(\mathrm{Im}(\tau_{ij})Z^{i}\overline{Z}^{j}<0\). These conditions are \(\mathbb{C}^{\times}\)-invariant, so that \((M,\mathfrak{F})\) defines a CASK domain on each connected component of \(M\).
* To such a tuple \((M,\mathfrak{F})\) we associate the canonical lattice \(\Gamma\to M\), together with the canonical central charge \(Z\) (recall Section 2.1.3). Namely, if \(Z_{i}:=\partial_{Z^{i}}\mathfrak{F}\) and \(x^{i}=\mathrm{Re}(Z^{i})\), \(y_{i}=\mathrm{Re}(Z_{i})\), then \((\partial_{x^{i}},\partial_{y_{i}})=(\widetilde{\gamma}_{i},\gamma^{i})\) defines a global Darboux frame for \(\Gamma\to M\). For the canonical central charge we then have \[Z_{\gamma^{i}}=Z^{i},\quad Z_{\widetilde{\gamma}_{i}}=-Z_{i}=-\frac{\partial \mathfrak{F}}{\partial Z^{i}}\,.\] (3.9) We will identify the semigroups \(\Lambda^{+}\cong\mathrm{span}_{\mathbb{Z}_{\geq 0}}\{\partial_{y_{a}}\}_{a=1}^{n}-\{0\}\).
**Remark 3.3**.:
* We remark that given any \((Z^{0},...,Z^{n})=Z^{0}\cdot(1,b^{a}+\mathrm{i}t^{a})\in M^{q}\), all points of the form \(Z^{0}\cdot(1,b^{a}+\lambda\mathrm{i}t^{a})\) with \(\lambda>0\) must also be in \(M^{q}\). In particular, for \(\lambda>0\) sufficiently big we have \(\mathrm{Im}(\tau_{ij})\sim\mathrm{Im}(\partial_{i}\partial_{j}\mathfrak{F}^{ \mathrm{cl}})\) due to the exponential decay of the terms with polylogarithms. Since \(\mathrm{Im}(\partial_{i}\partial_{j}\mathfrak{F}^{\mathrm{cl}})\) has signature \((n,1)\) and \(\mathrm{Im}(\partial_{i}\partial_{j}\mathfrak{F}^{\mathrm{cl}})Z^{i}\overline{ Z}^{j}<0\), it follows that at the points \(Z^{0}\cdot(1,b^{a}+\lambda\mathrm{i}t^{a})\) for \(\lambda>0\) sufficiently big we have that \(\mathrm{Im}(\tau_{ij})\) has signature \((n,1)\) and \(\mathrm{Im}(\tau_{ij})Z^{i}\overline{Z}^{j}<0\), so that the required \(M\) is never empty, provided \(M^{q}\) is not empty.
* We would also like to comment on the particular form of the prepotential (3.4). In the setting of Calabi-Yau compactifications of type IIB string theory, the term \(\mathfrak{F}^{\mathrm{cl}}\) has \(k_{abc}\in\mathbb{Z}\) equal to the triple intersection numbers of the Calabi-Yau threefold \(X\), and corresponds to the holomorphic prepotential determining (via the c-map) the classical QK geometry of the hypermultiplet moduli space. On the other hand, \(\mathfrak{F}^{\mathrm{w.s.}}\) corresponds to world-sheet instanton corrections. In such a setting, \(\chi\) coincides with the Euler characteristic \(\chi(X)\) of \(X\), and the numbers \(n_{\hat{\gamma}}\) are the so-called genus zero Gopakumar-Vafa invariants, usually denoted by \(n_{\hat{\gamma}}^{(0)}\). Since we are in a setting independent of string theory, we will use the simpler notation \(n_{\hat{\gamma}}\) for the coefficients appearing in (3.4). Furthermore, it will be useful to define \(n_{0}:=-\frac{\chi}{2}\). Then, using that \(\mathrm{Li}_{3}(1)=\zeta(3)\) we can write \[\mathfrak{F}^{\mathrm{w.s.}}=-\frac{(Z^{0})^{2}}{(2\pi\mathrm{i})^{3}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}\mathrm{Li}_{3}(e^{2\pi\mathrm{i}q_{a}Z^{a}/Z^{0}})\,.\] (3.10)
* In the physics literature sometimes an additional term of the form \[\frac{1}{2}A_{ij}Z^{i}Z^{j},\quad A_{ij}=A_{ji}\in\mathbb{R}\] (3.11) is added to \(\mathfrak{F}\) in (3.4). While the addition of such a term does not alter the PSK metric \(g_{\overline{M}}\) or the CASK metric \(g_{M}\) (since \(A_{ij}\) is real), the inclusion of such a term turns out to be important for mirror symmetry between D-branes of type IIA and type IIB string theory (see for example the review [1, Chapter V], and the references therein). However, below we will focus on a case analogous to including D(-1) and D1 instanton corrections on type IIB, and we can safely ignore the inclusion of such a term.
Finally, after possibly restricting \(M\), we assume that \(M\) is the maximal open subset such that \((M,\Gamma,Z,\Omega)\) with \(\Gamma\) and \(Z\) as before and with
\[\begin{cases}\Omega(q_{0}\gamma^{0})=-\chi=2n_{0},\quad q_{0}\in\mathbb{Z}- \{0\}\\ \Omega(q_{0}\gamma^{0}\pm q_{a}\gamma^{a})=\Omega(\pm q_{a}\gamma^{a})=n_{q_{a }\gamma^{a}}\quad\text{for $q_{a}\gamma^{a}\in\Lambda^{+}$, $q_{0}\in\mathbb{Z}$}\\ \Omega(\gamma)=0\quad\text{else},\end{cases} \tag{3.12}\]
is a mutually local variation of BPS structures compatible (in the sense of Definition 2.3) with the CASK manifold \((M,g_{M},\omega_{M},\nabla,\xi)\) associated to \((M,\mathfrak{F})\). To check that \((M,\Gamma,Z,\Omega)\) defines a mutually local variation of BPS structures, one only needs to check the support property, since the convergence property follows easily from (3.7), while the mutual locality and invariance under monodromy are obvious.
We remark that the BPS indices in (3.12) are determined by the holomorphic prepotential \(\mathfrak{F}\), see (3.5). Such a prescription of BPS indices has previously appeared in the physics literature in [1, 2], or more explicitly in [1, Equation 4.5]. See also [13] (in particular, [13, Section 6.3, Conjecture 6.20]), where such a BPS spectrum is conjectured for non-compact Calabi-Yau 3-folds without compact divisors.
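To fix ideas, we spell out a minimal instance of the above data; the specific choices below are purely illustrative. Take \(n=1\), \(\Lambda^{+}=\mathbb{Z}_{>0}\gamma^{1}\), and finitely many nonzero coefficients \(n_{q}:=n_{q\gamma^{1}}\). Then (3.4) reads
\[\mathfrak{F}(Z^{0},Z^{1})=-\frac{k_{111}}{6}\frac{(Z^{1})^{3}}{Z^{0}}+\chi\frac{(Z^{0})^{2}\zeta(3)}{2(2\pi\mathrm{i})^{3}}-\frac{(Z^{0})^{2}}{(2\pi\mathrm{i})^{3}}\sum_{q>0}n_{q}\,\mathrm{Li}_{3}(e^{2\pi\mathrm{i}qZ^{1}/Z^{0}})\,,\]
the condition \(q_{a}t^{a}>0\) in (3.6) reduces to \(t^{1}>0\) (so that \(M^{q}=M^{\mathrm{cl}}\)), and (3.12) prescribes \(\Omega(q_{0}\gamma^{0})=-\chi\) for \(q_{0}\neq 0\), \(\Omega(q_{0}\gamma^{0}\pm q\gamma^{1})=n_{q}\) for \(q>0\) and \(q_{0}\in\mathbb{Z}\), with all other BPS indices vanishing.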
In the following, we will consider the instanton corrected QK manifold \((\overline{N},g_{\overline{N}})\) associated to \((M,\mathfrak{F})\) and \((M,\Gamma,Z,\Omega)\) as in Theorem 2.12. It will be convenient to lift the QK metric \((\overline{N},g_{\overline{N}})\) to \((\widetilde{N},g_{\overline{N}})\) as in Definition 2.14, where the \((\zeta^{i},\widetilde{\zeta}_{i},\sigma)\) directions are no longer periodic. In particular, we recall that we have the following:
* \(\widetilde{N}\subset\mathbb{R}_{>0}\times\overline{M}\times\mathbb{R}^{2n+2 }\times\mathbb{R}\) is an open subset, where the splitting is such that \((\rho,z^{a},\zeta^{i},\widetilde{\zeta_{i}},\sigma)\in\mathbb{R}_{>0}\times \mathbb{C}^{n}\times\mathbb{R}^{2n+2}\times\mathbb{R}\), where \(\overline{M}\) is the associated PSK manifold.
* The twistor space \(\mathcal{Z}\) of \((\widetilde{N},g_{\overline{N}})\) smoothly decomposes as \(\mathcal{Z}=\widetilde{N}\times\mathbb{C}P^{1}\). In particular, the expression (2.34) for the contact structure holds for the global coordinates \((\rho,z^{a},\zeta^{i},\widetilde{\zeta}_{i},\sigma,t)\) of \(\mathcal{Z}\). (Here we are slightly abusing notation by considering \(t\), the identity map on \(\mathbb{C}P^{1}\), as a global coordinate.)
**Definition 3.4**.: If \((M,\mathfrak{F})\) and \((M,\Gamma,Z,\Omega)\) are given as before, we will call the resulting QK manifold \((\overline{N},g_{\overline{N}})\) (or its lift \((\widetilde{N},g_{\overline{N}})\)) an instanton corrected q-map space.
The reason for calling \((\overline{N},g_{\overline{N}})\) (or \((\widetilde{N},g_{\overline{N}})\)) an instanton corrected q-map space is the following:
* By setting the one-loop parameter to \(c_{\ell}=0\), and \(n_{\hat{\gamma}}=0\) for all \(\hat{\gamma}\in\Lambda^{+}\cup\{0\}\) (and hence, also \(\Omega(\gamma)=0\) for all \(\gamma\)), one recovers a q-map space. That is, the QK metric obtained by applying the c-map to a PSK manifold in the image of the r-map: \[g_{\overline{N}}=g_{\overline{M}}+\frac{\mathrm{d}\rho^{2}}{4\rho^{2}}-\frac{1}{4\rho}(N^{ij}-2e^{\mathcal{K}}z^{i}\overline{z}^{j})W_{i}\overline{W}_{j}+\frac{1}{64\rho^{2}}(\mathrm{d}\sigma+\widetilde{\zeta}_{i}\mathrm{d}\zeta^{i}-\zeta^{i}\mathrm{d}\widetilde{\zeta}_{i})^{2}\,.\] (3.13) where \(g_{\overline{M}}\), \(N_{ij}\), \(\mathcal{K}\) and \(W_{i}\) are constructed in terms of \(\mathfrak{F}^{\mathrm{cl}}\) (see Section 2.1.3 for the definition of the above terms).
* In the setting of Calabi-Yau compactifications of type IIB string theory, the terms due to the BPS structure \((M,\Gamma,Z,\Omega)\) are thought of as D(-1) and D1 instanton corrections, and those due to \(\mathfrak{F}^{\mathrm{w.s.}}\) are thought of as world-sheet instanton corrections. Hence, the QK metric obtained from the above \((M,\mathfrak{F})\) and \((M,\Gamma,Z,\Omega)\) can be thought of as a q-map space with the inclusion of the analog of the above corrections.
### The quantum corrected mirror map and the S-duality action
In the sections that follow we will rescale the twistor fiber coordinate \(t\to-\mathrm{i}t\). The contact structure on the twistor space is then expressed by (compare with (2.34))
\[\lambda=\left(f\frac{\mathrm{d}t}{t}+t^{-1}\mathrm{i}\theta_{+}^{P}|_{\overline {N}}-2\mathrm{i}\theta_{3}^{P}|_{\overline{N}}-t\mathrm{i}\theta_{-}^{P}|_{ \overline{N}}\right)\cdot s\,. \tag{3.14}\]
In order to define the S-duality action, we consider the following diffeomorphism, first defined (under slightly different conventions) in [1]:
**Definition 3.5**.: Let \(\overline{M}:=\{z\in\mathbb{C}^{n}\mid(1,z)\in M\}\) where \(M\) was given in the previous section, and consider the manifold \(\overline{\mathcal{N}}_{\mathrm{IIA}}:=\mathbb{R}_{>-c_{\ell}}\times\overline{M}\times\mathbb{R}^{2n+2}\times\mathbb{R}\) with global coordinates \((\rho,z^{a},\zeta^{i},\widetilde{\zeta}_{i},\sigma)\). We will call such coordinates type IIA coordinates. On the other hand, if \(H\subset\mathbb{C}\) denotes the upper half-plane, we define \(\overline{\mathcal{N}}_{\mathrm{IIB}}:=H\times\overline{M}\times\mathbb{R}^{2n}\times\mathbb{R}^{2}\) with global coordinates \((\tau_{1}+\mathrm{i}\tau_{2},b^{a}+\mathrm{i}t^{a},c^{a},c_{a},c_{0},\psi)\). We call the latter type IIB coordinates. The type IIB coordinates are related to the type IIA coordinates via the diffeomorphism (see Remark 3.6 below) \(\mathcal{M}:\overline{\mathcal{N}}_{\mathrm{IIB}}\to\overline{\mathcal{N}}_{\mathrm{IIA}}\) defined by
\[\begin{split} z^{a}&=b^{a}+\mathrm{i}t^{a},\ \ \ \ \rho=\frac{\tau_{2}^{2}}{16}e^{-\mathcal{K}}-c_{\ell},\ \ \ \ \ \zeta^{0}=\tau_{1},\ \ \ \ \zeta^{a}=-(c^{a}-\tau_{1}b^{a}),\\ \widetilde{\zeta}_{a}&=c_{a}+\frac{k_{abc}}{2}b^{b}( c^{c}-\tau_{1}b^{c})+\widetilde{\zeta}_{a}^{\mathrm{inst}},\ \ \ \ \ \widetilde{\zeta}_{0}=c_{0}-\frac{k_{abc}}{6}b^{a}b^{b}(c^{c}-\tau_{1}b^{c})+ \widetilde{\zeta}_{0}^{\mathrm{inst}}\\ \sigma&=-2(\psi+\frac{1}{2}\tau_{1}c_{0})+c_{a}(c^{ a}-\tau_{1}b^{a})-\frac{k_{abc}}{6}b^{a}c^{b}(c^{c}-\tau_{1}b^{c})+\sigma^{ \mathrm{inst}}\,,\end{split} \tag{3.15}\]
where \(\mathcal{K}=-\log(-2\mathrm{Im}(\tau_{ij})z^{i}\overline{z}^{j})\) with \(z^{0}=1\), \(\tau_{ij}=\frac{\partial^{2}\mathfrak{F}}{\partial Z^{i}\partial Z^{j}}\), \(c_{\ell}\in\mathbb{R}\) is the 1-loop parameter, and
\[\begin{split}\widetilde{\zeta}_{a}^{\mathrm{inst}}&:=\frac{1}{8\pi^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}q_{a}\sum_{\begin{subarray}{c}m\in\mathbb{Z}-\{0\}\\ n\in\mathbb{Z}\end{subarray}}\frac{m\tau_{1}+n}{m|m\tau+n|^{2}}e^{-S_{\hat{\gamma},m,n}}\\ \widetilde{\zeta}_{0}^{\mathrm{inst}}&:=\frac{\mathrm{i}}{16\pi^{3}}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\sum_{\begin{subarray}{c}m\in\mathbb{Z}-\{0\}\\ n\in\mathbb{Z}\end{subarray}}\left(\frac{(m\tau_{1}+n)^{2}}{|m\tau+n|^{3}}+2\pi q_{a}\left(t^{a}+\mathrm{i}b^{a}\frac{m\tau_{1}+n}{|m\tau+n|}\right)\right)\frac{e^{-S_{\hat{\gamma},m,n}}}{m|m\tau+n|}\\ \sigma^{\mathrm{inst}}&:=\tau_{1}\widetilde{\zeta}_{0}^{\mathrm{inst}}-(c^{a}-\tau_{1}b^{a})\widetilde{\zeta}_{a}^{\mathrm{inst}}-\frac{\mathrm{i}\tau_{2}^{2}}{8\pi^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}q_{a}t^{a}\sum_{n\in\mathbb{Z}-\{0\}}\frac{e^{-S_{\hat{\gamma},0,n}}}{n|n|}\\ &\qquad+\frac{\mathrm{i}}{8\pi^{3}}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\sum_{\begin{subarray}{c}m\in\mathbb{Z}-\{0\}\\ n\in\mathbb{Z}\end{subarray}}\left(2-\frac{(m\tau_{1}+n)^{2}}{|m\tau+n|^{2}}\right)\frac{(m\tau_{1}+n)e^{-S_{\hat{\gamma},m,n}}}{m^{2}|m\tau+n|^{2}}\end{split} \tag{3.16}\]
where
\[S_{\hat{\gamma},m,n}:=2\pi q_{a}(|m\tau+n|t^{a}+\mathrm{i}mc^{a}+\mathrm{i}nb^ {a})\,. \tag{3.17}\]
We will refer to the diffeomorphism \(\mathcal{M}\) as the quantum corrected mirror map.
**Remark 3.6**.:
* Note that since \(z=(z^{a})\in\overline{M}\) and \(M\subset M^{q}\), we have \(\mathrm{Re}(S_{\hat{\gamma},m,n})>0\) for all \(\hat{\gamma}\in\Lambda^{+}\) with \(n_{\hat{\gamma}}\neq 0\), so (3.7) implies that the sums in (3.16) have compact normal convergence on \(\overline{\mathcal{N}}_{\mathrm{IIB}}\). Furthermore, we remark that (3.15) really defines a diffeomorphism. Indeed, since \(-\mathrm{Im}(\tau_{ij})z^{i}\overline{z}^{j}>0\) on \(\overline{M}\) and \(\tau_{2}>0\), we can always invert the relation involving \(\rho\), while \(\widetilde{\zeta}_{i}^{\mathrm{inst}}\) and \(\sigma^{\mathrm{inst}}\) only depend on \(\tau=\tau_{1}+\mathrm{i}\tau_{2}\), \(z^{a}\) and \(c^{a}\), so it is easy to invert all the other relations. Note, in particular, that the expressions for \(\widetilde{\zeta}_{i}^{\mathrm{inst}}\) and \(\sigma^{\mathrm{inst}}\) are real. Furthermore, if we set \(c_{\ell}=0\), \(n_{\hat{\gamma}}=\chi=0\) for all \(\hat{\gamma}\in\Lambda^{+}\), then we recover the classical mirror map.
* While the expressions (3.16) look complicated, they satisfy nice transformation properties with respect to the expected isometry groups of the metric. See Section 4, Lemma 4.1 and Corollary 4.2.
**Definition 3.7**.: Let \(\overline{\mathcal{N}}_{\mathrm{IIB}}^{\mathrm{cl}}:=H\times\overline{M}^{ \mathrm{cl}}\times\mathbb{R}^{2n}\times\mathbb{R}^{2}\) with global type IIB variables \((\tau=\tau_{1}+\mathrm{i}\tau_{2},b^{a}+\mathrm{i}t^{a},c^{a},c_{a},c_{0},\psi)\). We define an \(\mathrm{SL}(2,\mathbb{Z})\)-action on \(\overline{\mathcal{N}}_{\mathrm{IIB}}^{\mathrm{cl}}\) by
\[\begin{split}&\tau\to\frac{a\tau+b}{c\tau+d},\qquad t^{a}\to|c\tau+d|t^{a},\qquad c_{a}\to c_{a},\\ &\begin{pmatrix}c^{a}\\ b^{a}\end{pmatrix}\to\begin{pmatrix}a&b\\ c&d\end{pmatrix}\begin{pmatrix}c^{a}\\ b^{a}\end{pmatrix},\qquad\begin{pmatrix}c_{0}\\ \psi\end{pmatrix}\to\begin{pmatrix}d&-c\\ -b&a\end{pmatrix}\begin{pmatrix}c_{0}\\ \psi\end{pmatrix},\qquad\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\mathrm{SL}(2,\mathbb{Z})\,.\end{split} \tag{3.18}\]
We call this action the S-duality action. We note that there is the possibility that S-duality does not act on \(\overline{\mathcal{N}}_{\mathrm{IIB}}\subset\overline{\mathcal{N}}_{\mathrm{ IIB}}^{\mathrm{cl}}\), since it may happen that \(\overline{M}\) is not invariant under the scaling of the \(t^{a}\) in (3.18).
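As a concrete instance of (3.18), the generator \(S=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\) acts by
\[\tau\to-\frac{1}{\tau},\qquad t^{a}\to|\tau|\,t^{a},\qquad(c^{a},b^{a})\to(-b^{a},c^{a}),\qquad(c_{0},\psi)\to(-\psi,c_{0}),\qquad c_{a}\to c_{a}\,;\]
this is the transformation generating the subgroup \(\langle S\rangle\subset\mathrm{SL}(2,\mathbb{Z})\) whose isometric action is discussed in Section 3.4.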
**Remark 3.8**.: Recall from Section 2.1.2 that the quaternionic Kahler manifold \(\overline{N}\) was constructed in [14] via HK/QK correspondence (by specializing [1]) as the hypersurface of the bundle \(P\to N\) which is defined by the equation \(\operatorname{Arg}Z^{0}=0\). Using the IIB variables we can now simply match \(|Z^{0}|=\tau_{2}\).
### Type IIB Darboux coordinates
We now want to write a distinguished set of Darboux coordinates which will be useful for studying S-duality or the action by specific elements of \(\mathrm{SL}(2,\mathbb{Z})\). The coordinates below have been previously written in [1]. Nevertheless, our approach will be different from [1] in the sense that we will start from the mathematical construction of the instanton QK metric obtained in [14], and then explicitly show that (3.22) indeed define Darboux coordinates for the contact structure on its twistor space.
Recalling (3.15) and (3.16), we let
\[\widetilde{\zeta}_{i}^{\mathrm{cl}}:=\widetilde{\zeta}_{i}- \widetilde{\zeta}_{i}^{\mathrm{inst}},\ \ \ \sigma^{\mathrm{cl}}:=\sigma-\sigma^{\mathrm{inst}}\,, \tag{3.19}\]
and define
\[\begin{split}\xi^{i,\mathrm{cl}}&:=\zeta^{i}+R(t^{-1}z^{i}-t\overline{z}^{i})\\ \widetilde{\xi}_{i}^{\mathrm{cl}}&:=\widetilde{\zeta}_{i}^{\mathrm{cl}}+R(t^{-1}F_{i}^{\mathrm{cl}}-t\overline{F}_{i}^{\mathrm{cl}})\\ \alpha^{\mathrm{cl}}&:=\sigma^{\mathrm{cl}}+R(t^{-1}\langle\widetilde{Z}^{\mathrm{cl}},\zeta^{\mathrm{cl}}\rangle-t\langle\overline{\widetilde{Z}}^{\mathrm{cl}},\zeta^{\mathrm{cl}}\rangle)\,,\end{split} \tag{3.20}\]
where \(\widetilde{Z}^{\mathrm{cl}}:=Z^{\mathrm{cl}}/Z^{0}\) and \(Z^{\mathrm{cl}}\) is defined replacing \(Z\) and \(\mathfrak{F}\) in equation (3.9) by \(Z^{\mathrm{cl}}\) and \(\mathfrak{F}^{\mathrm{cl}}\), and \(F_{i}^{\mathrm{cl}}:=\partial_{Z^{i}}\mathfrak{F}^{\mathrm{cl}}/Z^{0}\). The expressions (3.20) match the coordinates (2.42) for the case \(\mathfrak{F}=\mathfrak{F}^{\mathrm{cl}}\) after the scaling \(t\to-\mathrm{i}t\) done in (3.14) and setting \(c_{\ell}=0\). In particular, they define Darboux coordinates for the contact structure of the twistor space of the tree-level q-map space defined by \(\mathfrak{F}^{\mathrm{cl}}\). Finally, if \(c\in\mathbb{R}-\{0\}\) and \(d\in\mathbb{R}\), we denote by \(t_{\pm}^{c,d}\) the two roots in the variable \(t\) of the quadratic equation \(t(c\xi^{0,\mathrm{cl}}(t)+d)=0\). Using that \(\zeta^{0}=\tau_{1}\) and \(R=2\sqrt{\rho+c_{\ell}}e^{\mathcal{K}/2}=\tau_{2}/2\) (recall (3.15)), we find
\[t_{\pm}^{c,d}=\frac{c\tau_{1}+d\pm|c\tau+d|}{c\tau_{2}}\,. \tag{3.21}\]
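For later reference (these identities are used in Remark 3.10 and in the proof of Lemma 3.14 below), we note that (3.21) immediately gives
\[t_{+}^{c,d}\,t_{-}^{c,d}=\frac{(c\tau_{1}+d)^{2}-|c\tau+d|^{2}}{c^{2}\tau_{2}^{2}}=-1\,,\qquad t_{+}^{c,d}-t_{-}^{c,d}=\frac{2|c\tau+d|}{c\tau_{2}}\,.\]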
**Theorem 3.9**.: Consider an instanton corrected q-map space \((\widetilde{N},g_{\overline{N}})\) with 1-loop parameter \(c_{\ell}=\frac{\chi}{192\pi}\). Then the functions \((\xi^{i},\widetilde{\xi}_{i},\alpha)\) of the associated twistor space \(\mathcal{Z}\cong\widetilde{N}\times\mathbb{C}P^{1}\) given by
\[\begin{split}\xi^{i}&=\xi^{i,\mathrm{cl}}\\ \widetilde{\xi}_{a}&=\widetilde{\xi}_{a}^{\mathrm{cl}}+\frac{\tau_{2}}{8\pi^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}q_{a}\left(\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\frac{1+t_{+}^{m,n}t}{t-t_{+}^{m,n}}\right)\\ \widetilde{\xi}_{0}&=\widetilde{\xi}_{0}^{\mathrm{cl}}+\frac{\mathrm{i}\tau_{2}}{16\pi^{3}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\left(\frac{1}{m\xi^{0}+n}+\frac{m\tau_{1}+n}{|m\tau+n|^{2}}\right)\frac{1+t_{+}^{m,n}t}{t-t_{+}^{m,n}}\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\\ &\quad-\frac{\tau_{2}}{8\pi^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\left(q_{a}b^{a}\frac{1+t_{+}^{m,n}t}{t-t_{+}^{m,n}}+\mathrm{i}q_{a}t^{a}\frac{1-t_{+}^{m,n}t}{t-t_{+}^{m,n}}\right)\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\\ \alpha&=-\frac{1}{2}(\alpha^{\mathrm{cl}}-\xi^{i}\widetilde{\xi}_{i}^{\mathrm{cl}})+\frac{\mathrm{i}\tau_{2}^{2}}{32\pi^{3}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\left((m\tau_{1}+n)(t^{-1}-t)-2m\tau_{2}\right)\frac{1+t_{+}^{m,n}t}{t-t_{+}^{m,n}}\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{4}}\,,\end{split} \tag{3.22}\]
define Darboux coordinates for the contact structure of \(\mathcal{Z}\), in the sense that
\[\lambda=4\pi\mathrm{i}\left(\mathrm{d}\alpha-\widetilde{\xi}_{i}\mathrm{d}\xi^{i} \right)\cdot s\,. \tag{3.23}\]
In the above expressions, for the sums in the case where \(m=0\) we set
\[\frac{1+t_{+}^{0,n}t}{t-t_{+}^{0,n}}:=\begin{cases}1/t,&n<0\\ -t,&n>0\end{cases}\quad,\qquad\qquad\frac{1-t_{+}^{0,n}t}{t-t_{+}^{0,n}}:= \begin{cases}1/t,&n<0\\ t,&n>0\,.\end{cases} \tag{3.24}\]
Proof.: This will be proven in Section 3.3.3 below.
**Remark 3.10**.:
* The definitions (3.24) above for the case \(m=0\) (where \(t_{\pm}^{m,n}\) is not defined) can be motivated as follows. If \(n>0\) then it is easy to check that \[\lim_{c\to 0}\frac{1+t_{+}^{c,n}t}{t-t_{+}^{c,n}}=-t\,,\quad\lim_{c\to 0}\frac{1-t_{+}^{c,n}t}{ t-t_{+}^{c,n}}=t.\] (3.25) On the other hand, by using that for \(c\neq 0\) we have \(t_{+}^{c,d}t_{-}^{c,d}=-1\), one similarly finds for \(n<0\) \[\lim_{c\to 0}\frac{t_{-}^{c,n}}{t_{-}^{c,n}}\left(\frac{1+t_{+}^{c,n}t}{ t-t_{+}^{c,n}}\right)=\lim_{c\to 0}\frac{t_{-}^{c,n}-t}{t_{-}^{c,n}t+1}=\frac{1}{t}\,, \qquad\lim_{c\to 0}\frac{t_{-}^{c,n}}{t_{-}^{c,n}}\left(\frac{1-t_{+}^{c,n}t}{ t-t_{+}^{c,n}}\right)=\lim_{c\to 0}\frac{t_{-}^{c,n}+t}{t_{-}^{c,n}t+1}=\frac{1}{t}.\] (3.26)
* As previously mentioned in the introduction, one can relate the Darboux coordinates (3.22) to the Darboux coordinates (2.53) by Poisson resumming the latter and applying a contact transformation, as done in [1]. We will follow a similar approach by Poisson resumming the contact structure directly, and then checking that (3.22) give Darboux coordinates.
Using (3.7) it is not hard to see that the coordinates are well defined on an open dense subset of \(\mathcal{Z}\cong\widetilde{N}\times\mathbb{C}P^{1}\), namely the points
\[\{(\tau,b^{a}+\mathrm{i}t^{a},c^{a},c_{a},c_{0},\psi,t)\in\mathcal{Z}\mid t \neq 0,\infty,\ \ t\not\in\{t_{+}^{m,n}(\tau)\}_{m\in\mathbb{Z}-\{0\},n\in\mathbb{Z}}\}\,, \tag{3.27}\]
where we have used the quantum corrected mirror map (3.15) to put type IIB coordinates on \(\widetilde{N}\subset\overline{\mathcal{N}}_{\mathrm{IIA}}\). In subsequent sections, we will also frequently use the notation
\[\widetilde{\xi}_{i}^{\mathrm{inst}}:=\widetilde{\xi}_{i}-\widetilde{\xi}_{i}^{ \mathrm{cl}},\quad\alpha^{\mathrm{inst}}:=\alpha+\frac{1}{2}(\alpha^{\mathrm{ cl}}-\xi^{i}\widetilde{\xi}_{i}^{\mathrm{cl}})\,. \tag{3.28}\]
**Remark 3.11**.: When comparing (3.16) and (3.22) with the formulas appearing in [1], we remark that we are using different conventions for the definition of \(t_{\pm}^{c,d}\) and \(S_{\hat{\gamma},m,n}\). Namely, what we call \(S_{\hat{\gamma},m,n}\) in their notation is \(S_{\hat{\gamma},-m,-n}\) and \(t_{\pm}^{c,d}\) in their notation is \(t_{\mp}^{c,d}\).
As an immediate corollary of Theorem 3.9, we obtain the following
**Corollary 3.12**.: Consider an instanton corrected q-map space \((\widetilde{N},g_{\overline{N}})\) with one-loop parameter \(c_{\ell}\in\mathbb{R}\). If \((\xi^{i},\widetilde{\xi}_{i},\alpha)\) are the functions on \(\mathcal{Z}\) defined by (3.22), then if \(\alpha_{c_{\ell}}:=\alpha+4\mathrm{i}\left(c_{\ell}-\frac{\chi}{192\pi}\right) \log(t)\), the functions \((\xi^{i},\widetilde{\xi}_{i},\alpha_{c_{\ell}})\) of the associated twistor space are Darboux coordinates for the contact structure \(\lambda\) of \(\mathcal{Z}\), in the sense that
\[\lambda=4\pi\mathrm{i}\left(\mathrm{d}\alpha_{c_{\ell}}-\widetilde{\xi}_{i} \mathrm{d}\xi^{i}\right)\cdot s\,. \tag{3.29}\]
Proof.: Notice that since the one loop parameter only enters in the contact structure via \(f\) by (2.35) and \(\rho=2\pi r^{2}-c_{\ell}\), by writing \(f=(f+16\pi(c_{\ell}-\frac{\chi}{192\pi}))-16\pi(c_{\ell}-\frac{\chi}{192\pi})\) we have by Theorem 3.9 that
\[4\pi\mathrm{i}\left(\mathrm{d}\alpha-\widetilde{\xi}_{i}\mathrm{d}\xi^{i} \right)=\left(f+16\pi\left(c_{\ell}-\frac{\chi}{192\pi}\right)\right)\frac{ \mathrm{d}t}{t}+t^{-1}\mathrm{i}\theta_{+}^{P}|_{\overline{N}}-2\mathrm{i} \theta_{3}^{P}|_{\overline{N}}-t\mathrm{i}\theta_{-}^{P}|_{\overline{N}}\,. \tag{3.30}\]
It then follows immediately that
\[4\pi\mathrm{i}\left(\mathrm{d}\alpha_{c_{\ell}}-\widetilde{\xi}_{i}\mathrm{d} \xi^{i}\right)=f\frac{\mathrm{d}t}{t}+t^{-1}\mathrm{i}\theta_{+}^{P}|_{ \overline{N}}-2\mathrm{i}\theta_{3}^{P}|_{\overline{N}}-t\mathrm{i}\theta_{-}^{ P}|_{\overline{N}}\,, \tag{3.31}\]
compare (3.14).
#### 3.3.1 Preliminary lemmas
In this section we give some preliminary lemmas and expressions that will be useful to prove Theorem 3.9.
We will divide each of the 1-forms \(\theta_{+}^{P}|_{\overline{N}}\), \(\theta_{3}^{P}|_{\overline{N}}\) appearing in the contact structure (3.14) as follows:
\[\theta_{+}^{P}|_{\overline{N}}=\theta_{+}^{P,\mathrm{cl}}+\theta_{+}^{P,\mathrm{w.s.}}+\theta_{+}^{P,\mathrm{inst}},\qquad\theta_{3}^{P}|_{\overline{N}}=\theta_{3}^{P,\mathrm{cl}}+\theta_{3}^{P,\mathrm{w.s.}}+\theta_{3}^{P,\mathrm{inst}}, \tag{3.32}\]
where using that \(R=2\sqrt{\rho+c_{\ell}}e^{\mathcal{K}/2}=\tau_{2}/2\), the decompositions \(F_{i}=F_{i}^{\mathrm{cl}}+F_{i}^{\mathrm{w.s.}}\) where \(F_{i}^{\mathrm{cl}}:=\partial_{Z^{i}}\mathfrak{F}^{\mathrm{cl}}/Z^{0}\) and \(F_{i}^{\mathrm{w.s.}}:=\partial_{Z^{i}}\mathfrak{F}^{\mathrm{w.s.}}/Z^{0}\), \(\widetilde{\zeta}_{i}=\widetilde{\zeta}_{i}^{\mathrm{cl}}+\widetilde{\zeta}_{i}^{\mathrm{inst}}\) and \(\sigma=\sigma^{\mathrm{cl}}+\sigma^{\mathrm{inst}}\), and (2.35) we have
\[\begin{split}\theta_{+}^{P,\mathrm{cl}}&:=-2\pi\tau_{2}\left(F_{i}^{\mathrm{cl}}\mathrm{d}\zeta^{i}-z^{i}\mathrm{d}\widetilde{\zeta}_{i}^{\mathrm{cl}}\right)\\ \theta_{+}^{P,\mathrm{w.s.}}&:=-2\pi\tau_{2}\left(F_{i}^{\mathrm{w.s.}}\mathrm{d}\zeta^{i}-z^{i}\mathrm{d}\widetilde{\zeta}_{i}^{\mathrm{inst}}\right)\\ \theta_{+}^{P,\mathrm{inst}}&:=\mathrm{i}\tau_{2}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)\mathrm{d}\zeta_{\gamma}\\ &\quad+\frac{\tau_{2}^{2}}{2}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{\mathrm{d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}+\frac{\mathrm{d}\overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}+\frac{2}{\tau_{2}}\mathrm{d}\tau_{2}\right).\end{split} \tag{3.33}\]
For \(\theta_{3}^{P}|_{\overline{N}}\) we use the relation
\[\mathrm{d}^{c}\mathcal{K}=e^{\mathcal{K}}(-2\mathrm{Im}(\tau_{ij}))\Big{(} \mathrm{i}\overline{z}^{j}\mathrm{d}z^{i}-\mathrm{i}z^{i}\mathrm{d}\overline{ z}^{j}\Big{)}, \tag{3.34}\]
so that
\[4\pi(\rho+c_{\ell})\mathrm{d}^{c}\mathcal{K}=\pi R^{2}(-2\mathrm{Im}(\tau_{ ij}))\Big{(}\mathrm{i}\overline{z}^{j}\mathrm{d}z^{i}-\mathrm{i}z^{i}\mathrm{d} \overline{z}^{j}\Big{)}=\pi\tau_{2}^{2}\mathrm{Im}(\tau_{0a})\mathrm{d}t^{a}+ \pi\tau_{2}^{2}\mathrm{Im}(\tau_{ab})(b^{a}\mathrm{d}t^{b}-t^{a}\mathrm{d}b^{b}), \tag{3.35}\]
and hence we can split \(\theta_{3}^{P}|_{\overline{N}}\) as follows using (2.35)
\[\begin{split}\theta_{3}^{P,\mathrm{cl}}&:=\pi\mathrm{d}\sigma^{\mathrm{cl}}+\pi\left(\widetilde{\zeta}_{i}^{\mathrm{cl}}\mathrm{d}\zeta^{i}-\zeta^{i}\mathrm{d}\widetilde{\zeta}_{i}^{\mathrm{cl}}\right)-\pi\tau_{2}^{2}\mathrm{Im}(\tau_{0a}^{\mathrm{cl}})\mathrm{d}t^{a}-\pi\tau_{2}^{2}\mathrm{Im}(\tau_{ab}^{\mathrm{cl}})(b^{a}\mathrm{d}t^{b}-t^{a}\mathrm{d}b^{b})\\ \theta_{3}^{P,\mathrm{w.s.}}&:=\pi\mathrm{d}\sigma^{\mathrm{inst}}+\pi\left(\widetilde{\zeta}_{i}^{\mathrm{inst}}\mathrm{d}\zeta^{i}-\zeta^{i}\mathrm{d}\widetilde{\zeta}_{i}^{\mathrm{inst}}\right)-\pi\tau_{2}^{2}\mathrm{Im}(\tau_{0a}^{\mathrm{w.s.}})\mathrm{d}t^{a}-\pi\tau_{2}^{2}\mathrm{Im}(\tau_{ab}^{\mathrm{w.s.}})(b^{a}\mathrm{d}t^{b}-t^{a}\mathrm{d}b^{b})\\ \theta_{3}^{P,\mathrm{inst}}&:=-\frac{\mathrm{i}\tau_{2}}{4\pi}\sum_{\gamma}\Omega(\gamma)\sum_{n>0}\frac{e^{-2\pi\mathrm{i}n\zeta_{\gamma}}}{n}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{\mathrm{d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}-\frac{\mathrm{d}\overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}\right)\,,\end{split} \tag{3.36}\]
where \(\tau_{ij}^{\mathrm{cl}}=\partial_{Z^{i}}\partial_{Z^{j}}\mathfrak{F}^{\mathrm{cl}}\) and \(\tau_{ij}^{\mathrm{w.s.}}=\partial_{Z^{i}}\partial_{Z^{j}}\mathfrak{F}^{\mathrm{w.s.}}\).
Similarly, using that \(f=16\pi\rho+f^{\mathrm{inst}}\) and that (by the second relation in (3.15))
\[\rho=\frac{\tau_{2}^{2}}{16}e^{-\mathcal{K}}-c_{\ell}=\frac{\tau_{2}^{2}}{16}K -c_{\ell},\quad K=-2\mathrm{Im}(\tau_{ij})z^{i}\overline{z}^{j}\,, \tag{3.37}\]
we can decompose \(f\) as
\[f=f^{\mathrm{cl}}+f^{\mathrm{w.s.}}-16\pi c_{\ell}+f^{\mathrm{inst}}, \tag{3.38}\]
where
\[\begin{split}f^{\mathrm{cl}}&:=-2\pi\tau_{2}^{2}\mathrm{Im}(\tau_{ij}^{\mathrm{cl}})z^{i}\overline{z}^{j}\\ f^{\mathrm{w.s.}}&:=-2\pi\tau_{2}^{2}\mathrm{Im}(\tau_{ij}^{\mathrm{w.s.}})z^{i}\overline{z}^{j}=\frac{\tau_{2}^{2}}{2\pi^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}\mathrm{Re}\left(\mathrm{Li}_{3}(e^{2\pi\mathrm{i}q_{a}z^{a}})+2\pi q_{a}t^{a}\mathrm{Li}_{2}(e^{2\pi\mathrm{i}q_{a}z^{a}})\right)\\ f^{\mathrm{inst}}&:=\frac{\tau_{2}}{\pi}\sum_{\gamma}\Omega(\gamma)\sum_{n>0}\frac{e^{-2\pi\mathrm{i}n\zeta_{\gamma}}}{n}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\,.\end{split} \tag{3.39}\]
In order to prove Theorem 3.9, we start with the following lemma, which will allow us to rewrite the Bessel function expressions of the instanton terms \(f^{\mathrm{inst}}\), \(\theta_{\pm}^{P,\mathrm{inst}}\) and \(\theta_{3}^{P,\mathrm{inst}}\) in terms of expressions similar to those appearing in (3.22).
**Definition 3.13**.: Given \(\hat{\gamma}\in\Lambda^{+}\) with \(n_{\hat{\gamma}}\neq 0\), and \(\nu\in\mathbb{N}\), we define
\[\mathcal{I}_{\hat{\gamma}}^{(\nu)}:=2\sum_{q_{0}\in\mathbb{Z}}\sum_{s=\pm 1}\sum_{n=1}^{\infty}\frac{e^{-2\pi\mathrm{i}ns\zeta_{q_{0}\gamma^{0}+\hat{\gamma}}}}{(sn)^{\nu}}K_{0}(4\pi Rn|\widetilde{Z}_{q_{0}\gamma^{0}+\hat{\gamma}}|),\qquad\mathcal{I}_{0}^{(\nu)}:=2\sum_{q_{0}\in\mathbb{Z}-\{0\}}\sum_{s=\pm 1}\sum_{n=1}^{\infty}\frac{e^{-2\pi\mathrm{i}ns\zeta_{q_{0}\gamma^{0}}}}{(sn)^{\nu}}K_{0}(4\pi Rn|\widetilde{Z}_{q_{0}\gamma^{0}}|)\,. \tag{3.40}\]
Notice that in \(\mathcal{I}_{0}^{(\nu)}\) we exclude the term \(q_{0}=0\) from the sum, since \(K_{0}(x)\) diverges at \(x=0\). Furthermore, since \(\hat{\gamma}\in\Lambda^{+}\) with \(n_{\hat{\gamma}}\neq 0\) implies that \(\pm(q_{0}\gamma^{0}+\hat{\gamma})\in\mathrm{Supp}(\Omega)\) for all \(q_{0}\in\mathbb{Z}\), by the support property of variations of BPS structures one finds that \(|Z_{q_{0}\gamma^{0}+\hat{\gamma}}|\neq 0\), so that \(|\widetilde{Z}_{q_{0}\gamma^{0}+\hat{\gamma}}|\neq 0\) on \(\widetilde{N}\) (or \(\overline{N}\)). By the exponential decay of \(K_{0}(x)\) as \(x\to\infty\), it is easy to check that the convergence of the series \(\mathcal{I}_{\hat{\gamma}}^{(\nu)}\) and \(\mathcal{I}_{0}^{(\nu)}\) is compact normal on \(\widetilde{N}\). In particular, they define smooth functions on \(\widetilde{N}\) and we can interchange sums with differentiation.
**Lemma 3.14**.: The function \(\mathcal{I}_{\hat{\gamma}}^{(\nu)}\) can be expressed as
\[\mathcal{I}_{\hat{\gamma}}^{(\nu)}=\sum_{n\in\mathbb{Z}}\sum_{m\in\mathbb{Z}- \{0\}}\frac{e^{-S_{\hat{\gamma},m,n}}}{m^{\nu}|m\tau+n|}, \tag{3.41}\]
where \(S_{\hat{\gamma},m,n}\) was defined in (3.17). Furthermore, we have for \(\nu\geq 2\)
\[\partial_{\tau_{2}}\mathcal{I}_{0}^{(\nu)}=\partial_{\tau_{2}}\mathcal{I}_{ \hat{\gamma}}^{(\nu)}|_{\hat{\gamma}=0}+\frac{2}{\tau_{2}}\sum_{s=\pm 1}\sum_{n>0} \frac{1}{(sn)^{\nu}},\quad\partial_{\tau_{1}}\partial_{\tau_{2}}\mathcal{I}_{ 0}^{(\nu)}=\partial_{\tau_{1}}\partial_{\tau_{2}}\mathcal{I}_{\hat{\gamma}}^{( \nu)}|_{\hat{\gamma}=0},\quad\partial_{\tau_{1}}^{2}\mathcal{I}_{0}^{(\nu)}= \partial_{\tau_{1}}^{2}\mathcal{I}_{\hat{\gamma}}^{(\nu)}|_{\hat{\gamma}=0}\,, \tag{3.42}\]
where \(\partial_{\tau_{2}}\mathcal{I}_{\hat{\gamma}}^{(\nu)}|_{\hat{\gamma}=0}\) (and similarly for the other derivatives) means taking the derivative of (3.41) and then evaluating at \(\hat{\gamma}=0\).
Proof.: We will use a similar notation to [1, Appendix B] and follow the same idea of Poisson resummation, but in an easier case than the one presented in [1, Appendix B]. We denote \(x=q_{a}b^{a}\), \(y=q_{a}t^{a}\) and \(\Theta:=q_{a}(\zeta^{a}-b^{a}\zeta^{0})=-q_{a}c^{a}\), so that we can write
\[\mathcal{I}_{\hat{\gamma}}^{(\nu)}=\sum_{q_{0}\in\mathbb{Z}}f(x+q_{0},y,\zeta ^{i},R) \tag{3.43}\]
where
\[f(x,y,\zeta^{i},R):=2\sum_{s=\pm 1}\sum_{n=1}^{\infty}\frac{e^{2\pi\mathrm{i}ns( \Theta+x\zeta^{0})}}{(sn)^{\nu}}K_{0}(4\pi Rn|x+\mathrm{i}y|)\,. \tag{3.44}\]
We have used above that \(\zeta_{q_{i}\gamma^{i}}=-q_{i}\zeta^{i}\) (recall Definition 2.10).
Equation (3.43) makes it clear that \(\mathcal{I}_{\hat{\gamma}}^{(\nu)}\) is a function that is invariant under integer shifts in the \(x\) variable, and the idea is to now compute the Poisson resummation with respect to the \(x\)-variable. Namely,
\[\mathcal{I}_{\hat{\gamma}}^{(\nu)}=\sum_{q_{0}\in\mathbb{Z}}f(x+q_{0},y,\zeta^ {i},R)=\sum_{q_{0}\in\mathbb{Z}}\hat{f}(q_{0},y,\zeta^{i},R)e^{2\pi\mathrm{i} q_{0}x}, \tag{3.45}\]
where
\[\hat{f}(w)=\int_{-\infty}^{\infty}\mathrm{d}x\;f(x)e^{-2\pi\mathrm{i}xw} \tag{3.46}\]
denotes the Fourier transform (from now on, we omit the dependence of \(f\) on the variables \(y\), \(\zeta^{i}\) and \(R\) from the notation).
Using the integral representation (A.1) of \(K_{0}\) we have
\[K_{0}(4\pi Rn|x+\mathrm{i}y|) =\int_{0}^{\infty}\mathrm{d}t\exp(-4\pi Rn|x+\mathrm{i}y|\cosh(t) )=\frac{1}{2}\int_{-\infty}^{\infty}\mathrm{d}t\exp(-4\pi Rn|x+\mathrm{i}y| \cosh(t)) \tag{3.47}\] \[=\frac{1}{2}\int_{0}^{\infty}\frac{\mathrm{d}z}{z}\exp(-2\pi Rn|x+ \mathrm{i}y|(z^{-1}+z))\,.\]
Furthermore, since the integral exponentially decays at both ends, provided \(x\neq 0\), we can deform the integration path by changing \(z\) to \(z\to z\frac{|x+\mathrm{i}y|}{x+\mathrm{i}y}\mathrm{sign}(x)\), obtaining
\[K_{0}(4\pi Rn|x+\mathrm{i}y|)=\frac{1}{2}\int_{0}^{\infty}\frac{\mathrm{d}z}{z} \exp(-2\pi Rn\cdot\mathrm{sign}(x)((x+\mathrm{i}y)z^{-1}+(x-\mathrm{i}y)z))\,. \tag{3.48}\]
In particular, we have
\[\begin{split}\hat{f}(w)&=\int_{-\infty}^{\infty}\mathrm{d}x\sum_{s=\pm 1}\sum_{n=1}^{\infty}\frac{e^{2\pi\mathrm{i}ns\Theta}}{(sn)^{\nu}}\int_{0}^{\infty}\frac{\mathrm{d}z}{z}\exp(-2\pi nR\cdot\mathrm{sign}(x)((x+\mathrm{i}y)z^{-1}+(x-\mathrm{i}y)z))e^{2\pi\mathrm{i}x(-w+ns\zeta^{0})}\\ &=\sum_{s=\pm 1}\sum_{n=1}^{\infty}\frac{e^{2\pi\mathrm{i}ns\Theta}}{(sn)^{\nu}}\int_{0}^{\infty}\frac{\mathrm{d}z}{z}\int_{0}^{\infty}\mathrm{d}x\,e^{2\pi x(-nRz^{-1}-nRz-\mathrm{i}w+\mathrm{i}ns\zeta^{0})+2\pi y(-\mathrm{i}nRz^{-1}+\mathrm{i}nRz)}\\ &\quad+\sum_{s=\pm 1}\sum_{n=1}^{\infty}\frac{e^{2\pi\mathrm{i}ns\Theta}}{(sn)^{\nu}}\int_{0}^{\infty}\frac{\mathrm{d}z}{z}\int_{0}^{\infty}\mathrm{d}x\,e^{2\pi x(-nRz^{-1}-nRz+\mathrm{i}w-\mathrm{i}ns\zeta^{0})+2\pi y(\mathrm{i}nRz^{-1}-\mathrm{i}nRz)}\\ &=\frac{1}{2\pi}\sum_{s=\pm 1}\sum_{n=1}^{\infty}\frac{e^{2\pi\mathrm{i}ns\Theta}}{(sn)^{\nu}}\left(\int_{0}^{\infty}\frac{\mathrm{d}z}{z}\frac{e^{2\pi y(-\mathrm{i}nRz^{-1}+\mathrm{i}nRz)}}{nRz^{-1}+nRz+\mathrm{i}w-\mathrm{i}ns\zeta^{0}}+\int_{0}^{\infty}\frac{\mathrm{d}z}{z}\frac{e^{2\pi y(\mathrm{i}nRz^{-1}-\mathrm{i}nRz)}}{nRz^{-1}+nRz-\mathrm{i}w+\mathrm{i}ns\zeta^{0}}\right).\end{split} \tag{3.49}\]
We can combine the \(s=1\) and \(s=-1\) terms of the previous sum into an integral over \(\mathbb{R}\) and a sum over \(n\in\mathbb{Z}-\{0\}\) to obtain
\[\hat{f}(w)=\frac{1}{2\pi}\sum_{n\in\mathbb{Z}-\{0\}}\frac{e^{2\pi\mathrm{i}n \Theta}}{n^{\nu-1}|n|}\int_{-\infty}^{\infty}\frac{\mathrm{d}z}{z}\frac{e^{-2 \pi\mathrm{i}nRy(z^{-1}-z)}}{nR(z^{-1}+z)+\mathrm{i}w-\mathrm{i}n\zeta^{0}}\,. \tag{3.50}\]
The integrand of the \(n\)-th term of \(\hat{f}(m)\) with \(m\in\mathbb{Z}\) has poles at \(z=\mathrm{i}t_{\pm}^{n,-m}\), where \(t_{\pm}^{c,d}\) was defined before in (3.21) as the roots in \(t\) of \(t(c\xi^{0}(t)+d)=0\). Since one of our defining conditions on the manifold \(M\) is that \(y=q_{a}t^{a}>0\) whenever \(n_{\hat{\gamma}}\neq 0\) (see (3.6)), we can compute the previous integral by closing the contour in the upper half plane when \(n>0\), and in the lower half plane if \(n<0\). Independently of the sign of \(n\), only the pole at \(\mathrm{i}t_{+}^{n,-m}\) contributes to the \(n\)-th integral. In particular, we obtain
\[\hat{f}(m)=\sum_{n\in\mathbb{Z}-\{0\}}\frac{e^{2\pi\mathrm{i}n\Theta}}{n^{\nu }}\frac{e^{-2\pi nRy(1/t_{+}^{n,-m}-t_{+}^{n,-m})}}{nR(t_{+}^{n,-m}-t_{-}^{n,-m} )}=\sum_{n\in\mathbb{Z}-\{0\}}\frac{e^{2\pi\mathrm{i}n\Theta}}{n^{\nu}}\frac{e ^{-2\pi y|n\tau-m|}}{|n\tau-m|}\,, \tag{3.51}\]
where we used that \(t_{+}^{n,-m}t_{-}^{n,-m}=-1\) and the expressions (3.21) of \(t_{\pm}^{n,-m}\).
After plugging the previous result into (3.45), using \(2R=\tau_{2}\) and (3.17), and relabeling the summation indices, we then obtain
\[\mathcal{I}_{\hat{\gamma}}^{(\nu)}=\sum_{\begin{subarray}{c}m\in\mathbb{Z}-\{0\}\\ n\in\mathbb{Z}\end{subarray}}\frac{e^{2\pi\mathrm{i}m\Theta-2\pi\mathrm{i}nx}}{m^{\nu}}\frac{e^{-2\pi y|m\tau+n|}}{|m\tau+n|}=\sum_{\begin{subarray}{c}m\in\mathbb{Z}-\{0\}\\ n\in\mathbb{Z}\end{subarray}}\frac{e^{-S_{\hat{\gamma},m,n}}}{m^{\nu}|m\tau+n|}\,. \tag{3.52}\]
Finally, we use the following identities to prove (3.42):
\[\lim_{z\to 0}zK_{1}(2\pi\tau_{2}nz)=\frac{1}{2\pi n\tau_{2}},\quad\lim_{z\to 0 }z^{2}K_{1}(2\pi\tau_{2}nz)=0,\quad\lim_{z\to 0}z^{2}K_{0}(2\pi\tau_{2}nz)=0\,. \tag{3.53}\]
We show the first identity in (3.42), with the others following similarly. By applying the first limit in (3.53), the dominated convergence theorem (here we use that \(\nu\geq 2\)) and \(K_{0}^{\prime}=-K_{1}\), we obtain:
\[\partial_{\tau_{2}}\mathcal{I}_{0}^{(\nu)} =\lim_{\zeta_{\hat{\gamma}}\to 0,\widetilde{Z}_{\hat{\gamma}}\to 0}\left(\partial_{\tau_{2}}\mathcal{I}_{\hat{\gamma}}^{(\nu)}+2\sum_{s=\pm 1}\sum_{n>0}\frac{e^{-2\pi\mathrm{i}ns\zeta_{\hat{\gamma}}}}{(sn)^{\nu}}2\pi n|\widetilde{Z}_{\hat{\gamma}}|K_{1}(2\pi n\tau_{2}|\widetilde{Z}_{\hat{\gamma}}|)\right) \tag{3.54}\] \[=\left(\partial_{\tau_{2}}\mathcal{I}_{\hat{\gamma}}^{(\nu)}\right)\Big{|}_{\hat{\gamma}=0}+\frac{2}{\tau_{2}}\sum_{s=\pm 1}\sum_{n>0}\frac{1}{(sn)^{\nu}}\,.\]
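The limits (3.53) are elementary consequences of the small-argument behavior \(K_{1}(x)\sim 1/x\) and \(K_{0}(x)\sim-\log x\); a minimal numerical sketch:

```python
import mpmath as mp

# Numerical check of the limits (3.53): as z -> 0,
#   z   * K_1(2*pi*tau2*n*z) -> 1/(2*pi*n*tau2),
#   z^2 * K_1(2*pi*tau2*n*z) -> 0,
#   z^2 * K_0(2*pi*tau2*n*z) -> 0.
tau2, n = mp.mpf('1.3'), 2
for z in [mp.mpf('1e-3'), mp.mpf('1e-6'), mp.mpf('1e-9')]:
    a = 2*mp.pi*tau2*n*z
    print(z*mp.besselk(1, a), z**2*mp.besselk(1, a), z**2*mp.besselk(0, a))
print(1/(2*mp.pi*n*tau2))  # the limiting value of the first column
```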
#### 3.3.2 Poisson resummation of the quantum corrections of the contact structure
In this section, we seek to rewrite the quantum correction terms in \(f\), \(\theta_{+}^{p}\) and \(\theta_{3}^{p}\) using the Poisson resummation of Lemma 3.14. This will help us show that the coordinates (3.22) are Darboux coordinates for the contact structure describing the instanton corrected q-map metric associated to the data of Section 3.1. We start with the following proposition for \(f\):
**Proposition 3.15**.: The following holds for \(f^{\rm w.s.}\) and \(f^{\rm inst}\) defined in (3.39):
\[f^{\rm w.s.}+f^{\rm inst} =\frac{\chi}{12}+\frac{\tau_{2}^{2}}{(2\pi)^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\left(\frac{1}{|m\tau+n|}+2\pi q_{a}t^{a}\right)\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}} \tag{3.55}\]
Proof.: On one hand, we have
\[\begin{split} f^{\rm inst}&=\frac{2R}{\pi}\sum_{\gamma}\Omega(\gamma)\sum_{n>0}\frac{e^{-2\pi\mathrm{i}n\zeta_{\gamma}}}{n}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\\ &=\frac{2R}{\pi}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\sum_{s=\pm 1}\sum_{q_{0}\in\mathbb{Z}}\sum_{n>0}\frac{e^{-2\pi\mathrm{i}ns\zeta_{q_{0}\gamma^{0}+\hat{\gamma}}}}{n}|\widetilde{Z}_{q_{0}\gamma^{0}+\hat{\gamma}}|K_{1}(4\pi Rn|\widetilde{Z}_{q_{0}\gamma^{0}+\hat{\gamma}}|)\\ &\qquad-\frac{R}{\pi}\chi\sum_{s=\pm 1}\sum_{q_{0}\in\mathbb{Z}-\{0\}}\sum_{n>0}\frac{e^{-2\pi\mathrm{i}ns\zeta_{q_{0}\gamma^{0}}}}{n}|\widetilde{Z}_{q_{0}\gamma^{0}}|K_{1}(4\pi Rn|\widetilde{Z}_{q_{0}\gamma^{0}}|)\\ &=-\frac{R}{2\pi^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\partial_{\tau_{2}}\mathcal{I}_{\hat{\gamma}}^{(2)}+\frac{R}{(2\pi)^{2}}\chi\partial_{\tau_{2}}\mathcal{I}_{0}^{(2)}\,,\end{split} \tag{3.56}\]
where we have used the particular form of the BPS indices (3.12), the expressions (3.40) and the fact that \(R=\tau_{2}/2\). Using Lemma 3.14 we therefore obtain that
\[\begin{split} f^{\rm inst}&=\frac{\tau_{2}^{2}}{(2 \pi)^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\sum_{\begin{subarray} {c}m\in\mathbb{Z}-\{0\}\\ n\in\mathbb{Z}\end{subarray}}\left(\frac{1}{|m\tau+n|}+2\pi q_{a}t^{a}\right) \frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\\ &\qquad-\frac{\tau_{2}^{2}}{2(2\pi)^{2}}\chi\sum_{\begin{subarray} {c}m\in\mathbb{Z}-\{0\}\\ n\in\mathbb{Z}\end{subarray}}\frac{1}{|m\tau+n|^{3}}+\frac{\tau_{2}\chi}{2(2 \pi)^{2}}\frac{4}{\tau_{2}}\sum_{n>0}\frac{1}{n^{2}}\\ &=\frac{\chi}{12}+\frac{\tau_{2}^{2}}{(2\pi)^{2}}\sum_{\hat{ \gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}\sum_{\begin{subarray}{c}m\in \mathbb{Z}-\{0\}\\ n\in\mathbb{Z}\end{subarray}}\left(\frac{1}{|m\tau+n|}+2\pi q_{a}t^{a}\right) \frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\,,\end{split} \tag{3.57}\]
where in the last equality we have used that \(\sum_{n>0}\frac{1}{n^{2}}=\frac{\pi^{2}}{6}\) and combined the other two sums by using the convention \(n_{0}=-\frac{\chi}{2}\) from Section 3.1. On the other hand, it is not hard to check using (3.39) that one can write
\[f^{\rm w.s.}=\frac{\tau_{2}^{2}}{(2\pi)^{2}}\sum_{\hat{\gamma}\in\Lambda^{+} \cup\{0\}}n_{\hat{\gamma}}\sum_{n\in\mathbb{Z}-\{0\}}\left(\frac{1}{|n|}+2\pi q _{a}t^{a}\right)\frac{e^{-S_{\hat{\gamma},0,n}}}{n^{2}} \tag{3.58}\]
By summing (3.58) and (3.57) we therefore obtain (3.55).
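For the reader's convenience, the constant \(\frac{\chi}{12}\) in (3.57) arises from the elementary evaluation
\[\frac{\tau_{2}\chi}{2(2\pi)^{2}}\cdot\frac{4}{\tau_{2}}\sum_{n>0}\frac{1}{n^{2}}=\frac{\chi}{2\pi^{2}}\cdot\frac{\pi^{2}}{6}=\frac{\chi}{12}\,.\]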
**Remark 3.16**.: Combining Proposition 3.15 with (3.38) we see that
\[f=8\pi\tau_{2}^{2}h(t)+16\pi\left(\frac{\chi}{192\pi}-c_{\ell}\right)+\frac{\tau_{2}^{2}}{(2\pi)^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\left(\frac{1}{|m\tau+n|}+2\pi q_{a}t^{a}\right)\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\,, \tag{3.59}\]
where \(f^{\rm cl}=-2\pi\tau_{2}^{2}\,{\rm Im}(\tau_{ij}^{\rm cl})z^{i}\overline{z}^{j}=8\pi\tau_{2}^{2}h(t)\) and \(h(t)\) is given in (3.1). In particular, we see that the quantum corrected \(f\) has the same transformation rule as \(f^{\rm cl}\) under \(S\)-duality if and only if \(c_{\ell}=\frac{\chi}{192\pi}\). Namely, for this value of \(c_{\ell}\), \(f\) transforms under \(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in{\rm SL}(2,\mathbb{Z})\) as
\[f\to\frac{f}{|c\tau+d|}\,. \tag{3.60}\]
This suggests that this particular value of the 1-loop constant \(c_{\ell}\) could play a special role in studying \(S\)-duality, and indeed, we will see in Section 3.4 that for this value S-duality acts by isometries on the instanton corrected q-map space (provided S-duality acts on the domain of definition of the metric). Furthermore, in the setting of Calabi-Yau compactification of type IIB string theory, the quantity \(f\) with \(c_{\ell}=\frac{\chi}{192\pi}\) is proportional to the 4d dilaton with perturbative, world-sheet instanton, D(-1) and D1 instanton corrections, and the corresponding transformation property has been previously remarked in the physics literature (see for example [1, Equation 4.1] and the paragraphs below the equation).
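The transformation rule (3.60) can be probed numerically on the \(\chi\)-part of \(f\), which is proportional to \(g(\tau)=\tau_{2}^{2}\sum_{(m,n)\neq(0,0)}|m\tau+n|^{-3}\). The sketch below truncates the lattice sum, so the two sides agree only up to the truncation error:

```python
import mpmath as mp

# g(tau) = tau2^2 * sum_{(m,n) != (0,0)} |m*tau + n|^(-3) should satisfy
# g(-1/tau) = g(tau)/|tau| (weight as in (3.60)) and g(tau + 1) = g(tau).
def g(tau, N=60):  # N is the truncation radius of the lattice sum
    s = mp.mpf(0)
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if (m, n) != (0, 0):
                s += abs(m*tau + n)**-3
    return tau.imag**2 * s

tau = mp.mpc('0.3', '1.1')
print(g(-1/tau), g(tau)/abs(tau))  # approximately equal (truncation error)
print(g(tau + 1), g(tau))          # approximately equal
```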
We now continue with the Poisson resummation of the other terms of the contact form.
**Proposition 3.17**.: The 1-form \(-2{\rm i}(\theta_{3}^{P,{\rm w.s.}}+\theta_{3}^{P,{\rm inst}})\) can be rewritten in terms of the coordinates \((\tau_{2},b^{a},t^{a},\tau_{1}=\zeta^{0},\zeta^{a},\widetilde{\zeta}_{i},\sigma)\) as follows
\[-2{\rm i}(\theta_{3}^{P,{\rm w.s.}}+\theta_{3}^{P,{\rm inst}})\] \[= \sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}e^{-S_{\hat{\gamma},m,n}}\Bigg{[}\frac{{\rm i}\tau_{2}^{4}}{2\pi}\frac{m^{2}}{|m\tau+n|^{4}}q_{a}{\rm d}b^{a}-\frac{\tau_{2}^{2}}{2\pi}\frac{m\tau_{1}+n}{|m\tau+n|^{3}}q_{a}{\rm d}t^{a}+\frac{{\rm i}\tau_{2}^{2}}{2\pi}\frac{m(m\tau_{1}+n)}{|m\tau+n|^{4}}q_{a}{\rm d}\zeta^{a}\] \[-\left(\frac{\tau_{2}^{3}}{\pi^{2}}\frac{m^{2}(m\tau_{1}+n)}{|m\tau+n|^{6}}+\frac{\tau_{2}}{2\pi}q_{a}t^{a}\frac{(m\tau_{1}+n)^{3}+2m^{2}\tau_{2}^{2}(m\tau_{1}+n)}{|m\tau+n|^{5}}\right){\rm d}\tau_{2}\] \[+\left(-\frac{{\rm i}\tau_{2}^{2}}{2\pi}q_{a}b^{a}\frac{m(m\tau_{1}+n)}{|m\tau+n|^{4}}+\frac{\tau_{2}^{4}}{2\pi}q_{a}t^{a}\frac{m^{3}}{|m\tau+n|^{5}}+\frac{\tau_{2}^{2}}{2\pi^{2}}\frac{m(m^{2}\tau_{2}^{2}-(m\tau_{1}+n)^{2})}{|m\tau+n|^{6}}\right){\rm d}\tau_{1}\Bigg{]}\,. \tag{3.61}\]
Proof.: We follow the same idea as in the previous proposition. Namely, notice that using (3.36) and (3.40), \(\theta_{3}^{P,{\rm inst}}\) can be written as
\[\theta_{3}^{P,{\rm inst}}= -\frac{{\rm i}R}{2\pi}\sum_{\gamma}\Omega(\gamma)\sum_{n>0}\frac{e^{-2\pi{\rm i}n\zeta_{\gamma}}}{n}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{{\rm d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}-\frac{{\rm d}\overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}\right)\] \[=-\frac{{\rm i}R}{2\pi}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\sum_{s=\pm 1}\sum_{q_{0}\in\mathbb{Z}}\sum_{n>0}\frac{e^{-2\pi{\rm i}ns\zeta_{q_{0}\gamma^{0}+\hat{\gamma}}}}{n}|\widetilde{Z}_{q_{0}\gamma^{0}+\hat{\gamma}}|K_{1}(4\pi Rn|\widetilde{Z}_{q_{0}\gamma^{0}+\hat{\gamma}}|)\left(\frac{{\rm d}\widetilde{Z}_{\hat{\gamma}}}{\widetilde{Z}_{q_{0}\gamma^{0}+\hat{\gamma}}}-\frac{{\rm d}\overline{\widetilde{Z}}_{\hat{\gamma}}}{\overline{\widetilde{Z}}_{q_{0}\gamma^{0}+\hat{\gamma}}}\right)\] \[=\frac{{\rm i}}{8\pi^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\partial_{z^{a}}\mathcal{I}_{\hat{\gamma}}^{(2)}{\rm d}z^{a}-\frac{{\rm i}}{8\pi^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\partial_{\overline{z}^{a}}\mathcal{I}_{\hat{\gamma}}^{(2)}{\rm d}\overline{z}^{a}, \tag{3.62}\]
where in the second equality we have used that no \(\hat{\gamma}=0\) terms appear, due to \({\rm d}\widetilde{Z}_{q_{0}\gamma^{0}}={\rm d}q_{0}=0\). Using Lemma 3.14 we then obtain the following:
\[-2{\rm i}\theta_{3}^{P,{\rm inst}}\] \[\qquad=\frac{{\rm i}}{2\pi}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{ \hat{\gamma}}\sum_{\begin{subarray}{c}m\in\mathbb{Z}-\{0\}\\ n\in\mathbb{Z}\end{subarray}}\frac{e^{-S_{\hat{\gamma},m,n}}}{m^{2}}q_{a}{\rm d }b^{a}+\frac{1}{2\pi}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\sum_{ \begin{subarray}{c}m\in\mathbb{Z}-\{0\}\\ n\in\mathbb{Z}\end{subarray}}e^{-S_{\hat{\gamma},m,n}}\frac{m\tau_{1}+n}{m^{2}|m \tau+n|}q_{a}{\rm d}t^{a}\,. \tag{3.63}\]
Observe for the calculation that the partial derivatives \(\partial_{z^{a}}\) and \(\partial_{\overline{z}^{a}}\) are to be taken with respect to the mixed coordinates \((\tau_{2},b^{a},t^{a},\tau_{1}=\zeta^{0},\zeta^{a},\widetilde{\zeta}_{i},\sigma)\), which are neither the IIA nor the IIB coordinates. On the other hand, using directly the formulas (3.16) and (3.10) for \(\sigma^{\rm inst}\), \(\widetilde{\zeta}_{i}^{\rm inst}\) and \(\mathfrak{F}^{\rm w.s.}\) one can compute \(-2{\rm i}\theta_{3}^{P,{\rm w.s.}}\) in (3.36). The \({\rm d}\zeta^{a}\), \({\rm d}\tau_{1}\) and \({\rm d}\tau_{2}\) components of \(-2{\rm i}\theta_{3}^{P,{\rm w.s.}}\) can be seen to match
the corresponding components in the right-hand side of (3.61). On the other hand, the \(\mathrm{d}t^{a}\) and \(\mathrm{d}b^{a}\) components of \(-2\mathrm{i}\theta_{3}^{P,\mathrm{w.s.}}\) are as follows
\[-2\mathrm{i}\theta_{3}^{P,\mathrm{w.s.}}|_{\mathrm{d}b^{a}} =-\frac{\mathrm{i}}{2\pi}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}q_{a}\sum_{\begin{subarray}{c}m\in\mathbb{Z}-\{0\}\\ n\in\mathbb{Z}\end{subarray}}\left(2-\frac{(m\tau_{1}+n)^{2}}{|m\tau+n|^{2}}\right)\frac{e^{-S_{\hat{\gamma},m,n}}(m\tau_{1}+n)^{2}}{m^{2}|m\tau+n|^{2}}\] \[-2\mathrm{i}\theta_{3}^{P,\mathrm{w.s.}}|_{\mathrm{d}t^{a}} =-\frac{\tau_{2}^{2}}{2\pi}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}q_{a}\sum_{n\in\mathbb{Z}-\{0\}}\frac{e^{-S_{\hat{\gamma},0,n}}}{n|n|}\] \[\quad-\frac{1}{2\pi}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}q_{a}\sum_{\begin{subarray}{c}m\in\mathbb{Z}-\{0\}\\ n\in\mathbb{Z}\end{subarray}}\left(2-\frac{(m\tau_{1}+n)^{2}}{|m\tau+n|^{2}}\right)\frac{e^{-S_{\hat{\gamma},m,n}}(m\tau_{1}+n)}{m^{2}|m\tau+n|}\,. \tag{3.64}\]
One can then directly check that the \(\mathrm{d}b^{a}\) and \(\mathrm{d}t^{a}\) components of (3.63) and (3.64) combine into the corresponding components of (3.61). We therefore obtain the required expression.
Finally, we compute a similar expression for \(\theta_{+}^{P}\):
**Proposition 3.18**.: The 1-form \(\mathrm{i}(\theta_{+}^{P,\mathrm{w.s.}}+\theta_{+}^{P,\mathrm{inst}})\) can be rewritten in terms of the coordinates \((\tau_{2},b^{a},t^{a},\tau_{1}=\zeta^{0},\zeta^{a},\widetilde{\zeta}_{i},\sigma)\) as follows
\[\mathrm{i}(\theta_{+}^{P,\mathrm{w.s.}}+\theta_{+}^{P,\mathrm{ inst}})\] \[=\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}\sum_ {\begin{subarray}{c}(m,n)\in\mathbb{Z}^{2}-\{0\}\end{subarray}}e^{-S_{\hat{ \gamma},m.n}}\left[\frac{\mathrm{i}\tau_{2}^{3}}{4\pi}\frac{m(|m\tau+n|-(m\tau_ {1}+n))}{|m\tau+n|^{4}}q_{a}\mathrm{d}b^{a}\right.\] \[\quad-\frac{\tau_{2}^{3}}{4\pi}\frac{m}{|m\tau+n|^{3}}q_{a} \mathrm{d}t^{a}+\frac{\mathrm{i}\tau_{2}}{4\pi}\frac{(m\tau_{1}+n)(|m\tau+n|-( m\tau_{1}+n))}{|m\tau+n|^{4}}q_{a}\mathrm{d}\zeta^{a}\] \[\quad+\frac{\tau_{2}}{4\pi}\frac{(m\tau_{1}+n)((m\tau_{1}+n)-|m \tau+n|)}{|m\tau+n|^{4}}\left(q_{a}t^{a}\frac{m\tau_{1}+n}{|m\tau+n|}+\mathrm{ i}q_{a}b^{a}\right)\mathrm{d}\tau_{1} \tag{3.65}\] \[\quad-\frac{\tau_{2}}{8\pi^{2}}\frac{2(m\tau_{1}+n)(m^{2}\tau_{2} ^{2}-(m\tau_{1}+n)^{2})+|m\tau+n|(2(m\tau_{1}+n)^{2}-m^{2}\tau_{2}^{2})}{|m \tau+n|^{6}}\] \[\quad+\frac{\tau_{2}^{2}}{4\pi}\frac{m(m\tau_{1}+n)}{|m\tau+n|^{5 }}\left(q_{a}t^{a}((m\tau_{1}+n)-|m\tau+n|)+\frac{1}{2\pi}\frac{4(m\tau_{1}+n) -3|m\tau+n|}{|m\tau+n|}\right)\mathrm{d}\tau_{2}\right]\,.\]
Proof.: As before, we first notice that we can write the terms of \(\theta_{+}^{P,\mathrm{inst}}\) as follows. For the first sum in (3.33) we have
\[2R\mathrm{i}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)\mathrm{d}\zeta_{\gamma}\] \[=2R\mathrm{i}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\sum_{q_{0}\in\mathbb{Z}}\sum_{s=\pm 1}\sum_{n>0}\widetilde{Z}_{q_{0}\gamma^{0}+\hat{\gamma}}e^{-2\pi\mathrm{i}ns\zeta_{q_{0}\gamma^{0}+\hat{\gamma}}}K_{0}(4\pi Rn|\widetilde{Z}_{q_{0}\gamma^{0}+\hat{\gamma}}|)\mathrm{d}\zeta_{q_{0}\gamma^{0}+\hat{\gamma}} \tag{3.66}\] \[\quad-R\mathrm{i}\chi\sum_{q_{0}\in\mathbb{Z}}\sum_{s=\pm 1}\sum_{n>0}\widetilde{Z}_{q_{0}\gamma^{0}}e^{-2\pi\mathrm{i}ns\zeta_{q_{0}\gamma^{0}}}K_{0}(4\pi Rn|\widetilde{Z}_{q_{0}\gamma^{0}}|)\mathrm{d}\zeta_{q_{0}\gamma^{0}}\] \[=-\frac{1}{(2\pi)^{3}}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\mathrm{d}\zeta^{0}\partial_{\zeta^{0}}\partial_{\tau_{2}}\partial_{\overline{z}_{\hat{\gamma}}}\mathcal{I}_{\hat{\gamma}}^{(3)}-\frac{\mathrm{i}}{(2\pi)^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\mathrm{d}\zeta^{a}\partial_{\tau_{2}}\partial_{\overline{z}_{\hat{\gamma}}}\mathcal{I}_{\hat{\gamma}}^{(2)}-\frac{R\mathrm{i}}{2(2\pi)^{2}}\chi\mathrm{d}\zeta^{0}\partial_{\zeta^{0}}^{2}\mathcal{I}_{0}^{(2)}\,,\]
where \(\partial_{z_{\hat{\gamma}}}:=\frac{1}{\#q}\frac{1}{q_{a}}\partial_{z^{a}}\), with the sum in the index \(a\) running only over the \(a=1,...,n\) such that \(q_{a}\neq 0\), and \(\#q\) is the number of non-zero \(q_{a}\)5. By Lemma 3.14, we then find
Footnote 5: It is natural to denote \(z_{\hat{\gamma}}:=q_{a}z^{a}\) for \(\hat{\gamma}\in\Lambda^{+}\). With this notation, the differential operator \(\partial_{z_{\hat{\gamma}}}\) satisfies \(\partial_{z_{\hat{\gamma}}}z_{\hat{\gamma}}=1\).
\[2R\mathrm{i}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)\mathrm{d}\zeta_{\gamma}\]
\[=\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\sum_{\begin{subarray} {c}m\in\mathbb{Z}-\{0\}\\ n\in\mathbb{Z}\end{subarray}}e^{-S_{\hat{\gamma},m,n}}\bigg{[}-\frac{\mathrm{i} \tau_{2}}{8\pi^{2}}\left(\frac{1}{|m\tau+n|^{3}}\left(1-3\frac{(m\tau_{1}+n)^{ 2}}{|m\tau+n|^{2}}\right)-2\pi\mathrm{i}\frac{q_{a}b^{a}(m\tau_{1}+n)}{|m\tau+ n|^{3}}\right)\mathrm{d}\zeta^{0}\] \[\quad-\frac{\mathrm{i}\tau_{2}}{4\pi}\frac{q_{a}t^{a}}{|m\tau+n|^{ 4}}\left(m^{2}\tau_{2}^{2}-|m\tau+n|(m\tau_{1}+n)-2(m\tau_{1}+n)^{2}\right) \mathrm{d}\zeta^{0}\] \[\quad+\frac{\mathrm{i}\tau_{2}}{2}q_{a}t^{a}q_{b}\frac{m\tau_{1}+n +|m\tau+n|}{|m\tau+n|^{2}}\left(\mathrm{i}b^{b}+\frac{(m\tau_{1}+n)t^{b}}{|m \tau+n|}\right)\mathrm{d}\zeta^{0}\] \[\quad+\frac{\tau_{2}}{4\pi}\frac{q_{a}}{|m\tau+n|^{2}}\left(2\pi q _{b}t^{b}(m\tau_{1}+n+|m\tau+n|)+\frac{m\tau_{1}+n}{|m\tau+n|}\right)\mathrm{d }\zeta^{a}\bigg{]}\] \[\quad+\frac{\mathrm{i}\tau_{2}}{(4\pi)^{2}}\chi\sum_{ \begin{subarray}{c}m\in\mathbb{Z}-\{0\}\\ n\in\mathbb{Z}\end{subarray}}\frac{1}{|m\tau+n|^{3}}\left(1-3\frac{(m\tau_{1}+n) ^{2}}{|m\tau+n|^{2}}\right)\mathrm{d}\zeta^{0}\,. \tag{3.67}\]
Similarly, for the remaining term of \(\theta_{+}^{P,\mathrm{inst}}\) in (3.33) we have that
\[2R^{2}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{\mathrm{d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}+\frac{\mathrm{d}\overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}+\frac{2}{\tau_{2}}\mathrm{d}\tau_{2}\right) \tag{3.68}\] \[=2R^{2}\sum_{\gamma}\Omega(\gamma)\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\mathrm{d}\widetilde{Z}_{\gamma}+2R^{2}\sum_{\gamma}\Omega(\gamma)\frac{\widetilde{Z}_{\gamma}^{2}}{|\widetilde{Z}_{\gamma}|}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\mathrm{d}\overline{\widetilde{Z}}_{\gamma}\] \[\quad+\tau_{2}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\mathrm{d}\tau_{2}\] \[=\frac{\mathrm{i}\tau_{2}^{2}}{16\pi^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\partial_{\zeta^{a}}\partial_{\tau_{2}}\mathcal{I}_{\hat{\gamma}}^{(2)}\mathrm{d}z^{a}-\frac{1}{(2\pi)^{3}}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\partial_{\overline{z}^{a}}\partial_{\tau_{2}}\partial_{\overline{z}_{\hat{\gamma}}}\mathcal{I}_{\hat{\gamma}}^{(3)}\mathrm{d}\overline{z}^{a}\] \[\quad-\frac{1}{(2\pi)^{3}}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\left(\partial_{\tau_{2}}\partial_{\overline{z}_{\hat{\gamma}}}\partial_{\tau_{2}}\mathcal{I}_{\hat{\gamma}}^{(3)}-\frac{1}{\tau_{2}}\partial_{\overline{z}_{\hat{\gamma}}}\partial_{\tau_{2}}\mathcal{I}_{\hat{\gamma}}^{(3)}\right)\mathrm{d}\tau_{2}-\frac{\mathrm{i}\tau_{2}\chi}{(4\pi)^{2}}\partial_{\zeta^{0}}\partial_{\tau_{2}}\mathcal{I}_{0}^{(2)}\mathrm{d}\tau_{2}\,,\]
so using again Lemma 3.14 one finds that
\[2R^{2}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{\mathrm{d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}+\frac{\mathrm{d}\overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}+\frac{2}{\tau_{2}}\mathrm{d}\tau_{2}\right)\] \[=\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\sum_{\begin{subarray}{c}m\in\mathbb{Z}-\{0\}\\ n\in\mathbb{Z}\end{subarray}}e^{-S_{\hat{\gamma},m,n}}\Bigg{[}\frac{\tau_{2}^{3}}{8\pi}q_{a}\frac{m}{|m\tau+n|^{2}}\left(\frac{1}{|m\tau+n|}+2\pi q_{b}t^{b}\right)\mathrm{d}z^{a}\] \[\quad+\frac{\tau_{2}}{8\pi}q_{a}\frac{m\tau_{1}+n+|m\tau+n|^{2}}{m|m\tau+n|^{2}}\left(\frac{|m\tau+n|-(m\tau_{1}+n)}{|m\tau+n|}-2\pi q_{b}t^{b}(m\tau_{1}+n+|m\tau+n|)\right)\mathrm{d}\overline{z}^{a}\] \[\quad+\frac{\mathrm{i}\tau_{2}^{2}}{4\pi}\left(\frac{2\pi q_{a}t^{a}}{|m\tau+n|^{2}}+\frac{1}{|m\tau+n|^{3}}\right)(m\tau_{1}+n+|m\tau+n|)\frac{mq_{b}t^{b}}{|m\tau+n|}\mathrm{d}\tau_{2}\] \[\quad+\frac{\mathrm{i}\tau_{2}^{2}}{8\pi^{2}}m\left(\frac{4\pi q_{a}t^{a}}{|m\tau+n|^{4}}+\frac{3}{|m\tau+n|^{5}}\right)(m\tau_{1}+n+|m\tau+n|)\mathrm{d}\tau_{2}-\frac{\mathrm{i}\tau_{2}^{2}}{8\pi^{2}}m\left(\frac{4\pi q_{b}t^{b}}{|m\tau+n|^{3}}+\frac{3}{|m\tau+n|^{4}}\right)\mathrm{d}\tau_{2}\Bigg{]}\] \[\quad-\frac{3\mathrm{i}\tau_{2}^{2}\chi}{(4\pi)^{2}}\sum_{\begin{subarray}{c}m\in\mathbb{Z}-\{0\}\\ n\in\mathbb{Z}\end{subarray}}\frac{m(m\tau_{1}+n)}{|m\tau+n|^{5}}\mathrm{d}\tau_{2}\,. \tag{3.69}\]
On the other hand, using the definitions of \(\sigma^{\mathrm{inst}}\), \(\widetilde{\zeta}_{i}^{\mathrm{inst}}\) and \(\mathfrak{F}^{\mathrm{w.s.}}\) in (3.16) and (3.10), one can compute \(\theta_{+}^{P,\mathrm{w.s.}}\). Summing the contributions of \(\theta_{+}^{P,\mathrm{w.s.}}\) to (3.67) and (3.69), and multiplying the whole result by \(\mathrm{i}\), one then obtains (3.65).
#### 3.3.3 Proof of Theorem 3.9
Now we have all the preliminary results needed to prove Theorem 3.9:
Proof.: First, we remark that the functions \((\xi^{i},\widetilde{\xi}^{\rm cl}_{i},\alpha^{\rm cl})\) satisfy
\[-2\pi{\rm i}({\rm d}\alpha^{\rm cl}+\widetilde{\xi}^{\rm cl}_{i}{\rm d}\xi^{i}- \xi^{i}{\rm d}\widetilde{\xi}^{\rm cl}_{i})=f^{\rm cl}\frac{{\rm d}t}{t}+t^{-1} {\rm i}\theta^{P,\rm cl}_{+}-2{\rm i}\theta^{P,\rm cl}_{3}-t{\rm i}\theta^{P, \rm cl}_{-}\,. \tag{3.70}\]
The proof of (3.70) is almost the same as that of Proposition 2.18, except that we replace \(t\to-{\rm i}t\) in the proof (recall that we performed this rescaling at the beginning of Section 3.2) and drop the \(\log(t)\) term of \(\alpha\) in (2.42), keeping in mind that \(f^{\rm cl}\) differs from the \(f\) of Proposition 2.18 by \(16\pi c_{\ell}\) (compare (3.38), where \(f^{\rm w.s.}\) and \(f^{\rm inst}\) need to be set to zero to recover the \(f\) of Proposition 2.18). We then obtain immediately the following identity
\[4\pi{\rm i}\left(-\frac{1}{2}{\rm d}(\alpha^{\rm cl}-\xi^{i}\widetilde{\xi}^{ \rm cl}_{i})-\widetilde{\xi}^{\rm cl}_{i}{\rm d}\xi^{i}\right)=f^{\rm cl} \frac{{\rm d}t}{t}+t^{-1}{\rm i}\theta^{P,\rm cl}_{+}-2{\rm i}\theta^{P,\rm cl }_{3}-t{\rm i}\theta^{P,\rm cl}_{-}\,. \tag{3.71}\]
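Indeed, (3.71) follows from (3.70) by the elementary rearrangement
\[-\tfrac{1}{2}{\rm d}(\alpha^{\rm cl}-\xi^{i}\widetilde{\xi}^{\rm cl}_{i})-\widetilde{\xi}^{\rm cl}_{i}{\rm d}\xi^{i}=-\tfrac{1}{2}\left({\rm d}\alpha^{\rm cl}+\widetilde{\xi}^{\rm cl}_{i}{\rm d}\xi^{i}-\xi^{i}{\rm d}\widetilde{\xi}^{\rm cl}_{i}\right),\]
so that multiplying by \(4\pi{\rm i}\) reproduces the left-hand side of (3.70).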
In particular, to prove Theorem 3.9 it is enough to show using the decompositions at the beginning of Section 3.3.1 that
\[4\pi{\rm i}\left({\rm d}\alpha^{\rm inst}-\widetilde{\xi}^{\rm inst}_{i}{\rm d}\xi^{i}\right)=(-16\pi c_{\ell}+f^{\rm w.s.}+f^{\rm inst})\frac{{\rm d}t}{t}+t^{-1}{\rm i}(\theta^{P,\rm w.s.}_{+}+\theta^{P,\rm inst}_{+})-2{\rm i}(\theta^{P,\rm w.s.}_{3}+\theta^{P,\rm inst}_{3})-t{\rm i}\overline{(\theta^{P,\rm w.s.}_{+}+\theta^{P,\rm inst}_{+})}, \tag{3.72}\]
where we have defined
\[\alpha^{\rm inst}:=\alpha+\frac{1}{2}(\alpha^{\rm cl}-\xi^{i}\widetilde{\xi} ^{\rm cl}_{i}),\quad\widetilde{\xi}^{\rm inst}_{i}:=\widetilde{\xi}_{i}- \widetilde{\xi}^{\rm cl}_{i}\,. \tag{3.73}\]
In order to do this, one can use the explicit formulas (3.22) to compute the left-hand side of (3.72). In terms of the coordinates \((\tau_{2},b^{a},t^{a},\zeta^{i},\widetilde{\zeta}_{i},\sigma)\), the left-hand side has only \({\rm d}t\), \({\rm d}t^{a}\), \({\rm d}b^{a}\), \({\rm d}\zeta^{i}\) and \({\rm d}\tau_{2}\) components, since \(\alpha^{\rm inst}\) and \(\xi^{i}\) do not depend on \(\widetilde{\zeta}_{i}\) and \(\sigma\). Recall that \(R=\tau_{2}/2\). For the \({\rm d}t\) component, one obtains
\[4\pi{\rm i}\left({\rm d}\alpha^{\rm inst}-\widetilde{\xi}^{\rm inst}_{i}{\rm d}\xi^{i}\right)\Big{|}_{{\rm d}t} \tag{3.74}\] \[= \frac{\tau_{2}^{2}}{2(2\pi)^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}(m\tau_{1}+n)(t^{-2}+1)\frac{1+t^{m,n}_{+}t}{t-t^{m,n}_{+}}\,\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{4}}\] \[+\frac{\tau_{2}^{2}}{2(2\pi)^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\left((m\tau_{1}+n)(t^{-1}-t)-2m\tau_{2}\right)\frac{1+(t^{m,n}_{+})^{2}}{(t-t^{m,n}_{+})^{2}}\,\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{4}}\] \[-\frac{\tau_{2}^{2}}{2(2\pi)^{2}}(t^{-2}+1)\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\left(\frac{1}{m\xi^{0}+n}+\frac{m\tau_{1}+n}{|m\tau+n|^{2}}\right)\frac{1+t^{m,n}_{+}t}{t-t^{m,n}_{+}}\,\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\] \[+\frac{\tau_{2}^{2}}{2\pi}t^{-1}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}q_{a}t^{a}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\] \[= \frac{\tau_{2}^{2}}{2(2\pi)^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\left[\left(\frac{(m\tau_{1}+n)(t^{-1}-t)-2m\tau_{2}}{|m\tau+n|^{2}}\right)\frac{1+(t^{m,n}_{+})^{2}}{(t-t^{m,n}_{+})^{2}}-\frac{(t^{-2}+1)}{m\xi^{0}+n}\frac{1+t^{m,n}_{+}t}{t-t^{m,n}_{+}}\right]\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\] \[+\frac{\tau_{2}^{2}}{2\pi}t^{-1}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}q_{a}t^{a}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\,,\]
where in the first equality, for the case \(m=0\), we are using equation (3.24) together with
\[\frac{1+(t^{0,n}_{+})^{2}}{(t-t^{0,n}_{+})^{2}}:=\begin{cases}-\frac{{\rm d}}{{\rm d}t}\left(\frac{1+t^{0,n}_{+}t}{t-t^{0,n}_{+}}\right)=1/t^{2},&n<0\\ -\frac{{\rm d}}{{\rm d}t}\left(\frac{1+t^{0,n}_{+}t}{t-t^{0,n}_{+}}\right)=1,&n>0\end{cases}\,, \tag{3.75}\]
while in the last equality we left the last sum unchanged and grouped the rest of the terms (notice that two sums cancel each other). For the term in square brackets in (3.74), we use for \(m\neq 0\) that \(t(m\xi^{0}(t)+n)=-mR(t-t_{+}^{m,n})(t-t_{-}^{m,n})\) and \(t_{+}^{m,n}t_{-}^{m,n}=-1\), together with the fact that \(t_{+}^{m,n}+t_{-}^{m,n}=2(m\tau_{1}+n)/m\tau_{2}\) and \(t_{+}^{m,n}-t_{-}^{m,n}=2|m\tau+n|/m\tau_{2}\), see (3.21). We then obtain the following for the case \(m\neq 0\):
\[\left(\frac{(m\tau_{1}+n)(t^{-1}-t)-2m\tau_{2}}{|m\tau+n|^{2}}\right)\frac{1+(t_{+}^{m,n})^{2}}{(t-t_{+}^{m,n})^{2}}-\frac{(t^{-2}+1)}{m\xi^{0}+n}\frac{1+t_{+}^{m,n}t}{t-t_{+}^{m,n}}\] \[=\frac{2t_{+}^{m,n}((t_{+}^{m,n}+t_{-}^{m,n})(t^{-1}-t)-4)}{m\tau_{2}(t_{+}^{m,n}-t_{-}^{m,n})(t-t_{+}^{m,n})^{2}}+\frac{2t_{+}^{m,n}(t^{-1}+t)}{m\tau_{2}(t-t_{+}^{m,n})^{2}}\] \[=\frac{4}{m\tau_{2}t(t_{+}^{m,n}-t_{-}^{m,n})}=\frac{2}{t|m\tau+n|}\,. \tag{3.76}\]
In the case \(m=0\) we use (3.24) and (3.75), so that
\[\left[\left(\frac{(m\tau_{1}+n)(t^{-1}-t)-2m\tau_{2}}{|m\tau+n|^{2}}\right) \frac{1+(t_{+}^{m,n})^{2}}{(t-t_{+}^{m,n})^{2}}-\frac{(t^{-2}+1)}{m\xi^{0}+n} \frac{1+t_{+}^{m,n}t}{t-t_{+}^{m,n}}\right]\Big{|}_{m=0}=\frac{2}{t|n|}\,. \tag{3.77}\]
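The identity (3.76) can also be confirmed numerically. The sketch below assumes the explicit parametrizations \(\xi^{0}(t)=\tau_{1}+R(t^{-1}-t)\) and \(t_{+}^{m,n}=((m\tau_{1}+n)+|m\tau+n|)/(m\tau_{2})\), which are consistent with the relations quoted from (3.21) above:

```python
import mpmath as mp

# Numerical sketch of the identity (3.76), assuming (consistently with the
# relations quoted from (3.21)) the explicit parametrizations
#   xi0(t)  = tau1 + R*(1/t - t),  R = tau2/2,
#   t_plus  = ((m*tau1 + n) + |m*tau + n|)/(m*tau2).
tau1, tau2 = mp.mpf('0.4'), mp.mpf('1.7')
m, n, t = 3, -2, mp.mpf('0.9')
tau = mp.mpc(tau1, tau2)
R = tau2/2
A = abs(m*tau + n)
tp = ((m*tau1 + n) + A)/(m*tau2)
xi0 = tau1 + R*(1/t - t)
bracket = (((m*tau1 + n)*(1/t - t) - 2*m*tau2)/A**2*(1 + tp**2)/(t - tp)**2
           - (1/t**2 + 1)/(m*xi0 + n)*(1 + tp*t)/(t - tp))
print(bracket, 2/(t*A))  # the two values agree
```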
Joining everything together we find
\[4\pi\mathrm{i}\left(\mathrm{d}\alpha^{\mathrm{inst}}-\widetilde{\xi}_{i}^{ \mathrm{inst}}\mathrm{d}\xi^{i}\right)\Big{|}_{\mathrm{dt}}=t^{-1}\frac{\tau_{ 2}^{2}}{(2\pi)^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}} \sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\left(\frac{1}{|m\tau+n|}+2\pi q_{a}t^{a} \right)\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\,. \tag{3.78}\]
In particular, we find using Proposition 3.15 that
\[4\pi\mathrm{i}\left(\mathrm{d}\alpha^{\mathrm{inst}}-\widetilde{\xi}_{i}^{ \mathrm{inst}}\mathrm{d}\xi^{i}\right)\Big{|}_{\mathrm{dt}}=t^{-1}\left(f^{ \mathrm{w.s.}}+f^{\mathrm{inst}}-\frac{\chi}{12}\right)\;, \tag{3.79}\]
matching the required term on the right-hand side of (3.72), provided that \(c_{\ell}=\frac{\chi}{192\pi}\). On the other hand, for the other components \(\mathrm{d}t^{a}\), \(\mathrm{d}b^{a}\), \(\mathrm{d}\zeta^{i}\) and \(\mathrm{d}\tau_{2}\) of the left-hand side of (3.72), one needs to show that they decompose into three summands with factors \(t^{-1}\), \(t^{0}\) and \(t\), which should match the corresponding component of \(\mathrm{i}(\theta_{+}^{P,\mathrm{w.s.}}+\theta_{+}^{P,\mathrm{inst}})\), \(-2\mathrm{i}(\theta_{3}^{P,\mathrm{w.s.}}+\theta_{3}^{P,\mathrm{inst}})\), and \(-\mathrm{i}\overline{(\theta_{+}^{P,\mathrm{w.s.}}+\theta_{+}^{P,\mathrm{inst}})}\) in (3.72), respectively. In particular, the summand with the \(t\) factor should correspond to the conjugate of the \(t^{-1}\) summand, for each component. For example, the computation for the \(\mathrm{d}t^{a}\) component gives
\[4\pi\mathrm{i}\left(\mathrm{d}\alpha^{\mathrm{inst}}-\widetilde{\xi}_{i}^{\mathrm{inst}}\mathrm{d}\xi^{i}\right)\Big{|}_{\mathrm{d}t^{a}}\] \[=\frac{\tau_{2}^{2}}{4\pi}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}q_{a}\sum_{(m,n)\neq(0,0)}\left(\frac{m\tau_{1}+n}{|m\tau+n|}(t^{-1}-t)-\frac{2m\tau_{2}}{|m\tau+n|}+(t^{-1}+t)\right)\frac{1+t_{+}^{m,n}t}{t-t_{+}^{m,n}}\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\] \[=\frac{\tau_{2}^{2}}{4\pi}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}q_{a}\sum_{(m,n)\neq(0,0)}\left(-t^{-1}\frac{m\tau_{2}}{|m\tau+n|}-2\frac{m\tau_{1}+n}{|m\tau+n|}+t\frac{m\tau_{2}}{|m\tau+n|}\right)\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\,, \tag{3.80}\]
where for the last equality we used the identities for \(t_{\pm}^{m,n}\) employed in the computation of the \(\mathrm{d}t\) component. Comparing with Propositions 3.17 and 3.18, one readily sees that the \(t^{-1}\) and \(t^{0}\) summands match the corresponding \(\mathrm{d}t^{a}\) components. Furthermore, the summand with the \(t\)-factor is seen to correspond to the conjugate of the summand with the \(t^{-1}\) factor, since \(\overline{S_{\hat{\gamma},m,n}}=S_{\hat{\gamma},-m,-n}\), so the summand with the \(t\)-factor also gives the required contribution.
By a similar (albeit tedious) computation, one can check using Propositions 3.17 and 3.18 that the remaining components on the left-hand side and right-hand side of (3.72) match, showing that (3.22) indeed defines Darboux coordinates for the contact structure associated to instanton corrected q-map spaces.
### S-duality
Recall that we have the S-duality action on \(\overline{\mathcal{N}}_{\mathrm{IIB}}^{\mathrm{cl}}\), defined by (3.18). We start by defining a lift of the S-duality action from \(\overline{\mathcal{N}}_{\mathrm{IIB}}^{\mathrm{cl}}\) to \(\overline{\mathcal{N}}_{\mathrm{IIB}}^{\mathrm{cl}}\times\mathbb{C}P^{1}\), following [1, 2].
**Definition 3.19**.: Given an element
\[A=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{SL}(2,\mathbb{Z}) \tag{3.81}\]
we lift the action of \(A\) from \(\overline{\mathcal{N}}_{\operatorname{IIB}}^{\operatorname{cl}}\) to \(\overline{\mathcal{N}}_{\operatorname{IIB}}^{\operatorname{cl}}\times\mathbb{ C}P^{1}\) by defining the action on the fiber coordinate \(t\in\mathbb{C}P^{1}\) over \((\tau_{1}+\mathrm{i}\tau_{2},b^{a}+\mathrm{i}t^{a},c_{a},c_{0},\psi)\in \overline{\mathcal{N}}_{\operatorname{IIB}}^{\operatorname{cl}}\) by:
\[\begin{pmatrix}a&b\\ c&d\end{pmatrix}\cdot t:=\begin{cases}t&\text{if $c=0$ and $a>0$}\\ -1/t&\text{if $c=0$ and $a<0$}\\ \frac{1+t_{+}^{c,d}t}{t_{+}^{c,d}-t}&\text{if $c\neq 0$}\end{cases}\,. \tag{3.82}\]
**Remark 3.20**.: Since \(\operatorname{SL}(2,\mathbb{Z})\) is generated by
\[T=\begin{pmatrix}1&1\\ 0&1\end{pmatrix},\quad S=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix} \tag{3.83}\]
it is not hard to check that (3.82) defines a lift of the S-duality action. We note that the transformation on \(t\) depends on the base point in the case where \(c\neq 0\), since \(t_{+}^{c,d}\) depends on \(\tau\). We further remark that by using the fact that \(t_{+}^{c,d}t_{-}^{c,d}=-1\) the \(t\)-variable transformation when \(c\neq 0\) can be rewritten as
\[\begin{pmatrix}a&b\\ c&d\end{pmatrix}\cdot t=\frac{1+t_{+}^{c,d}t}{t_{+}^{c,d}-t}=-\frac{t_{-}^{c,d}- t}{1+t_{-}^{c,d}t}\,. \tag{3.84}\]
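A minimal numerical sketch of Definition 3.19 is given below; it assumes the parametrization \(t_{+}^{c,d}=((c\tau_{1}+d)+|c\tau+d|)/(c\tau_{2})\) (an assumption, but consistent with \(t_{+}^{c,d}t_{-}^{c,d}=-1\) and the relations quoted from (3.21)). It checks that applying \(S\) twice reproduces the action of \(S^{2}=-\mathrm{Id}\) prescribed by (3.82), and the compatibility with the antipodal map \(t\mapsto-1/\overline{t}\) used in (3.108) below:

```python
import mpmath as mp

# Sketch of the lift (3.82), assuming the parametrization
#   t_plus^{c,d}(tau) = ((c*tau1 + d) + |c*tau + d|)/(c*tau2),
# which satisfies t_plus * t_minus = -1 as in Remark 3.20.
def t_plus(c, d, tau):
    return ((c*tau.real + d) + abs(c*tau + d))/(c*tau.imag)

def act(A, tau, t):
    a, b, c, d = A
    if c == 0:
        return t if a > 0 else -1/t
    tp = t_plus(c, d, tau)
    return (1 + tp*t)/(tp - t)

S = (0, -1, 1, 0)
tau = mp.mpc('0.3', '1.2')
t = mp.mpc('0.4', '0.25')
# S applied twice (over the transformed base point -1/tau) acts as
# t -> -1/t, i.e. as S^2 = -Id does according to (3.82):
print(act(S, -1/tau, act(S, tau, t)), -1/t)
# compatibility with the antipodal map, cf. (3.108):
print(act(S, tau, -1/t.conjugate()), -1/act(S, tau, t).conjugate())
```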
Now let \((\widetilde{N},g_{\overline{N}})\) be an instanton corrected q-map space constructed in Section 3.1. By using the mirror map (3.15), we can identify \(\widetilde{N}\subset\overline{\mathcal{N}}_{\operatorname{IIA}}\) with the open subset \(\mathcal{M}^{-1}(\widetilde{N})\subset\overline{\mathcal{N}}_{\operatorname{IIB}}\subset\overline{\mathcal{N}}_{\operatorname{IIB}}^{\operatorname{cl}}\). It then follows that its twistor space satisfies \(\mathcal{Z}\cong\widetilde{N}\times\mathbb{C}P^{1}\subset\overline{\mathcal{N}}_{\operatorname{IIB}}^{\operatorname{cl}}\times\mathbb{C}P^{1}\). If \(A\in\operatorname{SL}(2,\mathbb{Z})\) leaves \(\widetilde{N}\subset\overline{\mathcal{N}}_{\operatorname{IIB}}^{\operatorname{cl}}\) invariant, then we get an induced action of \(A\) on \(\mathcal{Z}\). Assuming that \(A\) acts on \(\mathcal{Z}\), we now show the key transformation property of the Darboux coordinates, following again [1, 2]:
**Proposition 3.21**.: Let \((\widetilde{N},g_{\overline{N}})\) be an instanton corrected q-map space with 1-loop parameter \(c_{\ell}=\frac{\chi}{192\pi}\) such that \(\widetilde{N}\) is invariant under
\[A=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{SL}(2,\mathbb{Z})\,. \tag{3.85}\]
Furthermore consider the Darboux coordinates \((\xi^{i},\widetilde{\xi}_{i},\widetilde{\alpha})\) of \(\mathcal{Z}\), where \(\widetilde{\alpha}:=\alpha-\xi^{i}\widetilde{\xi}_{i}\) and \((\xi^{i},\widetilde{\xi}_{i},\alpha)\) are the Darboux coordinates given by (3.22). Then \((\xi^{i},\widetilde{\xi}_{i},\widetilde{\alpha})\) satisfy the following transformation under the lift of the action of \(A\) given in (3.82):
\[\begin{split}\xi^{0}&\to\frac{a\xi^{0}+b}{c\xi^{0}+d}, \quad\xi^{a}\to\frac{\xi^{a}}{c\xi^{0}+d},\quad\widetilde{\xi}_{a}\to \widetilde{\xi}_{a}+\frac{c}{2(c\xi^{0}+d)}\kappa_{abc}\xi^{b}\xi^{c}\\ &\begin{pmatrix}\widetilde{\xi}_{0}\\ \widetilde{\alpha}\end{pmatrix}\to\begin{pmatrix}d&-c\\ -b&a\end{pmatrix}\begin{pmatrix}\widetilde{\xi}_{0}\\ \widetilde{\alpha}\end{pmatrix}+\frac{1}{6}\kappa_{abc}\xi^{a}\xi^{b}\xi^{c} \begin{pmatrix}c^{2}/(c\xi^{0}+d)\\ -[c^{2}(a\xi^{0}+b)+2c]/(c\xi^{0}+d)^{2}\end{pmatrix}.\end{split} \tag{3.86}\]
Proof.: To show the required transformation rule we follow the same argument as [1], which we include here with more detail. It is straightforward to check that the classical coordinates \((\xi^{i},\widetilde{\xi}_{i}^{\operatorname{cl}},\widetilde{\alpha}^{\operatorname{cl}})\), where \(\widetilde{\alpha}^{\operatorname{cl}}:=-\frac{1}{2}(\alpha^{\operatorname{cl}}-\xi^{i}\widetilde{\xi}_{i}^{\operatorname{cl}})-\xi^{i}\widetilde{\xi}_{i}^{\operatorname{cl}}=-\frac{1}{2}(\alpha^{\operatorname{cl}}+\xi^{i}\widetilde{\xi}_{i}^{\operatorname{cl}})\), satisfy (3.86) [1]. Hence, it is enough to show that \(\widetilde{\xi}_{i}^{\operatorname{inst}}=\widetilde{\xi}_{i}-\widetilde{\xi}_{i}^{\operatorname{cl}}\) and \(\widetilde{\alpha}^{\operatorname{inst}}=\widetilde{\alpha}-\widetilde{\alpha}^{\operatorname{cl}}\) transform under the action of \(A\in\operatorname{SL}(2,\mathbb{Z})\) by
\[\widetilde{\xi}_{a}^{\operatorname{inst}}\to\widetilde{\xi}_{a}^{\operatorname{ inst}},\quad\begin{pmatrix}\widetilde{\xi}_{0}^{\operatorname{inst}}\\ \widetilde{\alpha}^{\operatorname{inst}}\end{pmatrix}\to\begin{pmatrix}d&-c\\ -b&a\end{pmatrix}\begin{pmatrix}\widetilde{\xi}_{0}^{\operatorname{inst}}\\ \widetilde{\alpha}^{\operatorname{inst}}\end{pmatrix}. \tag{3.87}\]
Using (3.73) and (3.22), we find that \(\widetilde{\alpha}^{\operatorname{inst}}=\alpha^{\operatorname{inst}}-\xi^{i}\widetilde{\xi}_{i}^{\operatorname{inst}}\) and
\[\begin{split}\widetilde{\alpha}^{\operatorname{inst}}&=-\frac{\mathrm{i}\tau_{2}}{2(2\pi)^{3}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}^{(0)}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\left(\frac{\xi^{0}}{m\xi^{0}+n}+\frac{m|\tau|^{2}+n\tau_{1}}{|m\tau+n|^{2}}\right)\frac{1+t_{+}^{m,n}t}{t-t_{+}^{m,n}}\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\\ &+\frac{\tau_{2}}{2(2\pi)^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}^{(0)}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\left(q_{a}c^{a}\frac{1+t_{+}^{m,n}t}{t-t_{+}^{m,n}}+\mathrm{i}q_{a}t^{a}\left(\tau_{1}\frac{1-t_{+}^{m,n}t}{t-t_{+}^{m,n}}-\tau_{2}\frac{t+t_{+}^{m,n}}{t-t_{+}^{m,n}}\right)\right)\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\,.\end{split} \tag{3.88}\]
where, in addition to (3.24), we have defined
\[\frac{t+t_{+}^{0,n}}{t-t_{+}^{0,n}}:=\begin{cases}-1&n>0\\ 1&n<0\end{cases}. \tag{3.89}\]
We start by showing that \(\widetilde{\xi}_{a}^{\rm inst}\) is invariant under the S-duality \(\mathrm{SL}(2,\mathbb{Z})\) action, where
\[\widetilde{\xi}_{a}^{\rm inst}=\frac{\tau_{2}}{8\pi^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}q_{a}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\frac{1+t_{+}^{m,n}t}{t-t_{+}^{m,n}}\,. \tag{3.90}\]
Denoting
\[\begin{pmatrix}m^{\prime}\\ n^{\prime}\end{pmatrix}=\begin{pmatrix}a&b\\ c&d\end{pmatrix}^{\intercal}\begin{pmatrix}m\\ n\end{pmatrix}, \tag{3.91}\]
where \({}^{\intercal}\) denotes the transpose, we find the following transformation rules under \(A\in\mathrm{SL}(2,\mathbb{Z})\):
\[\tau_{2}\to\frac{\tau_{2}}{|c\tau+d|^{2}},\quad|m\tau+n|\to\frac{|m^{\prime} \tau+n^{\prime}|}{|c\tau+d|},\quad S_{\hat{\gamma},m,n}\to S_{\hat{\gamma},m^ {\prime},n^{\prime}}. \tag{3.92}\]
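These rules are straightforward to verify; a short numerical sketch:

```python
import mpmath as mp

# Check of the transformation rules (3.92) for A in SL(2,Z):
a, b, c, d = 2, 1, 3, 2          # a*d - b*c = 1
m, n = 4, -1
tau = mp.mpc('0.3', '1.2')
taup = (a*tau + b)/(c*tau + d)
m2, n2 = a*m + c*n, b*m + d*n    # (m', n') = A^T (m, n) as in (3.91)
print(taup.imag, tau.imag/abs(c*tau + d)**2)            # tau2 rule
print(abs(m*taup + n), abs(m2*tau + n2)/abs(c*tau + d))  # |m tau + n| rule
```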
We now discuss the transformation properties of the factor
\[\frac{1+t_{+}^{m,n}t}{t-t_{+}^{m,n}}\,. \tag{3.93}\]
We first consider the case \(m\neq 0\). Because \(t_{\pm}^{m,n}=t_{\pm}^{km,kn}\) for any \(k>0\), we can assume that \((m,n)\) are coprime, so that there exist \(p,q\in\mathbb{Z}\) so that
\[\begin{pmatrix}p&q\\ m&n\end{pmatrix}\in\mathrm{SL}(2,\mathbb{Z})\,. \tag{3.94}\]
Since (3.82) defines an action on \(t\), we find that (making the dependence of \(t_{+}^{m,n}\) on \(\tau\) explicit)
\[\frac{1+t_{+}^{m^{\prime},n^{\prime}}(\tau)t}{t_{+}^{m^{\prime},n^{\prime}}(\tau)-t}=\left(\begin{pmatrix}p&q\\ m&n\end{pmatrix}\cdot\begin{pmatrix}a&b\\ c&d\end{pmatrix}\right)\cdot t=\begin{pmatrix}p&q\\ m&n\end{pmatrix}\cdot\left(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\cdot t\right)=\frac{1+t_{+}^{m,n}\left(\frac{a\tau+b}{c\tau+d}\right)\left(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\cdot t\right)}{t_{+}^{m,n}\left(\frac{a\tau+b}{c\tau+d}\right)-\begin{pmatrix}a&b\\ c&d\end{pmatrix}\cdot t} \tag{3.95}\]
so we have the transformation rule
\[\frac{1+t_{+}^{m,n}t}{t-t_{+}^{m,n}}\to\frac{1+t_{+}^{m^{\prime},n^{\prime}}t }{t-t_{+}^{m^{\prime},n^{\prime}}}\,. \tag{3.96}\]
The same transformation rule (3.96) follows when \(m=0\) by using the property that \(t_{\pm}^{c,d}=t_{\pm}^{kc,kd}\) when \(k>0\) and \(t_{\pm}^{c,d}=t_{\mp}^{kc,kd}\) when \(k<0\), together with (3.24). Hence, from (3.92) and (3.96) it follows that (3.90) is invariant under the \(\mathrm{SL}(2,\mathbb{Z})\)-action.
We now verify the transformation rule of \(\widetilde{\xi}_{0}^{\rm inst}\) in (3.87). For this we use that under the \(\mathrm{SL}(2,\mathbb{Z})\)-action:
\[m\xi^{0}+n\to\frac{m^{\prime}\xi^{0}+n^{\prime}}{c\xi^{0}+d},\quad\frac{m \tau_{1}+n}{|m\tau+n|^{2}}\to\frac{c(m^{\prime}|\tau|^{2}+n^{\prime}\tau_{1}) +d(m^{\prime}\tau_{1}+n^{\prime})}{|m^{\prime}\tau+n^{\prime}|^{2}}\,. \tag{3.97}\]
After a rather lengthy but straightforward computation, one can also check that
\[\frac{1-t_{+}^{m,n}t}{t-t_{+}^{m,n}}t^{a}\to(c\tau_{1}+d)\frac{1-t_{+}^{m^{\prime},n^{\prime}}t}{t-t_{+}^{m^{\prime},n^{\prime}}}t^{a}-c\tau_{2}\frac{t+t_{+}^{m^{\prime},n^{\prime}}}{t-t_{+}^{m^{\prime},n^{\prime}}}t^{a}. \tag{3.98}\]
We therefore find, using (3.92), (3.96), (3.97) and (3.98), that the terms of \(\widetilde{\xi}_{0}^{\rm inst}\) transform as follows
\[\frac{\mathrm{i}\tau_{2}}{16\pi^{3}}\sum_{\hat{\gamma}\in\Lambda^{+} \cup\{0\}}n^{(0)}_{\hat{\gamma}}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\left(\frac{1 }{m\xi^{0}+n}+\frac{m\tau_{1}+n}{|m\tau+n|^{2}}\right)\frac{1+t_{+}^{m,n}t}{t- t_{+}^{m,n}}\,\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\] \[\to d\left(\frac{\mathrm{i}\tau_{2}}{16\pi^{3}}\sum_{\hat{\gamma} \in\Lambda^{+}\cup\{0\}}n^{(0)}_{\hat{\gamma}}\sum_{(m,n)\in\mathbb{Z}^{2}-\{ 0\}}\left(\frac{1}{m\xi^{0}+n}+\frac{m\tau_{1}+n}{|m\tau+n|^{2}}\right)\frac{1 +t_{+}^{m,n}t}{t-t_{+}^{m,n}}\,\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\right)\] \[\qquad\qquad+c\left(\frac{\mathrm{i}\tau_{2}}{16\pi^{3}}\sum_{ \hat{\gamma}\in\Lambda^{+}\cup\{0\}}n^{(0)}_{\hat{\gamma}}\sum_{(m,n)\in \mathbb{Z}^{2}-\{0\}}\left(\frac{\xi^{0}}{m\xi^{0}+n}+\frac{m|\tau|^{2}+n\tau_ {1}}{|m\tau+n|^{2}}\right)\frac{1+t_{+}^{m,n}t}{t-t_{+}^{m,n}}\,\frac{e^{-S_{ \hat{\gamma},m,n}}}{|m\tau+n|^{2}}\right)\;, \tag{3.99}\]
while
\[-\frac{\tau_{2}}{8\pi^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\left(q_{a}b^{a}\frac{1+t_{+}^{m,n}t}{t-t_{+}^{m,n}}+\mathrm{i}q_{a}t^{a}\frac{1-t_{+}^{m,n}t}{t-t_{+}^{m,n}}\right)\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\] \[\to d\left(-\frac{\tau_{2}}{8\pi^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n_{\hat{\gamma}}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\left(q_{a}b^{a}\frac{1+t_{+}^{m,n}t}{t-t_{+}^{m,n}}+\mathrm{i}q_{a}t^{a}\frac{1-t_{+}^{m,n}t}{t-t_{+}^{m,n}}\right)\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\right)\] \[\qquad+c\left(-\frac{\tau_{2}}{8\pi^{2}}\sum_{\hat{\gamma}\in\Lambda^{+}\cup\{0\}}n^{(0)}_{\hat{\gamma}}\sum_{(m,n)\in\mathbb{Z}^{2}-\{0\}}\left(q_{a}c^{a}\frac{1+t_{+}^{m,n}t}{t-t_{+}^{m,n}}+\mathrm{i}q_{a}t^{a}\left(\tau_{1}\frac{1-t_{+}^{m,n}t}{t-t_{+}^{m,n}}-\tau_{2}\frac{t+t_{+}^{m,n}}{t-t_{+}^{m,n}}\right)\right)\frac{e^{-S_{\hat{\gamma},m,n}}}{|m\tau+n|^{2}}\right) \tag{3.100}\]
so overall
\[\widetilde{\xi}^{\mathrm{inst}}_{0}\to d\widetilde{\xi}^{\mathrm{inst}}_{0}-c \widetilde{\alpha}^{\mathrm{inst}}\,. \tag{3.101}\]
Finally, we check the transformation rule for \(\widetilde{\alpha}^{\mathrm{inst}}\) in (3.87). Given that we know the transformation rules of the rest of the variables, it is easy to check that this is equivalent to showing that \(\alpha^{\mathrm{inst}}=\widetilde{\alpha}^{\mathrm{inst}}+\xi^{i}\widetilde{ \xi}^{\mathrm{inst}}_{i}\) transforms by
\[\alpha^{\mathrm{inst}}\to\frac{\alpha^{\mathrm{inst}}}{c\xi^{0}+d}\,. \tag{3.102}\]
To check the latter, one uses the fact that under the action of \(A\in\mathrm{SL}(2,\mathbb{Z})\)
\[(m\tau_{1}+n)(t^{-1}-t)-2m\tau_{2}\to\frac{(m^{\prime}\tau_{1}+n^{\prime})(t^{ -1}-t)-2m^{\prime}\tau_{2}}{c\xi^{0}+d}\,. \tag{3.103}\]
The result then follows immediately from the formula for \(\alpha^{\mathrm{inst}}\) obtained via (3.73) and (3.22), and the transformations (3.92), (3.103).
**Theorem 3.22**.: Let \((\widetilde{N},g_{\overline{N}})\) be an instanton corrected q-map space with 1-loop parameter \(c_{\ell}=\frac{\chi}{192\pi}\). If, after possibly restricting \(\widetilde{N}\), we have that \(\widetilde{N}\) is invariant under the action of \(A\in\mathrm{SL}(2,\mathbb{Z})\) by the S-duality action (3.18), then \(A\) also acts by isometries on \((\widetilde{N},g_{\overline{N}})\). In particular, if \(\widetilde{N}\) is invariant under the full S-duality action, then \(\mathrm{SL}(2,\mathbb{Z})\) acts by isometries on \((\widetilde{N},g_{\overline{N}})\).
Proof.: Since \(A\) leaves \(\widetilde{N}\) invariant, we have the diffeomorphism \(S_{A}:\mathcal{Z}\to\mathcal{Z}\) obtained by lifting the S-duality action of \(A\) to the twistor space \(\mathcal{Z}\cong\widetilde{N}\times\mathbb{C}P^{1}\) by (3.82). We want to show that \(S_{A}\) is a twistor space automorphism (i.e. it is holomorphic, preserves the contact distribution, and commutes with the real structure of the twistor space), so that we can conclude that the action of \(A\) on \((\widetilde{N},g_{\overline{N}})\) is isometric.
We first want to show that \(S_{A}\) is holomorphic. Notice that if \(S^{1}\subset\mathbb{C}P^{1}\) denotes the compactification of the real line \(\mathbb{R}\subset\mathbb{C}\) (with respect to the \(t\)-coordinate), the Darboux coordinates \((\xi^{i},\widetilde{\xi}_{i},\widetilde{\alpha})\) are defined on the open dense subset of \(\mathcal{Z}\) given by
\[\mathcal{Z}_{D}:=\mathcal{Z}-\widetilde{N}\times S^{1}, \tag{3.104}\]
since for each \(p\in\widetilde{N}\), the singularities of the Darboux coordinates are at \(t=0,\infty\) and \(t\in\{t_{+}^{m,n}\}_{m\neq 0,n\in\mathbb{Z}}\subset\mathbb{R}\subset S^{1}\). Using the coordinates \((\xi^{i},\widetilde{\xi}_{i},\widetilde{\alpha})\) on \(\mathcal{Z}\), and using that Darboux coordinates for a holomorphic
contact structure must be holomorphic coordinates (see for example the proof of the second statement of [26, Proposition 7]), the transformation rule (3.86) shows that the map
\[S_{A}:\mathcal{Z}_{D}\cap S_{A}^{-1}(\mathcal{Z}_{D})\to\mathcal{Z}_{D} \tag{3.105}\]
is holomorphic. On the other hand, if \(\mathcal{I}\) denotes the holomorphic structure of \(\mathcal{Z}\), we have that the diffeomorphism \(S_{A}\) is holomorphic if and only if
\[\mathcal{I}\circ\mathrm{d}S_{A}=\mathrm{d}S_{A}\circ\mathcal{I}\,. \tag{3.106}\]
Since \(\mathcal{Z}_{D}\cap S_{A}^{-1}(\mathcal{Z}_{D})\) is the intersection of two open dense subsets of \(\mathcal{Z}\), we have that \(\mathcal{Z}_{D}\cap S_{A}^{-1}(\mathcal{Z}_{D})\) must be dense in \(\mathcal{Z}\). We then have that (3.106) holds on a dense set of \(\mathcal{Z}\). Since \(S_{A}\) and \(\mathcal{I}\) are globally defined on \(\mathcal{Z}\), we then conclude by continuity that (3.106) must hold on all of \(\mathcal{Z}\), and \(S_{A}\) must be holomorphic.
On the other hand, the fact that the coordinates \((\xi^{i},\widetilde{\xi}_{i},\widetilde{\alpha}=\alpha-\xi^{i}\widetilde{ \xi}_{i})\) transform via (3.86) implies that
\[S_{A}^{*}(\mathrm{d}\widetilde{\alpha}+\xi^{i}\mathrm{d}\widetilde{\xi}_{i}) =\frac{\mathrm{d}\widetilde{\alpha}+\xi^{i}\mathrm{d}\widetilde{\xi}_{i}}{c \xi^{0}+d}\,. \tag{3.107}\]
Hence \(S_{A}\) preserves the contact distribution \(\mathrm{Ker}(\lambda)\) on a dense subset of \(\mathcal{Z}\). We then conclude as before by continuity and the fact that the contact distribution is globally defined, that the contact distribution must be globally preserved by \(S_{A}\).
Finally, to check that the action of \(A\) preserves the real structure, it is enough to check that (3.82) commutes with the antipodal map \(t\to-1/\overline{t}\). Indeed, we have for \(c\neq 0\) that
\[\begin{pmatrix}a&b\\ c&d\end{pmatrix}\cdot\Big{(}-\frac{1}{\overline{t}}\Big{)}=\frac{\overline{t}-t _{+}^{c,d}}{t_{+}^{c,d}\overline{t}+1}=-\overline{\Big{[}\begin{pmatrix}a&b\\ c&d\end{pmatrix}\cdot t\Big{]}}^{-1}\,, \tag{3.108}\]
where we have used that \(t_{+}^{c,d}\in\mathbb{R}\). The case when \(c=0\) follows by a trivial computation.
Hence, we conclude that the action of \(A\) is via twistor space automorphisms, so that \(A\) acts via isometries on \((\widetilde{N},g_{\overline{N}})\).
For the final statement, if \(\widetilde{N}\) is invariant under the \(\mathrm{SL}(2,\mathbb{Z})\) S-duality action, then we can lift the \(\mathrm{SL}(2,\mathbb{Z})\) action to the twistor space via (3.82). By the proof of the first statement, this lift acts by twistor space automorphisms, so \(\mathrm{SL}(2,\mathbb{Z})\) must act by isometries on \((\widetilde{N},g_{\overline{N}})\).
Recall that \(\mathrm{SL}(2,\mathbb{Z})\) is generated by \(T\) and \(S\) given in (3.83). The transformations generated by \(T\) correspond to part of the usual Heisenberg isometries (see Section 4 below). On the other hand, the transformation given by \(S\) is the "non-trivial" transformation that exchanges weak and strong coupling in the type IIB string theory setting. The following theorem guarantees that an instanton corrected q-map space always carries (after possibly restricting \(\widetilde{N}\)) an isometric action of \(S\in\mathrm{SL}(2,\mathbb{Z})\).
**Theorem 3.23**.: Let \(S\in\mathrm{SL}(2,\mathbb{Z})\) be given as in (3.83), and consider an instanton corrected q-map space \((\widetilde{N},g_{\overline{N}})\) with \(c_{\ell}=\frac{\chi}{192\pi}\). Then we can find a non-empty \(S\)-invariant open subset \(\widetilde{N}_{S}\subset\widetilde{N}\) such that the restricted instanton corrected q-map space \((\widetilde{N}_{S},g_{\overline{N}})\) is positive definite and carries a \(\mathbb{Z}_{4}\)-action by isometries generated by \(S\). The open subset \(\widetilde{N}_{S}\) is given by
\[\widetilde{N}_{S}=\{p\in\widetilde{N}\mid\epsilon<\tau_{2},\ \ \epsilon<\frac{\tau_{2}}{|\tau_{1}|^{2}+|\tau_{2}|^{2}},\ \ t^{a}>K,\ \ |\tau|t^{a}>K\} \tag{3.109}\]
for some \(0<\epsilon<1\) and \(K>0\).
Proof.: We would first like to show that there is \(K>0\) and \(0<\epsilon<1\) such that \(g_{\overline{N}}\) is defined and positive definite on
\[\widetilde{N}_{K,\epsilon}:=\left\{(\tau_{2},b^{a}+\mathrm{i}t^{a},\zeta^{i}, \widetilde{\zeta}_{i},\sigma)\in\widetilde{N}\mid\tau_{2}>\epsilon,\ \ t^{a}>K,\ \ a=1,...,n\right\}. \tag{3.110}\]
In order to show this we first study the CASK geometry of signature \((2,2n)\) defined by \((M,-\mathfrak{F})\). In terms of the natural holomorphic coordinates \(Z^{i}\), \(i=0,...,n\), the CASK geometry has Kähler potential \(k(Z^{i},\overline{Z}^{j})\) given by
\[k(Z^{i},\overline{Z}^{j})=-{\rm Im}(\tau_{ij})Z^{i}\overline{Z}^{j}=-|Z^{0}|^{2}{ \rm Im}(\tau_{ij})z^{i}\overline{z}^{j}\,,\quad z^{i}=\frac{Z^{i}}{Z^{0}}\,. \tag{3.111}\]
Since \(Z^{0}\neq 0\) on \(M\), we can use instead the holomorphic coordinates \((Z^{0},z^{a})\). In terms of \((Z^{0},z^{a})\) we find using the formula (3.4) for \(\mathfrak{F}\), that
\[k(Z^{0},z^{a})\] \[=|Z^{0}|^{2}\left(4h(t)-\frac{\chi\zeta(3)}{(2\pi)^{3}}+\frac{2}{ (2\pi)^{3}}\sum_{q_{a}\gamma^{a}\in\Lambda^{+}}n_{\gamma}{\rm Re}({\rm Li}_{3}( {\rm e}^{2\pi iq_{a}z^{a}}))+\frac{2}{(2\pi)^{2}}\sum_{q_{a}\gamma^{a}\in \Lambda^{+}}n_{\gamma}{\rm Re}({\rm Li}_{2}(e^{2\pi iq_{a}z^{a}}))q_{a}t^{a} \right)\,, \tag{3.112}\]
where we recall that \(h(t)\) is the cubic polynomial defining the PSR manifold (see Section 3.1). From the previous formula, it immediately follows that the coefficients of the CASK metric \(g_{M}\) in the coordinates \((Z^{0},z^{a})\) depend on \(b^{a}={\rm Re}(z^{a})\) only periodically. Furthermore, as \(t^{a}={\rm Im}(z^{a})\to\infty\) the classical terms due to \(\mathfrak{F}^{\rm cl}\) dominate over the terms due to \(\mathfrak{F}^{\rm w.s.}\), which are either independent of \(t^{a}\) or exponentially decreasing as \(t^{a}\to\infty\). Since the classical terms must satisfy the CASK conditions, we find that there is \(K>0\) such that
\[M_{K}:=\{(Z^{0},b^{a}+{\rm i}t^{a})\in M^{\rm cl}\mid t^{a}>K,\,\,\,a=1,...,n\}\subset M\,, \tag{3.113}\]
where \(M\) was defined in Section 3.1 as the maximal open set of \(M^{\rm cl}\) where the CASK geometry defined by \((M,\mathfrak{F})\) has signature \((2n,2)\) and \({\rm Im}(\tau_{ij})Z^{i}\overline{Z}^{j}<0\).
Now let us look at the tensor \(T\) defined in (2.6), determining the compatibility condition between the CASK structure and the BPS structure. The instanton contribution to \(T\) due to the BPS indices is given by an expression of the form
\[\sum_{\gamma}\Omega(\gamma)\sum_{n>0}e^{2\pi{\rm i}n\zeta_{\gamma}}K_{0}(2\pi n \tau_{2}|q_{0}+q_{a}(b^{a}+{\rm i}t^{a})|)|{\rm d}Z_{\gamma}|^{2},\quad\gamma= q_{0}\gamma^{0}+q_{a}\gamma^{a}\,. \tag{3.114}\]
Due to the exponential decay of the Bessel functions \(K_{0}(x)\) as \(x\to\infty\) and the convergence property of the BPS structure, by bounding \(\tau_{2}\) from below and increasing \(K\) if necessary, we can make the CASK metric term of (2.6) dominate over the instanton corrections uniformly in the rest of the parameters, and in particular make \(T\) horizontally non-degenerate on this region. More precisely, if \(0<\epsilon<1\), there is \(K>0\) sufficiently big such that the (pseudo-)HK metric \(g_{N}\) is defined on
\[N_{K,\epsilon}:=\{(Z^{0},z^{a},\zeta^{i},\widetilde{\zeta}_{i})\in M\times( \mathbb{R}/\mathbb{Z})^{2n+2}\mid\epsilon<\tau_{2}=|Z^{0}|,\,\,\,t^{a}>K,\,a=1,...,n\}\,. \tag{3.115}\]
By the same arguments as those given in the previous paragraphs, we find that as \(t^{a}\to\infty\) for \(a=1,...,n\), the functions \(f\), \(f_{3}\) and \(g_{N}(V,V)\) defined in Section 2.1 are asymptotically approximated (uniformly in the other parameters) by \(f^{\rm cl}\), \(f_{3}^{\rm cl}\) and \(g_{N}^{\rm cl}(V,V)\), where the superscript \({}^{\rm cl}\) refers to the corresponding functions obtained by setting \(n_{\hat{\gamma}}=0\) for all \(\hat{\gamma}\in\Lambda^{+}\cup\{0\}\). Since we have \(f^{\rm cl}>0\), \(f_{3}^{\rm cl}<0\) and \(g_{N}^{\rm cl}(V,V)\neq 0\), it follows that we can pick \(K\) such that on \(N_{K,\epsilon}\) we obtain that
\[f>0,\quad f_{3}<0,\quad g_{N}(V,V)\neq 0\,. \tag{3.116}\]
This ensures, by the last part of Theorem 2.12, that \(g_{\overline{N}}\) is defined and positive definite on
\[\overline{N}_{K,\epsilon}:=\{(\tau_{2},b^{a}+{\rm i}t^{a},\zeta^{i}, \widetilde{\zeta}_{i},\sigma)\in\overline{N}\mid\tau_{2}>\epsilon,\,\,\,t^{a}> K,\,a=1,...,n\}\,. \tag{3.117}\]
As before, we lift the metric \(g_{\overline{N}}\) to the subset \(\widetilde{N}_{K,\epsilon}\to\overline{N}_{K,\epsilon}\) where the periodic coordinates are made non-periodic.
Now notice that since \(S^{4}={\rm Id}\), the open subset
\[\widetilde{N}_{S}:=\widetilde{N}_{K,\epsilon}\cap S\cdot\widetilde{N}_{K, \epsilon}\cap S^{2}\cdot\widetilde{N}_{K,\epsilon}\cap S^{3}\cdot\widetilde{ N}_{K,\epsilon} \tag{3.118}\]
is \(S\)-invariant. To see that it is also non-empty, notice that the points (in Type IIB coordinates) of the form \((\tau_{1}+{\rm i}\tau_{2},b^{a}+{\rm i}t^{a},c^{a},c_{a},c_{0},\psi)=(0+{\rm i },0+{\rm i}t^{a},0,0,0,0)\) are \(S\)-fixed. In particular, for \(t^{a}>K\) we
have \((0+\mathrm{i},0+\mathrm{i}t^{a},0,0,0,0)\in\widetilde{N}_{K,\epsilon}\) and, since these points are \(S\)-fixed, they must lie on \(\widetilde{N}_{S}\). Since \(S\) acts on \(\widetilde{N}_{S}\), we can apply Theorem 3.22 in the case of \(A=S\in\mathrm{SL}(2,\mathbb{Z})\) to conclude that \((\widetilde{N}_{S},g_{\overline{N}})\) carries a \(\mathbb{Z}_{4}\)-action by isometries generated by \(S\).
Finally, the fact that \(\widetilde{N}_{S}\) is given by (3.109) follows immediately from the defining relations of \(\widetilde{N}_{K,\epsilon}\) together with (3.118) and the S-duality transformations (3.18) which imply that \(S^{2}\cdot\widetilde{N}_{K,\epsilon}=\widetilde{N}_{K,\epsilon}\).
## 4 Universal isometries of instanton corrected \(\mathbf{q}\)-map spaces and S-duality
We start by recalling a certain universal group of isometries in the case of a tree-level \(\mathrm{q}\)-map space (recall Definition 3.2). In [12, Theorem 3.17] it was shown that a tree-level \(\mathrm{q}\)-map space of real dimension \(4n+4\) with \(n>0\) has a universal (i.e. independent of the PSR manifold) \((3n+6)\)-dimensional connected Lie group of isometries \(G\), whose Lie algebra \(\mathfrak{g}\) has the form
\[\mathfrak{g}=\mathbb{R}\ltimes(\mathfrak{sl}(2,\mathbb{R})\ltimes(\mathbb{R} ^{n}\ltimes\mathfrak{h}_{2n+2}))\,. \tag{4.1}\]
The first factor corresponds to an action by dilations of the group \(\mathbb{R}_{>0}\); the second to the S-duality \(\mathrm{SL}(2,\mathbb{R})\)-action; the third to an action by \(\mathbb{R}^{n}\) which, among other things, shifts the real part of the PSK coordinates \(z^{a}\); and the last factor to the action of a certain codimension 1 subgroup \(H_{2n+2}\) of the Heisenberg group \(\mathrm{Heis}_{2n+3}(\mathbb{R})\) (to be defined below). In order to state this more precisely, we consider the coordinates \((\rho^{\mathrm{cl}},z^{a},\zeta^{i},\widetilde{\zeta}^{\mathrm{cl}}_{i},\sigma^{\mathrm{cl}})\) on \(\overline{\mathcal{N}}^{\mathrm{cl}}_{\mathrm{IIA}}:=\mathbb{R}_{>0}\times\overline{M}^{\mathrm{cl}}\times\mathbb{R}^{2n+2}\times\mathbb{R}\), related to the type IIB coordinates \((\tau,b^{a}+\mathrm{i}t^{a},c^{a},c_{a},c_{0},\psi)\) on \(\overline{\mathcal{N}}^{\mathrm{cl}}_{\mathrm{IIB}}\) (where the latter was given in Definition 3.7) via the classical mirror map (i.e. by (3.15) with \(c_{\ell}=n_{\hat{\gamma}}=\chi=0\)). The groups then act as follows:
* The multiplicative group \(\mathbb{R}_{>0}\) acts by dilations on the \((\rho^{\mathrm{cl}},\zeta^{i},\widetilde{\zeta}^{\mathrm{cl}}_{i},\sigma^{\mathrm{cl}})\) variables via \[r\cdot(\rho^{\mathrm{cl}},\zeta^{i},\widetilde{\zeta}^{\mathrm{cl}}_{i},\sigma^{\mathrm{cl}})=(r\rho^{\mathrm{cl}},\sqrt{r}\zeta^{i},\sqrt{r}\widetilde{\zeta}^{\mathrm{cl}}_{i},r\sigma^{\mathrm{cl}})\,.\] (4.2)
* The \(\mathrm{SL}(2,\mathbb{R})\) factor corresponds to the S-duality action given in (3.18). We have \(\mathrm{SL}(2,\mathbb{R})\) instead of \(\mathrm{SL}(2,\mathbb{Z})\) due to the absence of quantum corrections.
* The vector \(v=(v^{a})\in\mathbb{R}^{n}\) acts via \[v\cdot\begin{pmatrix}z^{a}\\ \rho^{\mathrm{cl}}\\ \zeta^{0}\\ \zeta^{a}\\ \widetilde{\zeta}^{\mathrm{cl}}_{0}\\ \widetilde{\zeta}^{\mathrm{cl}}_{a}\\ \sigma^{\mathrm{cl}}\end{pmatrix}=\begin{pmatrix}z^{a}+v^{a}\\ \rho^{\mathrm{cl}}\\ \zeta^{0}\\ \zeta^{a}+\zeta^{0}v^{a}\\ \widetilde{\zeta}^{\mathrm{cl}}_{0}+\frac{1}{6}k_{abc}v^{a}v^{b}v^{c}\zeta^{0}+\frac{1}{2}k_{abc}v^{a}v^{b}\zeta^{c}-\widetilde{\zeta}^{\mathrm{cl}}_{a}v^{a}\\ \widetilde{\zeta}^{\mathrm{cl}}_{a}-\frac{1}{2}\zeta^{0}k_{abc}v^{b}v^{c}-k_{abc}v^{b}\zeta^{c}\\ \sigma^{\mathrm{cl}}\end{pmatrix}\,,\] (4.3) where we recall that \(k_{abc}\) are the coefficients of the cubic polynomial (3.1) defining the PSR manifold.
* If \(\mathrm{Heis}_{2n+3}(\mathbb{R})\cong\mathbb{R}^{2n+3}\) denotes the Heisenberg group, then \((\eta^{i},\widetilde{\eta}_{i},\kappa)\in\mathrm{Heis}_{2n+3}(\mathbb{R})\), \(i=0,1,...,n\), acts on \((\zeta^{i},\widetilde{\zeta}^{\mathrm{cl}}_{i},\sigma^{\mathrm{cl}})\) via \[(\eta^{i},\widetilde{\eta}_{i},\kappa)\cdot(\zeta^{i},\widetilde{\zeta}^{\mathrm{cl}}_{i},\sigma^{\mathrm{cl}})=(\zeta^{i}+\eta^{i},\widetilde{\zeta}^{\mathrm{cl}}_{i}+\widetilde{\eta}_{i},\sigma^{\mathrm{cl}}+\kappa+\widetilde{\zeta}^{\mathrm{cl}}_{i}\eta^{i}-\zeta^{i}\widetilde{\eta}_{i})\,.\] (4.4) On the other hand, \(H_{2n+2}\subset\mathrm{Heis}_{2n+3}(\mathbb{R})\) is the codimension 1 subgroup given by \[H_{2n+2}:=\{(\eta^{i},\widetilde{\eta}_{i},\kappa)\in\mathrm{Heis}_{2n+3}(\mathbb{R})\mid\eta^{0}=0\}\,.\] (4.5) The transformations shifting \(\zeta^{0}\) that are missing from \(H_{2n+2}\) are already included in the \(\mathrm{SL}(2,\mathbb{R})\) transformations.
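As a quick sanity check, the following Python sketch verifies numerically that (4.4) defines a left group action; the Heisenberg multiplication law used in the sketch is our assumption (the standard cocycle law compatible with (4.4)) rather than something stated above.

```python
# Numerical sanity check that (4.4) is a left group action.
# Assumption: the Heisenberg product below is the standard cocycle law
# compatible with (4.4); it is not spelled out in the text.
import numpy as np

rng = np.random.default_rng(0)
n = 3  # eta^i, eta_t_i have n + 1 components (i = 0, ..., n)

def act(g, x):
    """Action (4.4) of g = (eta, eta_t, kappa) on x = (zeta, zeta_t, sigma)."""
    eta, eta_t, kappa = g
    zeta, zeta_t, sigma = x
    return (zeta + eta, zeta_t + eta_t,
            sigma + kappa + zeta_t @ eta - zeta @ eta_t)

def mul(g1, g2):
    """Assumed Heisenberg product making (4.4) a left action."""
    (e1, t1, k1), (e2, t2, k2) = g1, g2
    return (e1 + e2, t1 + t2, k1 + k2 + t2 @ e1 - e2 @ t1)

g1, g2, x = [(rng.normal(size=n + 1), rng.normal(size=n + 1), rng.normal())
             for _ in range(3)]
lhs, rhs = act(g1, act(g2, x)), act(mul(g1, g2), x)
assert all(np.allclose(a, b) for a, b in zip(lhs, rhs))
print("(4.4) defines a left action for the assumed product")
```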
On the other hand, the semi-direct product of Lie algebras \(\mathbb{R}^{n}\ltimes\mathfrak{h}_{2n+2}\) in (4.1) corresponds at the group level to the semi-direct product \(\mathbb{R}^{n}\ltimes_{\varphi}H_{2n+2}\subset\mathbb{R}^{n}\ltimes_{\varphi} \operatorname{Heis}_{2n+3}(\mathbb{R})\), where the automorphism \(\varphi:\mathbb{R}^{n}\to\operatorname{Aut}(\operatorname{Heis}_{2n+3}( \mathbb{R}))\) is given by a similar formula to (4.3), namely
\[\varphi(v)\cdot\begin{pmatrix}\eta^{0}\\ \eta^{a}\\ \widetilde{\eta}_{0}\\ \widetilde{\eta}_{a}\\ \kappa\end{pmatrix}=\begin{pmatrix}\eta^{0}\\ \eta^{a}+\eta^{0}v^{a}\\ \widetilde{\eta}_{0}+\frac{1}{6}k_{abc}v^{a}v^{b}v^{c}\eta^{0}+\frac{1}{2}k_ {abc}v^{a}v^{b}\eta^{c}-\widetilde{\eta}_{a}v^{a}\\ \widetilde{\eta}_{a}-\frac{1}{2}\eta^{0}k_{abc}v^{b}v^{c}-k_{abc}v^{b}\eta^{c} \\ \kappa\end{pmatrix},\quad v\in\mathbb{R}^{n},\quad(\eta^{i},\widetilde{\eta}_{i},\kappa)\in\operatorname{Heis}_{2n+3}(\mathbb{R})\,. \tag{4.6}\]
The other semi-direct products of Lie algebras in (4.1) can be described via the Lie algebra structure given in [14, Proposition 3.10].
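One can similarly verify numerically that (4.6) defines a homomorphism \(\varphi:\mathbb{R}^{n}\to\operatorname{Aut}(\operatorname{Heis}_{2n+3}(\mathbb{R}))\). In the sketch below the coefficients \(k_{abc}\) are random but fully symmetrized, mirroring their role as coefficients of the cubic polynomial (3.1); the sample data are ad hoc.

```python
# Numerical check that phi(v) o phi(w) = phi(v + w) for the formula (4.6),
# with k_{abc} a randomly chosen fully symmetric tensor.
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
n = 3
k = rng.normal(size=(n, n, n))
k = sum(k.transpose(p) for p in permutations(range(3))) / 6  # symmetrize

def phi(v, g):
    """Automorphism (4.6) applied to g = (eta0, eta, eta_t0, eta_t, kappa)."""
    eta0, eta, eta_t0, eta_t, kappa = g
    kvv = np.einsum('abc,b,c->a', k, v, v)    # k_{abc} v^b v^c
    kve = np.einsum('abc,b,c->a', k, v, eta)  # k_{abc} v^b eta^c
    return (eta0,
            eta + eta0 * v,
            eta_t0 + (kvv @ v) * eta0 / 6 + (kvv @ eta) / 2 - eta_t @ v,
            eta_t - eta0 * kvv / 2 - kve,
            kappa)

v, w = rng.normal(size=n), rng.normal(size=n)
g = (rng.normal(), rng.normal(size=n), rng.normal(),
     rng.normal(size=n), rng.normal())
lhs, rhs = phi(v, phi(w, g)), phi(v + w, g)
assert all(np.allclose(a, b) for a, b in zip(lhs, rhs))
print("phi is a homomorphism on the sampled data")
```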
We now consider \((M,\mathfrak{F})\) and a mutually local variation of BPS structures \((M,\Gamma,Z,\Omega)\) as in Section 3.1, so that we obtain an instanton corrected q-map space \((\widetilde{N},g_{\overline{N}})\). We want to study how the instanton corrections affect the \(3n+6\)-dimensional isometry group \(G\) from the tree-level q-map space case. In the following, unless otherwise specified, we assume that \(\widetilde{N}\) is the (lift of the) maximal domain of definition of \(g_{\overline{N}}\) obtained via HK/QK correspondence from the HK metric \((N,g_{N})\) from Section 2.1.2. We also define the following subgroups of \(\operatorname{Heis}_{2n+3}(\mathbb{R})\):
\[\begin{split}\operatorname{Heis}_{2n+3,D}&:=\{(\eta^{i },\widetilde{\eta}_{i},\kappa)\in\operatorname{Heis}_{2n+3}(\mathbb{R})\,| \quad\eta^{i}\in\mathbb{Z}\text{ for }i=0,...,n\}\\ H_{2n+2,D}&:=\{(\eta^{a},\widetilde{\eta}_{i}, \kappa)\in H_{2n+2}\,|\quad\eta^{a}\in\mathbb{Z}\text{ for }a=1,...,n\}\,.\end{split} \tag{4.7}\]
The letter \(D\) in the above notation is meant to emphasize that the directions \(\eta^{i}\) are broken to a discrete subgroup due to the inclusion of (part of) the "D-instanton corrections" due to terms involving the BPS indices \(\Omega(\gamma)\).
We begin by studying how \(\operatorname{Heis}_{2n+3,D}\subset\operatorname{Heis}_{2n+3}(\mathbb{R})\) and \(\mathbb{Z}^{n}\subset\mathbb{R}^{n}\) act on the corrected coordinates \((\rho,z^{a},\zeta^{i},\widetilde{\zeta}_{i},\sigma)\), related to the type IIB coordinates via the quantum corrected mirror map (3.15). We use the notation \(\rho^{\operatorname{w.s.}}:=f^{\operatorname{w.s.}}/16\pi\), so that \(\rho=\rho^{\operatorname{cl}}+\rho^{\operatorname{w.s.}}-c_{\ell}\) (recall (3.37) and (3.38)).
**Lemma 4.1**.: \(\operatorname{Heis}_{2n+3,D}\) acts on the functions \((\rho^{\operatorname{w.s.}},\widetilde{\zeta}^{\operatorname{inst}}_{i},\sigma^{\operatorname{inst}})\) by
\[(\eta^{i},\widetilde{\eta}_{i},\kappa)\cdot(\rho^{\operatorname{w.s.}},\widetilde{\zeta}^{\operatorname{inst}}_{i},\sigma^{\operatorname{inst}})=(\rho^{\operatorname{w.s.}},\widetilde{\zeta}^{\operatorname{inst}}_{i},\sigma^{\operatorname{inst}}+\widetilde{\zeta}^{\operatorname{inst}}_{i}\eta^{i})\,. \tag{4.8}\]
On the other hand, \(\mathbb{Z}^{n}\subset\mathbb{R}^{n}\) acts on the functions \((\rho^{\operatorname{w.s.}},\widetilde{\zeta}^{\operatorname{inst}}_{i}, \sigma^{\operatorname{inst}})\) by
\[(v^{a})\cdot(\rho^{\operatorname{w.s.}},\widetilde{\zeta}^{\operatorname{inst}}_{0},\widetilde{\zeta}^{\operatorname{inst}}_{a},\sigma^{\operatorname{inst}})=(\rho^{\operatorname{w.s.}},\widetilde{\zeta}^{\operatorname{inst}}_{0}-v^{a}\widetilde{\zeta}^{\operatorname{inst}}_{a},\widetilde{\zeta}^{\operatorname{inst}}_{a},\sigma^{\operatorname{inst}})\,. \tag{4.9}\]
Proof.: The first statement (4.8) follows easily from (3.16), (3.39) and (4.4). On the other hand note that (4.3) and the fact that \(\rho^{\operatorname{cl}}=\frac{\tau_{2}^{2}}{2}h(t)\) imply that \(\tau_{2}\) is invariant under the action of \(\mathbb{Z}^{n}\). Equation (4.9) then follows again easily from (3.16) and (3.39). The restrictions to \(\operatorname{Heis}_{2n+3,D}\subset\operatorname{Heis}_{2n+3}(\mathbb{R})\) and \(\mathbb{Z}^{n}\subset\mathbb{R}^{n}\) are required, since in the computation we use the periodicity of the complex exponential function.
**Corollary 4.2**.: The action of \(\operatorname{Heis}_{2n+3,D}\subset\operatorname{Heis}_{2n+3}(\mathbb{R})\) and \(\mathbb{Z}^{n}\subset\mathbb{R}^{n}\) on \((\rho,z^{a},\zeta^{i},\widetilde{\zeta}_{i},\sigma)\in\overline{\mathcal{N}}_{\operatorname{IIA}}=\mathbb{R}_{>-c_{\ell}}\times\overline{M}\times\mathbb{R}^{2n+2}\times\mathbb{R}\) has the same transformation rules, (4.4) and (4.3), as their action on \((\rho^{\operatorname{cl}},z^{a},\zeta^{i},\widetilde{\zeta}^{\operatorname{cl}}_{i},\sigma^{\operatorname{cl}})\in\overline{\mathcal{N}}^{\operatorname{cl}}_{\operatorname{IIA}}\). Namely, for \(v\in\mathbb{Z}^{n}\) and \((\eta^{i},\widetilde{\eta}_{i},\kappa)\in\operatorname{Heis}_{2n+3,D}\),
\[v\cdot\begin{pmatrix}z^{a}\\ \rho\\ \zeta^{0}\\ \zeta^{a}\\ \widetilde{\zeta}_{0}\\ \widetilde{\zeta}_{a}\\ \sigma\end{pmatrix}=\begin{pmatrix}z^{a}+v^{a}\\ \rho\\ \zeta^{0}\\ \zeta^{a}+\zeta^{0}v^{a}\\ \widetilde{\zeta}_{0}+\frac{1}{6}k_{abc}v^{a}v^{b}v^{c}\zeta^{0}+\frac{1}{2}k_{abc}v^{a}v^{b}\zeta^{c}-\widetilde{\zeta}_{a}v^{a}\\ \widetilde{\zeta}_{a}-\frac{1}{2}\zeta^{0}k_{abc}v^{b}v^{c}-k_{abc}v^{b}\zeta^{c}\\ \sigma\end{pmatrix},\quad(\eta^{i},\widetilde{\eta}_{i},\kappa)\cdot\begin{pmatrix}z^{a}\\ \rho\\ \zeta^{i}\\ \widetilde{\zeta}_{i}\\ \sigma\end{pmatrix}=\begin{pmatrix}z^{a}\\ \rho\\ \zeta^{i}+\eta^{i}\\ \widetilde{\zeta}_{i}+\widetilde{\eta}_{i}\\ \sigma+\kappa+\widetilde{\zeta}_{i}\eta^{i}-\zeta^{i}\widetilde{\eta}_{i}\end{pmatrix}. \tag{4.10}\]
Furthermore, the action of \(\mathrm{Heis}_{2n+3,D}\) and \(\mathbb{Z}^{n}\) on \((z^{a},\rho,\zeta^{i},\widetilde{\zeta}_{i},\sigma)\) expressed in terms of type IIB coordinates via the quantum corrected mirror map (3.15) coincides with the action (4.4) and (4.3) expressed in type IIB variables via the classical mirror map.
Proof.: The first statement follows from Lemma 4.1 together with the actions (4.4) and (4.3). The last statement follows immediately from (4.10), together with (4.4), (4.3) and (3.15).
**Remark 4.3**.:
* We remark that since the actions (4.4) leave the \(t^{a}=\mathrm{Im}(z^{a})\) coordinates invariant, via the quantum corrected mirror map we get an action of \(\mathbb{Z}^{n}\) and \(\mathrm{Heis}_{2n+3,D}\) not only on \(\overline{\mathcal{N}}_{\mathrm{IIB}}=\mathcal{M}^{-1}(\overline{\mathcal{N}}_{\mathrm{IIA}})\), but also on the bigger manifold \(\overline{\mathcal{N}}_{\mathrm{IIB}}^{\mathrm{cl}}\supset\overline{\mathcal{N}}_{\mathrm{IIB}}\), where \(\mathrm{SL}(2,\mathbb{Z})\) always acts via (3.18).
* It is not hard to check that \((\rho,z^{a},\zeta^{i},\widetilde{\zeta}_{i},\sigma)\) do not have any nice transformation property under the \(\mathbb{R}_{>0}\) scaling action (4.2).
**Proposition 4.4**.: The group \(\mathrm{Heis}_{2n+3,D}\) acts by isometries on any instanton corrected c-map space \((\widetilde{N},g_{\overline{N}})\) (recall Definition 2.14). Furthermore, if \(n_{\hat{\gamma}}=0\) for all \(\hat{\gamma}\in\Lambda^{+}\) (but with \(\chi\) possibly non-zero), then \((\widetilde{N},g_{\overline{N}})\) carries an action by isometries of \(\{(\eta^{i},\widetilde{\eta}_{i},\kappa)\in\mathrm{Heis}_{2n+3}(\mathbb{R})\mid \eta^{0}\in\mathbb{Z}\}\), and in particular by \(H_{2n+2}\).
Proof.: By Corollary 4.2, the group \(\mathrm{Heis}_{2n+3,D}\) acts on the \((\zeta^{i},\widetilde{\zeta}_{i},\sigma)\) part of the type IIA coordinates \((\rho,z^{a},\zeta^{i},\widetilde{\zeta}_{i},\sigma)\) via (4.10). Because the instanton corrections are periodic in \(\zeta^{i}\) and do not depend on \(\widetilde{\zeta}_{i}\) and \(\sigma\), it follows immediately that the tensor \(T\) in (2.6) as well as the sets \(N^{\prime}\) from (2.16) and \(\overline{N}\) from (2.22) are invariant under the action of \(\mathrm{Heis}_{2n+3,D}\). In particular, it follows that \(\widetilde{N}\) must carry an action of \(\mathrm{Heis}_{2n+3,D}\). On the other hand, by the explicit formula (2.28) for \(g_{\overline{N}}\) and using again the fact that all instanton correction terms are periodic in \(\zeta^{i}\) and independent of \(\widetilde{\zeta}_{i}\) and \(\sigma\), it can be easily checked that \(\mathrm{Heis}_{2n+3,D}\) acts by isometries on \((\widetilde{N},g_{\overline{N}})\) via (4.4).
The last statement follows immediately from the previous argument, together with the fact that if \(n_{\hat{\gamma}}=0\) for all \(\hat{\gamma}\in\Lambda^{+}\), then the instanton correction terms only depend on \(\zeta^{0}\).
We now show the following proposition, dealing with the \(\mathbb{R}^{n}\)-factor of (4.1) in the case of instanton corrected q-map spaces.
**Proposition 4.5**.: Let \((\widetilde{N},g_{\overline{N}})\) be an instanton corrected q-map space. Then \((\widetilde{N},g_{\overline{N}})\) carries an action by isometries of \(\mathbb{Z}^{n}\subset\mathbb{R}^{n}\) given by (4.10). Furthermore, if \(n_{\hat{\gamma}}=0\) for all \(\hat{\gamma}\in\Lambda^{+}\) (but with \(\chi\) possibly non-zero), then \((\widetilde{N},g_{\overline{N}})\) carries an action by isometries of the full \(\mathbb{R}^{n}\).
Proof.: Notice that with respect to the coordinates \((Z^{0},z^{a})=(Z^{0},Z^{a}/Z^{0})\) of \(M\), we can write the Kähler potential \(k\) of the CASK manifold as (3.112). In particular, we see that \(k(Z^{0},b^{a}+\mathrm{i}t^{a})\) is invariant under integer shifts of the \(b^{a}\); hence \(M\) must be invariant under the action of \(\mathbb{Z}^{n}\), and this action must be by isometries of the CASK metric.
On the other hand, when considering the associated instanton corrected HK geometry, one must consider the tensor \(T\) given in (2.6). In our particular case, we can rewrite \(T\) as follows
\[T =-\mathrm{Im}(\tau_{ij})\mathrm{d}Z^{i}\mathrm{d}\overline{Z}^{j}+\frac{\chi}{2\pi}\sum_{q_{0}\in\mathbb{Z}-\{0\}}\sum_{n>0}e^{-2\pi\mathrm{i}nq_{0}\zeta^{0}}K_{0}(4\pi Rn|q_{0}|)|q_{0}\mathrm{d}Z^{0}|^{2}\] \[\qquad-\frac{1}{2\pi}\sum_{\hat{\gamma}\in\Lambda^{+}}n_{\hat{\gamma}}\sum_{q_{0}\in\mathbb{Z}}\sum_{n>0}e^{-2\pi\mathrm{i}n(q_{a}\zeta^{a}+q_{0}\zeta^{0})}K_{0}(4\pi Rn|q_{0}+q_{a}z^{a}|)|Z^{0}q_{a}\mathrm{d}z^{a}+(q_{0}+q_{a}z^{a})\mathrm{d}Z^{0}|^{2}\,. \tag{4.11}\]
From the above formula together with (4.3), it follows that the instanton part of \(T\) in (4.11) is invariant under the action of \(\mathbb{Z}^{n}\). Indeed, each summand in the term proportional to \(\chi\) is invariant, while in the last term one finds that for each fixed \(\hat{\gamma}=q_{a}\gamma^{a}\in\Lambda^{+}\), we have \(q_{0}\to q_{0}+q_{a}v^{a}\), which remains invariant since we have a sum over all \(q_{0}\in\mathbb{Z}\). Since \(\mathbb{Z}^{n}\) also acts by isometries on the CASK metric, it follows that \(T\) is invariant under \(\mathbb{Z}^{n}\), and hence the maximal domain of definition \(N\) of the HK metric is invariant under \(\mathbb{Z}^{n}\). Similarly, the conditions defining the subsets \(N^{\prime}\) in (2.16) and \(\overline{N}\) in (2.22) required in the construction of the instanton corrected QK manifold via HK/QK correspondence are seen to be
\(\mathbb{Z}^{n}\)-invariant, so that \(\tilde{N}\) carries an action of \(\mathbb{Z}^{n}\). To show that the action of \(\mathbb{Z}^{n}\) is by isometries, we show that one can lift the action to the twistor space acting by twistor space automorphisms. We define the lift by declaring that it acts trivially on the \(\mathbb{C}P^{1}\) fibers of \(\mathcal{Z}\cong\tilde{N}\times\mathbb{C}P^{1}\). By using the explicit formulas for \((\xi^{i},\widetilde{\xi}^{\rm cl}_{i},\alpha^{\rm cl})\) in (3.70) one finds that they have the following transformation rule under the action of \(\mathbb{Z}^{n}\) (or even \(\mathbb{R}^{n}\)):
\[v\cdot\begin{pmatrix}\xi^{0}\\ \xi^{a}\\ \widetilde{\xi}^{\rm cl}_{0}\\ \widetilde{\xi}^{\rm cl}_{a}\\ -\frac{1}{2}(\alpha^{\rm cl}-\xi^{i}\widetilde{\xi}^{\rm cl}_{i})\end{pmatrix}= \begin{pmatrix}\xi^{0}\\ \xi^{a}+\xi^{0}v^{a}\\ \widetilde{\xi}^{\rm cl}_{0}+\frac{1}{6}k_{abc}v^{a}v^{b}v^{c}\xi^{0}+\frac{ 1}{2}k_{abc}v^{a}v^{b}\xi^{c}-\widetilde{\xi}^{\rm cl}_{a}v^{a}\\ \widetilde{\xi}^{\rm cl}_{a}-\frac{1}{2}\xi^{0}k_{abc}v^{b}v^{c}-k_{abc}v^{b} \xi^{c}\\ -\frac{1}{2}(\alpha^{\rm cl}-\xi^{i}\widetilde{\xi}^{\rm cl}_{i})-\frac{1}{ 6}(\xi^{0})^{2}k_{abc}v^{a}v^{b}v^{c}-\frac{1}{2}k_{abc}v^{a}v^{b}\xi^{c}\xi^ {0}-\frac{1}{2}k_{abc}v^{b}\xi^{a}\xi^{c}\end{pmatrix} \tag{4.12}\]
where we have used \(-\frac{1}{2}(\alpha^{\rm cl}-\xi^{i}\widetilde{\xi}^{\rm cl}_{i})\) instead of \(\alpha^{\rm cl}\) because we want to relate to the Darboux coordinates (3.22). On the other hand, using (3.22) and (4.3) one finds that under the action of \(\mathbb{Z}^{n}\) the following holds
\[v\cdot\begin{pmatrix}\widetilde{\xi}^{\rm inst}_{0}\\ \widetilde{\xi}^{\rm inst}_{a}\\ \alpha^{\rm inst}\end{pmatrix}=\begin{pmatrix}\widetilde{\xi}^{\rm inst}_{0} -\widetilde{\xi}^{\rm inst}_{a}v^{a}\\ \widetilde{\xi}^{\rm inst}_{a}\\ \alpha^{\rm inst}\end{pmatrix}\,. \tag{4.13}\]
Hence, joining everything together one finds that \((\xi^{i},\widetilde{\xi}_{i},\alpha)\) from (3.22) transforms under the \(\mathbb{Z}^{n}\) action by
\[v\cdot\begin{pmatrix}\xi^{0}\\ \xi^{a}\\ \widetilde{\xi}_{0}\\ \widetilde{\xi}_{a}\\ \alpha\end{pmatrix}=\begin{pmatrix}\xi^{0}\\ \xi^{a}+\xi^{0}v^{a}\\ \widetilde{\xi}_{0}+\frac{1}{6}k_{abc}v^{a}v^{b}v^{c}\xi^{0}+\frac{1}{2}k_{abc}v^{a}v^{b}\xi^{c}-\widetilde{\xi}_{a}v^{a}\\ \widetilde{\xi}_{a}-\frac{1}{2}\xi^{0}k_{abc}v^{b}v^{c}-k_{abc}v^{b}\xi^{c}\\ \alpha-\frac{1}{6}(\xi^{0})^{2}k_{abc}v^{a}v^{b}v^{c}-\frac{1}{2}k_{abc}v^{a}v^{b}\xi^{c}\xi^{0}-\frac{1}{2}k_{abc}v^{b}\xi^{a}\xi^{c}\end{pmatrix}\,. \tag{4.14}\]
In particular, the same holds for the Darboux coordinates \((\xi^{i},\widetilde{\xi}_{i},\alpha_{c_{\ell}})\) obtained in Corollary 3.12, since \(t\) is invariant under the lift of the \(\mathbb{Z}^{n}\) action. From the transformation rules (4.14) it follows that
\[\mathrm{d}\alpha_{c_{\ell}}-\widetilde{\xi}_{i}\mathrm{d}\xi^{i} \tag{4.15}\]
is invariant under the action of \(\mathbb{Z}^{n}\), which moreover acts holomorphically on a dense open subset of \(\mathcal{Z}\) (here we use that Darboux coordinates for a holomorphic contact structure must be holomorphic coordinates). Since the action is globally defined and smooth, it follows that \(\mathbb{Z}^{n}\) must act holomorphically globally on \(\mathcal{Z}\) and preserve the contact structure. The real structure is also trivially preserved, since the action on the fibers is trivial. Hence, we conclude that \(\mathbb{Z}^{n}\) acts by twistor space automorphisms, and hence by isometries on \((\widetilde{N},g_{\overline{N}})\).
Finally, the case where \(n_{\hat{\gamma}}=0\) for all \(\hat{\gamma}\in\Lambda^{+}\) follows by a simplified version of the previous argument. The restriction of translations to \(\mathbb{Z}^{n}\) is no longer required, since \(\mathfrak{F}^{\mathrm{w.s.}}\) does not have polylogarithm terms in this case, and the instanton corrections due to \(\Omega(\gamma)\) only depend on the \(\tau_{2}\) and \(\zeta^{0}=\tau_{1}\) variables, which are left invariant under the full \(\mathbb{R}^{n}\)-action (4.3).
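The key step in the proof above — the invariance of the contact form (4.15) under the transformation (4.14) — can be confirmed symbolically. The following sympy sketch does this for \(n=1\) with a generic coefficient \(k=k_{111}\); the extension to general \(n\) is notationally heavier but identical in spirit.

```python
# Symbolic check (n = 1) that the pullback of d(alpha) - txi_i d(xi^i)
# under the transformation (4.14) equals the form itself.
import sympy as sp

xi0, xi1, txi0, txi1, al, v, k = sp.symbols('xi0 xi1 txi0 txi1 alpha v k')
x = sp.Matrix([xi0, xi1, txi0, txi1, al])

# Image of the coordinates under (4.14), with k_{111} = k.
Phi = sp.Matrix([
    xi0,
    xi1 + xi0*v,
    txi0 + k*v**3*xi0/6 + k*v**2*xi1/2 - txi1*v,
    txi1 - k*v**2*xi0/2 - k*v*xi1,
    al - k*v**3*xi0**2/6 - k*v**2*xi1*xi0/2 - k*v*xi1**2/2,
])

def lam(y):  # coefficients of d(alpha) - txi_0 d(xi^0) - txi_1 d(xi^1)
    return sp.Matrix([-y[2], -y[3], 0, 0, 1])

pullback = (Phi.jacobian(x).T * lam(Phi)).applyfunc(sp.expand)
assert pullback == lam(x)
print("(4.14) preserves the contact form (4.15)")
```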
Before joining the previous results and the ones from Section 3 together, we will need the following lemma:
**Lemma 4.6**.: The group \(\mathrm{SL}(2,\mathbb{R})\ltimes(\mathbb{R}^{n}\ltimes H_{2n+2})\) combining the S-duality action with the action of \(\mathbb{R}^{n}\ltimes H_{2n+2}\) on \(\overline{\mathcal{N}}^{\rm cl}_{\rm IIB}\) has \(\mathrm{SL}(2,\mathbb{Z})\ltimes(\mathbb{Z}^{n}\ltimes H_{2n+2,D})\) as a subgroup.
Proof.: The fact that the \(\mathrm{SL}(2,\mathbb{R})\) S-duality action and the \(\mathbb{R}^{n}\ltimes H_{2n+2}\) action combine into an action of a group of the form \(\mathrm{SL}(2,\mathbb{R})\ltimes(\mathbb{R}^{n}\ltimes H_{2n+2})\) follows from [13, Proposition 3.10]. On the other hand, to check that
\[\mathrm{SL}(2,\mathbb{Z})\ltimes(\mathbb{Z}^{n}\ltimes H_{2n+2,D})\subset\mathrm{SL}(2,\mathbb{R})\ltimes(\mathbb{R}^{n}\ltimes H_{2n+2}) \tag{4.16}\]
defines a subgroup, it is enough to check that the action by automorphisms of \(\mathbb{Z}^{n}\) on \(H_{2n+2}\) preserves the integrality constraint of \(H_{2n+2,D}\subset H_{2n+2}\), and that the action by automorphisms of \(\mathrm{SL}(2,\mathbb{Z})\) on \(\mathbb{R}^{n}\ltimes H_{2n+2}\) preserves the integrality constraint of \(\mathbb{Z}^{n}\ltimes H_{2n+2,D}\subset\mathbb{R}^{n}\ltimes H_{2n+2}\). The fact that the
action by automorphisms of \(\mathbb{Z}^{n}\) preserves the integrality constraint follows immediately from (4.6). On the other hand, since \(\mathrm{SL}(2,\mathbb{Z})\) is generated by \(T\) and \(S\) given in (3.83), it is enough to check that the induced automorphisms \(\varphi_{T},\varphi_{S}\in\mathrm{Aut}(\mathbb{R}^{n}\ltimes H_{2n+2})\) preserve the integrality constraint of \(\mathbb{Z}^{n}\ltimes H_{2n+2,D}\). From the fact that in the group \(\mathrm{SL}(2,\mathbb{R})\ltimes(\mathbb{R}^{n}\ltimes H_{2n+2})\) we have that
\[(A,0,0)\cdot(1,v^{a},(\eta^{a},\widetilde{\eta}_{i},\kappa))\cdot(A^{-1},0,0)=(1,\varphi_{A}(v^{a},(\eta^{a},\widetilde{\eta}_{i},\kappa))) \tag{4.17}\]
for \(A\in\mathrm{SL}(2,\mathbb{R})\), \((v^{a})\in\mathbb{R}^{n}\), \((\eta^{a},\widetilde{\eta}_{i},\kappa)\in H_{2n+2}\), it is straightforward to use the actions of \(\mathrm{SL}(2,\mathbb{R})\) and \(\mathbb{R}^{n}\ltimes H_{2n+2}\) to compute that
\[\varphi_{T}(v^{a},(\eta^{a},\eta_{i},\kappa))=(v^{a},(\eta^{a}-v^{a},...)), \quad\varphi_{S}(v^{a},(\eta^{a},\eta_{i},\kappa))=(-\eta^{a},(v^{a},...))\,. \tag{4.18}\]
It then follows that the integrality constraints of \(\mathbb{Z}^{n}\ltimes H_{2n+2,D}\) are preserved by the automorphisms of \(\mathrm{SL}(2,\mathbb{Z})\), and hence we obtain the desired result.
We can now state our main theorem:
**Theorem 4.7**.: Consider an instanton corrected q-map space \((\widetilde{N},g_{\overline{N}})\) of dimension \(4n+4\), where, as before, we take \(\widetilde{N}\) to be the maximal domain of definition of \(g_{\overline{N}}\). Let \(T,S\in\mathrm{SL}(2,\mathbb{Z})\) be as in (3.83). Then:
* The group \[\langle T\rangle\ltimes(\mathbb{Z}^{n}\ltimes H_{2n+2,D})\] (4.19) acts by isometries on \((\widetilde{N},g_{\overline{N}})\), where \(\langle T\rangle\cong\mathbb{Z}\) is the subgroup generated by \(T\).
* Assume that we take the one-loop parameter to be \(c_{\ell}=\frac{\chi}{192\pi}\). We can always find a non-empty open subset \(\widetilde{N}_{S}\subset\widetilde{N}\) where \((\widetilde{N}_{S},g_{\overline{N}})\) carries an isometry group of the form \[\langle S\rangle\ltimes(\mathbb{Z}^{n}\ltimes H_{2n+2,D}),\] (4.20) where \(\langle S\rangle\cong\mathbb{Z}/4\mathbb{Z}\) is the group generated by \(S\in\mathrm{SL}(2,\mathbb{Z})\). Furthermore, if \(\widetilde{N}_{\mathrm{SL}(2,\mathbb{Z})}\subset\widetilde{N}\) is \(\mathrm{SL}(2,\mathbb{Z})\)-invariant, then \(\mathrm{SL}(2,\mathbb{Z})\) acts by isometries on \((\widetilde{N}_{\mathrm{SL}(2,\mathbb{Z})},g_{\overline{N}})\). In particular, if \(\widetilde{N}\) is already invariant under \(\mathrm{SL}(2,\mathbb{Z})\), then (4.19) can be enhanced to \[\mathrm{SL}(2,\mathbb{Z})\ltimes(\mathbb{Z}^{n}\ltimes H_{2n+2,D})\,.\] (4.21)
* Finally, if \(n_{\hat{\gamma}}=0\) for all \(\hat{\gamma}\in\Lambda^{+}\), then in the previous statements we can replace \(\mathbb{Z}^{n}\) and \(H_{2n+2,D}\) by \(\mathbb{R}^{n}\) and \(H_{2n+2}\), respectively. If furthermore we take \(\chi=c_{\ell}=0\) and \(n_{\hat{\gamma}}=0\) for all \(\hat{\gamma}\in\Lambda^{+}\), then we return to the tree-level q-map space case, where there is a connected \(3n+6\) dimensional Lie group \(G\) acting by isometries on \((\widetilde{N},g_{\overline{N}})\), see [13, Theorem 3.17]. The group \(G\) in particular contains the S-duality action by \(\mathrm{SL}(2,\mathbb{R})\), an action by \(\mathbb{R}^{n}\ltimes H_{2n+2}\), and a dilation action by \(\mathbb{R}_{>0}\), cf. (4.1).
Proof.: The first statement of the theorem then follows from Lemma 4.6, Proposition 4.5, Proposition 4.4, and the fact that the action of \(T\) and \(H_{2n+2,D}\) generate the action of \(\mathrm{Heis}_{2n+3,D}\). To check the second statement, notice that by Theorem 3.23, there is a non-empty open S-invariant subset \(\widetilde{N}_{S}\subset\widetilde{N}\) where \(S\) acts by isometries on \((\widetilde{N}_{S},g_{\overline{N}})\). This subset is characterized in Theorem 3.23 by
\[\widetilde{N}_{S}=\{(\rho,z^{a},\zeta^{i},\widetilde{\zeta}_{i},\sigma)\in\widetilde{N}\mid\epsilon<\tau_{2},\ \ \epsilon<\frac{\tau_{2}}{|\tau|^{2}},\ \ t^{a}>K,\ \ |\tau|t^{a}>K\}\,. \tag{4.22}\]
In particular, using that the action of \(\mathbb{Z}^{n}\) and \(H_{2n+2,D}\) leave \(\tau_{1}\), \(\tau_{2}\) and \(t^{a}\) invariant, it follows that \(\widetilde{N}_{S}\) carries an action by both groups. By the same proof as in Proposition 4.4 and Proposition 4.5, it follows that \(\mathbb{Z}^{n}\) and \(H_{2n+2,D}\) must act by isometries on \((\widetilde{N}_{S},g_{\overline{N}})\). The result then follows from Lemma 4.6. The last part of the second point and the final statement follow from Theorem 3.22 and the aforementioned propositions and lemma.
## 5 An example of full S-duality
As stated in Theorem 4.7, in order to guarantee that S-duality acts by isometries on the instanton corrected q-map metric, one needs to guarantee that the domain of the metric carries an action by the S-duality \(\mathrm{SL}(2,\mathbb{Z})\). Here we explicitly give an example where the full \(\mathrm{SL}(2,\mathbb{Z})\) action by isometries can be guaranteed. We start by specifying the initial data as in Section 3.1.
* The PSR manifold \((\mathcal{H},g_{\mathcal{H}})\) is specified by the cubic polynomial \(h:\mathbb{R}\to\mathbb{R}\) given by \[h(t)=\frac{t^{3}}{6}\,.\] (5.1) In particular \(\mathcal{H}\) just reduces to a point. The corresponding PSK manifold \((\overline{M}^{\mathrm{cl}},g_{\overline{M}^{\mathrm{cl}}})\) obtained via the r-map has domain \[\overline{M}^{\mathrm{cl}}=\mathbb{R}+\mathrm{i}\mathbb{R}_{>0}\mathcal{H}= \mathbb{R}+\mathrm{i}\mathbb{R}_{>0}\,,\] (5.2) with the corresponding CASK domain \((M^{\mathrm{cl}},\mathfrak{F}^{\mathrm{cl}})\) given by \[M^{\mathrm{cl}}=\{(Z^{0},Z^{1})=Z^{0}(1,z^{1})\in\mathbb{C}^{2}\mid Z^{0}\in \mathbb{C}^{\times},\ z^{1}\in\overline{M}^{\mathrm{cl}}\},\quad\mathfrak{F} ^{\mathrm{cl}}=-\frac{1}{6}\frac{(Z^{1})^{3}}{Z^{0}}\,.\] (5.3) Furthermore we have \(\Lambda^{+}=\mathrm{span}_{\mathbb{Z}_{>0}}\{\gamma^{1}\}\) with the prepotential \(\mathfrak{F}\) given by \[\mathfrak{F}=-\frac{1}{6}\frac{(Z^{1})^{3}}{Z^{0}}+\chi\frac{(Z^{0})^{2}\zeta (3)}{2(2\pi\mathrm{i})^{3}}\,,\quad\chi\in\mathbb{Z}_{>0}.\] (5.4) Notice that we are restricting \(\chi\) to be positive and we take \(n_{\hat{\gamma}}=0\) for all \(\hat{\gamma}\in\Lambda^{+}\).
* Since \(n_{\hat{\gamma}}=0\) for all \(\hat{\gamma}\in\Lambda^{+}\), we find that \(M^{\mathrm{cl}}=M^{q}\), where \(M^{q}\) was defined in (3.6). \(M\) is then the maximal open subset of \(M^{\mathrm{cl}}\) where \(\mathrm{Im}(\tau)\) has signature \((1,1)\) and \(\mathrm{Im}(\tau_{ij})Z^{i}\overline{Z}^{j}<0\). For our simple example, we can describe \(M\) explicitly. Setting \(z^{1}=b+\mathrm{i}t\), we have \[\mathrm{Im}(\tau_{00})=\frac{t^{3}}{3}-b^{2}t+\frac{\chi\zeta(3)}{(2\pi)^{3}},\quad\mathrm{Im}(\tau_{01})=bt,\quad\mathrm{Im}(\tau_{11})=-t,\] (5.5) so that \[\mathrm{Det}(\mathrm{Im}(\tau))=-\left(\frac{t^{4}}{3}+\frac{t\chi\zeta(3)}{(2\pi)^{3}}\right),\quad\mathrm{Im}(\tau_{ij})Z^{i}\overline{Z}^{j}=|Z^{0}|^{2}\left(-\frac{2t^{3}}{3}+\frac{\chi\zeta(3)}{(2\pi)^{3}}\right)\,.\] (5.6) Since \(\chi>0\) and \(t>0\), we have \(\mathrm{Det}(\mathrm{Im}(\tau))<0\), so the \(2\times 2\) matrix \(\mathrm{Im}(\tau)\) automatically has signature \((1,1)\), and \(M\) is then given by \[M=\{(Z^{0},Z^{1})=Z^{0}(1,z^{1})\in\mathbb{C}^{2}\mid Z^{0}\in\mathbb{C}^{\times},\ z^{1}\in\overline{M}^{\mathrm{cl}},\ \ \frac{t^{3}}{3}>\frac{\chi\zeta(3)}{2(2\pi)^{3}}\}\,.\] (5.7) The prepotential \(\mathfrak{F}\) induces on \(M\) the structure of a CASK manifold fibering over the complete PSK manifold \(\overline{M}=\{z^{1}\in\overline{M}^{\mathrm{cl}}\mid\frac{t^{3}}{3}>\frac{\chi\zeta(3)}{2(2\pi)^{3}}\}\) with the Kähler potential \(-\log(h(t)-\frac{\chi\zeta(3)}{4(2\pi)^{3}})\), as follows from the general results of [1]. The completeness follows from [1, Theorem 6.2] due to our assumption \(\chi>0\). For \(\chi<0\) the PSK metric is incomplete [1, Remark 5.7]. (The expressions (5.5) and (5.6) are verified symbolically in the sketch after this list.)
* Over \(M\), we consider the trivial local system \(\Gamma\to M\) with global sections \((\gamma^{0},\gamma^{1},\widetilde{\gamma}_{0},\widetilde{\gamma}_{1})\) defined by the CASK structure (recall Section 2.1.3), together with the canonical central charge \(Z:M\to\Gamma^{*}\otimes\mathbb{C}\) given by \(Z_{\gamma^{i}}=Z^{i}\), \(Z_{\widetilde{\gamma}_{i}}=-\frac{\partial\mathfrak{F}}{\partial Z^{i}}\) for \(i=0,1\). The BPS indices are given by \[\begin{cases}\Omega(q_{0}\gamma^{0})=-\chi,\quad q_{0}\neq 0\\ \Omega(\gamma)=0\quad\mathrm{else},\end{cases}\] (5.8) which has the required structure from Section 3.1.
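As announced above, the following sympy sketch verifies the formulas (5.5) and (5.6) directly from the prepotential (5.4); since \(\tau_{ij}\) is homogeneous of degree zero, it suffices to evaluate at \(Z^{0}=1\), \(Z^{1}=b+\mathrm{i}t\).

```python
# Symbolic verification of (5.5) and (5.6) from the prepotential (5.4).
import sympy as sp

b, t, chi = sp.symbols('b t chi', real=True, positive=True)
Z0, Z1 = sp.symbols('Z0 Z1')
z3 = sp.zeta(3)

F = -sp.Rational(1, 6)*Z1**3/Z0 + chi*Z0**2*z3/(2*(2*sp.pi*sp.I)**3)
tau = sp.Matrix(2, 2, lambda i, j: sp.diff(F, [Z0, Z1][i], [Z0, Z1][j]))
tau = tau.subs({Z0: 1, Z1: b + sp.I*t})
im = tau.applyfunc(lambda e: sp.im(sp.expand(e)))

assert sp.simplify(im[0, 0] - (t**3/3 - b**2*t + chi*z3/(2*sp.pi)**3)) == 0
assert sp.simplify(im[0, 1] - b*t) == 0
assert sp.simplify(im[1, 1] + t) == 0
assert sp.simplify(im.det() + t**4/3 + t*chi*z3/(2*sp.pi)**3) == 0
print("(5.5) and (5.6) verified")
```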
We would now like to study the tensor \(T\) from (2.6) determining the domain of definition \(N\) of the instanton corrected HK structure, together with the functions \(f\), \(f_{3}\) and \(g_{N}(V,V)\) from Section 2.1.2, which determine the domain of the instanton corrected QK metric as well as its signature.
Recalling that \(Z^{1}/Z^{0}=z^{1}=b+{\rm i}t\), \(|Z^{0}|=\tau_{2}\), \(\zeta^{0}=\tau_{1}\), and \(\tau=\tau_{1}+{\rm i}\tau_{2}\), we consider the open set \(N\subset M^{\rm cl}\times\mathbb{R}^{4}\) defined by
\[N:=\{(Z^{0},Z^{1},\zeta^{0},\zeta^{1},\widetilde{\zeta}_{0},\widetilde{\zeta}_ {1})\in M^{\rm cl}\times\mathbb{R}^{4}\mid R_{1}(t,\tau)>0,\ \ R_{2}(t,\tau)>0\,\} \tag{5.9}\]
where
\[\begin{split} R_{1}(t,\tau)&:=\frac{t^{3}}{3}-\frac{\chi}{4(2\pi)^{3}}\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{1}{|m\tau+n|^{3}}\\ R_{2}(t,\tau)&:=\frac{t^{3}}{3}-\frac{3\chi}{4(2\pi)^{3}}\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{1}{|m\tau+n|^{3}}-\frac{(3\chi)^{2}}{(4\pi)^{6}}\left(\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{1}{|m\tau+n|^{3}}\right)^{2}(R_{1})^{-1}\,.\end{split} \tag{5.10}\]
The origin of these expressions will become clear with the following results: they will guarantee that the domain of the resulting QK metric carries an action of the S-duality \(\mathrm{SL}(2,\mathbb{Z})\) and that the metric is positive definite. We remark that \(N\) is non-empty, since for a fixed \(\tau\) the inequalities clearly hold for \(t\) sufficiently big. Moreover, \(R_{1}(t,\tau)>0\) already implies that \((Z^{0},Z^{1})\in M\), since (using that \(\chi>0\))
\[\frac{\chi}{4(2\pi)^{3}}\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{1}{|m\tau+n| ^{3}}>\frac{\chi}{4(2\pi)^{3}}\sum_{n\in\mathbb{Z}-\{0\}}\frac{1}{|n|^{3}}= \frac{\chi\zeta(3)}{2(2\pi)^{3}}. \tag{5.11}\]
**Proposition 5.1**.: The instanton corrected HK metric associated to the previous data as in Section 2.1.1 is well defined on \(N\) and of signature \((4,4)\).
Proof.: The tensor determining the domain of definition of the HK geometry given in (2.6) (compare (2.20)) reduces in our case to:
\[T=-\mathrm{Im}(\tau_{ij})\mathrm{d}Z^{i}\mathrm{d}\overline{Z}^{j}+\frac{\chi}{2\pi}\sum_{q_{0}\in\mathbb{Z}-\{0\}}\sum_{n>0}e^{-2\pi\mathrm{i}nq_{0}\zeta^{0}}q_{0}^{2}K_{0}(2\pi n\tau_{2}|q_{0}|)|\mathrm{d}Z^{0}|^{2}\,. \tag{5.12}\]
We see that only the \(T_{0\overline{0}}\) component of this tensor receives corrections due to BPS indices, while \(T_{0\overline{1}}=T_{1\overline{0}}=-\mathrm{Im}(\tau_{01})\) and \(T_{1\overline{1}}=-\mathrm{Im}(\tau_{11})\). The key thing is that we can Poisson resum \(T_{0\overline{0}}\) using Lemma 3.14 as follows
\[\begin{split} T_{0\overline{0}}&=-\mathrm{Im}(\tau _{00})-\frac{\chi}{4(2\pi)^{3}}\partial_{\zeta^{0}}^{2}\mathcal{I}_{0}^{(2)} \\ &=-\frac{t^{3}}{3}+b^{2}t-\frac{\chi}{2(2\pi)^{3}}\sum_{n\neq 0} \frac{1}{n^{2}|n|}-\frac{\chi}{4(2\pi)^{3}}\partial_{\zeta^{0}}^{2}\mathcal{I} _{0}^{(2)}\\ &=-\frac{t^{3}}{3}+b^{2}t-\frac{\chi}{4(2\pi)^{3}}\sum_{(m,n)\in \mathbb{Z}^{2}-(0,0)}\left(\frac{3(m\tau_{1}+n)^{2}}{|m\tau+n|^{5}}-\frac{1}{| m\tau+n|^{3}}\right)\,.\end{split} \tag{5.13}\]
From this computation, we find that the determinant along the horizontal directions gives
\[\mathrm{Det}(T)=T_{0\overline{0}}T_{1\overline{1}}-(T_{0\overline{1}})^{2}=- \frac{t^{4}}{3}-\frac{t\chi}{4(2\pi)^{3}}\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)} \left(\frac{3(m\tau_{1}+n)^{2}}{|m\tau+n|^{5}}-\frac{1}{|m\tau+n|^{3}}\right)\,. \tag{5.14}\]
Since \(\chi>0\) and \(t>0\), we therefore have
\[\frac{t\chi}{4(2\pi)^{3}}\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{3(m\tau_{1}+ n)^{2}}{|m\tau+n|^{5}}>0 \tag{5.15}\]
and hence on the points of \(N\) we have
\[\mathrm{Det}(T)(p)<-\frac{t^{4}}{3}+\frac{t\chi}{4(2\pi)^{3}}\sum_{(m,n)\in \mathbb{Z}^{2}-(0,0)}\frac{1}{|m\tau+n|^{3}}=-t\cdot R_{1}(t,\tau)<0\,. \tag{5.16}\]
This shows that the tensor \(T\) is horizontally non-degenerate on \(N\), and hence by Theorem 2.4 we have that the instanton corrected HK metric \(g_{N}\) is well defined on \(N\). On the other hand by [13, Equation 3.38], if the signature of the real matrix \((T_{i\overline{j}})\) is \((n,m)\) (called \(M_{ij}\) in [13]), then the signature of \(g_{N}\) is \((4n,4m)\). In our case \((T_{i\overline{j}})\) is a \(2\times 2\) matrix, and since \(\mathrm{Det}(T)<0\), we must have that \((T_{i\overline{j}})\) has signature \((1,1)\). It then follows that the signature of \(g_{N}\) is \((4,4)\).
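The Poisson resummation (5.13) underlying this proof can also be tested numerically; the sketch below compares the Bessel-sum form of \(T_{0\overline{0}}\) from (5.12) with the lattice-sum form at one sample point. The sample point, cutoffs and tolerance are ad hoc choices.

```python
# Numerical comparison of the two expressions for T_{0 0bar} in (5.13).
import numpy as np
from scipy.special import kv, zeta

chi, b, t = 2.0, 0.3, 1.5
tau1, tau2 = 0.3, 1.1

# Bessel-sum form from (5.12): -Im(tau_00) + (chi/2pi) * sum.
im_tau00 = t**3/3 - b**2*t + chi*zeta(3)/(2*np.pi)**3
bessel = sum((np.exp(-2j*np.pi*n*q0*tau1) * q0**2
              * kv(0, 2*np.pi*n*tau2*abs(q0))).real
             for q0 in range(-10, 11) if q0 != 0 for n in range(1, 11))
lhs = -im_tau00 + chi/(2*np.pi)*bessel

# Lattice-sum form: -t^3/3 + b^2 t - (chi/4(2pi)^3) * sum'(...).
M = 600
m, n = np.meshgrid(np.arange(-M, M + 1), np.arange(-M, M + 1), indexing='ij')
mask = (m != 0) | (n != 0)
u = m*tau1 + n
r2 = np.where(mask, u**2 + (m*tau2)**2, 1.0)
s = np.where(mask, 3*u**2/r2**2.5 - 1/r2**1.5, 0.0).sum()
rhs = -t**3/3 + b**2*t - chi/(4*(2*np.pi)**3)*s

assert abs(lhs - rhs) < 1e-4
print(f"(5.13): {lhs:.8f} vs {rhs:.8f}")
```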
**Proposition 5.2**.: If \(f\), \(f_{3}\) and \(g_{N}(V,V)\) are the functions on \(N\) defined in Section 2.1.2, and the 1-loop parameter is taken to be \(c_{\ell}=\frac{\chi}{192\pi}\), then on \(N\) we have
\[f>0,\quad f_{3}<0,\quad g_{N}(V,V)>0\,. \tag{5.17}\]
Proof.: Using (3.59) we obtain the following expressions for \(f\) in our case:
\[f =2\pi\tau_{2}^{2}\left(\frac{2t^{3}}{3}\right)-\frac{\tau_{2}^{2}\chi}{2(2\pi)^{3}}\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{1}{|m\tau+n|^{3}} \tag{5.18}\] \[=4\pi\tau_{2}^{2}\left(\frac{t^{3}}{3}-\frac{\chi}{8(2\pi)^{3}}\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{1}{|m\tau+n|^{3}}\right)\] \[>4\pi\tau_{2}^{2}R_{1}(t,\tau).\]
So it follows that \(f>0\) on \(N\).
On the other hand, we remark that if we show that \(f_{3}<0\) on \(N\), then \(g_{N}(V,V)>0\) follows due to the relation \(g_{N}(V,V)=2(f-f_{3})\) and the fact that we showed that \(f>0\) on \(N\).
To study \(f_{3}\) we need to study the expression \(g_{N}(V,V)\). Using [25, Equation 3.35] adapted to our conventions gives the following expression for \(g_{N}\):
\[g_{N}=2\pi\left(T_{i\overline{j}}\mathrm{d}Z^{i}\mathrm{d}\overline{Z}^{j}+( W_{i}+W_{i}^{\mathrm{inst}})T^{i\overline{j}}(\overline{W}_{j}+\overline{W}_{j}^{ \mathrm{inst}})\right), \tag{5.19}\]
where (using the notation \(\gamma=q_{i}(\gamma)\gamma^{i}\))
\[W_{i}=\mathrm{d}\zeta_{\widetilde{\gamma}_{i}}+\tau_{ij}\mathrm{d}\zeta_{\gamma^{j}},\quad W_{i}^{\mathrm{inst}}=-\sum_{\gamma}\Omega(\gamma)q_{i}(\gamma)\left(A_{\gamma}^{\mathrm{inst}}-\mathrm{i}V_{\gamma}^{\mathrm{inst}}\mathrm{d}\zeta_{\gamma}\right)\,. \tag{5.20}\]
Using that \(V=2\mathrm{i}(Z^{i}\partial_{Z^{i}}-\overline{Z}^{i}\partial_{\overline{Z}^{i}})\) we can compute the evaluation of \(g_{N}(V,V)\), which in our case gives
\[g_{N}(V,V)= 8\pi T_{i\overline{j}}Z^{i}\overline{Z}^{j}+2\pi|W_{0}^{\mathrm{ inst}}(V)|^{2}T^{0\overline{0}} \tag{5.21}\] \[=8\pi\tau_{2}^{2}\left(\frac{2t^{3}}{3}-\frac{\chi}{4(2\pi)^{3}} \sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\left(\frac{3(m\tau_{1}+n)^{2}}{|m\tau+n|^{ 5}}-\frac{1}{|m\tau+n|^{3}}\right)\right)+2\pi|W_{0}^{\mathrm{inst}}(V)|^{2}T^ {0\overline{0}}\]
where (using (3.40) and Lemma 3.14 for the expression of \(W_{0}^{\mathrm{inst}}\))
\[T^{0\overline{0}} =\frac{t}{\mathrm{Det}(T)}<0, \tag{5.22}\] \[W_{0}^{\mathrm{inst}}(V) =-\frac{\mathrm{i}\chi}{\pi}\tau_{2}\sum_{q_{0}\neq 0}\sum_{n>0}e^{2\pi\mathrm{i}n\zeta_{q_{0}\gamma^{0}}}q_{0}|q_{0}|K_{1}(2\pi n\tau_{2}|q_{0}|)\] \[=-\frac{\chi\tau_{2}}{2(2\pi)^{3}}\partial_{\tau_{1}}\partial_{\tau_{2}}\mathcal{I}_{0}^{(2)}\] \[=-\frac{3\chi\tau_{2}^{2}}{2(2\pi)^{3}}\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{(m\tau_{1}+n)m}{|m\tau+n|^{5}}\,.\]
In the following, it will be convenient to find a lower bound for the negative term \(|W_{0}^{\mathrm{inst}}(V)|^{2}T^{0\overline{0}}\). We have
\[\begin{split} 2\pi|W_{0}^{\mathrm{inst}}(V)|^{2}T^{0\overline{0}} &=2\pi\left|\frac{3\chi\tau_{2}^{2}}{2(2\pi)^{3}}\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{(m\tau_{1}+n)m}{|m\tau+n|^{5}}\right|^{2}\frac{t}{\mathrm{Det}(T)}\\ &>2\pi\left(\frac{3\chi\tau_{2}}{2(2\pi)^{3}}\right)^{2}\left|\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{(m\tau_{1}+n)m\tau_{2}}{|m\tau+n|^{5}}\right|^{2}\left(-R_{1}(t,\tau)\right)^{-1}\\ &>2\pi\left(\frac{3\chi\tau_{2}}{2(2\pi)^{3}}\right)^{2}\left(\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{|m\tau_{1}+n||m|\tau_{2}}{|m\tau+n|^{5}}\right)^{2}\left(-R_{1}(t,\tau)\right)^{-1}\\ &>2\pi\left(\frac{3\chi\tau_{2}}{2(2\pi)^{3}}\right)^{2}\left(\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{(m\tau_{1}+n)^{2}+(m\tau_{2})^{2}}{2|m\tau+n|^{5}}\right)^{2}\left(-R_{1}(t,\tau)\right)^{-1}\\ &=16\pi\tau_{2}^{2}\frac{(3\chi)^{2}}{2(4\pi)^{6}}\left(\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{1}{|m\tau+n|^{3}}\right)^{2}\left(-R_{1}(t,\tau)\right)^{-1}\end{split} \tag{5.23}\]
where in the first inequality we have used that \(\operatorname{Det}(T)<-t\cdot R_{1}<0\); in the second we used the triangle inequality, and in the last one we have used that \(\sqrt{xy}\leq(x+y)/2\) for \(x=(m\tau_{1}+n)^{2}\) and \(y=(m\tau_{2})^{2}\).
Using (5.23) and (5.21), we then find that \(f_{3}=f-\frac{1}{2}g_{N}(V,V)\) gives
\[\begin{split} f_{3}&=-4\pi\tau_{2}^{2}\left(\frac{t^{3}}{3}-\frac{\chi}{4(2\pi)^{3}}\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\left(\frac{3(m\tau_{1}+n)^{2}}{|m\tau+n|^{5}}-\frac{3}{2|m\tau+n|^{3}}\right)\right)-\pi|W_{0}^{\text{inst}}(V)|^{2}T^{0\overline{0}}\\ &<-4\pi\tau_{2}^{2}\left(\frac{t^{3}}{3}-\frac{3\chi}{4(2\pi)^{3}}\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{(m\tau_{1}+n)^{2}}{|m\tau+n|^{5}}\right)-\pi|W_{0}^{\text{inst}}(V)|^{2}T^{0\overline{0}}\\ &<-4\pi\tau_{2}^{2}\left(\frac{t^{3}}{3}-\frac{3\chi}{4(2\pi)^{3}}\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{1}{|m\tau+n|^{3}}\right)-\pi|W_{0}^{\text{inst}}(V)|^{2}T^{0\overline{0}}\\ &<-4\pi\tau_{2}^{2}\left(\frac{t^{3}}{3}-\frac{3\chi}{4(2\pi)^{3}}\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{1}{|m\tau+n|^{3}}\right)+8\pi\tau_{2}^{2}\frac{(3\chi)^{2}}{2^{7}(2\pi)^{6}}\left(\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{1}{|m\tau+n|^{3}}\right)^{2}\left(R_{1}(t,\tau)\right)^{-1}\\ &=-4\pi\tau_{2}^{2}R_{2}(t,\tau)<0\,. \end{split} \tag{5.24}\]
The required inequalities therefore hold on \(N\).
We then obtain as a corollary of Propositions 5.1, 5.2, and Theorem 2.12 the following:
**Corollary 5.3**.: The instanton corrected q-map metric \(g_{\overline{N}}\) associated to the prepotential (5.4), BPS indices (5.8) and with 1-loop constant \(c_{\ell}=\frac{\chi}{192\pi}\) is defined and positive definite on
\[\widetilde{N}:=\{(\tau_{2},b+\mathrm{i}t,\tau_{1},\zeta^{1},\widetilde{\zeta}_ {0},\widetilde{\zeta}_{1},\sigma)\in\mathbb{R}_{>0}\times\overline{M}^{\text{ cl}}\times\mathbb{R}^{4}\times\mathbb{R}\mid R_{1}(t,\tau)>0,\quad R_{2}(t, \tau)>0\,\}. \tag{5.25}\]
Proof.: Because of Propositions 5.1 and 5.2, we can take \(N^{\prime}=N\), where \(N^{\prime}\) is defined in (2.16). It then follows from Theorem 2.12 that \(g_{\overline{N}}\) from (2.28) is defined and positive definite on \(\widetilde{N}\) from (5.25).
The most important thing about \(\widetilde{N}\) is the following:
**Proposition 5.4**.: \(\widetilde{N}\) carries an action of S-duality.
Proof.: Since the constraints in (5.25) involve only \(t\) and \(\tau\), we focus on these variables. Under the action of \(S\in\operatorname{SL}(2,\mathbb{Z})\) we have
\[t\to|\tau|t,\quad\tau\to-1/\tau\,, \tag{5.26}\]
while under the action of \(T\in\operatorname{SL}(2,\mathbb{Z})\) we have
\[t\to t,\quad\tau\to\tau+1\,. \tag{5.27}\]
From the definition of \(R_{1}\) and \(R_{2}\), it follows immediately that they are invariant under \(T\), and under \(S\) they satisfy
\[R_{i}(|\tau|t,-1/\tau)=|\tau|^{3}R_{i}(t,\tau),\quad i=1,2 \tag{5.28}\]
so we conclude that \(\widetilde{N}\) is invariant under \(T\) and \(S\). Since \(S\) and \(T\) generate \(\operatorname{SL}(2,\mathbb{Z})\), it then follows that \(\widetilde{N}\) is invariant under S-duality.
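The transformation property (5.28) is straightforward to test numerically. In the sketch below the lattice sums are truncated to the square \(|m|,|n|\leq M\); since this square is mapped to itself by \((m,n)\mapsto(n,-m)\), the truncated sums satisfy (5.28) exactly up to floating-point rounding. The parameter values are ad hoc.

```python
# Numerical check of (5.28): R_i(|tau| t, -1/tau) = |tau|^3 R_i(t, tau).
import numpy as np

chi, M = 2.0, 400
m, n = np.meshgrid(np.arange(-M, M + 1), np.arange(-M, M + 1), indexing='ij')
mask = (m != 0) | (n != 0)

def E3(tau):  # truncated sum' over (m, n) of 1/|m tau + n|^3
    r = np.where(mask, np.abs(m*tau + n), 1.0)
    return np.where(mask, 1.0/r**3, 0.0).sum()

def R1(t, tau):
    return t**3/3 - chi/(4*(2*np.pi)**3)*E3(tau)

def R2(t, tau):
    c = E3(tau)
    return (t**3/3 - 3*chi/(4*(2*np.pi)**3)*c
            - (3*chi)**2/(4*np.pi)**6 * c**2 / R1(t, tau))

t, tau = 1.4, 0.3 + 1.1j
for R in (R1, R2):
    lhs, rhs = R(abs(tau)*t, -1/tau), abs(tau)**3 * R(t, tau)
    assert abs(lhs - rhs) < 1e-8 * abs(rhs)
    print(f"{R.__name__}: {lhs:.10f} == {rhs:.10f}")
```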
**Corollary 5.5**.: The positive definite QK metric \((\widetilde{N},g_{\overline{N}})\) from Corollary 5.3 has an effective action by isometries of the group \(\operatorname{SL}(2,\mathbb{Z})\ltimes(\mathbb{R}\ltimes H_{4})\).
Proof.: By Proposition 5.4, \(\widetilde{N}\) carries an action by \(\operatorname{SL}(2,\mathbb{Z})\). Furthermore, since the definition of \(\widetilde{N}\) in (5.25) only constrains \(\tau\) and \(t\), and the action of \(\mathbb{R}\ltimes H_{4}\) given in Section 4 acts trivially on \(\tau\) and \(t\), \(\widetilde{N}\) must carry an action of the group \(\mathbb{R}\ltimes H_{4}\). It then follows from Theorem 4.7 that \((\widetilde{N},g_{\overline{N}})\) carries an action by isometries of the groups \(\operatorname{SL}(2,\mathbb{Z})\) and \(\mathbb{R}\ltimes H_{4}\), and hence of \(\operatorname{SL}(2,\mathbb{Z})\ltimes(\mathbb{R}\ltimes H_{4})\). We remark that we get \(\mathbb{R}\ltimes H_{4}\) instead of \(\mathbb{Z}\ltimes H_{4,D}\) since in our case we have \(n_{\hat{\gamma}}=0\) for all \(\hat{\gamma}\).
**Remark 5.6**.:
* Notice that the QK metric from Corollary 5.3 is incomplete. This follows from the fact that the functions \(R_{i}(t,\tau)\) defining \(\widetilde{N}\) in (5.25) are obtained from strict inequalities involving the conditions (see (5.16), (5.18), (5.24)) \[\operatorname{Det}(T)<0,\quad f>0,\quad f_{3}<0\,.\] (5.29) This means that \(g_{\overline{N}}\) extends to a positive definite metric on a bigger manifold containing (5.25), and hence cannot be complete. It might be that (5.29) is preserved by the S-duality action, but (except for \(f>0\)) this is not obvious from the formulas and the \(\operatorname{SL}(2,\mathbb{Z})\) action.
* We also note that one can find a smaller S-duality invariant open subset \(\widetilde{N}^{\prime}\subset\widetilde{N}\) defined by a single relation \(R(t,\tau)>0\). Namely, if we set \[R(t,\tau):=\frac{t^{3}}{3}-\frac{9\chi}{8(2\pi)^{3}}\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{1}{|m\tau+n|^{3}}\] (5.30) and define \[s(\tau):=\frac{\chi}{4(2\pi)^{3}}\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{1}{|m\tau+n|^{3}},\] (5.31) then we find that \(R(t,\tau)>0\) implies that \[R_{1}(t,\tau)=\frac{t^{3}}{3}-s(\tau)>\frac{t^{3}}{3}-\frac{9}{2}s(\tau)=R(t,\tau)>0,\] (5.32) and furthermore \[R_{1}(t,\tau)R_{2}(t,\tau) =\left(\frac{t^{3}}{3}-3s(\tau)\right)R_{1}-\frac{(3\chi)^{2}}{(4\pi)^{6}}\left(\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{1}{|m\tau+n|^{3}}\right)^{2}\] (5.33) \[>\left(\frac{t^{3}}{3}-3s(\tau)\right)^{2}-\frac{(3\chi)^{2}}{(4\pi)^{6}}\left(\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)}\frac{1}{|m\tau+n|^{3}}\right)^{2}\] \[=\left(\frac{t^{3}}{3}-\frac{9}{2}s(\tau)\right)\left(\frac{t^{3}}{3}-\frac{3}{2}s(\tau)\right)\] \[>R^{2}(t,\tau)>0\,.\] It then follows that \(R(t,\tau)>0\) also implies \(R_{2}(t,\tau)>0\).
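A quick numerical spot check of this implication (truncated sums, with ad hoc cutoff and sample points):

```python
# Spot check: at sampled points with R(t, tau) > 0, also R_1, R_2 > 0.
import numpy as np

chi, M = 2.0, 300
m, n = np.meshgrid(np.arange(-M, M + 1), np.arange(-M, M + 1), indexing='ij')
mask = (m != 0) | (n != 0)

def E3(tau):
    r = np.where(mask, np.abs(m*tau + n), 1.0)
    return np.where(mask, 1.0/r**3, 0.0).sum()

rng = np.random.default_rng(2)
hits = 0
for _ in range(20):
    tau = rng.uniform(-1, 1) + 1j*rng.uniform(0.3, 2.0)
    t = rng.uniform(0.1, 3.0)
    c = E3(tau)
    s = chi/(4*(2*np.pi)**3)*c
    if t**3/3 - 9*s/2 > 0:                      # R(t, tau) > 0
        hits += 1
        R1 = t**3/3 - s
        R2 = t**3/3 - 3*s - (3*chi)**2/(4*np.pi)**6 * c**2 / R1
        assert R1 > 0 and R2 > 0
print(f"checked {hits} sampled points with R > 0")
```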
**Theorem 5.7**.: Let \((\widetilde{N},g_{\overline{N}})\) be as in Corollary 5.3. Then:
* There is a free and properly discontinuous action by isometries of a discrete group of the form \(\operatorname{SL}(2,\mathbb{Z})^{\prime}\ltimes\Lambda\), where \(\Lambda\subset\mathbb{R}\ltimes H_{4}\) is a lattice, \(\operatorname{SL}(2,\mathbb{Z})^{\prime}\subset\operatorname{SL}(2,\mathbb{ Z})\) is a finite index subgroup and the QK manifold \((\widetilde{N}/(\operatorname{SL}(2,\mathbb{Z})^{\prime}\ltimes\Lambda),g_{ \overline{N}})\) has finite volume.
* Furthermore, there is a submanifold with boundary \(\hat{N}\subset\widetilde{N}\) where \(\operatorname{SL}(2,\mathbb{Z})^{\prime}\ltimes\Lambda\) acts and the quotient \((\hat{N}/(\operatorname{SL}(2,\mathbb{Z})^{\prime}\ltimes\Lambda),g_{ \overline{N}})\) gives a complete QK manifold with boundary and of finite volume.
Proof.: By using the same argument as in [14, Theorem 3.21], we can find a lattice \(\Lambda\subset\mathbb{R}\ltimes H_{4}\) such that \(\operatorname{SL}(2,\mathbb{Z})\ltimes\Lambda\) is a lattice of the group of isometries \(\operatorname{SL}(2,\mathbb{Z})\ltimes(\mathbb{R}\ltimes H_{4})\) (recall Corollary 5.5). On the other hand, by using the \(\operatorname{SL}(2,\mathbb{Z})\) action by isometries, we can assume that \(\tau\) lies in the usual fundamental domain \(\mathcal{F}_{\mathbb{H}}\) of the upper half plane \(\mathbb{H}\subset\mathbb{C}\) given by
\[\mathcal{F}_{\mathbb{H}}:=\{\tau\in\mathbb{H}\mid|\tau|>1,\ \ |\tau_{1}|<1/2\}\cup\{ \tau\in\mathbb{H}\mid|\tau|\geq 1,\ \ \tau_{1}=-\frac{1}{2}\}\cup\{\tau\in\mathbb{H}\mid|\tau|=1,\ \ -\frac{1}{2}<\tau_{1}\leq 0\}\,. \tag{5.34}\]
If we define
\[F:=\{(\tau_{2},b+\mathrm{i}t,\tau_{1},\zeta^{1},\widetilde{\zeta}_{0}, \widetilde{\zeta}_{1},\sigma)\in\widetilde{N}\mid\tau\in\mathcal{F}_{\mathbb{H }}\} \tag{5.35}\]
we can then think of the quotient \(\widetilde{N}/(\operatorname{SL}(2,\mathbb{Z})\ltimes\Lambda)\) as
\[\widetilde{N}/(\operatorname{SL}(2,\mathbb{Z})\ltimes\Lambda)=F/\Lambda\,. \tag{5.36}\]
Note that \(\Lambda\) actually acts on \(F\), since \(F\) is defined by restrictions on \(\tau\) and \(t\) and the latter are left invariant by the action of \(\mathbb{R}\ltimes H_{4}\). On \(F/\Lambda\) the coordinates \(b\), \(\tau_{1}\), \(\zeta^{1}\), \(\widetilde{\zeta}_{0}\), \(\widetilde{\zeta}_{1}\) and \(\sigma\) are periodic. Furthermore, \(\tau_{2}\) is bounded below by some positive constant (in fact, by \(\sqrt{3}/2>0\)), and the same holds for \(t\), since the relation \(R_{1}(t,\tau)>0\) defining \(\widetilde{N}\) implies
\[\frac{t^{3}}{3}>\frac{\chi}{4(2\pi)^{3}}\sum_{(m,n)\in\mathbb{Z}^{2}-(0,0)} \frac{1}{|m\tau+n|^{3}}>\frac{\chi}{4(2\pi)^{3}}\sum_{n\in\mathbb{Z}-\{0\}} \frac{1}{|n|^{3}}=\frac{\chi\zeta(3)}{2(2\pi)^{3}}>0. \tag{5.37}\]
On the other hand, \(F\) has a boundary where \(R_{1}(t,\tau)=0\) or \(R_{2}(t,\tau)=0\). Since we know that the metric admits an extension beyond this boundary (see Remark 5.6), it cannot cause \(F/\Lambda\) to have infinite volume. Furthermore, on the end of \(F/\Lambda\) where \(\tau_{2},t\to\infty\), the quantum correction terms in \(g_{\overline{N}}\) are dominated by the tree-level q-map metric \((\overline{\mathcal{N}}^{\mathrm{cl}}_{\mathrm{IIB}},g^{\mathrm{cl}})\) associated to the PSR manifold \(\mathcal{H}=\{\mathrm{point}\}\) (defined by \(h(t)=t^{3}/6\)). Namely, in terms of the components of the metrics \(g_{\overline{N}}\) and \(g^{\mathrm{cl}}\) we have \(g_{\overline{N},ij}\sim g_{ij}^{\mathrm{cl}}\) on this end (see footnote 6). We can therefore use the volume density of \(g^{\mathrm{cl}}\) to asymptotically approximate the volume density of \(g_{\overline{N}}\) on this end. But by [14, Theorem 3.21], we know that \(\overline{\mathcal{N}}^{\mathrm{cl}}_{\mathrm{IIB}}/(\operatorname{SL}(2,\mathbb{Z})\ltimes\Lambda)\) has finite volume with respect to \(g^{\mathrm{cl}}\), so that \(\widetilde{N}/(\operatorname{SL}(2,\mathbb{Z})\ltimes\Lambda)\subset\overline{\mathcal{N}}^{\mathrm{cl}}_{\mathrm{IIB}}/(\operatorname{SL}(2,\mathbb{Z})\ltimes\Lambda)\) also has finite volume with respect to \(g^{\mathrm{cl}}\). It follows that the end of \(F/\Lambda\) where \(\tau_{2},t\to\infty\) must give a finite volume contribution with respect to \(g_{\overline{N}}\). Hence, \((\widetilde{N}/(\operatorname{SL}(2,\mathbb{Z})\ltimes\Lambda),g_{\overline{N}})\) has finite volume.
Footnote 6: This can be checked by looking at the \(t\) and \(\tau_{2}\) dependence of the quantum corrections. In our example, we only have the analog of a perturbative world-sheet correction encoded in the \(\chi\) term of \(\mathfrak{F}\), the analog of the perturbative 1-loop correction encoded in \(c_{\ell}=\chi/192\pi\), and the analog of D(-1)-instanton corrections encoded in terms with \(\Omega(q_{0}\gamma^{0})=-\chi\). The D(-1) corrections are exponentially suppressed as \(\tau_{2}\to\infty\), while the perturbative corrections can be safely ignored when \(\tau_{2},t\to\infty\), since these are dominated by the classical terms of the tree-level q-map metric.
The action of \(\operatorname{SL}(2,\mathbb{Z})\ltimes\Lambda\) on \(\widetilde{N}\) is easily seen to be properly discontinuous; however, it is not free, since the points satisfying \(\tau=\mathrm{i}\), \(b=\tau_{1}=c^{1}=c_{0}=\psi=0\) are fixed by the subgroup \(\langle S\rangle=\mathbb{Z}/4\mathbb{Z}\subset\operatorname{SL}(2,\mathbb{Z})\). We can nevertheless pick a finite index subgroup \(\operatorname{SL}(2,\mathbb{Z})^{\prime}\subset\operatorname{SL}(2,\mathbb{Z})\) that acts freely by choosing a finite index subgroup that intersects \(\langle S\rangle\) only at the identity (for example by picking the kernel of the homomorphism \(\operatorname{SL}(2,\mathbb{Z})\to\operatorname{SL}(2,\mathbb{Z}/N\mathbb{Z})\) reducing the entries modulo \(N\) for \(N\geq 3\)). We therefore get a free, properly discontinuous action of \(\operatorname{SL}(2,\mathbb{Z})^{\prime}\ltimes\Lambda\) on \(\widetilde{N}\). Since \((\widetilde{N}/(\operatorname{SL}(2,\mathbb{Z})\ltimes\Lambda),g_{\overline{N}})\) has finite volume and \(\operatorname{SL}(2,\mathbb{Z})^{\prime}\subset\operatorname{SL}(2,\mathbb{Z})\) is a finite index subgroup, it follows that \((\widetilde{N}/(\operatorname{SL}(2,\mathbb{Z})^{\prime}\ltimes\Lambda),g_{\overline{N}})\) is a QK manifold of finite volume.
We now prove the last statement. We define \(\hat{N}\) by
\[\hat{N}:=\{(\tau_{2},b+\mathrm{i}t,\tau_{1},\zeta^{1},\widetilde{\zeta}_{0}, \widetilde{\zeta}_{1},\sigma)\in\widetilde{N}\mid R(t,\tau)\geq 0\ \}, \tag{5.38}\]
where \(\widetilde{N}\) is as in Corollary 5.3 and \(R(t,\tau)\) was defined in (5.30). It is easy to check that the function \(R(t,\tau)\) is regular on the level set \(R(t,\tau)=0\) (for example, by using that \(t>0\) on \(\widetilde{N}\), due to (5.37)), so that \(\hat{N}\) is a smooth manifold with boundary.
By the same argument as in Proposition 5.4 and Corollary 5.5, it follows that the action of \(\operatorname{SL}(2,\mathbb{Z})\ltimes(\mathbb{R}\ltimes H_{4})\) on \(\widetilde{N}\) restricts to \(\hat{N}\). Furthermore, since \((\widetilde{N}/(\operatorname{SL}(2,\mathbb{Z})^{\prime}\ltimes\Lambda),g_{\overline{N}})\) has finite volume, \((\hat{N}/(\operatorname{SL}(2,\mathbb{Z})^{\prime}\ltimes\Lambda),g_{\overline{N}})\) also has finite volume. Finally, to show that \((\hat{N}/(\operatorname{SL}(2,\mathbb{Z})^{\prime}\ltimes\Lambda),g_{\overline{N}})\) is complete, recall that
a Riemannian manifold with boundary is complete if it is a complete metric space with the induced distance function. This follows from the fact that it contains the boundary points satisfying \(R(t,\tau)=0\); the coordinates \(b\), \(\tau_{1}\), \(\zeta^{1}\), \(\widetilde{\zeta}_{0}\), \(\widetilde{\zeta}_{1}\), \(\sigma\) are periodic; and the metric in the end where \(\tau_{2},t\to\infty\) is complete, since it can be approximated by the complete tree-level q-map metric \(g^{\mathrm{cl}}\) (the completeness of \(g^{\mathrm{cl}}\) follows from [13, Theorem 27]).
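The freeness argument in the proof uses that \(\langle S\rangle\) meets the principal congruence subgroup trivially for \(N\geq 3\); this is immediate to check:

```python
# Check that <S> = Z/4Z injects into SL(2, Z/NZ) for N = 3, 4, 5,
# so that the principal congruence subgroup of level N meets <S> trivially.
import numpy as np

S = np.array([[0, -1], [1, 0]])
I = np.eye(2, dtype=int)
powers = [np.linalg.matrix_power(S, j) for j in range(4)]  # S^0, ..., S^3

for N in (3, 4, 5):
    trivial = [P for P in powers if np.array_equal(P % N, I)]
    assert len(trivial) == 1  # only S^0 = Id reduces to the identity mod N
print("<S> injects into SL(2, Z/NZ) for N = 3, 4, 5")
```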
We finish with the following remark about a related example associated to the resolved conifold.
**Remark 5.8**.: To the resolved conifold one can associate a natural holomorphic prepotential of the required form (3.4), where
\[\mathfrak{F}=-\frac{1}{6}\frac{(Z^{1})^{3}}{Z^{0}}+\chi\frac{(Z^{0})^{2}\zeta (3)}{2(2\pi\mathrm{i})^{3}}-\frac{(Z^{0})^{2}}{(2\pi\mathrm{i})^{3}}n_{\gamma ^{1}}\mathrm{Li}_{3}(e^{2\pi\mathrm{i}Z^{1}/Z^{0}})\,,\quad\chi=2,\quad n_{ \gamma^{1}}=1\,. \tag{5.39}\]
This form of the prepotential can be motivated by considering a certain extension of the Picard-Fuchs operators, see [12]. On the other hand, the BPS spectrum of the resolved conifold also has the required form (3.12), with
\[\begin{cases}\Omega(q_{0}\gamma^{0})=-\chi=-2,\quad q_{0}\in\mathbb{Z}-\{0\} \\ \Omega(q_{0}\gamma^{0}\pm\gamma^{1})=\Omega(\pm\gamma^{1})=n_{\gamma^{1}}=1 \quad\text{for $\gamma^{1}\in\Lambda^{+}$, $q_{0}\in\mathbb{Z}$}\\ \Omega(\gamma)=0\quad\text{else}.\end{cases} \tag{5.40}\]
We can therefore apply the construction of Section 3 and obtain an instanton corrected q-map space. We expect that an argument similar to the one in the example above can be carried out in order to find an S-duality invariant open subset of the domain of definition of the associated instanton corrected q-map space. As in Theorem 5.7, we also expect that it admits a quotient of finite volume. Furthermore, we remark that since the resolved conifold is a non-compact Calabi-Yau 3-fold without compact divisors, the possible quantum corrections in the type IIB string theory language simplify to just the world-sheet corrections and the 1-loop correction \(c_{\ell}\), together with the D(-1) and D1-instanton corrections. Since these are all accounted for in the above construction, one could expect that the resulting instanton corrected q-map space with its maximal domain of definition might be complete. We leave the study of this question for future work.
## Appendix A Integral identities in terms of Bessel functions
In this appendix we will prove the following lemma, relating certain integrals with infinite sums of Bessel functions. We recall that given \(\gamma=q_{i}\gamma^{i}\) and the functions \(\xi^{i}\) from (2.42), we use the notation from Section 2.2.2, where \(\xi_{\gamma}=q_{i}\xi^{i}=q_{i}(\zeta^{i}-\mathrm{i}R(t^{-1}z^{i}+t\overline{ z}^{i}))=-\zeta_{\gamma}-\mathrm{i}R(t^{-1}\widetilde{Z}_{\gamma}+t\overline{ \widetilde{Z}_{\gamma}})\).
**Lemma A.1**.: Consider the modified Bessel functions of the second kind \(K_{\nu}:\mathbb{R}_{>0}\to\mathbb{R}\), which have the following integral representation
\[K_{\nu}(x)=\int_{0}^{\infty}dt\exp(-x\cosh(t))\cosh(\nu t),\quad\nu=0,1,2,...\] (A.1)
Following the notation from Section 2.2.2, we have the following identities for \(\gamma=q_{i}\gamma^{i}\)
\[\int_{l_{\gamma}}\!\frac{d\zeta}{\zeta^{1-q}}\frac{\exp(2\pi\mathrm{i}\xi_{\gamma})}{1-\exp(2\pi\mathrm{i}\xi_{\gamma})}=\begin{cases}2\frac{|\widetilde{Z}_{\gamma}|^{2}}{\widetilde{Z}_{\gamma}^{2}}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}\left(K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)+\frac{K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)}{2\pi Rn|\widetilde{Z}_{\gamma}|}\right),\quad q=-2\\ -2\frac{|\widetilde{Z}_{\gamma}|}{\widetilde{Z}_{\gamma}}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|),\quad q=-1\\ 2\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|),\quad q=0\\ -2\frac{\widetilde{Z}_{\gamma}}{|\widetilde{Z}_{\gamma}|}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|),\quad q=1\\ 2\frac{\widetilde{Z}_{\gamma}^{2}}{|\widetilde{Z}_{\gamma}|^{2}}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}\left(K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)+\frac{K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)}{2\pi Rn|\widetilde{Z}_{\gamma}|}\right),\quad q=2\end{cases}\] (A.2)
and
\[\int_{l_{\gamma}}\frac{d\zeta}{\zeta^{1-q}}\log(1-\exp(2\pi\mathrm{i}\xi_{\gamma}))=\begin{cases}2\frac{|\widetilde{Z}_{\gamma}|}{\widetilde{Z}_{\gamma}}\sum_{n>0}\frac{e^{-2\pi\mathrm{i}n\zeta_{\gamma}}}{n}K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|),\quad q=-1\\ -2\sum_{n>0}\frac{e^{-2\pi\mathrm{i}n\zeta_{\gamma}}}{n}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|),\quad q=0\\ 2\frac{\widetilde{Z}_{\gamma}}{|\widetilde{Z}_{\gamma}|}\sum_{n>0}\frac{e^{-2\pi\mathrm{i}n\zeta_{\gamma}}}{n}K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|),\quad q=1\,.\end{cases}\] (A.3)
Proof.: We start by computing the integrals of the form
\[\int_{l_{\gamma}}\frac{d\zeta}{\zeta^{1-q}}\frac{\exp(2\pi\mathrm{i}\xi_{\gamma})} {1-\exp(2\pi\mathrm{i}\xi_{\gamma})},\quad q=-2,-1,0,1,2\,,\] (A.4)
where
\[l_{\gamma}=\{t\mid\widetilde{Z}_{\gamma}/t\in\mathbb{R}_{<0}\}\,.\] (A.5)
Setting \(\zeta=-s\widetilde{Z}_{\gamma}/|\widetilde{Z}_{\gamma}|\) for \(s\in(0,\infty)\), and using that \(|\exp(2\pi\mathrm{i}\xi_{\gamma})|\big|_{l_{\gamma}}<1\), we have
\[\int_{l_{\gamma}}\frac{d\zeta}{\zeta^{1-q}}\frac{\exp(2\pi\mathrm{i}\xi_{\gamma})}{1-\exp(2\pi\mathrm{i}\xi_{\gamma})} =(-1)^{q}\frac{\widetilde{Z}_{\gamma}^{q}}{|\widetilde{Z}_{\gamma}|^{q}}\int_{0}^{\infty}\frac{ds}{s^{1-q}}\frac{\exp(-2\pi R|\widetilde{Z}_{\gamma}|(s^{-1}+s)-2\pi\mathrm{i}\zeta_{\gamma})}{1-\exp(-2\pi R|\widetilde{Z}_{\gamma}|(s^{-1}+s)-2\pi\mathrm{i}\zeta_{\gamma})}\] \[=(-1)^{q}\frac{\widetilde{Z}_{\gamma}^{q}}{|\widetilde{Z}_{\gamma}|^{q}}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}\int_{0}^{\infty}\frac{ds}{s^{1-q}}\exp(-2\pi Rn|\widetilde{Z}_{\gamma}|(s^{-1}+s))\] \[=(-1)^{q}\frac{\widetilde{Z}_{\gamma}^{q}}{|\widetilde{Z}_{\gamma}|^{q}}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}\int_{-\infty}^{\infty}dx\,e^{qx}\exp(-4\pi Rn|\widetilde{Z}_{\gamma}|\cosh(x))\] \[=2(-1)^{q}\frac{\widetilde{Z}_{\gamma}^{q}}{|\widetilde{Z}_{\gamma}|^{q}}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}\int_{0}^{\infty}dx\,\exp(-4\pi Rn|\widetilde{Z}_{\gamma}|\cosh(x))\cosh(|q|x)\] \[=2(-1)^{q}\frac{\widetilde{Z}_{\gamma}^{q}}{|\widetilde{Z}_{\gamma}|^{q}}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}K_{|q|}(4\pi Rn|\widetilde{Z}_{\gamma}|)\,. \] (A.6)
Hence, we obtain (A.2), where for the case \(q=\pm 2\) we have used the identity \(K_{2}(x)=K_{0}(x)+2K_{1}(x)/x\).
We can similarly compute integrals of the form
\[\int_{l_{\gamma}}\frac{d\zeta}{\zeta^{1-q}}\log(1-\exp(2\pi\mathrm{i}\xi_{ \gamma})),\quad q=-1,0,1\] (A.7)
Indeed, since \(|\exp(2\pi\mathrm{i}\xi_{\gamma})|\big|_{l_{\gamma}}<1\), we can expand \(\log(1-x)=-\sum_{n>0}x^{n}/n\) and obtain
\[\int_{l_{\gamma}}\frac{d\zeta}{\zeta^{1-q}}\log(1-\exp(2\pi\mathrm{i}\xi_{\gamma})) =-\sum_{n>0}\frac{e^{-2\pi\mathrm{i}n\zeta_{\gamma}}}{n}\int_{l_{\gamma}}\frac{d\zeta}{\zeta^{1-q}}\exp(2\pi nR\widetilde{Z}_{\gamma}/\zeta+2\pi nR\overline{\widetilde{Z}}_{\gamma}\zeta)\] \[=(-1)^{q+1}\frac{\widetilde{Z}_{\gamma}^{q}}{|\widetilde{Z}_{\gamma}|^{q}}\sum_{n>0}\frac{e^{-2\pi\mathrm{i}n\zeta_{\gamma}}}{n}\int_{0}^{\infty}\frac{ds}{s^{1-q}}\exp(-2\pi Rn|\widetilde{Z}_{\gamma}|(s^{-1}+s))\] (A.8) \[=2(-1)^{q+1}\frac{\widetilde{Z}_{\gamma}^{q}}{|\widetilde{Z}_{\gamma}|^{q}}\sum_{n>0}\frac{e^{-2\pi\mathrm{i}n\zeta_{\gamma}}}{n}K_{|q|}(4\pi Rn|\widetilde{Z}_{\gamma}|)\]
so that we obtain (A.3).
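The Bessel-function facts used in this appendix — the integral representation (A.1), the substitution step \(\int_{0}^{\infty}s^{q-1}e^{-a(s+s^{-1})}\,ds=2K_{|q|}(2a)\) entering (A.6), and the identity \(K_{2}(x)=K_{0}(x)+2K_{1}(x)/x\) — can all be confirmed numerically; the sample values below are ad hoc.

```python
# Numerical confirmation of the Bessel identities used in Appendix A.
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

x, a = 1.7, 0.9

# Integral representation (A.1).
for nu in range(4):
    val, _ = quad(lambda u: np.exp(-x*np.cosh(u))*np.cosh(nu*u), 0, 30)
    assert abs(val - kv(nu, x)) < 1e-6

# Substitution step in (A.6): int_0^inf s^{q-1} e^{-a(s + 1/s)} ds = 2 K_{|q|}(2a).
for q in (-2, -1, 0, 1, 2):
    val, _ = quad(lambda s: s**(q - 1)*np.exp(-a*(s + 1/s)), 0, np.inf)
    assert abs(val - 2*kv(abs(q), 2*a)) < 1e-6

# Identity K_2(x) = K_0(x) + 2 K_1(x)/x used for the q = +-2 cases of (A.2).
assert abs(kv(2, x) - (kv(0, x) + 2*kv(1, x)/x)) < 1e-12
print("Bessel identities verified")
```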
## Appendix B Type IIA Darboux coordinates for instanton corrected c-map spaces
Here we give a proof of Theorem 2.19.
Proof.: We note that \(\widetilde{\xi}_{i}\) and \(\alpha\) can be written as \(\widetilde{\xi}_{i}=\widetilde{\xi}_{i}^{c}+\widetilde{\xi}_{i}^{\mathrm{inst}}\) and \(\alpha=\alpha^{c}+\alpha^{\mathrm{inst}}\), where the index \(c\) denotes the part of the coordinates that coincides with the c-map case of (2.42), and inst denotes the
terms involving \(\Omega(\gamma)\). We have a similar decomposition for \(f\) and \(\theta_{i}^{P}\) using (2.35), and hence for \(\lambda\) by (2.34). Because of Proposition 2.18 it is then enough to show that
\[f^{\rm inst}\frac{{\rm d}t}{t}+t^{-1}\theta_{+}^{P,{\rm inst}}|_{ \overline{N}}-2{\rm i}\theta_{3}^{P,{\rm inst}}|_{\overline{N}}+t\theta_{-}^{P,{\rm inst}}|_{\overline{N}}=-2\pi{\rm i}\left({\rm d}\alpha^{\rm inst}+ \widetilde{\xi}_{i}^{\rm inst}{\rm d}\xi^{i}-\xi^{i}{\rm d}\widetilde{\xi}_{i }^{\rm inst}\right)\,.\] (B.1)
We will check this by a direct computation.
First we compute \({\rm d}\alpha^{\rm inst}\). From the expression of \({\cal W}\) in (2.54) we compute \({\rm d}{\cal W}\):
\[\begin{split}{\rm d}{\cal W}=& R\sum_{\gamma}\Omega(\gamma)\sum_{n>0}\frac{e^{-2\pi{\rm i}n\zeta_{\gamma}}}{n}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{\widetilde{Z}_{\gamma}}{2(\rho+c_{\ell})}{\rm d}\rho+\frac{\widetilde{Z}_{\gamma}}{2}{\rm d}{\cal K}+{\rm d}\widetilde{Z}_{\gamma}\right)\\ &-2\pi{\rm i}R\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|){\rm d}\zeta_{\gamma}\\ &-2\pi R^{2}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{{\rm d}\rho}{\rho+c_{\ell}}+{\rm d}{\cal K}+\frac{{\rm d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}+\frac{{\rm d}\overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}\right)\end{split}\] (B.2)
where we have used on the last line that \((K_{0}(x))^{\prime}=-K_{1}(x)\).
On the other hand, we have
\[\begin{split}&{\rm d}\left(-\frac{1}{2\pi}\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}}\frac{{\rm d}\zeta}{\zeta}\frac{t+\zeta}{t-\zeta}{\rm L}(\exp(2\pi{\rm i}\xi_{\gamma}))\right)\\ =&-\frac{1}{2\pi}\sum_{\gamma}\Omega(\gamma){\rm d}t\int_{l_{\gamma}}\frac{{\rm d}\zeta}{\zeta}\partial_{t}\left(\frac{t+\zeta}{t-\zeta}\right){\rm L}(\exp(2\pi{\rm i}\xi_{\gamma}(\zeta)))\\ &-\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}}\frac{{\rm d}\zeta}{\zeta}\frac{t+\zeta}{t-\zeta}\left(-\frac{{\rm i}}{2}\log(1-\exp(2\pi{\rm i}\xi_{\gamma}))+\frac{1}{2}\frac{\exp(2\pi{\rm i}\xi_{\gamma})}{1-\exp(2\pi{\rm i}\xi_{\gamma})}2\pi\xi_{\gamma}\right){\rm d}\xi_{\gamma}(\zeta)\end{split}\] (B.3)
We can then join everything together to conclude that
\[\begin{split}-2\pi{\rm i}{\rm d}\alpha^{\rm inst}=&-\frac{1}{2\pi}\left(-t^{-2}{\cal W}+\overline{{\cal W}}\right){\rm d}t\\ &-t^{-1}\frac{R}{2\pi}\sum_{\gamma}\Omega(\gamma)\sum_{n>0}\frac{e^{-2\pi{\rm i}n\zeta_{\gamma}}}{n}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{\widetilde{Z}_{\gamma}}{2(\rho+c_{\ell})}{\rm d}\rho+\frac{\widetilde{Z}_{\gamma}}{2}{\rm d}{\cal K}+{\rm d}\widetilde{Z}_{\gamma}\right)\\ &+t^{-1}{\rm i}R\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|){\rm d}\zeta_{\gamma}\\ &+t^{-1}R^{2}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{{\rm d}\rho}{\rho+c_{\ell}}+{\rm d}{\cal K}+\frac{{\rm d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}+\frac{{\rm d}\overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}\right)\\ &+t\cdot\mbox{(complex conjugate of $t^{-1}$ term factor)}\\ &+\frac{1}{4\pi^{2}}\sum_{\gamma}\Omega(\gamma){\rm d}t\int_{l_{\gamma}}\frac{{\rm d}\zeta}{\zeta}\partial_{t}\left(\frac{t+\zeta}{t-\zeta}\right){\rm L}(\exp(2\pi{\rm i}\xi_{\gamma}(\zeta)))\\ &+\frac{1}{2\pi}\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}}\frac{{\rm d}\zeta}{\zeta}\frac{t+\zeta}{t-\zeta}\left(-\frac{{\rm i}}{2}\log(1-\exp(2\pi{\rm i}\xi_{\gamma}))+\frac{1}{2}\frac{\exp(2\pi{\rm i}\xi_{\gamma})}{1-\exp(2\pi{\rm i}\xi_{\gamma})}2\pi\xi_{\gamma}\right){\rm d}\xi_{\gamma}(\zeta)\end{split}\] (B.4)
On the other hand, for the terms \(\widetilde{\xi}_{i}^{\rm inst}{\rm d}\xi^{i}-\xi^{i}{\rm d}\widetilde{\xi}_{i }^{\rm inst}\) we have
\[-2\pi{\rm i}\left(\widetilde{\xi}_{i}^{\rm inst}{\rm d}\xi^{i}-\xi^{i}{ \rm d}\widetilde{\xi}_{i}^{\rm inst}\right)= -\frac{1}{4\pi{\rm i}}\sum_{\gamma}\Omega(\gamma){\rm d}\xi_{\gamma }(t)\int_{l_{\gamma}}\frac{{\rm d}\zeta}{\zeta}\frac{t+\zeta}{t-\zeta}\log(1- \exp(2\pi{\rm i}\xi_{\gamma}))\] (B.5) \[+\frac{1}{4\pi{\rm i}}\sum_{\gamma}\Omega(\gamma)\xi_{\gamma}(t){ \rm d}t\int_{l_{\gamma}}\frac{{\rm d}\zeta}{\zeta}\partial_{t}\left(\frac{t+ \zeta}{t-\zeta}\right)\log(1-\exp(2\pi{\rm i}\xi_{\gamma}(\zeta)))\] \[-\frac{1}{2}\sum_{\gamma}\Omega(\gamma)\xi_{\gamma}(t)\int_{l_{ \gamma}}\frac{{\rm d}\zeta}{\zeta}\frac{t+\zeta}{t-\zeta}\frac{\exp(2\pi{\rm i }\xi_{\gamma}(\zeta))}{1-\exp(2\pi{\rm i}\xi_{\gamma}(\zeta))}{\rm d}\xi_{ \gamma}(\zeta)\,.\]
To start combining terms, we notice that
\[\frac{t+\zeta}{t-\zeta}({\rm d}\xi_{\gamma}(t)-{\rm d}\xi_{\gamma}(\zeta))= \left(\frac{1}{\zeta}+\frac{1}{t}\right){\rm i}{\rm d}(R\widetilde{Z}_{\gamma })-(t+\zeta){\rm i}{\rm d}(R\overline{\widetilde{Z}}_{\gamma})-\frac{t+\zeta}{ t-\zeta}{\rm i}R(-t^{-2}\widetilde{Z}_{\gamma}+\overline{\widetilde{Z}}_{ \gamma}){\rm d}t\] (B.6)
and
\[\frac{t+\zeta}{t-\zeta}(\xi_{\gamma}(t)-\xi_{\gamma}(\zeta))= \left(\frac{1}{\zeta}+\frac{1}{t}\right){\rm i}R\widetilde{Z}_{\gamma}-(t+ \zeta){\rm i}R\overline{\widetilde{Z}}_{\gamma}\] (B.7)
so that we can combine the first and third term of (B.5) with the last term of (B.4) into
\[\begin{split}&-\frac{1}{4\pi{\rm i}}\sum_{\gamma}\Omega(\gamma){\rm d}\xi_{\gamma}(t)\int_{l_{\gamma}}\frac{{\rm d}\zeta}{\zeta}\frac{t+\zeta}{t-\zeta}\log(1-\exp(2\pi{\rm i}\xi_{\gamma}))-\frac{1}{2}\sum_{\gamma}\Omega(\gamma)\xi_{\gamma}(t)\int_{l_{\gamma}}\frac{{\rm d}\zeta}{\zeta}\frac{t+\zeta}{t-\zeta}\frac{\exp(2\pi{\rm i}\xi_{\gamma}(\zeta))}{1-\exp(2\pi{\rm i}\xi_{\gamma}(\zeta))}{\rm d}\xi_{\gamma}(\zeta)\\ &\quad+\frac{1}{2\pi}\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}}\frac{{\rm d}\zeta}{\zeta}\frac{t+\zeta}{t-\zeta}\left(-\frac{{\rm i}}{2}\log(1-\exp(2\pi{\rm i}\xi_{\gamma}))+\frac{1}{2}\frac{\exp(2\pi{\rm i}\xi_{\gamma})}{1-\exp(2\pi{\rm i}\xi_{\gamma})}2\pi\xi_{\gamma}(\zeta)\right){\rm d}\xi_{\gamma}(\zeta)\\ &=-\frac{1}{4\pi{\rm i}}\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}}\frac{{\rm d}\zeta}{\zeta}\frac{t+\zeta}{t-\zeta}\log(1-\exp(2\pi{\rm i}\xi_{\gamma}))({\rm d}\xi_{\gamma}(t)-{\rm d}\xi_{\gamma}(\zeta))\\ &\quad-\frac{1}{2}\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}}\frac{{\rm d}\zeta}{\zeta}\frac{t+\zeta}{t-\zeta}\frac{\exp(2\pi{\rm i}\xi_{\gamma}(\zeta))}{1-\exp(2\pi{\rm i}\xi_{\gamma}(\zeta))}(\xi_{\gamma}(t)-\xi_{\gamma}(\zeta)){\rm d}\xi_{\gamma}(\zeta)\\ &=-\frac{1}{4\pi{\rm i}}\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}}\frac{{\rm d}\zeta}{\zeta}\log(1-\exp(2\pi{\rm i}\xi_{\gamma}))\left(\left(\frac{1}{\zeta}+\frac{1}{t}\right){\rm i}{\rm d}(R\widetilde{Z}_{\gamma})-(t+\zeta){\rm i}{\rm d}(R\overline{\widetilde{Z}}_{\gamma})-\frac{t+\zeta}{t-\zeta}{\rm i}R(-t^{-2}\widetilde{Z}_{\gamma}+\overline{\widetilde{Z}}_{\gamma}){\rm d}t\right)\\ &\quad+\frac{1}{2}\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}}\frac{{\rm d}\zeta}{\zeta}\frac{\exp(2\pi{\rm i}\xi_{\gamma}(\zeta))}{1-\exp(2\pi{\rm i}\xi_{\gamma}(\zeta))}\left(\left(\frac{1}{\zeta}+\frac{1}{t}\right){\rm i}R\widetilde{Z}_{\gamma}-(t+\zeta){\rm i}R\overline{\widetilde{Z}}_{\gamma}\right)\left({\rm d}\zeta_{\gamma}+{\rm i}\zeta^{-1}{\rm d}(R\widetilde{Z}_{\gamma})+{\rm i}\zeta{\rm d}(R\overline{\widetilde{Z}}_{\gamma})\right)\,.\end{split}\] (B.8)
We can rewrite the previous integrals in the last equality in terms of Bessel functions by using the identities (A.2) and (A.3), obtaining
\[= \frac{1}{4\pi}\sum_{\gamma}\Omega(\gamma){\rm d}t\int_{l_{\gamma}} \frac{{\rm d}\zeta}{\zeta}\log(1-\exp(2\pi{\rm i}\xi_{\gamma}))\left(\frac{t+ \zeta}{t-\zeta}R(-t^{-2}\widetilde{Z}_{\gamma}+\overline{\widetilde{Z}}_{ \gamma})\right)\] \[-\frac{1}{2\pi}\sum_{\gamma}\Omega(\gamma){\rm d}(R\widetilde{Z}_{ \gamma})\left(\sum_{n>0}\frac{|\widetilde{Z}_{\gamma}|}{\widetilde{Z}_{\gamma}} \frac{e^{-2\pi{\rm i}n\zeta_{\gamma}}}{n}K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)- \frac{1}{t}\sum_{n>0}\frac{e^{-2\pi{\rm i}n\zeta_{\gamma}}}{n}K_{0}(4\pi Rn| \widetilde{Z}_{\gamma}|)\right)\] \[-\frac{1}{2\pi}\sum_{\gamma}\Omega(\gamma){\rm d}(R\overline{ \widetilde{Z}}_{\gamma})\left(-\sum_{n>0}\frac{\widetilde{Z}_{\gamma}}{| \widetilde{Z}_{\gamma}|}\frac{e^{-2\pi{\rm i}n\zeta_{\gamma}}}{n}K_{1}(4\pi Rn| \widetilde{Z}_{\gamma}|)+t\sum_{n>0}\frac{e^{-2\pi{\rm i}n\zeta_{\gamma}}}{n}K_{0 }(4\pi Rn|\widetilde{Z}_{\gamma}|)\right)\] \[-\sum_{\gamma}\Omega(\gamma)\frac{|\widetilde{Z}_{\gamma}|^{2}}{ \widetilde{Z}_{\gamma}}R{\rm d}(R\widetilde{Z}_{\gamma})\sum_{n>0}e^{-2\pi{\rm i}n \zeta_{\gamma}}\left(K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)+\frac{K_{1}(4\pi Rn| \widetilde{Z}_{\gamma}|)}{2\pi Rn|\widetilde{Z}_{\gamma}|}\right)\] \[+\sum_{\gamma}\Omega(\gamma)\left(-{\rm i}\widetilde{Z}_{\gamma}R{ \rm d}\zeta_{\gamma}+\left(\widetilde{Z}_{\gamma}/t-t\overline{\widetilde{Z}}_{ \gamma}\right)R{\rm d}(R\widetilde{Z}_{\gamma})\right)\frac{|\widetilde{Z}_{ \gamma}|}{\widetilde{Z}_{\gamma}}\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}K_{1}(4 \pi Rn|\widetilde{Z}_{\gamma}|)\]
\[-\sum_{\gamma}\Omega(\gamma)\left(-(\widetilde{Z}_{\gamma}/t-t\overline{\widetilde{Z}}_{\gamma}){\rm i}R{\rm d}\zeta_{\gamma}+R\widetilde{Z}_{\gamma}{\rm d}(R\overline{\widetilde{Z}}_{\gamma})-\overline{\widetilde{Z}}_{\gamma}R{\rm d}(R\widetilde{Z}_{\gamma})\right)\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)\] \[+\sum_{\gamma}\Omega(\gamma)\left(\overline{\widetilde{Z}}_{\gamma}{\rm i}R{\rm d}\zeta_{\gamma}+(\widetilde{Z}_{\gamma}/t-t\overline{\widetilde{Z}}_{\gamma})R{\rm d}(R\overline{\widetilde{Z}}_{\gamma})\right)\frac{\widetilde{Z}_{\gamma}}{|\widetilde{Z}_{\gamma}|}\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\] \[+\sum_{\gamma}\Omega(\gamma)\overline{\widetilde{Z}}_{\gamma}R{\rm d}(R\overline{\widetilde{Z}}_{\gamma})\frac{\widetilde{Z}_{\gamma}^{2}}{|\widetilde{Z}_{\gamma}|^{2}}\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}\left(K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)+\frac{K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)}{2\pi Rn|\widetilde{Z}_{\gamma}|}\right)\] (B.9)
Overall, we obtain the following \(t^{-1}\) term for \(-2\pi{\rm i}({\rm d}\alpha^{\rm inst}+\widetilde{\xi}_{i}^{\rm inst}{\rm d} \xi^{i}-\xi^{i}{\rm d}\widetilde{\xi}_{i}^{\rm inst})\):
\[\begin{split}&\frac{R}{2\pi}\sum_{\gamma}\Omega(\gamma)\sum_{n>0}\frac{e^{-2\pi{\rm i}n\zeta_{\gamma}}}{n}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{\widetilde{Z}_{\gamma}}{2(\rho+c_{\ell})}{\rm d}\rho+\frac{\widetilde{Z}_{\gamma}}{2}{\rm d}{\cal K}+{\rm d}\widetilde{Z}_{\gamma}\right)\\ &+R^{2}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{{\rm d}\rho}{2(\rho+c_{\ell})}+\frac{{\rm d}{\cal K}}{2}+\frac{{\rm d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}\right)\\ &+{\rm i}R\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|){\rm d}\zeta_{\gamma}\\ &+R^{2}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{{\rm d}\rho}{2(\rho+c_{\ell})}+\frac{{\rm d}{\cal K}}{2}+\frac{{\rm d}\overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}\right)\\ &-\frac{R}{2\pi}\sum_{\gamma}\Omega(\gamma)\sum_{n>0}\frac{e^{-2\pi{\rm i}n\zeta_{\gamma}}}{n}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{\widetilde{Z}_{\gamma}}{2(\rho+c_{\ell})}{\rm d}\rho+\frac{\widetilde{Z}_{\gamma}}{2}{\rm d}{\cal K}+{\rm d}\widetilde{Z}_{\gamma}\right)\\ &+{\rm i}R\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|){\rm d}\zeta_{\gamma}\\ &+R^{2}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{{\rm d}\rho}{\rho+c_{\ell}}+{\rm d}{\cal K}+\frac{{\rm d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}+\frac{{\rm d}\overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}\right)\\ =&\; 2{\rm i}R\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|){\rm d}\zeta_{\gamma}\\ &+2R^{2}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{\gamma}\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{{\rm d}\rho}{\rho+c_{\ell}}+{\rm d}{\cal K}+\frac{{\rm d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}+\frac{{\rm d}\overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}\right)\end{split}\]
By comparing with (2.35) we see that this matches \(\theta_{+}^{P,{\rm inst}}|_{\overline{N}}\). The \(t\) term follows from the \(t^{-1}\) term by noticing that all the \(t\)-terms are conjugates of the \(t^{-1}\)-terms. In particular, the \(t\)-term will match \(\theta_{-}^{\rm inst}=\overline{\theta_{+}^{\rm inst}}\).
Now we collect the \(t^{0}\) term from our previous expressions for \(-2\pi{\rm i}\left({\rm d}\alpha^{\rm inst}+\widetilde{\xi}_{i}^{\rm inst}{\rm d }\xi^{i}-\xi^{i}{\rm d}\widetilde{\xi}_{i}^{\rm inst}\right)\). We obtain the following:
\[-\frac{R}{2\pi}\sum_{\gamma}\Omega(\gamma)\sum_{n>0}\frac{e^{-2\pi{\rm i}n\zeta_{\gamma}}}{n}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{{\rm d}\rho}{2(\rho+c_{\ell})}+\frac{{\rm d}{\cal K}}{2}+\frac{{\rm d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}\right)\] \[+\frac{R}{2\pi}\sum_{\gamma}\Omega(\gamma)\sum_{n>0}\frac{e^{-2\pi{\rm i}n\zeta_{\gamma}}}{n}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{{\rm d}\rho}{2(\rho+c_{\ell})}+\frac{{\rm d}{\cal K}}{2}+\frac{{\rm d}\overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}\right)\] \[-R^{2}\sum_{\gamma}\Omega(\gamma)|\widetilde{Z}_{\gamma}|^{2}\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}\left(K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)+\frac{K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)}{2\pi Rn|\widetilde{Z}_{\gamma}|}\right)\left(\frac{{\rm d}\rho}{2(\rho+c_{\ell})}+\frac{{\rm d}{\cal K}}{2}+\frac{{\rm d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}\right)\] \[-{\rm i}R\sum_{\gamma}\Omega(\gamma)\sum_{n>0}e^{-2\pi{\rm i}n\zeta_{\gamma}}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|){\rm d}\zeta_{\gamma}\]
\[+R^{2}\sum_{\gamma}\Omega(\gamma)|\widetilde{Z}_{\gamma}|^{2}\left(\left(\frac{\mathrm{d}\rho}{2(\rho+c_{\ell})}+\frac{\mathrm{d}\mathcal{K}}{2}+\frac{\mathrm{d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}\right)-\left(\frac{\mathrm{d}\rho}{2(\rho+c_{\ell})}+\frac{\mathrm{d}\mathcal{K}}{2}+\frac{\mathrm{d}\overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}\right)\right)\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)\] \[+\mathrm{i}R\sum_{\gamma}\Omega(\gamma)\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\mathrm{d}\zeta_{\gamma}\] \[+R^{2}\sum_{\gamma}\Omega(\gamma)|\widetilde{Z}_{\gamma}|^{2}\left(\frac{\mathrm{d}\rho}{2(\rho+c_{\ell})}+\frac{\mathrm{d}\mathcal{K}}{2}+\frac{\mathrm{d}\overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}\right)\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}\left(K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)+\frac{K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)}{2\pi Rn|\widetilde{Z}_{\gamma}|}\right)\] \[= -\frac{R}{\pi}\sum_{\gamma}\Omega(\gamma)\sum_{n>0}\frac{e^{-2\pi\mathrm{i}n\zeta_{\gamma}}}{n}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\left(\frac{\mathrm{d}\widetilde{Z}_{\gamma}}{\widetilde{Z}_{\gamma}}-\frac{\mathrm{d}\overline{\widetilde{Z}}_{\gamma}}{\overline{\widetilde{Z}}_{\gamma}}\right)\] (B.10)
so comparing with (2.35) we see that the \(t^{0}\) term matches \(-2\mathrm{i}\theta_{3}^{P,\mathrm{inst}}\).
Finally, we need to collect the \(\mathrm{d}t\) term from \(-2\pi\mathrm{i}\left(\mathrm{d}\alpha^{\mathrm{inst}}+\tilde{\xi}_{i}^{\mathrm{ inst}}\mathrm{d}\xi^{i}-\xi^{i}\mathrm{d}\tilde{\xi}_{i}^{\mathrm{inst}}\right)\). This one corresponds to
\[-t^{-2}\frac{R}{4\pi}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{ \gamma}\int_{l_{\gamma}}\frac{\mathrm{d}\zeta}{\zeta}\log(1-\exp(2\pi\mathrm{i }\xi_{\gamma}(\zeta)))-\frac{R}{4\pi}\sum_{\gamma}\Omega(\gamma)\overline{ \widetilde{Z}}_{\gamma}\int_{l_{\gamma}}\frac{\mathrm{d}\zeta}{\zeta}\log(1- \exp(2\pi\mathrm{i}\xi_{\gamma}(\zeta)))\] \[+\frac{1}{4\pi^{2}}\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}} \frac{\mathrm{d}\zeta}{\zeta}\partial_{t}\left(\frac{t+\zeta}{t-\zeta}\right) \mathrm{L}(\exp(2\pi\mathrm{i}\xi_{\gamma}(\zeta)))+\frac{1}{4\pi\mathrm{i}} \sum_{\gamma}\Omega(\gamma)\xi_{\gamma}(t)\int_{l_{\gamma}}\frac{\mathrm{d} \zeta}{\zeta}\partial_{t}\left(\frac{t+\zeta}{t-\zeta}\right)\log(1-\exp(2\pi \mathrm{i}\xi_{\gamma}(\zeta)))\] \[+\frac{1}{4\pi}\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}}\frac{ \mathrm{d}\zeta}{\zeta}\frac{t+\zeta}{t-\zeta}R(-t^{-2}\widetilde{Z}_{\gamma}+ \overline{\widetilde{Z}}_{\gamma})\log(1-\exp(2\pi\mathrm{i}\xi_{\gamma}( \zeta)))\] \[= -\frac{R}{2\pi}\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}}\frac{ \mathrm{d}\zeta}{\zeta}\frac{1}{t-\zeta}(t^{-1}\widetilde{Z}_{\gamma}-\zeta \overline{\widetilde{Z}_{\gamma}})\log(1-\exp(2\pi\mathrm{i}\xi_{\gamma}(\zeta )))-\frac{1}{2\pi^{2}}\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}}\mathrm{d} \zeta\frac{1}{(t-\zeta)^{2}}\mathrm{L}(\exp(2\pi\mathrm{i}\xi_{\gamma}(\zeta )))\] \[-\frac{1}{2\pi\mathrm{i}}\sum_{\gamma}\Omega(\gamma)\xi_{\gamma}(t )\int_{l_{\gamma}}\mathrm{d}\zeta\frac{1}{(t-\zeta)^{2}}\log(1-\exp(2\pi \mathrm{i}\xi_{\gamma}(\zeta)))\] (B.11)
Integrating by parts the second and third term of the last equality, we obtain
\[-\frac{R}{2\pi}\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}}\frac{ \mathrm{d}\zeta}{\zeta}\frac{1}{t-\zeta}(t^{-1}\widetilde{Z}_{\gamma}-\zeta \overline{\widetilde{Z}_{\gamma}})\log(1-\exp(2\pi\mathrm{i}\xi_{\gamma}(\zeta)))\] \[+\frac{1}{2\pi}\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}}\mathrm{ d}\zeta\frac{1}{t-\zeta}\left(-\mathrm{i}\log(1-\exp(2\pi\mathrm{i}\xi_{ \gamma})))+\frac{\exp(2\pi\mathrm{i}\xi_{\gamma})}{1-\exp(2\pi\mathrm{i}\xi_{ \gamma})}2\pi\xi_{\gamma}(\zeta)\right)(\zeta^{-2}\mathrm{i}R\widetilde{Z}_{ \gamma}-\mathrm{i}R\overline{\widetilde{Z}_{\gamma}})\] \[-\sum_{\gamma}\Omega(\gamma)\xi_{\gamma}(t)\int_{l_{\gamma}}\mathrm{ d}\zeta\frac{1}{t-\zeta}\frac{\exp(2\pi\mathrm{i}\xi_{\gamma})}{1-\exp(2\pi \mathrm{i}\xi_{\gamma})}(\zeta^{-2}\mathrm{i}R\widetilde{Z}_{\gamma}-\mathrm{i} R\overline{\widetilde{Z}_{\gamma}})\] \[= t^{-1}\frac{R}{2\pi}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{ \gamma}\int_{l_{\gamma}}\frac{\mathrm{d}\zeta}{\zeta^{2}}\log(1-\exp(2\pi \mathrm{i}\xi_{\gamma}(\zeta)))\] \[-\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}}\mathrm{d}\zeta\frac{1 }{t-\zeta}(\xi_{\gamma}(t)-\xi_{\gamma}(\zeta))\frac{\exp(2\pi\mathrm{i}\xi_{ \gamma})}{1-\exp(2\pi\mathrm{i}\xi_{\gamma})}(\zeta^{-2}\mathrm{i}R\widetilde{Z}_ {\gamma}-\mathrm{i}R\overline{\widetilde{Z}_{\gamma}})\,.\] (B.12)
Using that
\[\frac{1}{t-\zeta}(\xi_{\gamma}(t)-\xi_{\gamma}(\zeta))=\mathrm{i}R\frac{ \widetilde{Z}_{\gamma}}{\zeta t}-\mathrm{i}R\overline{\widetilde{Z}}_{\gamma}\] (B.13)
we then obtain that the right-hand side of the equality of (B.12) becomes
\[t^{-1}\frac{R}{2\pi}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{ \gamma}\int_{l_{\gamma}}\frac{\mathrm{d}\zeta}{\zeta^{2}}\log(1-\exp(2\pi\mathrm{i} \xi_{\gamma}(\zeta)))\]
\[-\sum_{\gamma}\Omega(\gamma)\int_{l_{\gamma}}\mathrm{d}\zeta(\mathrm{i}R \frac{\widetilde{Z}_{\gamma}}{\zeta t}-\mathrm{i}R\overline{\widetilde{Z}}_{ \gamma})\frac{\exp(2\pi\mathrm{i}\xi_{\gamma})}{1-\exp(2\pi\mathrm{i}\xi_{ \gamma})}(\zeta^{-2}\mathrm{i}R\widetilde{Z}_{\gamma}-\mathrm{i}R\overline{ \widetilde{Z}}_{\gamma})\] \[= t^{-1}\frac{R}{2\pi}\sum_{\gamma}\Omega(\gamma)\widetilde{Z}_{ \gamma}\int_{l_{\gamma}}\frac{\mathrm{d}\zeta}{\zeta^{2}}\log(1-\exp(2\pi \mathrm{i}\xi_{\gamma}(\zeta)))+t^{-1}R^{2}\sum_{\gamma}\Omega(\gamma) \widetilde{Z}_{\gamma}^{2}\int_{l_{\gamma}}\frac{\mathrm{d}\zeta}{\zeta^{3}} \frac{\exp(2\pi\mathrm{i}\xi_{\gamma})}{1-\exp(2\pi\mathrm{i}\xi_{\gamma})}\] \[-t^{-1}R^{2}\sum_{\gamma}\Omega(\gamma)|\widetilde{Z}_{\gamma}|^ {2}\int_{l_{\gamma}}\frac{\mathrm{d}\zeta}{\zeta}\frac{\exp(2\pi\mathrm{i} \xi_{\gamma})}{1-\exp(2\pi\mathrm{i}\xi_{\gamma})}-R^{2}\sum_{\gamma}\Omega( \gamma)|\widetilde{Z}_{\gamma}|^{2}\int_{l_{\gamma}}\frac{\mathrm{d}\zeta}{ \zeta^{2}}\frac{\exp(2\pi\mathrm{i}\xi_{\gamma})}{1-\exp(2\pi\mathrm{i}\xi_{ \gamma})}\] \[+R^{2}\sum_{\gamma}\Omega(\gamma)\overline{\widetilde{Z}}_{\gamma }^{2}\int_{l_{\gamma}}\mathrm{d}\zeta\frac{\exp(2\pi\mathrm{i}\xi_{\gamma})}{1 -\exp(2\pi\mathrm{i}\xi_{\gamma})}\] (B.14)
The above integrals can be solved explicitly by using (A.2) and (A.3), giving
\[= t^{-1}\frac{R}{\pi}\sum_{\gamma}\Omega(\gamma)\sum_{n>0}\frac{e^{-2\pi\mathrm{i}n\zeta_{\gamma}}}{n}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\] \[+2t^{-1}R^{2}\sum_{\gamma}\Omega(\gamma)|\widetilde{Z}_{\gamma}|^{2}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}\left(K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)+\frac{K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)}{2\pi Rn|\widetilde{Z}_{\gamma}|}\right)\] \[-2t^{-1}R^{2}\sum_{\gamma}\Omega(\gamma)|\widetilde{Z}_{\gamma}|^{2}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}K_{0}(4\pi Rn|\widetilde{Z}_{\gamma}|)\] \[-R^{2}\sum_{\gamma}\Omega(\gamma)|\widetilde{Z}_{\gamma}|^{2}\left(-2\frac{|\widetilde{Z}_{\gamma}|}{\widetilde{Z}_{\gamma}}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\right)+R^{2}\sum_{\gamma}\Omega(\gamma)\overline{\widetilde{Z}}_{\gamma}^{2}\left(-2\frac{\widetilde{Z}_{\gamma}}{|\widetilde{Z}_{\gamma}|}\sum_{n>0}e^{-2\pi\mathrm{i}n\zeta_{\gamma}}K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\right)\] \[= t^{-1}\frac{2R}{\pi}\sum_{\gamma}\Omega(\gamma)\sum_{n>0}\frac{e^{-2\pi\mathrm{i}n\zeta_{\gamma}}}{n}|\widetilde{Z}_{\gamma}|K_{1}(4\pi Rn|\widetilde{Z}_{\gamma}|)\] (B.15)
Hence, by comparing with (2.35), we see that the \(\mathrm{d}t\) component of \(-2\pi\mathrm{i}\left(\mathrm{d}\alpha^{\mathrm{inst}}+\widetilde{\xi}_{i}^{ \mathrm{inst}}\mathrm{d}\xi^{i}-\xi^{i}\mathrm{d}\widetilde{\xi}_{i}^{\mathrm{ inst}}\right)\) matches with \(f^{\mathrm{inst}}/t\). This completes the proof.
|
2307.00454 | Structural, vibrational and electronic properties of Nb substituted
orthovanadates LaV$_{1-x}$Nb$_x$O$_4$ | We investigate the structural, vibrational, morphological, and electronic
properties of Nb substituted orthovanadate LaV$_{1-x}$Nb$_x$O$_4$ samples
prepared by the solid-state reaction method. The x-ray diffraction (XRD)
analysis reveals the presence of three crystal structures [monoclinic monazite
($m-m$) type for the $x=$ 0, two-phase equilibrium of monoclinic monazite
($m-m$) and tetragonal scheelite ($t-s$) type for the 0.2$\leq$$x$$\leq$0.8,
and monoclinic fergusonite ($m-f$) type for the $x=$ 1 samples] with an
increase in Nb$^{5+}$ concentration. The Raman spectroscopy and x-ray
photoelectron spectroscopy (XPS) were employed to study the vibrational and
electronic properties of all the samples, respectively. In order to choose an
excitation wavelength that does not cause undesirable fluorescence and has
observable intensities of all the vibrational modes, the Raman spectra are
collected using 532 nm, 633 nm, and 785 nm laser lines. With increasing the
Nb$^{5+}$ concentration, new Raman modes associated with Nb-bonds are clearly
visible and the intensity of V-bonds assigned modes is decreasing. The XPS
analysis shows the unchanged 3+ oxidation state of La ion where the intensity
of the V 2$p$ core-level decreases while the Nb 3$d$ core-level increases with
$x$. The equal spin-orbit energy splitting of the states is confirmed by the
average energy difference (across La core-level spectra for all the samples)
for state I as well as bonding and anti-bonding of state II. Interesting, the
relative intensity of La 3$d$ state I and state II show systematic change with
Nb doping altering the metal ligand overlap. We discuss and provide insight
into the evolution of the structural, morphological, and chemical features with
Nb substitution in LaV$_{1-x}$Nb$_x$O$_4$ samples. | Ashok Kumar, Anurag Sharma, Madhav Sharma, Vinod Singh, Anita Dhaka, Rajendra S. Dhaka | 2023-07-02T02:27:15Z | http://arxiv.org/abs/2307.00454v1 | Structural, vibrational and electronic properties of Nb substituted orthovanadates LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\)
###### Abstract
We investigate the structural, vibrational, morphological, and electronic properties of Nb substituted orthovanadate LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) samples prepared by the solid-state reaction method. The x-ray diffraction (XRD) analysis reveals the presence of three crystal structures [monoclinic monazite \((m-m)\) type for the \(x=0\), two-phase equilibrium of monoclinic monazite \((m-m)\) and tetragonal scheelite \((t-s)\) type for the 0.2\(\leq\)\(x\)\(\leq\)0.8, and monoclinic fergusonite \((m-f)\) type for the \(x=1\) samples] with an increase in Nb\({}^{5+}\) concentration. The Raman spectroscopy and x-ray photoelectron spectroscopy (XPS) were employed to study the vibrational and electronic properties of all the samples, respectively. In order to choose an excitation wavelength that does not cause undesirable fluorescence and has observable intensities of all the vibrational modes, the Raman spectra are collected using 532 nm, 633 nm, and 785 nm laser lines. With increasing the Nb\({}^{5+}\) concentration, new Raman modes associated with Nb-bonds are clearly visible and the intensity of V-bonds assigned modes is decreasing. The XPS analysis shows the unchanged 3+ oxidation state of La ion where the intensity of the V 2\(p\) core-level decreases while the Nb 3\(d\) core-level increases with \(x\). The equal spin-orbit energy splitting of the states is confirmed by the average energy difference (across La core-level spectra for all the samples) for state I as well as bonding and anti-bonding of state II. Interestingly, the relative intensity of La 3\(d\) state I and state II shows a systematic change with Nb doping, altering the metal-ligand overlap. We discuss and provide insight into the evolution of the structural, morphological, and chemical features with Nb substitution in LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) samples.
## I Introduction
In various polycrystalline oxides, rare-earth orthovanadates (RVO\({}_{4}\); R = rare-earth element) are interesting because of their potential applications in catalysis, polarizers, luminescent materials, and laser host materials [1, 2, 3]. Also, researchers have reported that complex oxide materials show interesting structural, magnetic, and electronic properties [4, 5, 6, 7], and may be utilized for various applications such as solid oxide fuel cells and as electrode materials for lithium-ion batteries due to their high specific capacity and cycle stability [8]. It is interesting to note that the lanthanum-based orthovanadate LaVO\({}_{4}\) illustrates the structural trend in the rare-earth family: it crystallizes in a tetragonal-zircon \((t-z)\) type polymorph with space group I4\({}_{1}\)/amd and a monoclinic-monazite \((m-m)\) type polymorph with space group P2\({}_{1}\)/n. However, it thermally stabilizes in the \(m-m\) type, whereas the \(t-z\) structure remains in a metastable state at room temperature. Because La\({}^{3+}\) has the largest ionic radius in the lanthanide series, it attains a higher oxygen coordination number in the \(m-m\) type structure (9) as compared to the \(t-z\) type (8) [9]. The zircon structure contains a pattern of VO\({}_{4}\) tetrahedra (having four identical V-O bonds) [10] and RO\({}_{8}\) dodecahedra (coordination number 8), sharing their edges alternately and linked together in chains along the \(c-\)axis. In the monazite structure, deformed VO\({}_{4}\) tetrahedra with four different V-O bonds [11] are connected to RO\({}_{9}\) polyhedra (coordination number 9) and share their edges. The zircon-type LaVO\({}_{4}\) sample is difficult to prepare at ambient conditions by the conventional solid-state reaction method, but a few reports show that it can be synthesized and stabilized by hydrothermal and precipitation methods [12, 13, 14].
The structural and electronic properties of lanthanum orthovanadate with pentavalent niobium substitution are vital to understand for practical use. Though the parent compound LaVO\({}_{4}\) with substitution at the La site has been extensively explored [15, 16], there are very few studies on the effect of substitution at the V site [17, 18]. Niobium is located just below vanadium in the periodic table and has several advantages: vanadium prices have recently risen by about 300%, whereas niobium (Nb\({}^{5+}\)) is biocompatible, isoelectronic with the vanadium ion, and has a larger ionic radius (0.48 Å, for coordination number four) in comparison to the vanadium ion (0.36 Å) [19]. The LaNbO\({}_{4}\) is a rare-earth niobate and shows a well-known temperature- and composition/substitution-induced structural transformation. For example, LaNbO\({}_{4}\) undergoes a thermally induced structural transition from the monoclinic fergusonite \((m-f)\) phase with space group I2/a to the tetragonal scheelite \((t-s)\) phase with space group I4\({}_{1}\)/a at \(\sim\)495\({}^{\circ}\)C [20]. Similarly, it undergoes a structural transformation upon substituting Nb\({}^{5+}\) at the V\({}^{5+}\) site [21]. It has been reported that lanthanum niobate shows interesting properties and is very useful for technological applications such as proton
conductivity [22; 23], good dielectric properties, and high-energy emission under X-ray excitation [24], and it has potential for applications in a variety of fields, including sensors [25], contrast agents, waveguides, ferroelectrics [26], phosphors [27], laser crystals [28], luminophores, LEDs [29], etc.
In this paper, we study the structural, vibrational, morphological, and electronic properties of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) using various experimental tools like x-ray powder diffraction (XRD), scanning electron microscopy (SEM), high resolution transmission electron microscopy (HR-TEM), selected area electron diffraction (SAED), Raman spectroscopy, and x-ray photoelectron spectroscopy (XPS). We determine the phase purity and structural transition by performing the Rietveld refinement of the measured XRD patterns at room temperature. The Raman spectra of the LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) samples are measured with different excitation wavelengths of 532 nm, 633 nm, and 785 nm, where we find significant intensity of all the Raman active modes as well as interesting changes with Nb substitution. The Raman spectra exhibit a pattern of maximum-intensity peaks that is compatible with Badger's rule. The structural phase transition observed in the XRD analysis of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) is also supported by the intensity variation of the Raman modes observed in the samples with increasing Nb concentration. Through the SEM micrographs, we identify that the samples contain fine particles along with pores; changes in particle size and shape can also be seen in the surface images of the samples. The core-level photoemission reveals the oxidation state and electronic structure of the constituent elements in these samples. The intensity of the core-level spectra of all the samples varies systematically with an increase in Nb\({}^{5+}\) concentration, as shown by the XPS analysis. The average energy difference (for the La core-level spectra of all the samples) for state I, state II bonding, and state II anti-bonding verified the equal spin-orbit energy splitting of the states. Moreover, we find a systematic change in the relative intensity of La 3\(d\) state I and state II with Nb doping, which suggests an alteration of the metal-ligand overlap.
## II Experimental
We use the solid-state reaction method to prepare LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) (\(x=0\) to 1) samples by mixing V\({}_{2}\)O\({}_{5}\) (99.6%, Sigma), Nb\({}_{2}\)O\({}_{5}\) (99.99%, Sigma), and La\({}_{2}\)O\({}_{3}\) (99.99%, Sigma) as precursors in the stoichiometric proportions. The La\({}_{2}\)O\({}_{3}\) was pre-dried for 6 hrs at 900\({}^{\circ}\)C to remove the moisture. After that, the mixture was ground evenly for 8 hrs and then heated for 17 hrs at 1000\({}^{\circ}\)C. The mixture was then reground and sintered at 1250\({}^{\circ}\)C for 13 hrs to improve the crystallinity of the samples. The phase purity and structural parameters of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) were determined using a PANalytical X'Pert\({}^{3}\) powder x-ray diffractometer at room temperature using a Cu source of K\(\alpha\) radiation (\(\lambda=1.5406\) Å). We use a step size of 0.033\({}^{\circ}\) for each XRD scan taken in the 2\(\theta\) range from 10\({}^{\circ}\) to 90\({}^{\circ}\). The lattice parameters are extracted by Rietveld refinement of the XRD patterns using the FullProf software, where linear interpolation is used to fit the background. We use a JEOL JSM-7800F Prime field emission scanning electron microscope (FE-SEM) with an LN\({}_{2}\)-free SDD X-max 80 EDS detector in high-vacuum mode to produce the scanning electron microscope (SEM) micrographs of the materials' surfaces. The analysis of the particle size and change in morphology of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) was done using the ImageJ software by analyzing SEM micrographs of the surface of the pellet samples. In order to perform FE-SEM, the non-conducting LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) pellets were made conducting by coating the surface with a thin layer of Au using a sputter coater. We use the JEOL/JEM-F200 microscope, equipped with thermal electron field emission and a OneView CMOS camera (4k \(\times\) 4k pixels), to collect HR-TEM data by operating the system at an acceleration voltage of 200 kV.
The Raman spectra were recorded at room temperature with a Renishaw inVia confocal Raman microscope using a 2400 lines/mm grating, a 10X objective, and three different wavelengths: (i) 532 nm, a gas laser with a power of 1 mW; (ii) 633 nm, a semiconductor diode laser with a power of 1 mW; and (iii) 785 nm, a semiconductor diode laser with a power of 0.1 mW. The samples can be identified by their particular Raman fingerprint, and their structural and chemical information can be discovered through the examination of several Raman active modes in LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\). The x-ray photoelectron spectroscopy (XPS) measurements are done using an AXIS Supra instrument (Kratos Analytical Ltd). The survey spectra and core-level spectra (La 3\(d\), Nb 3\(d\), V 2\(p\), and O 1\(s\) for each sample) were recorded at room temperature using a monochromatic x-ray source (Al K\(\alpha\), 1486.6 eV) with a step size of 1 eV for the survey and 0.1 eV for the core-level spectra; a charge neutralizer is used to offset the charging effect in these insulating materials. The pass energy of the analyzer was 160 eV and 20 eV for the survey and core-level spectra, respectively. For all the wide scans and core-level spectra, the C 1\(s\) peak is fitted to obtain the peak binding energy (BE), and the calibration for charge correction was done using the C 1\(s\) BE reference at 284.6 eV for each sample. We utilize the Igor Pro 9 software to analyze the observed Raman spectra, fitting the modes using a Lorentzian peak function, and to fit the XPS spectra using a Voigt function.
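As a simple illustration of the charge-referencing step described above, the sketch below applies the rigid C 1\(s\) shift to a set of binding energies; the measured C 1\(s\) position and the raw binding energies are assumed example values, not data from this work:

```python
# Hedged sketch of C 1s charge referencing (example numbers, not measured data).
import numpy as np

C1S_REFERENCE = 284.6            # eV, adventitious-carbon reference used in the text
c1s_measured = 286.1             # eV, hypothetical fitted C 1s position of one sample
shift = C1S_REFERENCE - c1s_measured

raw_be = np.array([530.8, 517.2, 835.5])   # eV, hypothetical O 1s, V 2p, La 3d peaks
corrected_be = raw_be + shift               # same rigid shift for all core levels
print(corrected_be)                         # -> [529.3 515.7 834. ]
```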
## III Results and Discussion
The Rietveld refined room-temperature x-ray diffraction (XRD) patterns of the polycrystalline LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) (\(x=0\)–1) samples are displayed in Fig. 1, and the lattice parameters of the samples are summarised in Table 1, where we can see that the angle \(\beta\) increases in the \(m-m\) type phase of the LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) samples with Nb\({}^{5+}\) substitution due to the larger ionic size of Nb\({}^{5+}\) as compared to V\({}^{5+}\). The crystallization of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\)
is clearly observed in three different phases depending on the substitution of Nb\({}^{5+}\) at the site of V\({}^{5+}\), as also reported by Aldred _et al._ in Ref. [21]. We observe that the structure changes from \(m-m\) to \(m-f\) with an increase in the Nb\({}^{5+}\) concentration from 0 to 100%. For the \(x=0\) and 1 samples, a pure monoclinic phase is obtained with no impurity peaks. In between \(x=0.2\) and 0.8, monoclinic monazite (\(m-m\)) and tetragonal scheelite (\(t-s\)) type phases coexist. Moreover, all the Bragg reflections of LaVO\({}_{4}\) and LaNbO\({}_{4}\) can easily be indexed to the \(m-m\) and \(m-f\) phases with the space groups P2\({}_{1}\)/n and I2/a for the \(x=0\) and 1 samples, respectively. We find that the contribution of space group I4\({}_{1}\)/a is increasing from the \(x=0.2\) to 0.8 samples (see Table 1)
Figure 1: (a–f) The Rietveld refined x-ray diffraction patterns of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) (\(x=0\)–1) samples. The experimental, simulated, and difference between the experimental and simulated spectra are shown by open red circles, black solid lines, and blue solid lines, respectively. The Bragg positions corresponding to their respective space groups are shown by green vertical markers. Beside each panel (a1–f1), we show a partial amplification between 2\(\theta=\) 25–35\({}^{\circ}\) for clarity for all the samples.
due to the increase of \(t-s\) phase with the substitution of Nb\({}^{5+}\) at the site of V\({}^{5+}\) in LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) samples. So, it can clearly be seen that the LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) samples crystallize in monoclinic monazite (\(m-m\)) type (\(x=0\)), coexistence of monoclinic monazite (\(m-m\)) and tetragonal scheelite (\(t-s\)) type (0.2\(\leq\)\(x\)\(\leq\)0.8), and monoclinic fergusonite (\(m-f\)) type (\(x=1\)) [21; 30].
Moreover, for the \(x=0\) sample, the \(m-m\) type crystal structure shows high-intensity diffraction peaks corresponding to the (200) and (120) crystal planes at 26.17\({}^{\circ}\) and 27.78\({}^{\circ}\), respectively. However, the \(t-s\) type structure contains a peak corresponding to the (112) plane at 28.08\({}^{\circ}\), and the \(m-f\) type structure shows high-intensity peaks for the (\(\overline{1}\)21) and (121) planes at 27.5\({}^{\circ}\) and 28.9\({}^{\circ}\), respectively. In the measured XRD patterns for the \(x=0.2\) to 0.8 samples, the diffraction peaks for the (200), (120), and (112) planes are present, which clearly indicates the coexistence of both the \(m-m\) and \(t-s\) type structures. The presence of the (110) plane at 17.65\({}^{\circ}\) for the \(x=0.2\) and 0.4 samples is due to the dominance of the \(m-m\) type structure in LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\). The (200) and (120) peaks are also present in these samples; however, their intensity decreases with a higher concentration of Nb substitution and becomes negligible for the \(x\geq 0.6\) samples. As the Nb\({}^{5+}\) concentration exceeds the V\({}^{5+}\) concentration, the \(t-s\) type structure dominates, which results in the reduction/absence of the diffraction peaks corresponding to the (200) and (120) planes. The variation in the peak intensity corresponding to the (200) and (120) crystal planes and the presence of the (112) plane indicate the coexistence of the \(t-s\) and \(m-m\) type structures for the \(x=0.2\) to 0.8 samples. This also validates that the \(m-m\) type structure (P2\({}_{1}\)/n) fraction is decreasing and the \(t-s\) type structure (I4\({}_{1}\)/a) fraction is increasing with increasing Nb concentration, i.e., from the \(x=0.2\) to 0.8 samples. The phase percentages determined by Rietveld refinement of the XRD data are presented in Table 1. For the \(x=1\) sample, the presence of the (\(\overline{1}\)21) and (121) peaks further confirms the \(m-f\) type structure of LaNbO\({}_{4}\), consistent with the literature [31].
Note that pure \(m-m\) and \(m-f\) phases are observed for the \(x=0\) and 1 samples, respectively. However, for the \(x=0.2\)–0.8 samples, both the monoclinic and scheelite-tetragonal phases coexist in a certain ratio. These results reveal that the LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) samples undergo a transformation among three phases [monoclinic monazite (\(m-m\)) type for the \(x=0\) sample, two-phase equilibrium of monoclinic monazite (\(m-m\)) and tetragonal scheelite (\(t-s\)) type for 0.2\(\leq\)\(x\)\(\leq\)0.8, and monoclinic fergusonite (\(m-f\)) type for the \(x=1\) sample] with increased substitution of Nb\({}^{5+}\) at the V\({}^{5+}\) site. It is quite interesting to note that a small amount of Nb\({}^{5+}\) substitution can transform LaVO\({}_{4}\) from the \(m-m\) phase to a mix of \(m-m\) and \(t-s\) phases. It has also been observed that LaNbO\({}_{4}\) shows a structural transition from the monoclinic to a tetragonal phase at \(\sim\)495\({}^{\circ}\)C. This structural transformation is very important in governing the protonic conductivity of LaNbO\({}_{4}\) [32]. For some compositions of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\), this transition temperature shifts near room temperature. The reported temperature-dependent XRD measurements also suggest that at \(x=0.75\) (25% substitution of V\({}^{5+}\) at the Nb\({}^{5+}\) sites in LaNbO\({}_{4}\)) [21], it possesses a tetragonal structure at room temperature, as its transition temperature is 250 K. The XRD pattern below 250 K shows some residual intensity (broadened lines) of the tetragonal structure because of precursor effects. Similarly, we can see broad peaks in the XRD patterns for the \(x=0.8\) sample due to the above-mentioned effect [21]. As we increase the Nb concentration, we find some new peaks appearing in the \(x=0.2\) sample at 33.56\({}^{\circ}\), 52.68\({}^{\circ}\), 56.69\({}^{\circ}\), and 58.06\({}^{\circ}\). All these peaks are signatures of the \(t-s\) structure, belonging to the (020), (116), (312), and (224) planes, respectively [33]. These peaks persist up to the \(x=0.8\) sample, which confirms the presence of some \(t-s\) phase and also indicates the substitution-induced phase transformation. It is an important finding that LaNbO\({}_{4}\) can possess a tetragonal structure at room temperature with just 20% replacement of the Nb\({}^{5+}\) sites by V\({}^{5+}\). This result opens the possibility for a wide range of applications of LaNbO\({}_{4}\) at room temperature. All the patterns discussed above suggest that the substitution of the larger Nb\({}^{5+}\) (\(r=0.48\) Å) ion for V\({}^{5+}\) (\(r=0.36\) Å) affects the lattice constants of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) and confirms the transformation among three different phases with increasing concentration of Nb\({}^{5+}\).
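As a quick cross-check of the indexing discussed above, the short sketch below (assuming an ideal monoclinic cell and Cu K\(\alpha\) radiation with \(\lambda=1.5406\) Å) recomputes the 2\(\theta\) positions of the (200) and (120) reflections of the \(x=0\) sample from the refined lattice parameters in Table 1 via Bragg's law:

```python
# Recompute 2-theta of the (200) and (120) reflections of monazite-type LaVO4
# from the refined cell (Table 1); a minimal check, not the Rietveld refinement.
import numpy as np

lam = 1.5406                      # Angstrom, Cu K-alpha
a, b, c = 7.042, 7.276, 6.724     # Angstrom, x = 0 sample
beta = np.radians(104.88)         # monoclinic angle, b is the unique axis

def d_spacing(h, k, l):
    inv_d2 = ((h / a)**2 + (l / c)**2
              - 2 * h * l * np.cos(beta) / (a * c)) / np.sin(beta)**2 + (k / b)**2
    return 1.0 / np.sqrt(inv_d2)

for hkl in [(2, 0, 0), (1, 2, 0)]:
    d = d_spacing(*hkl)
    two_theta = 2 * np.degrees(np.arcsin(lam / (2 * d)))
    print(hkl, round(two_theta, 2))   # -> (2, 0, 0) 26.17 and (1, 2, 0) 27.79
```

The computed values reproduce the quoted peak positions of 26.17\({}^{\circ}\) and 27.78\({}^{\circ}\) to within rounding.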
The scanning electron microscope images of the LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) samples for \(x=0\)–1 are shown in Fig. 2, which depict the close-packed surface morphology of all the
Figure 2: The scanning electron microscope images of the LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) (\(x=0\)–1) samples.
samples, and some variation in the particle size is clearly visible. The pores are clearly visible from the top view of the surface. We can see that with the increase in Nb\({}^{5+}\) concentration, the particle size slightly decreases from the \(x=0\) to the \(x=0.4\) sample, then increases and becomes maximum at \(x=0.8\), and again decreases for the \(x=1\) sample. The average particle size (D) of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) is 5.14 \(\mu\)m for the \(x=0\), 4.22 \(\mu\)m for the \(x=0.2\), 3.56 \(\mu\)m for the \(x=0.4\), 8.73 \(\mu\)m for the \(x=0.6\), 11.31 \(\mu\)m for the \(x=0.8\), and 5.70 \(\mu\)m for the \(x=1\) samples. It is found that the change in crystal surface morphology of the LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) samples with increasing Nb\({}^{5+}\) concentration causes the variation in particle size and shape.
Further, in Figs. 3(a, b) we display the HR-TEM images indicating distinct sets of planes with characteristic spacings for the \(x=0.2\) and 0.8 samples. The images in Figs. 3(c, d) and (e, f), for the samples \(x=0.2\) and \(x=0.8\), respectively, show these plane sets in magnified view. The spacing between the planes is determined using the ImageJ software, and we find \(d-\)spacings of 0.43 and 0.32 nm for the (-1,1,1) and (1,2,0) planes in the \(P2_{1}/n\) phase for the \(x=0.2\) sample, and 0.28 and 0.31 nm for the (0,0,4) and (1,1,2) planes in the \(I4_{1}/a\) phase for the \(x=0.8\) sample. However, these planes correspond only to the dominating phase of the mixed-phase samples. The selected area electron diffraction (SAED) patterns in Figs. 3(g, h) indicate contributions from both phases. The indexed \((h,k,l)\) planes that relate to \(P2_{1}/n\) are coloured white, and yellow is designated to the \(I4_{1}/a\) space group, as marked in Figs. 3(g, h). We find that the analysis of the HR-TEM and SAED results is consistent with the XRD refinement data for these samples, as presented in Figs. 1(b, e).
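The measured HR-TEM spacings can also be compared against the refined cell of Table 1; the sketch below (assuming an ideal tetragonal \(t-s\) cell for the \(x=0.8\) sample) computes the interplanar spacings of the (004) and (112) planes:

```python
# Interplanar spacings for the tetragonal (I4_1/a) cell of the x = 0.8 sample
# (Table 1); a small consistency check against the HR-TEM values quoted above.
import numpy as np

a, c = 5.375, 11.624   # Angstrom

def d_tetragonal(h, k, l):
    return 1.0 / np.sqrt((h**2 + k**2) / a**2 + l**2 / c**2)

print(round(d_tetragonal(0, 0, 4) / 10, 3))  # nm -> 0.291 (0.28 nm measured)
print(round(d_tetragonal(1, 1, 2) / 10, 3))  # nm -> 0.318 (0.31 nm measured)
```

The small (\(\sim\)0.01 nm) offsets between the computed and measured spacings are within the uncertainty of reading fringe spacings from HR-TEM images.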
The Raman spectra of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) measured at three different excitation wavelengths, 532 nm, 633 nm, and 785 nm, are presented in Fig. 4 for all the samples (\(x=0\)–1). Three different excitation wavelengths are used to distinguish the fluorescence effect on the Raman signal and to avoid background effects from the sample. We use a Lorentzian line shape function to deconvolute and fit the observed individual Raman peaks, as marked in Table 2. We find that all the specific Raman peak positions (Raman shifts) are independent of the excitation wavelength for a sample, which confirms that they are an inherent characteristic of that particular sample, as shown in Fig. 4. The intensity of the modes may vary due to several reasons, like the polarizability of the molecule, the
Figure 3: The HR-TEM images of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) for the (a) \(x=0.2\) and (b) \(x=0.8\) samples. The magnified view of HR-TEM images in (c, d) for the \(x=0.2\) sample, and (e, f) for the \(x=0.8\) sample. (g, h) The SAED patterns for the \(x=0.2\) and 0.8 samples, respectively.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \(x\) & \(\chi^{2}\) & Space Group & \(a\) (Å) & \(b\) (Å) & \(c\) (Å) & \(\beta\) (\({}^{\circ}\)) & Volume (Å\({}^{3}\)) \\ \hline
0 & 1.09 & P2\({}_{1}\)/n & 7.042(3) & 7.276(4) & 6.724(7) & 104.88 (6) & 333.033(5) \\ \hline
0.2 & 2.63 & P2\({}_{1}\)/n - 84\% & 7.046(1) & 7.278(2) & 6.733(3) & 104.91(1) & 333.685(4) \\ & & I4\({}_{1}\)/a - 16\% & 5.336(1) & 5.336(1) & 11.731(2) & 90 & 334.042(4) \\ \hline
0.4 & 2.46 & P2\({}_{1}\)/n - 78\% & 7.043(0) & 7.276(3) & 6.732(4) & 104.91(2) & 333.397(3) \\ & & I4\({}_{1}\)/a - 22\% & 5.332(3) & 5.332(3) & 11.735(2) & 90 & 333.509(6) \\ \hline
0.6 & 3.70 & P2\({}_{1}\)/n - 45\% & 6.818(9) & 7.596(5) & 8.030(0) & 105.21(7) & 401.383(6) \\ & & I4\({}_{1}\)/a - 55\% & 5.329(8) & 5.329(8) & 11.714(8) & 90 & 332.787(0) \\ \hline
0.8 & 4.31 & P2\({}_{1}\)/n - 4\% & 6.878(5) & 7.459(4) & 7.679(8) & 105.61(1) & 379.517(2) \\ & & I4\({}_{1}\)/a - 96\% & 5.375(4) & 5.375(4) & 11.624(0) & 90 & 335.869(7) \\ \hline
1 & 4.91 & I2/a & 5.558(5) & 11.529(1) & 5.201(8) & 93.99(2) & 332.546(3) \\ \hline \end{tabular}
\end{table}
Table 1: The Rietveld refinement parameters of polycrystalline LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) (\(x=0\)–1) samples with the Nb-substitution-induced metastable tetragonal scheelite phase for the \(x=0.2\) to 0.8 samples, determined using the FullProf software.
excitation wavelength of the laser source, and the concentration of the active group [34]. Though there are minor changes in the intensity variation of the Raman modes measured with different excitation wavelengths, we can see that the Raman active peaks change systematically for all the measured samples in Fig. 4.
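To make the deconvolution step concrete, a minimal sketch of a Lorentzian peak fit is given below; it is not the authors' Igor Pro workflow, and the synthetic spectrum, initial guesses, and peak parameters are assumed values for illustration only:

```python
# Hedged sketch: fit two Lorentzians to a synthetic Raman spectrum mimicking
# the S16/S18 region; all numbers here are illustrative, not measured data.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, gamma, amp):
    return amp * (0.5 * gamma) ** 2 / ((x - x0) ** 2 + (0.5 * gamma) ** 2)

def two_peaks(x, x01, g1, a1, x02, g2, a2):
    return lorentzian(x, x01, g1, a1) + lorentzian(x, x02, g2, a2)

shift = np.linspace(780, 900, 600)                      # cm^-1, synthetic axis
intensity = (two_peaks(shift, 816.9, 8, 1.0, 855.9, 9, 2.5)
             + np.random.normal(0, 0.02, shift.size))   # synthetic S16 + S18
p0 = [815, 10, 1, 855, 10, 2]                           # initial guesses
popt, _ = curve_fit(two_peaks, shift, intensity, p0=p0)
print("fitted centres (cm^-1):", popt[0], popt[3])
```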
In the measured spectra, we see 20 peaks corresponding to LaVO\({}_{4}\) and 17 peaks for LaNbO\({}_{4}\). According to group theory calculations, LaVO\({}_{4}\) contains 72 vibrational modes, out of which 36 modes are Raman active (18A\({}_{g}\) + 18B\({}_{g}\)) [35; 36; 37] (here, A and B denote symmetric and antisymmetric vibrations about the principal axis of symmetry, and the subscript \(g\) indicates that the vibrations are symmetric relative to a symmetry center). All the 20 Raman peaks for the \(x=0\) sample are represented from S\({}_{0}\) to S\({}_{19}\), as shown in Table 2. The theoretical approach predicts 8A\({}_{g}\)+10B\({}_{g}\) modes for the \(m-f\) structure and 13 Raman-active modes for the \(t-s\) structure (as observed in the \(x=0.6\) sample), which are summarized in Table 2. The reason for the absence of some of the peaks could be the overlap of several A\({}_{g}\) and B\({}_{g}\) modes and their low Raman scattering cross-sections. All the assignments related to each Raman peak in LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) are summarised in Table 3. We can see in Table 2 that the S\({}_{0}\) mode (127.24 cm\({}^{-1}\)) is present only in the LaVO\({}_{4}\) and LaNbO\({}_{4}\) samples and absent for the rest of the intermediate samples. The origin of the S\({}_{0}\) mode is the translational motion of La atoms in the monoclinic phase. All the concentrations from \(x=0.2\) to 0.8 in LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) result in the formation of a \(t-s\) type structure or an \(m-m\) and \(t-s\) in equilibrium type structure. So, the formation of the mixed phase may result in the disappearance of the S\({}_{0}\) mode. The S\({}_{18}\) is the most intense mode for the LaVO\({}_{4}\) sample, which decreases with Nb substitution. In contrast, we find that the intensity of the S\({}_{16}\) mode increases with Nb substitution, and it becomes the most intense mode for the LaNbO\({}_{4}\) sample, as can be seen in Fig. 4(a). For the \(x=0.8\) sample, the S\({}_{18}\) mode completely disappears, which indicates the crystal phase transformation from a mixed phase of \(m-m\) and \(t-s\) in equilibrium to an approximately pure (96%) \(t-s\) phase [38]. This behaviour of S\({}_{0}\), S\({}_{16}\), and S\({}_{18}\) corroborates the structural phase transformation with Nb\({}^{5+}\) substitution, as observed in the XRD analysis. Furthermore, the presence of the S\({}_{8}\), S\({}_{9}\), S\({}_{10}\), S\({}_{13}\), S\({}_{14}\), S\({}_{15}\), S\({}_{17}\), and S\({}_{18}\) modes in the \(x=0\) sample confirms the existence of VO\({}_{4}^{3-}\) ions since none of these modes are visible in LaNbO\({}_{4}\) [39; 40; 41; 42].
All the Raman peaks arise due to different vibrational modes, i.e., bonds between different constituent elements,
Figure 4: The room temperature Raman spectra of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) (\(x\) = 0 to 1) samples using (a) 532 nm, (b) 633 nm, and (c) 785 nm excitation wavelengths. The dotted blue lines represent the Lorentzian line shape to deconvolute the individual modes.
i.e., the La\({}^{3+}\), V\({}^{5+}\), Nb\({}^{5+}\), and O\({}^{2-}\) ions. The comparison of the experimentally observed peak positions of the distinct Raman modes, fitted using a Lorentzian function, with the reported data [35; 43; 44; 3; 45] shows a high degree of similarity, as presented in Table 2. In the \(m-m\) structured LaVO\({}_{4}\) crystal, nine O\({}^{2-}\) atoms are linked to La\({}^{3+}\), whereas four O\({}^{2-}\) atoms and V\({}^{5+}\) are joined in a tetrahedral shape. There are four different O\({}^{2-}\) locations: at the first site, O\({}^{2-}\) is bound in a 3-coordinate geometry to two equivalent La\({}^{3+}\) and one equivalent V\({}^{5+}\) atom. At the second site, it is bound to two comparable La\({}^{3+}\) and one equivalent V\({}^{5+}\) atom in a deformed single-bond geometry. At the third O\({}^{2-}\) site, three comparable La\({}^{3+}\) and one equivalent V\({}^{5+}\) atom are linked to O\({}^{2-}\) in a 3-coordinate geometry, and at the fourth O\({}^{2-}\) site, it is bound in a deformed single-bond geometry to three equivalent La\({}^{3+}\) and one equivalent V\({}^{5+}\) atom [46]. In the \(m-f\) structured LaNbO\({}_{4}\) crystal, La\({}^{3+}\) is joined to eight O\({}^{2-}\) atoms in an 8-coordinate geometry, and six O\({}^{2-}\) atoms are bound to Nb\({}^{5+}\) to create the deformed, edge-sharing NbO\({}_{6}\) octahedra. There are two different sites for O\({}^{2-}\): at the first O\({}^{2-}\) site, it is linked in a 4-coordinate geometry to two equivalent La\({}^{3+}\) and two equivalent Nb\({}^{5+}\) atoms, and at the second O\({}^{2-}\) site, it is bound in a 3-coordinate geometry to two equivalent La\({}^{3+}\) and one Nb\({}^{5+}\) atom. In the analysis of the vibrational modes, it has been assumed that the LaNbO\({}_{4}\) crystal is made up of La\({}^{3+}\) cations and NbO\({}_{4}^{3-}\) molecular anions [43; 46]. It is revealed experimentally that, on addition, Nb\({}^{5+}\) replaces V\({}^{5+}\) at its site and distorts the unit cell of LaVO\({}_{4}\) [30]. The modes of vibration for LaVO\({}_{4}\) are categorised as follows: (I) the high-wavenumber zone (765-874 cm\({}^{-1}\)), resulting from the stretching vibration of the O-V-O bonds, (II) the intermediate region (305-436 cm\({}^{-1}\)), resulting from the bending vibration of the O-V-O bonds, and (III) the low-wavenumber zone (\(<285\) cm\({}^{-1}\)), resulting from the translational modes of the La atoms, as the La atoms have high mass [17; 9]; the results are presented in Table 3. Similarly, the vibrational modes of LaNbO\({}_{4}\) are categorized as follows: (I) the high-wavenumber zone (623-803 cm\({}^{-1}\)) for the stretching modes of Nb-O bonds, (II) the intermediate zone (322-422 cm\({}^{-1}\)) for the deformation/scissor
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline \(x\) & 0 & 0.2 & 0.4 & 0.6 & 0.8 & 1 \\ \hline Peak & \(\omega_{abs}\) & \(\omega_{abs}\) & \(\omega_{abs}\) & \(\omega_{obs}\) & \(\omega_{obs}\) & \(\omega_{obs}\) \\ \hline S\({}_{0}\) & B\({}_{g}\)(127.24) & & & & & B\({}_{g}\)(121.86) \\ S\({}_{1}\) & A\({}_{g}\)(141.99) & A\({}_{g}\)(140.66) & A\({}_{g}\)(140.66) & A\({}_{g}\)(140.66) & & \\ S\({}_{2}\) & B\({}_{g}\)(154.05) & B\({}_{g}\)(155.39) & B\({}_{g}\)(152.71) & & & \\ S\({}_{3}\) & A\({}_{g}\)(187.44) & A\({}_{g}\)(187.44) & A\({}_{g}\)(186.11) & & & \\ S\({}_{4}\) & B\({}_{g}\)(206.07) & B\({}_{g}\)(206.07) & B\({}_{g}\)(206.079) & B\({}_{g}\)(207.40) & B\({}_{g}\)(211.39) & A\({}_{g}\)(219.35) \\ S\({}_{5}\) & A\({}_{g}\)(235.26) & A\({}_{g}\)(233.94) & A\({}_{g}\)(233.94) & & & \\ S\({}_{6}\) & A\({}_{g}\)(245.84) & A\({}_{g}\)(244.52) & A\({}_{g}\)(244.52) & & & \\ S\({}_{7}\) & B\({}_{g}\)(305.11) & B\({}_{g}\)(305.11) & & & & \\ S\({}_{8}\) & A\({}_{g}\)(326.07) & A\({}_{g}\)(326.07) & A\({}_{g}\)(326.07) & A\({}_{g}\)(324.76) & A\({}_{g}\)(328.69) & \\ S\({}_{9}\) & A\({}_{g}\)(344.36) & A\({}_{g}\)(344.36) & A\({}_{g}\)(344.36) & & & \\ S\({}_{10}\) & A\({}_{g}\)(370.41) & A\({}_{g}\)(370.41) & A\({}_{g}\)(369.11) & A\({}_{g}\)(369.11) & & \\ S\({}_{11}\) & B\({}_{g}\)(393.78) & B\({}_{g}\)(393.78) & B\({}_{g}\)(393.19) & A\({}_{g}\)(391.19) & A\({}_{g}\)(388.60) & B\({}_{g}\)(396.38) \\ S\({}_{12}\) & A\({}_{g}\)(420.96) & A\({}_{g}\)(418.37) & A\({}_{g}\)(418.37) & & & \\ S\({}_{13}\) & B\({}_{g}\)(436.44) & B\({}_{g}\)(435.15) & B\({}_{g}\)(433.86) & B\({}_{g}\)(436.44) & & \\ S\({}_{14}\) & A\({}_{g}\)(765.31) & A\({}_{g}\)(765.31) & A\({}_{g}\)(766.54) & A\({}_{g}\)(766.54) & & \\ S\({}_{15}\) & B\({}_{g}\)(788.68) & B\({}_{g}\)(788.68) & B\({}_{g}\)(788.68) & B\({}_{g}\)(788.68) & & \\ S\({}_{16}\) & A\({}_{g}\)(816.86) & A\({}_{g}\)(815.63) & A\({}_{g}\)(8141.41) & A\({}_{g}\)(811.96) & A\({}_{g}\)(807.07) & A\({}_{g}\)(803.39) \\ S\({}_{17}\) & A\({}_{g}\)(840.06) & A\({}_{g}\)(840.06) & A\({}_{g}\)(840.06) & A\({}_{g}\)(840.06) & & \\ S\({}_{18}\) & B\({}_{g}\)(855.88) & B\({}_{g}\)(855.88) & B\({}_{g}\)(854.67) & B\({}_{g}\)(854.67) & & \\ S\({}_{19}\) & B\({}_{g}\)(874.10) & B\({}_{g}\)(874.10) & B\({}_{g}\)(874.10) & A\({}_{g}\)(170.10) & A\({}_{g}\)(168.76) & A\({}_{g}\)(174.10) \\ S\({}_{20}\) & & & & & & A\({}_{g}\)(108.41) & A\({}_{g}\)(105.72) \\ S\({}_{23}\) & & & & & & B\({}_{g}\)(164.75) \\ S\({}_{24}\) & & & & & & B\({}_{g}\)(198.09) \\ S\({}_{25}\) & & & & & & B\({}_{g}\)(282.77) \\ S\({}_{26}\) & & & & & & B\({}_{g}\)(316.91) & A\({}_{g}\)(322.14) \\ S\({}_{27}\) & & & & & & A\({}_{g}\)(3
modes of NbO\({}_{4}^{3-}\), and (III) low wavenumber zone (121-282 cm\({}^{-1}\)) for rotational modes of NbO\({}_{4}^{3-}\) and translational lattice modes that include the relative translations of anions and cations [43].
The LaNbO\({}_{4}\) contains a total of three different types of modes: rotational modes of NbO\({}_{4}^{3-}\), vibrational modes of NbO\({}_{4}^{3-}\), and translational modes of the La-O and O-La-O bonds. The S\({}_{0}\) and S\({}_{22}\) peaks are visible, corresponding to the combined translation-rotational (B\({}_{g}\)) and rotational (A\({}_{g}\)) modes, respectively, while the third rotational B\({}_{g}\) mode (S\({}_{21}\)) is absent in the observed experimental Raman spectra. The vibrational modes can be categorized into (I) doubly degenerate scissor modes, (II) a triply degenerate deformation mode, which further splits into a pair of degenerate rocking modes and one twist mode, and (III) stretching modes, one non-degenerate and one triply degenerate, in increasing order of wavenumber [43]. The remaining modes are all translational modes. From Table 3, we can easily identify that the LaNbO\({}_{4}\) Raman modes match well with the reported ones. Two NbO\({}_{4}^{3-}\)
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline & \multicolumn{2}{c|}{LaVO\({}_{4}\)} & \multicolumn{2}{c|}{LaNbO\({}_{4}\)} & \\ \hline Peak & \(\omega_{th}\) & Assignments & \(\omega_{th}\) & Assignments & Refs. \\ \hline S\({}_{0}\) & B\({}_{g}\)(127) & Translation mode of La atoms in monoclinic phase & B\({}_{g}\)(125.1) & Coupled translation-rotational mode of La atoms in monoclinic phase and NbO\({}_{4}^{3-}\) around an axis perpendicular to b-axis & \\ S\({}_{1}\) & A\({}_{g}\)(143) & Translation mode of La–O bonds & & & [45; 47] \\ S\({}_{2}\) & B\({}_{g}\)(158) & Translation mode of La–O bonds & & & [43; 48] \\ S\({}_{3}\) & A\({}_{g}\)(188) & Translation mode of La–O bonds & & & [45; 47] \\ S\({}_{4}\) & B\({}_{g}\)(204) & Translation mode of La–O bonds & & & [43; 48] \\ S\({}_{5}\) & A\({}_{g}\)(230) & Translation mode of La–O bonds & & & [47] \\ S\({}_{6}\) & A\({}_{g}\)(252) & Translation mode of La–O bonds & & & [45; 47] \\ S\({}_{7}\) & B\({}_{g}\)(316) & Bending vibration of O–V–O bonds & & & [45; 49] \\ S\({}_{8}\) & A\({}_{g}\)(336) & Bending vibration of O–V–O bonds & & & [45; 49] \\ S\({}_{9}\) & A\({}_{g}\)(355) & Bending vibration of O–V–O bonds & & & [45; 49] \\ S\({}_{10}\) & A\({}_{g}\)(380) & Bending vibration of O–V–O bonds & & & [43; 45; 49] \\ S\({}_{11}\) & B\({}_{g}\)(389) & Bending vibration of O–V–O bonds & & & [45; 49] \\ S\({}_{12}\) & A\({}_{g}\)(423) & Bending vibration of O–V–O bonds & & & [45; 49] \\ S\({}_{13}\) & B\({}_{g}\)(427) & Bending vibration of O–V–O bonds & & & [50] \\ S\({}_{14}\) & A\({}_{g}\)(784) & Stretching vibration of V–O bonds & & & [50] \\ S\({}_{15}\) & B\({}_{g}\)(799) & Stretching vibration of V–O bonds & & & [50; 51; 52] \\ S\({}_{16}\) & A\({}_{g}\)(806) & Stretching vibration of V–O bonds & & & [50] \\ S\({}_{17}\) & A\({}_{g}\)(836) & Stretching vibration of V–O bonds & & & [45; 50] \\ S\({}_{18}\) & A\({}_{g}\)(861) & Non-degenerate stretching mode of VO\({}_{4}^{3-}\) & & & [45; 50] \\ S\({}_{19}\) & B\({}_{g}\)(892) & Stretching vibration of O–V–O bonds & & & [43; 51; 52] \\ S\({}_{20}\) & & & A\({}_{g}\)(177.1) & Translational mode along b-axis & [43] \\ S\({}_{21}\) & & & B\({}_{g}\)(114) & Rotational mode of NbO\({}_{4}^{3-}\) along an axis perpendicular to b-axis & [43; 48] \\ S\({}_{22}\) & & & A\({}_{g}\)(108.6) & Rotational mode of NbO\({}_{4}^{3-}\) along b-axis & [43; 48] \\ S\({}_{23}\) & & & B\({}_{g}\)(170) & Translational mode parallel to ac-plane & [43] \\ S\({}_{24}\) & & & B\({}_{g}\)(200.2) & Translational mode parallel to ac-plane & [43] \\ S\({}_{25}\) & & & B\({}_{g}\)(284.9) & Translational mode parallel to ac-plane & [43] \\ S\({}_{26}\) & & & A\({}_{g}\)(321.7) & Doubly degenerate scissors mode of NbO\({}_{4}^{3-}\) & [43; 48] \\ S\({}_{27}\) & & & A\({}_{g}\)(326.9) & Doubly degenerate scissors mode of NbO\({}_{4}^{3-}\) & [43; 48] \\ S\({}_{28}\) & & & B\({}_{g}\)(344) & Translational mode parallel to ac-plane & [43] \\ S\({}_{29}\) & & & B\({}_{g}\)(404.9) & Triply degenerate deformation mode (rocking mode of NbO\({}_{4}^{3-}\)) & [43] \\ S\({}_{30}\) & & & A\({}_{g}\)(425.8) & Triply degenerate deformation mode (twist mode of NbO\({}_{4}^{3-}\)) & [43] \\ S\({}_{31}\) & & & B\({}_{g}\)(625.5) & One of the triply degenerate stretching modes of Nb–O bonds & [43; 48; 51] \\ S\({}_{32}\) & & & A\({}_{g}\)(649) & One of the triply degenerate stretching modes of Nb–O bonds & [43; 48; 51] \\ S\({}_{33}\) & & & B\({}_{g}\)(664.9) & One of the triply degenerate stretching modes of Nb–O bonds & [43; 48; 51] \\ \hline \end{tabular}
\end{table}
Table 3: Summary of all the 34 Raman active modes and their assignments with the help of literature (cited in the last column of the table) for the LaVO\({}_{4}\) and LaNbO\({}_{4}\) samples.
scissor modes with nearly degenerate wavenumbers are expected to appear in the A\({}_{g}\) spectrum. Of these, the most obvious candidates are S\({}_{26}\) and S\({}_{27}\), because the wavenumbers of the remaining A\({}_{g}\) bands are too low for this assignment. In LaNbO\({}_{4}\), as already discussed, the deformation modes are believed to split into two nearly degenerate rocking modes (S\({}_{11}\) and S\({}_{29}\)) with B\({}_{g}\) symmetry and a twist mode (S\({}_{30}\)) with A\({}_{g}\) symmetry. These modes also appear in the region of intermediate wavenumbers. The stretching modes are high-energy vibrations and are recognised here as the S\({}_{16}\), S\({}_{31}\), S\({}_{32}\) and S\({}_{33}\) peaks. As the non-degenerate symmetric mode is expected to produce the strongest band, band S\({}_{16}\) is allocated to it. The remaining S\({}_{31}\), S\({}_{32}\), and S\({}_{33}\) peaks are assigned to the other three degenerate stretching modes. The invariance of the S\({}_{4}\), S\({}_{11}\) and S\({}_{16}\) peak positions throughout the \(x=0\) to 1 samples indicates that the translational mode along the \(b\)-axis and the B\({}_{g}\) rocking and stretching frequencies of VO\({}_{4}^{3-}\) and NbO\({}_{4}^{3-}\) are unaffected. The S\({}_{8}\) peak disappears only in the LaNbO\({}_{4}\) spectrum because of the absence of O–V–O bending vibrations [17; 43]. Interestingly, the S\({}_{2}\), S\({}_{3}\), S\({}_{5}\), S\({}_{6}\), S\({}_{9}\), S\({}_{12}\) and S\({}_{19}\) peaks vanish just before the Nb concentration exceeds that of V (at \(x=0.4\)), and the S\({}_{1}\), S\({}_{10}\), S\({}_{13}\), S\({}_{14}\), S\({}_{15}\), S\({}_{16}\), S\({}_{17}\) and S\({}_{18}\) peaks vanish just after the Nb concentration becomes larger than that of V. It is quite possible that the low concentration of the minority cation in a given sample results in the weakening and eventual disappearance of some of the spectral peaks; for the same reason, some new peaks (S\({}_{20}\) and S\({}_{33}\)) appear in the \(x=0.6\)–1 samples. The S\({}_{20}\) peak arises from the translational mode along the \(b\)-axis, and the S\({}_{33}\) peak from one of the three triply degenerate stretching modes of NbO\({}_{4}^{3-}\) in the sample.
The most intense peaks of LaNbO\({}_{4}\) (S\({}_{16}\)) and LaVO\({}_{4}\) (S\({}_{18}\)) at higher wavenumbers are due to the stretching of Nb–O\({}_{t}\) and V–O\({}_{t}\) bonds, where O\({}_{t}\) represents the oxygen atoms in the terminal position [53]. The terminal position of oxygen is the one where it connects the LaO\({}_{8}\) dodecahedra and NbO\({}_{6}\) octahedra in the case of LaNbO\({}_{4}\), and the LaO\({}_{9}\) muffin [54] and VO\({}_{4}\) tetrahedra in the case of LaVO\({}_{4}\) [53]. Since the VO\({}_{4}\) tetrahedra appear to be intrinsic to the peak broadening, this broadening of the Raman peaks is found across the samples with intermediate Nb and V compositions. However, in certain samples, variables related to the Nb\({}^{5+}\) and V\({}^{5+}\) cations may also play an important role in increasing the peak broadening. The broad peaks are made up of multiple modes which are normally difficult to distinguish from one another [53]. The strongest peak of LaVO\({}_{4}\) (S\({}_{18}\)) is in the high-wavenumber region and lies approximately 52.5 cm\({}^{-1}\) higher in the spectrum than the strongest peak of LaNbO\({}_{4}\) (S\({}_{16}\)). This difference in wavenumber (\(\Delta\)) is related to the average bond length (\(d\)) of the atoms by \(\Delta\propto 1/d^{3/2}\), as stated by the Badger
Figure 5: The room temperature XPS survey spectra of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) (\(x=0\) to 1) samples.
rule [55]. The bond lengths V–O\({}_{t}\) in LaVO\({}_{4}\) and Nb–O\({}_{t}\) in LaNbO\({}_{4}\) are \(\sim\)1.72 Å [9] and \(\sim\)1.90 Å [53], respectively. The changes observed in the Raman spectra of the samples are quite consistent with Badger's rule.
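As a quick plausibility check of the Badger-rule argument above, the following sketch compares the bond-length ratio raised to the power 3/2 with the observed peak positions. It is a minimal sketch, assuming the peak values quoted in the text, with the S\({}_{16}\) position inferred from the stated 52.5 cm\({}^{-1}\) separation; the proportionality constant is omitted since only the direction and rough size of the shift matter.

```python
# Plausibility check of Badger's rule, Delta ∝ 1/d^(3/2): a shorter bond should
# give a higher stretching wavenumber. Values below are taken from the text.
d_V_O = 1.72    # V-O_t bond length in LaVO4 (Angstrom)
d_Nb_O = 1.90   # Nb-O_t bond length in LaNbO4 (Angstrom)

predicted_ratio = (d_Nb_O / d_V_O) ** 1.5    # ~1.16

omega_S18 = 861.0                  # strongest LaVO4 stretching peak (cm^-1)
omega_S16 = omega_S18 - 52.5       # strongest LaNbO4 peak, from the quoted separation
observed_ratio = omega_S18 / omega_S16       # ~1.07

print(predicted_ratio, observed_ratio)
```

Both ratios exceed unity, i.e., the shorter V–O\({}_{t}\) bond indeed produces the higher-wavenumber stretch, which is the sense in which the spectra are consistent with Badger's rule.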
Finally, we use X-ray photoemission spectroscopy (XPS) to investigate the electronic structure by measuring the survey scan and selected elemental core-level spectra of all the prepared samples. The peaks identified in the survey spectra are labeled according to their binding energies and are in agreement with reported values, as shown in Fig. 5. The characteristic La features are the 3\(d\) peak cluster (830–870 eV), the 4\(d\) peaks (4\(d_{5/2}\) at 101 eV and 4\(d_{3/2}\) at 104 eV), and the 4\(p\) peaks (centered around 195 eV) [56]. These La peaks are clearly visible for every synthesised sample, and they are all remarkably comparable. A consistent rise of the Nb 3\(d\) (discussed later) and Nb 3\(p\) (3\(p_{3/2}\) at 364 eV and 3\(p_{1/2}\) at 379 eV) signals is observed with increasing Nb doping, and these Nb features are absent in the \(x=0\) sample [57]. For the V 2\(p\) (2\(p_{3/2}\) at 517 eV and 2\(p_{1/2}\) at 525 eV) and V 2\(s\) (630 eV) core-level peaks, the reverse behavior is anticipated and is clearly visible in Fig. 5 [58]. The Voigt function has been used to fit the core-level spectra of the constituent elements. The fitted La 3\(d\) core levels are shown in Fig. 6(a). The spin-orbit-split peaks, present in all the samples, have been de-convoluted at binding energies 834.3\(\pm\)0.2 eV, 836.0\(\pm\)0.3 eV, 838.7\(\pm\)0.1 eV, 847.9\(\pm\)0.1 eV, 851.1\(\pm\)0.2 eV, 853.0\(\pm\)0.3 eV, 855.6\(\pm\)0.1
Figure 6: (a) The La 3\(d\) core-level spectra of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\), \(x=0\)–1 samples. (b) The intensity ratio I\({}_{2}\)/I\({}_{0}\), and (c) energy separation I\({}_{2}\) - I\({}_{0}\) as a function of doping level \(x\). The fitted spin-orbit split components are also shown for each sample.
Figure 7: The Nb 3\(d\) core level spectra of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\), \(x=0\)–1 samples. The fitted spin-orbit split components are also shown for each sample.
eV, and 863.4\(\pm\)0.2 eV (average binding energies over all the samples \(\pm\Delta\)B.E., calculated for the \(x=0\)–1 samples). The broad, diffuse satellite peaks at 847.7 eV and 863 eV in the vicinity of the La 3\(d\) core level originate from plasmons. The two final states I and II, together with the spin-orbit splitting of each state, make the structure complex. The primary strong peaks (3\(d_{5/2}\) at 834.3 eV and 3\(d_{3/2}\) at 851.1 eV, respectively) are associated with final state I (La\({}^{4+}\) 3d\({}^{9}\)4f\({}^{0}\), L), which involves electron transfer from the 3\(d\) core level to the continuum. The peaks at higher binding energies are features of final state II (La\({}^{3+}\) 3d\({}^{9}\)4f\({}^{1}\), L, -e); this feature is experimentally unresolved, which indicates a multiplet structure, as suggested by Mullica _et al._ [56]. It corresponds to electron transfer from the ligand (L, O\({}_{2p}\) in our case) valence band to the empty 4\(f\) orbitals of La [56, 59]. This multiplet structure of state II is composed of bonding and anti-bonding states. The prominent signals at higher binding energies (3\(d_{5/2}\) at 838.7 eV and 3\(d_{3/2}\) at 855.6 eV) are due to the bonding of state II, and the weak signals at lower binding energies (3\(d_{5/2}\) at 836.0 eV and 3\(d_{3/2}\) at 853 eV) are due to anti-bonding. The average energy difference (over the La core-level spectra of all the samples) between these three pairs of peaks is nearly the same (\(\sim\)16.9 eV) for state I, state II bonding, and state II anti-bonding, respectively. This verifies that the spin-orbit energy splitting of the La states is unaltered upon Nb substitution [60]. Interestingly, we find a significant and systematic variation of the intensity of the peak at 838.7 eV (I\({}_{2}\)) relative to the primary peak at 834.3 eV (I\({}_{0}\)) with Nb doping. Metal–ligand orbital overlap is reported to be responsible for such doping-induced intensity variations [61, 62], where strong ligands are found to populate the (La\({}^{3+}\) 3d\({}^{9}\)4f\({}^{1}\), L, -e) state, intensifying I\({}_{2}\) [63]. The intensity ratio I\({}_{2}\)/I\({}_{0}\) is shown in Fig. 6(b) and decreases consistently as a function of doping \(x\). This signifies that with Nb substitution the extent of overlap between the La(4\(f\)) and O(2\(p\)) orbitals decreases monotonically. The same conclusion can be drawn from the trend in the energy separation between I\({}_{2}\) and I\({}_{0}\) as a function of \(x\), shown in Fig. 6(c). The separation changes only minutely between successive samples, but between the \(x=0\) and \(x=1\) samples the energy difference (I\({}_{2}\) - I\({}_{0}\)) changes by about 0.3 eV. The value of I\({}_{2}\) - I\({}_{0}\) has been found to vary among La-containing compounds, mainly because of the crystal structure, e.g., 3.8 eV for La\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{1-x}\)Nb\({}_{x}\)O\({}_{3}\) and 5.3 eV for La\({}_{1.85}\)Ba\({}_{0.15}\)CuO\({}_{4}\) [60, 62]. Notably, this energy separation could be related to the ease of electron transfer between the ligand and the more ionic state of La, and therefore shows a trend opposite to the tendency of the ligand to overlap with the La 4\(f\) orbitals [62].
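For readers who wish to reproduce the peak fitting described above, a minimal single-component Voigt fit with a linear background can be sketched as follows; all starting values and the synthetic data are illustrative placeholders, not the fitted parameters reported here.

```python
# Minimal sketch of a Voigt fit to one XPS core-level region; the analysis above
# uses several Voigt components per core level, which would simply be summed.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def voigt_bg(E, amp, center, sigma, gamma, b0, b1):
    # Voigt line shape (Gaussian width sigma, Lorentzian width gamma) on a linear background.
    return amp * voigt_profile(E - center, sigma, gamma) + b0 + b1 * E

E = np.linspace(830.0, 840.0, 200)  # binding-energy grid (eV)
data = voigt_bg(E, 50.0, 834.3, 0.4, 0.3, 2.0, 0.0)
data += np.random.default_rng(0).normal(0.0, 0.2, E.size)  # synthetic noise

popt, _ = curve_fit(voigt_bg, E, data, p0=[40.0, 834.0, 0.5, 0.5, 1.0, 0.0])
print("fitted center: %.2f eV" % popt[1])
```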
Figure 8: The O 1\(s\) core level spectra of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\), \(x=0\)–1 samples. Each spectrum is shifted vertically for clarity.
Figure 9: The V 2\(p\) core level spectra of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\), \(x=0\)–1 samples. The fitted spin-orbit split components are also shown for each sample.
The Nb \(3d\) core-level spectra are shown in Fig. 7, where the spin-orbit doublet of the Nb \(3d\) core levels is fitted with a single peak for each component; the calculated peak positions for the Nb-doped samples are \(3d_{5/2}\) at 206.2\(\pm\)0.2 eV and 3\(d_{3/2}\) at 209.0\(\pm\)0.2 eV [64, 65]. This confirms the prevailing 5+ oxidation state of the Nb atoms [57] in all the samples. However, for the \(x=1\) sample the Nb \(3d_{5/2}\) lies at a higher binding energy than in the other Nb-containing samples, which could be due to charging effects and the change in chemical environment. For this reason, Atuchin _et al._ characterized the Nb state using the energy difference \(\Delta\)(Nb \(3d_{5/2}\) - O \(1s\)) instead of relying solely on the Nb 3\(d_{5/2}\) binding-energy position [66]. The evaluated \(\Delta\)(Nb \(3d_{5/2}\) - O \(1s\)) values are found to be around 323.5 eV. This energy difference with respect to O \(1s\) is independent of the carbon correction. The obtained binding-energy difference of \(\approx\)323.5 eV is among the highest reported values for the 5+ oxidation state of Nb. We also note that the uncertainty in \(\Delta\) is only 0.1 eV in this case, while for Nb 3\(d_{5/2}\) and O \(1s\) individually it is 0.3 and 0.2 eV, respectively. In Fig. 8 we can also see that the O \(1s\) peak shifts to higher binding energy for the \(x=1\) sample compared to \(x=0\); similarly, the Nb \(3d_{5/2}\) core level shifts to higher binding energy. The \(\Delta\)(Nb 3\(d_{5/2}\) - O \(1s\)) value is quite consistent across all the samples, which strongly supports electronic characterization using the energy difference with respect to O \(1s\) instead of absolute peak positions.
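Since the oxidation-state assignment above relies on binding-energy differences rather than absolute positions, the charge-referencing step reduces to a subtraction; the sketch below makes it explicit. The O 1\(s\) value used is an assumed example chosen only for illustration (it is implied by, not quoted in, the text).

```python
# Minimal sketch: oxidation-state fingerprint via Delta(Nb 3d5/2 - O 1s), which
# is insensitive to sample charging; the O 1s position below is an assumed value.
def delta_vs_o1s(peak_ev, o1s_ev):
    return o1s_ev - peak_ev

nb_3d52 = 206.2   # Nb 3d5/2 binding energy from the fits above (eV)
o_1s = 529.7      # illustrative O 1s binding energy (eV)
print(delta_vs_o1s(nb_3d52, o_1s))  # ~323.5 eV, consistent with Nb(5+)
```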
In Fig. 9, we present the V \(2p\) core-level spectra for all the samples, which show the spin-orbit components \(2p_{3/2}\) and \(2p_{1/2}\) at 516.9 and 524.8 eV, respectively, indicating V in the 5+ state. Interestingly, an unusual broadening of the V \(2p_{1/2}\) component is observed for all the samples, whereas no such additional component is evident in the V \(2p_{3/2}\) peak at 516.9 eV. More importantly, the deconvolution of the V \(2p_{1/2}\) component reveals that the FWHM of the higher-energy feature (denoted by I) (1.2 eV) is nearly the same as that of the \(2p_{3/2}\) component (1.1 eV). In contrast, the lower-energy feature (II) is significantly broader (2.8 eV). Moreover, the area ratio of the combined I and II to \(2p_{3/2}\) is close to 1/2, which clearly indicates the intrinsic vanadium origin of these two features. In contrast to the metallic V \(2p\) core level, vanadium-based compounds have often been reported to exhibit an anomalous V \(2p_{1/2}\) width as a consequence of Coster-Kronig (C-K) transitions [68]. The C-K transition is a class of Auger transition in which an electron from a higher sub-shell of the same shell fills the core hole [67]. In the present case, the filling of the \(2p_{1/2}\) core hole by an electron from \(2p_{3/2}\) may give rise to C-K transitions, which can result in an additional feature in the \(2p_{1/2}\) component. It is therefore likely that component I is attributed to core-hole recombination with the screening electrons, analogous to the 2\(p_{3/2}\) component, whereas an additional L\({}_{2}\)-L\({}_{3}\) (C-K) relaxation process gives rise to feature II in the \(2p_{1/2}\) peak [69]. No significant change in these components has been observed with Nb substitution, indicating the robust nature of the underlying system. Further, the O \(1s\) energy-difference approach is also applicable in this case, as for vanadium oxides the energy difference \(\Delta\)(V \(2p_{3/2}\) - O \(1s\)) is an advantageous reference [70]. The average \(\Delta\)(V \(2p_{3/2}\) - O \(1s\)) magnitude is 12.8\(\pm\)0.1 eV, in good agreement with the literature for the V\({}^{5+}\) oxidation state [71].
## Conclusions
In conclusion, LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) samples with regularly varying Nb\({}^{5+}\) concentration were successfully prepared by the solid-state reaction method. The XRD measurements established that the substitution of the larger Nb\({}^{5+}\) ion for V\({}^{5+}\) affects the lattice constants of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\), which passes through three different phases [monoclinic monazite (\(m-m\)) type (\(x=0\)), a two-phase equilibrium of monoclinic monazite (\(m-m\)) and tetragonal scheelite (\(t-s\)) type (0.2\(\leq\)\(x\)\(\leq\)0.8), and monoclinic fergusonite (\(m-f\)) type (\(x=1\))]. The SEM micrographs show that the particle size and shape change with the crystal phase of these samples as the Nb\({}^{5+}\) concentration increases. The analysis of the HR-TEM and SAED data was found to be consistent with the XRD refinement results. The Raman spectra of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) were studied using 532 nm, 633 nm, and 785 nm excitation wavelengths. All the Raman assignments show a well-ordered enhancement or diminution with increasing Nb\({}^{5+}\) doping. The variation in intensity as well as the appearance/disappearance of Raman modes with Nb concentration coincide with the changes in the structural phases observed in the XRD analysis. This further confirms that the phase transformation in LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) agrees with the maximum-intensity peak patterns in the Raman spectra of these samples and is consistent with Badger's rule. The XPS analysis reveals the changes in the Nb \(3d\) and V \(2p\) core-level spectral intensities of the samples with increasing Nb\({}^{5+}\) concentration. The equal spin-orbit energy splitting of the states was confirmed by the average energy difference (over the La core spectra of all samples) for state I, state II bonding, and state II anti-bonding, and the observed changes in their relative intensities with Nb substitution are due to metal–ligand orbital overlap. These findings provide valuable insights into the structural and electronic properties of LaV\({}_{1-x}\)Nb\({}_{x}\)O\({}_{4}\) samples and their potential use in various practical applications.
## Acknowledgment
AS and MS thank MHRD and CSIR, respectively for the fellowship. The authors acknowledge IIT Delhi's FIST (DST, Govt. of India) UFO scheme for providing the physics department with the Raman facility. We |
2306.07012 | Generating Language Corrections for Teaching Physical Control Tasks | AI assistance continues to help advance applications in education, from
language learning to intelligent tutoring systems, yet current methods for
providing students feedback are still quite limited. Most automatic feedback
systems either provide binary correctness feedback, which may not help a
student understand how to improve, or require hand-coding feedback templates,
which may not generalize to new domains. This can be particularly challenging
for physical control tasks, where the rich diversity in student behavior and
specialized domains make it challenging to leverage general-purpose assistive
tools for providing feedback. We design and build CORGI, a model trained to
generate language corrections for physical control tasks, such as learning to
ride a bike. CORGI takes in as input a pair of student and expert trajectories,
and then generates natural language corrections to help the student improve. We
collect and train CORGI over data from three diverse physical control tasks
(drawing, steering, and joint movement). Through both automatic and human
evaluations, we show that CORGI can (i) generate valid feedback for novel
student trajectories, (ii) outperform baselines on domains with novel control
dynamics, and (iii) improve student learning in an interactive drawing task. | Megha Srivastava, Noah Goodman, Dorsa Sadigh | 2023-06-12T10:31:16Z | http://arxiv.org/abs/2306.07012v1 | # Generating Language Corrections for Teaching Physical Control Tasks
###### Abstract
AI assistance continues to help advance applications in education, from language learning to intelligent tutoring systems, yet current methods for providing students feedback are still quite limited. Most automatic feedback systems either provide binary correctness feedback, which may not help a student understand _how_ to improve, or require hand-coding feedback templates, which may not generalize to new domains. This can be particularly challenging for physical control tasks, where the rich diversity in student behavior and specialized domains make it challenging to leverage general-purpose assistive tools for providing feedback. We design and build CORGI, a model trained to generate language corrections for physical control tasks, such as learning to ride a bike. CORGI takes in as input a pair of student and expert trajectories, and then generates natural language corrections to help the student improve. We collect and train CORGI over data from three diverse physical control tasks (drawing, steering, and joint movement). Through both automatic and human evaluations, we show that CORGI can (i) generate valid feedback for novel student trajectories, (ii) outperform baselines on domains with novel control dynamics, and (iii) improve student learning in an interactive drawing task.
Machine Learning, ICML
## 1 Introduction
In our daily lives, we need to learn a variety of physical control tasks (e.g. driving a car or athletic sports) that benefit from receiving feedback of different modalities, such as visual demonstrations or haptic guidance. One of the most general forms of corrective feedback, however, is natural language: a person learning how to ride a bike can easily understand what _"make a sharper left turn"_ means, even if they are unfamiliar with the specific control dynamics of the task. While recent works have focused on learning control policies that incorporate natural language feedback from users (Broad et al., 2017; Cui et al., 2023; Sharma et al., 2022), few have considered the reverse direction of automatically generating language corrections to provide to human users. Such corrections can be useful for enhancing human-AI interaction in decision making contexts (Lai and Tan, 2019), improving interactive data collection (Gondhi et al., 2022; Gopalan et al., 2022), and more generally teaching humans how to perform physical control tasks such as rehabilitation, flying an aircraft, or operating surgical robots (Hayws et al., 2009; Maciejasz et al., 2014; Srivastava et al., 2022; Yu et al., 2022; Schrum et al., 2022).
How do humans typically provide natural language feedback? Consider a parent who is teaching their child how to ride a bike. One form of corrective feedback they may provide is general, vague utterances (e.g. _"that was okay, try again"_) that offer positive or negative reinforcement, but may not be very informative about _how_ to improve. On the other extreme, the parent may provide precise feedback (e.g. _"wider grip on the handlebars"_) that clearly conveys how the child should adjust their behavior, but requires access to domain-specific information, such as referring to handlebars, which only applies to the setting of teaching how to ride a bike. This results in a trade-off between the helpfulness of corrections, i.e., their ability to provide sufficient information to help a student improve, and their generality, i.e., their ability to be understood and conveyed across different settings.
In fact, existing works on automatic feedback generation in domains such as programming and language learning reflect this trade-off (Settles et al., 2020; Liu et al., 2022). Some systems provide simple binary feedback (e.g. whether a program ran successfully), which may not be very helpful to the student, while others require hand-coded templates (e.g. grammar checking) that lack generality. Due to the rich diversity of physical control tasks and the variation in ways a student might under-perform, we seek to strike a balance by learning to generate helpful comparative corrections (e.g. _"brake sooner"_) that can also generalize to novel trajectories within the same control space. To achieve this, we choose to leverage the expressive capabilities of language models (LMs), driven by the key insight that LMs may encode physical conceptual spaces that are isomorphic across the variety
of environments, states, and action spaces that exist across different physical control tasks (Patel and Pavlick, 2022).
Concretely, we design and build CORGI1, a model trained to generate corrections in natural language based on three physical control tasks of drawing, driving a car in simulation, and dancing. These three tasks exhibit different control spaces such as the 2D x-y position on a surface, steering and acceleration, and skeleton joint motion, which in turn require CORGI to develop a general understanding of physical concepts. At test time, CORGI takes in as input a pair of student and expert trajectories, and generates a correction in natural language to help the student better match the expert's performance. Specifically, CORGI consists of a trainable trajectory encoder that learns to map student and expert trajectories to prompts that can be used as inputs to a frozen LM to generate feedback with, thus keeping the more general representations of language encoded by the LM fixed. Through both automatic and human evaluations, we show that CORGI can (i) generate valid feedback for novel student trajectories, (ii) outperform baselines on domains with novel control dynamics, and (iii) improve student learning in an interactive drawing task. Thus, in addition to introducing the task of generating natural language feedback to humans for physical control tasks, our contributions include:
Footnote 1: CORGI: The acronym stands for natural language **cor**rections **g**eneration for **i**nstruction.
1. A dataset of 2k crowdsourced corrections collected across (student, expert) trajectories from a diverse set of control tasks (drawing, steering, and joint motion).
2. CORGI, our model trained to generate corrective feedback in natural language for these three tasks.
3. A comprehensive evaluation of the ability of CORGI to generalize to novel student trajectories and domains that share the same control space.
4. Two human subject user studies assessing both preference and the helpfulness of generated feedback in helping users improve drawing.
We will release all data, model checkpoints, code, and user study infrastructure to aid future work at [https://github.com/Stanford-ILIAD/corgi](https://github.com/Stanford-ILIAD/corgi).
## 2 Related Works
While recent works have explored generating _comparative_ descriptions, such as language descriptions of distribution shifts (Zhong et al., 2022) and relative image captions (Mirchandani et al., 2022), we are the first to explore this for physical control tasks, as well as with an educational focus.
**Language in Multimodal Tasks** Several works have leveraged advances in LMs and multimodal models to improve human interaction across physical control tasks. For example, Google's SayCan leverages LMs to break down language instructions into executable skills, providing users flexibility in receiving robotic assistance for complex, long-horizon tasks (Ahn et al., 2022). Others have explored using language to adjust robot plans with constraints or to specify subgoals (Sharma et al., 2022; Karamcheti et al., 2021; Cui et al., 2023). Finally, Tevet et al. (2022) recently introduced MotionCLIP, a transformer-based auto-encoder that shows exciting text-to-motion capabilities, such as adjusting motion sequences for novel styles (e.g. _"run away hysterically"_).
Another multimodal task closely related to ours is image (or video) captioning, where large-scale multimodal models have achieved state-of-the-art performance on classic benchmarks such as MSCOCO (Alayrac et al., 2022; Lin et al., 2014). Furthermore, Tsimpoukelli et al. (2021) achieve strong performance on captioning tasks by only training a visual encoder to output a prompt for a frozen LM, motivating our approach for CORGI.
**Language in Education** A few works have studied the role of language descriptions and feedback in educational settings. Chopra et al. (2019) show that language can reduce time in communicating concepts to a student, Sumers et al. (2020) find in a cooperative teaching game that language helps communicate more nuanced concepts than other feedback forms like demonstrations, and Ruan et al. (2019) demonstrate that interactive dialogue-based agents can improve student learning. However, these works largely focus on understanding the role of language in pedagogical settings, not automatically generating language feedback.
**Language in Physical Interaction Datasets** Large-scale datasets of language paired with physical interactions have enabled further understanding of physical reasoning, as well as inspired progress on novel interactive control tasks. For example, Ji (2022) built a richly annotated dataset of tangram puzzles to study the abstract visual reasoning capabilities of multi-modal models, Wong et al. (2022) show how to leverage annotations in the CLEVR dataset (Johnson et al., 2017) to improve generalization on spatial relationship tasks, and Lynch and Sermanet (2021) show that "play" data annotations enable strong zero-shot language conditioning for robotic tasks. To the best of our knowledge, we are the first to collect corrections over pairwise trajectories, providing insight into how people reason about physical comparisons.
## 3 Generating Corrective Feedback
We now formalize generating corrective feedback in an educational setting, where the goal is to generate corrections from the set of possible natural language utterances \(u\in\mathcal{U}\)
that are _comparative_ with respect to some expert behavior. Consider a target physical control task \(g\) (e.g. riding a bike), a student \(\mathcal{S}\) (e.g. a child learning to ride a bike), and an expert \(\mathcal{E}\) (e.g. their parent, who can already perform this task). We can treat \(g\) as a standard Markov decision process (MDP) \(<S,A,f,R,T>\) with finite horizon \(T\), reward function \(R:S\times A\rightarrow\mathbb{R}\) over state \(S\) and action \(A\) spaces, and a deterministic transition function \(f:S\times A\to S\) that maps a particular state and action pair \(s_{t},a_{t}\) at time step \(t\) to a new state \(s_{t+1}\). We can then define a trajectory \(\tau\) as a sequence of state and action pairs \(\{s_{1},a_{1},\dots,s_{T},a_{T}\}\), and can collect trajectories from both the student (\(\tau_{\mathcal{S}}\)) and the expert (\(\tau_{\mathcal{E}}\)). Under this setting, we now formalize the goal of generating corrective feedback \(u\) for the student \(\mathcal{S}\).
### Problem Statement
Effective feedback should reduce discrepancies between a student learner's current understanding and performance of a task and that of an expert teacher (Hattie and Timperley, 2007). Therefore, good corrections should not only accurately identify such discrepancies, but also be sufficiently _helpful_ for the student to improve. We thus assess a correction \(u\) by measuring the degree it reduces the gap between the student \(\mathcal{S}\)'s and expert \(\mathcal{E}\)'s performance on task \(g\).
Concretely, let \(\pi^{k}_{\mathcal{S},g}\) represent the student policy for task \(g\) at time \(k\) and \(\pi_{\mathcal{E},g}\) represent a fixed expert policy for task \(g\). From these policies, we can collect trajectory rollouts \(\tau^{g,k}_{\mathcal{S}}\) and \(\tau^{g}_{\mathcal{E}}\), respectively. Furthermore, let \(\mathcal{L}\) be a task-dependent loss function that measures the discrepancy between two trajectories. A corrective feedback utterance \(u_{k}\) provided at timestep \(k\) may result in the student updating their policy from \(\pi^{k}_{\mathcal{S},g}\) to \(\pi^{k+1}_{\mathcal{S},g}\), and so the optimal corrective feedback would be a \(u_{k}\) that minimizes the expression:
\[\min_{u_{k}}\ \mathcal{L}(\tau^{g,k+1}_{\mathcal{S}}(u_{k}),\tau^{g}_{\mathcal{E}})-\mathcal{L}(\tau^{g,k}_{\mathcal{S}},\tau^{g}_{\mathcal{E}}) \tag{1}\]
In other words, our goal is to generate language corrections \(u\) that result in the largest decrease in discrepancy between the student and the expert. In practice, however, optimizing directly for the above expression is intractable due to the lack of strong cognitive models of human learning, i.e., we do not have an accurate model of how \(u_{k}\) leads to changes in the student trajectory \(\tau^{g,k+1}_{\mathcal{S}}\). Therefore, instead of optimizing for the objective in Eq. (1), we consider whether it is possible to build a strong generative model in a supervised manner from annotated samples of corrective feedback \((\tau^{g}_{\mathcal{S}},\tau^{g}_{\mathcal{E}},u)\). In order to best capture the expressiveness of annotations provided in natural language, we propose leveraging the rich encoding of language present in modern day LMs by casting the problem of generating corrective feedback for student \(\mathcal{S}\) in reference to \(\mathcal{E}\) as a _controllable text generation_ problem. Concretely, our goal is to identify a method that, given tuples of \((\tau^{g}_{\mathcal{S}},\tau^{g}_{\mathcal{E}},u)\), allows us to effectively control (via prompting) a large pretrained LM to generate corrections \(u\) at test time when we only have access to novel student and expert trajectories \((\tau^{g}_{\mathcal{S}},\tau^{g}_{\mathcal{E}})\).
### Trajectory Encoding
To use trajectory samples \((\tau^{g}_{\mathcal{S}},\tau^{g}_{\mathcal{E}})\) to construct an input prompt that can help steer an LM to generate good corrections \(u\), we first need the ability to represent trajectories of a physical task as a sequence of text tokens. Recall
Figure 1: Overview of CORGI at test time. Trajectories \(\tau_{\mathcal{S}}\), \(\tau_{\mathcal{E}}\), from a student and an expert respectively, are mapped by a learned trajectory encoder \(\mathcal{M}_{\text{traj},\theta}\) to vectors of the same dimension as the output of the frozen language model \(\mathcal{M}_{\text{lang},\phi}\)’s embedding layer (\(\mathcal{W}_{\text{lang},\phi}\)). The resulting output vectors are stitched together with the embeddings corresponding to the vocabulary words “student”, “expert”, and “correction” in order to create the input prompt sent to \(\mathcal{M}_{\text{lang},\phi}\), from which we then generate a correction.
that a trajectory \(\tau\) is a sequence of state and action pairs \(\{s_{1},a_{1},\ldots,s_{T},a_{T}\}\) which, when concatenated, can be represented as a set of \(T\) vectors of numerical values with dimension \(d_{g}:=|S|+|A|\). Meanwhile, a typical LM (\(\mathcal{M}_{\text{lang},\phi}\)) contains a word embedding layer (\(\mathcal{W}_{\text{lang},\phi}\)) that maps text tokens from a fixed vocabulary to embeddings of a given dimension \(d_{e}\). We therefore learn a trajectory encoder model \(\mathcal{M}_{\text{traj},\theta}\) that can map any (\(T\times d_{g}\))-dimensional trajectory \(\tau^{g}\) to a set of \(n\) vectors of dimension \(d_{e}\), where \(n\) is a hyperparameter. We can then represent \(\tau^{g}_{\mathcal{E}}\) and \(\tau^{g}_{\mathcal{S}}\) as sequences of "token embeddings" \(v_{\mathcal{S},1}...v_{\mathcal{S},n}\), \(v_{\mathcal{E},1}...v_{\mathcal{E},n}\) that, as shown in Figure 1, form the input prompt to the LM from which we conditionally generate the correction \(u\).
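A minimal PyTorch sketch of such an encoder is given below. The paper specifies a 3-layer feed-forward network whose outputs match the LM embedding dimension; the hidden sizes, the flattening of the padded trajectory, and the default \(n\) are our own illustrative assumptions.

```python
# Minimal sketch of a trajectory encoder M_traj: maps a padded (T x d_g)
# trajectory to n "token embeddings" of LM dimension d_e; sizes are illustrative.
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    def __init__(self, T: int = 600, d_g: int = 10, d_e: int = 768, n: int = 8):
        super().__init__()
        self.n, self.d_e = n, d_e
        self.net = nn.Sequential(              # 3-layer feed-forward network
            nn.Linear(T * d_g, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, n * d_e),
        )

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        # traj: (batch, T, d_g) padded trajectory -> (batch, n, d_e) prompt embeddings
        out = self.net(traj.flatten(start_dim=1))
        return out.view(-1, self.n, self.d_e)
```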
### Controllable Text Generation
CORGI consists of a trainable encoder \(\mathcal{M}_{\text{traj},\theta}\) that learns to represent any arbitrary trajectory \(\tau\) as a sequence of continuous embeddings such that, when embeddings corresponding to both the student and expert trajectories are included as part of a prompt, the underlying _frozen_, pre-trained LM (\(\mathcal{M}_{\text{lang},\phi}\)) will generate appropriate corrections. We choose to keep the LM frozen in order to aid the adaptability of CORGI to new kinds of student behavior and domains where there may be changes in language not captured by our data.
We learn the same trajectory encoder (\(\mathcal{M}_{\text{traj},\theta}\)), consisting of a 3-layer feed-forward neural network that outputs \(n\) vectors with the same dimension as the target LM (e.g. 768 for GPT-2), for both student \(\mathcal{S}\) and expert \(\mathcal{E}\) trajectories. We train our model over tuples of corrections paired with student and expert trajectories (\(\tau_{\mathcal{S}}\), \(\tau_{\mathcal{E}}\), \(u\)) by constructing input prompt sequences using \(\mathcal{M}_{\text{traj},\theta}\) as shown in Figure 1. During training, we calculate the language modeling loss, where the loss of a single sample \(q_{i}\) is:
\[\mathcal{L}_{\phi}(q_{i})=-\sum_{t=1}^{|q_{i}|}\log\mathcal{M}_{\text{lang},\phi}(q_{i_{t}}\mid q_{i_{<t}})\]
However, we only use \(\mathcal{L}_{\phi}(q_{i})\) to update the weights \(\theta\) of the trajectory encoder \(\mathcal{M}_{\text{traj},\theta}\), keeping the weights of \(\mathcal{M}_{\text{lang},\phi}\) frozen. At test time, we use the same format (omitting \(u\), which is unknown) to construct the input prompt provided to the frozen LM, from which we generate corrections.
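A minimal training-step sketch with a frozen Hugging Face GPT-2 is shown below. The prompt layout follows Figure 1; masking the LM loss to the correction tokens (via the \(-100\) labels) and the optimizer settings are assumptions about the implementation, and TrajectoryEncoder refers to the sketch above.

```python
# Minimal sketch of one CORGI training step with a frozen GPT-2. The -100 labels
# mask the prompt so the LM loss is computed on the correction tokens only; this
# masking and the Adam settings are implementation assumptions on our part.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

lm = GPT2LMHeadModel.from_pretrained("gpt2")
tok = GPT2Tokenizer.from_pretrained("gpt2")
for p in lm.parameters():
    p.requires_grad = False                       # M_lang stays frozen

enc = TrajectoryEncoder()                         # trainable; see the sketch above
opt = torch.optim.Adam(enc.parameters(), lr=1e-4)
emb = lm.get_input_embeddings()

def word(w):                                      # embeddings of a vocabulary word
    return emb(torch.tensor([tok.encode(w)]))     # shape (1, n_tokens, 768)

def train_step(tau_s, tau_e, correction):
    # tau_s, tau_e: padded (1, 600, 10) trajectory tensors
    u_ids = torch.tensor([tok.encode(correction)])
    prompt = torch.cat([word("student"), enc(tau_s), word("expert"),
                        enc(tau_e), word("correction:"), emb(u_ids)], dim=1)
    labels = torch.full(prompt.shape[:2], -100, dtype=torch.long)
    labels[:, -u_ids.shape[1]:] = u_ids           # supervise only the correction
    loss = lm(inputs_embeds=prompt, labels=labels).loss
    opt.zero_grad(); loss.backward(); opt.step()  # updates theta only
    return loss.item()
```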
### Annotating Corrections & Data Augmentation
In order to train CORGI, we need to collect a dataset of corrections for paired trajectories. Because our goal is for CORGI to generalize well to novel trajectories and domains, we are primarily interested in short, general corrections that do not refer to specific aspects of the expert's trajectory or to domain-specific objects. Concretely, we ask annotators to provide brief samples of corrective feedback \(u^{(1)},u^{(2)},...,u^{(m)}\) for a particular \(\tau^{g}_{\mathcal{S}},\tau^{g}_{\mathcal{E}}\) trajectory pair for task \(g\) in free-form text, encouraging annotators to identify which of the potentially several different ways for the student to improve they believe is optimal to describe. We can then use tuples \((\tau^{g}_{\mathcal{S}},\tau^{g}_{\mathcal{E}},u^{(i)})\) to construct input prompts to train CORGI. Further details and crowdsourcing results for our annotation procedure are described in Section 4.2.
However, we observe that when human annotators provide corrective feedback in natural language, there is greater variance in the language style of the provided corrections than in the particular discrepancies they refer to. In order to enable CORGI to capture this rich style diversity efficiently, we leverage more powerful, "instruction-tuned" language models (e.g. OpenAI's text-davinci-003) for data augmentation. As described in Algorithm 1, for each annotation \(u^{(i)}\) in our original dataset, we construct an input prompt describing a teaching setting and directly asking for paraphrases of \(u^{(i)}\), which, when sent as input to a large instruction-tuned LM, results in an augmented set of utterances \(\{u^{\prime(i)}_{1},u^{\prime(i)}_{2},u^{\prime(i)}_{3}\}\) that are used for training. The prompt and example paraphrases are shown below:
```
1: Input: dataset \(\mathcal{D}\) of \((u,\tau^{g}_{\mathcal{S}},\tau^{g}_{\mathcal{E}})\) tuples with size \(|\mathcal{D}|\)
2: Input: frozen LM \(\mathcal{M}_{\text{lang},\phi}\) with token embedding layer \(\mathcal{W}_{\text{lang},\phi}\), and instruction-tuned LM \(\mathcal{M}^{\prime}_{\text{lang},\psi}\)
3: Input: number of epochs \(n_{e}\), learning rate \(\lambda\)
4: Initialize trajectory encoder \(\mathcal{M}_{\text{traj},\theta}\)
5: // data augmentation
6: Set dataset \(\mathcal{D}^{\prime}\leftarrow\mathcal{D}\)
7: for sample \(i=1\) to \(|\mathcal{D}|\) do
8:   Set prompt \(p_{i}\leftarrow\) "You are a teacher providing feedback to a student learning a control task. List 3 short paraphrases of the feedback" \(+\ u_{i}\)
9:   Set paraphrases \(u^{\prime}_{i,1},u^{\prime}_{i,2},u^{\prime}_{i,3}\leftarrow\mathcal{M}^{\prime}_{\text{lang},\psi}(p_{i})\)
10:  \(\mathcal{D}^{\prime}\).append\((u^{\prime}_{i,1},\tau^{g}_{\mathcal{S}_{i}},\tau^{g}_{\mathcal{E}_{i}})\)
11:  \(\mathcal{D}^{\prime}\).append\((u^{\prime}_{i,2},\tau^{g}_{\mathcal{S}_{i}},\tau^{g}_{\mathcal{E}_{i}})\)
12:  \(\mathcal{D}^{\prime}\).append\((u^{\prime}_{i,3},\tau^{g}_{\mathcal{S}_{i}},\tau^{g}_{\mathcal{E}_{i}})\)
13: end for
14: // training
15: for epoch \(m=1\) to \(n_{e}\) do
16:   Shuffle dataset \(\mathcal{D}^{\prime}\)
17:   for sample \(i=1\) to \(|\mathcal{D}^{\prime}|\) do
18:     Set prompt \(q_{i}\leftarrow\mathcal{W}_{\text{lang},\phi}(\textit{student})+\mathcal{M}_{\text{traj},\theta}(\tau^{g}_{\mathcal{S}_{i}})+\mathcal{W}_{\text{lang},\phi}(\textit{expert})+\mathcal{M}_{\text{traj},\theta}(\tau^{g}_{\mathcal{E}_{i}})+\mathcal{W}_{\text{lang},\phi}(\textit{correction:})+\mathcal{W}_{\text{lang},\phi}(u_{i})\)
19:     Set loss \(\mathcal{L}(u_{i},\tau^{g}_{\mathcal{S}_{i}},\tau^{g}_{\mathcal{E}_{i}})\leftarrow\mathcal{L}_{\phi}(q_{i})\)  // LM loss
20:     Update \(\theta\leftarrow\theta-\lambda\nabla_{\theta}\mathcal{L}(u_{i},\tau^{g}_{\mathcal{S}_{i}},\tau^{g}_{\mathcal{E}_{i}})\)
21:   end for
22: end for
```
**Algorithm 1** Train CORGI
You are a teacher providing feedback to a student learning a control task. List 3 short paraphrases of the feedback _"turner
The above example shows that paraphrases returned from the text-davinci-003 LM retain the particular discrepancy of the correction while modifying its style, language, and correcting for typos and grammatical errors. As we will show next (Table 1), training CORGI over augmented data improves performance across all control tasks.
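A sketch of this augmentation step, using the legacy OpenAI completions interface (openai\(<\)1.0) that exposed text-davinci-003, might look like the following; the line-based parsing of the returned paraphrases and the sampling temperature are our assumptions, and `dataset` is a placeholder for the annotated tuples.

```python
# Minimal sketch of the paraphrase augmentation of Algorithm 1 via the legacy
# OpenAI completions interface (openai<1.0); parsing of numbered lines and the
# temperature are assumptions, and `dataset` stands for the annotated tuples.
import openai

def paraphrase(u, k=3):
    prompt = ("You are a teacher providing feedback to a student learning a "
              f"control task. List {k} short paraphrases of the feedback \"{u}\"")
    resp = openai.Completion.create(model="text-davinci-003", prompt=prompt,
                                    max_tokens=100, temperature=0.7)
    lines = resp["choices"][0]["text"].strip().splitlines()
    return [l.lstrip("0123456789. ") for l in lines if l.strip()][:k]

augmented = [(u2, tau_s, tau_e)
             for (u, tau_s, tau_e) in dataset
             for u2 in [u] + paraphrase(u)]  # keep the original annotation too
```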
## 4 Experimental Results
We now present our three tasks and experimental results. Details of user studies (including IRB approval) and training of CORGI, which is built on a 124M parameter model of the GPT-2 family (Wolf et al., 2019), are in the Appendix.
### Environments & Datasets
We study three physical control tasks that span common primitives: drawing (x-y control), steering (acceleration and heading angle control), and human body movement (joint control). For each environment, we also create in-domain (ID) and out-of-domain (OOD) splits that share the same control space, but require different dynamics.2
Footnote 2: While we aimed to pick OOD splits that were semantically far (e.g. Futurama is a synthetic language), it is still possible there may be smaller “sub-skills” shared between ID-OOD splits.
**Drawing:** The student's goal is to learn how to draw characters from different alphabet scripts. We select 10 characters from 5 scripts (ID: Arabic, Burmese, & Japanese, OOD: Futurama & Bengali) from the Omniglot dataset (Lake et al., 2015). We select 1 trajectory per character as the expert trajectory and randomly sample 5 student trajectories, split between train/test sets. Each trajectory is a sequence of 2D actions along x-y coordinates.
**Steering:** The student's goal is to learn how to park a vehicle in a target parking spot. We modify the Parking environment from Leurent (2018) by changing the steering sensitivity and min/max speed for 3 vehicle types (ID: Car & Plane, OOD: Bike). For each vehicle type, we design a hand-coded expert policy, and then collect 20 student trajectories including perturbations of the expert policy and half-trained RL agents (details in Appendix A.3). Trajectories are split between train/test sets, and consist of 2D actions controlling acceleration and heading angle and 6D states corresponding to vehicle position, velocity, and heading.
**Movement:** The student's goal is to learn how to perform a full-body movement activity. We select activities from the BABEL dataset (Punnakkal et al., 2021) of 3D human motion (ID: Walk, Jump, & Throw, OOD: Wave, Jumping Jacks). For each activity we select 1 trajectory as the expert, and sample 15 student trajectories, which are then split between train/test sets. We represent trajectories with learned video-text representations from X-CLIP (Ma et al., 2022), treating the output as a trajectory sequence of 1D states.
Example student trajectories for each environment are shown in Figure 2. We pad trajectories to a fixed dimension of 10 and length of 600 as input to CORGI. Further details on expert trajectory selection, as well as the assumption of a single expert behavior, are in Appendix A.2.
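The padding described above can be sketched as follows; zero-padding, truncation of longer trajectories, and float32 storage are assumptions about the implementation.

```python
# Minimal sketch of trajectory padding to a fixed (600, 10) shape; zero padding
# and truncation of over-long trajectories are assumed conventions.
import numpy as np

def pad_trajectory(traj: np.ndarray, T: int = 600, d: int = 10) -> np.ndarray:
    out = np.zeros((T, d), dtype=np.float32)
    t = min(len(traj), T)
    w = min(traj.shape[1], d)
    out[:t, :w] = traj[:t, :w]
    return out
```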
### Crowdsourcing Details
We recruit crowdworkers on Prolific3 to annotate paired student/expert trajectories with corrections. We instruct crowdworkers not to refer to the expert demonstrations in their annotations. The crowdsourced corrections demonstrate a variety of ways people express feedback, such as rich shape descriptions (e.g. _"go towards making an infinity shape rather than a venn diagram"_), encouragement (e.g. _"more vertical but good effort"_), and action ordering (e.g. _"after second bend draw towards left not down"_). We collect **2,023** corrections, and provide further details in Appendix A.4.
Footnote 3: [https://www.prolific.co/](https://www.prolific.co/)
### Automatic Evaluation
Our first evaluation goal is to measure the degree to which CORGI assigns high likelihood to examples of good corrections, which can be useful for tasks such as automatically evaluating feedback provided by instructors. In Table 1, we report the average perplexity (i.e., the exponentiated loss; a minimal computation is sketched after the list below) across ground-truth corrections for novel student trajectories unseen during training, for both the ID and OOD splits of each task. We compare results across the following ablations:
* **Permute Correction**: if a task has low variance across the types of feedback needed (e.g. all students need to "improve posture" in movement), we should observe no difference when corrections are permuted.
* **Permute Student**: CORGI should assign higher (worse) perplexity when the student trajectory is randomized, showing the ability to tailor corrective feedback to individual students. For a fair comparison, we sample the replacement student trajectories from the eval set to maintain the same overall distribution.
* **CORGI w/o Pretraining**: We ablate the effect of pre-training by (i) using the same GPT-2 architecture, but without pre-trained weights, and (ii) using a 3-layer LSTM with a pre-trained embedding layer.
* **CORGI w/o Data Augmentation**: We train CORGI on the original, smaller dataset consisting purely of human annotations, without any paraphrases from our automatic data augmentation procedure.
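For concreteness, the perplexity reported here is the exponential of the average LM loss; a minimal computation might look like the sketch below, where build_prompt_and_labels is a hypothetical helper reproducing the prompt layout of the training sketch above, and lm is the frozen GPT-2.

```python
# Minimal sketch: perplexity as the exponentiated mean loss over held-out samples.
# build_prompt_and_labels is a hypothetical helper; whether the mean is taken per
# sample or per token is a convention we assume here.
import math
import torch

@torch.no_grad()
def avg_perplexity(samples):
    losses = [lm(inputs_embeds=p, labels=l).loss.item()
              for p, l in (build_prompt_and_labels(s, e, u) for s, e, u in samples)]
    return math.exp(sum(losses) / len(losses))
```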
As Table 1 shows, CORGI outperforms both permutation ablations, suggesting that the model does take into account specific student trajectories, rather than just learning general task language. As expected, no pre-training decreases performance, due to the lack of strong language representations. Furthermore, data augmentation results in an improvement across all tasks for both ID and OOD settings. Although the gap between ID and OOD is high, we note that even in OOD settings CORGI generally outperforms ablations.
Thus, our second automatic evaluation focuses on the quality of generated samples from CORGI. Under a fixed set of decoding parameters (nucleus sampling (Holtzman et al., 2020), temperature = 0.5), we measure the average similarity between generated and ground-truth corrections for each \((\tau_{\mathcal{S}},\tau_{\mathcal{E}})_{i}\) in our test set. However, as Figure 2 shows, the annotations for a sample may have high variance because they identify different discrepancies. We therefore use a re-weighted version of BERTScore that accounts for the intrinsic variance between ground-truth captions, originally proposed for image captioning (Yi et al., 2020). In addition to the pre-training and data augmentation ablations, we compare the average similarity of generated samples from three alternative methods with CORGI:
* **Random:** We select a random human annotation from the same domain as the input trajectories, allowing us
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
Ablation & \multicolumn{2}{c}{Drawing} & \multicolumn{2}{c}{Steering} & \multicolumn{2}{c}{Movement} \\ \hline
 & ID & OOD & ID & OOD & ID & OOD \\ \hline
Permute Correction & 310 \(\pm\) 38 & 249 \(\pm\) 1.1 & 84 \(\pm\) 18.5 & **194 \(\pm\) 2.4** & 47 \(\pm\) 2.3 & 123 \(\pm\) 7.4 \\
Permute Student & 153 \(\pm\) 5.6 & 256 \(\pm\) 5.9 & 96 \(\pm\) 8.9 & 218 \(\pm\) 3.1 & 35 \(\pm\) 0.28 & 111 \(\pm\) 4.9 \\ \hline
CORGI & **145 \(\pm\) 1.5** & **246 \(\pm\) 2.5** & **51 \(\pm\) 5.9** & **194 \(\pm\) 2.3** & **33 \(\pm\) 0.22** & **109 \(\pm\) 3.1** \\
w/o Data Aug. & 162 \(\pm\) 6.3 & 251 \(\pm\) 2.9 & 54 \(\pm\) 1.8 & 635 \(\pm\) 24.3 & 36 \(\pm\) 2.3 & 159 \(\pm\) 6.7 \\
w/o Pretraining (GPT-2) & 959 \(\pm\) 62 & 808 \(\pm\) 72 & 302 \(\pm\) 32 & 848 \(\pm\) 88 & 376 \(\pm\) 37 & 823 \(\pm\) 53 \\
w/o Pretraining (LSTM) & 215 \(\pm\) 1.2 & 584 \(\pm\) 1.2 & 197 \(\pm\) 1.4 & 271 \(\pm\) 1.1 & 221 \(\pm\) 1.3 & 252 \(\pm\) 1.1 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Perplexity on held-out test sets (lower is better) across three control tasks. CORGI achieves lower perplexity in comparison to baselines across all tasks, and both pre-training and data augmentation components improve performance. Although there exists a gap between in-domain (ID) and out-of-domain (OOD) performance, CORGI still outperforms ablations even in OOD settings.
Figure 2: Example student trajectories, reference corrections from annotators, and corrections generated by CORGI for novel trajectories for all three control tasks. Generated corrections in _italics_ are completely unseen during training, for any trajectory.
to measure the degree to which CORGI's performance is due simply to using vocabulary appropriate for the domain.
* **Nearest Neighbors:** For a given student trajectory in our test data, we use our trajectory encoder \(\mathcal{M}_{\text{traj},\theta}\) to find the nearest-neighbor student trajectory seen during training (using the mean squared error between encoder outputs). We then randomly sample from the set of ground-truth annotations provided for this student (a minimal retrieval sketch follows this list).
* **Permute Student:** We select a correction from the same domain and expert as the input trajectories, but a random student. Note this method is distinct from the Permute Student method in the previous section.
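A minimal version of this retrieval is sketched below; flattening the encoder outputs before the mean squared error and the dictionary layout of the training examples are our assumptions.

```python
# Minimal sketch of the nearest-neighbors baseline: find the training student
# whose encoded trajectory is closest in MSE, then sample one of its annotations.
import random
import torch

@torch.no_grad()
def nearest_neighbor_correction(tau_s, train_set):
    q = enc(tau_s).flatten()   # enc is the trained trajectory encoder
    best = min(train_set, key=lambda ex:
               torch.mean((enc(ex["tau_s"]).flatten() - q) ** 2).item())
    return random.choice(best["corrections"])
```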
Table 2 shows that CORGI outperforms these baselines across all tasks, for both ID and OOD settings. As expected, removing pre-training results in samples with lower similarity scores than **Random**, and we observe that without a pre-trained LM the model can only generate domain-specific verbs (e.g. _"make"_ or _"move"_). Interestingly, we observe that for this metric there is less of a gap between ID and OOD; in fact, for Drawing, generated samples from CORGI are _more_ similar to the ground-truth annotations for OOD characters. As shown in Figure 2, for both ID and OOD we observe that CORGI indeed often generates corrections that are similar to the ground-truth annotations.
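A plain-BERTScore version of this similarity scoring can be sketched with the bert-score package as below; the variance reweighting of Yi et al. (2020) is not shown, and taking the maximum over references is one plausible way to aggregate multiple ground-truth annotations.

```python
# Minimal sketch of similarity scoring with standard BERTScore (the reweighted
# variant of Yi et al. (2020) is omitted); max-over-references aggregation is
# an assumption on our part.
from bert_score import score

def best_ref_bertscore(candidate, references):
    _, _, f1 = score([candidate] * len(references), references, lang="en")
    return f1.max().item()
```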
**Error Analysis**
In practice, however, neither automatic evaluation metric we report fully captures the complexities of evaluating corrections. For example, the types of sequences CORGI assigns high (worse) perplexity to include metaphorical utterances and noise (e.g. _"the shape at the top should be larger, matching the hook shape"_) and domain-specific language (e.g. _"go forward gear not reverse"_). Meanwhile, the improved BERTScore method from Yi et al. (2020) assigns a score of 0.0 to examples such as (_reference: well done, perfect!_, CORGI: _you nailed it!_), where the expressed meanings are equivalent but the language used is very different. This motivates the need for human evaluation, which we focus on next.
### Human Preference Evaluation
We first choose to assess the degree to which human evaluators _prefer_ CORGI over randomly chosen utterances from the same domain. Specifically, we measure preference as the rate at which human evaluators pick the correction generated by CORGI when it is shown alongside two other randomly selected corrections from the same domain. We then compare this rate with three other conditions that replace CORGI:
* **Random:** We calculate the rate at which human evaluators pick a correction randomly selected from the training data within the same domain. Since the other options are also randomly sampled, as the number of samples increases, this should converge to \(33\%\).
* **Nearest Neighbors:** As described in Section 4.3, we randomly sample a ground-truth correction provided to the nearest-neighbor student.
* **Ground Truth:** We calculate the rate at which human evaluators pick a correction sampled from the set of ground-truth annotations for the target trajectory.
Users are shown a pair of student and expert trajectories (e.g. videos of human movement for the movement task) and asked to pick one of the three corrections in response to the instruction _"Which feedback do you think is most helpful to provide to the student?"_. We collect preference data from 15 users per condition for each of our three tasks, randomizing the order in which the corrections are presented. We recruit crowdworkers on Prolific, and provide further details in Appendix A.5. Due to cost, we limit ourselves to novel in-domain (ID) trajectories for each of our control tasks.
Figure 3 shows that across all three control tasks, users were significantly more likely to prefer corrections from CORGI than our **Random** control. Furthermore, corrections generated with the **Nearest Neighbors** method are only comparable to those of CORGI for the movement task, highlighting the ability of CORGI to generalize to student trajectories unseen during training. Surprisingly, in the
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
Method & \multicolumn{2}{c}{Drawing} & \multicolumn{2}{c}{Steering} & \multicolumn{2}{c}{Movement} \\ \hline
 & ID & OOD & ID & OOD & ID & OOD \\ \hline
Random & 0.20 \(\pm\) 0.03 & 0.21 \(\pm\) 0.04 & 0.19 \(\pm\) 0.04 & 0.22 \(\pm\) 0.03 & 0.23 \(\pm\) 0.06 & 0.18 \(\pm\) 0.03 \\
Nearest Neighbors & 0.28 \(\pm\) 0.03 & 0.22 \(\pm\) 0.03 & 0.28 \(\pm\) 0.05 & 0.16 \(\pm\) 0.04 & 0.31 \(\pm\) 0.05 & 0.19 \(\pm\) 0.05 \\
Permute Student & 0.22 \(\pm\) 0.03 & 0.23 \(\pm\) 0.04 & 0.14 \(\pm\) 0.03 & 0.26 \(\pm\) 0.01 & 0.14 \(\pm\) 0.03 & 0.15 \(\pm\) 0.03 \\ \hline
CORGI & 0.3 \(\pm\) 0.01 & **0.34 \(\pm\) 0.03** & **0.32 \(\pm\) 0.08** & **0.31 \(\pm\) 0.02** & **0.39 \(\pm\) 0.03** & **0.24 \(\pm\) 0.03** \\
w/o Pretraining (GPT-2) & 0.11 \(\pm\) 0.02 & 0.18 \(\pm\) 0.03 & 0.10 \(\pm\) 0.03 & 0.12 \(\pm\) 0.03 & 0.11 \(\pm\) 0.03 & 0.11 \(\pm\) 0.02 \\
w/o Pretraining (LSTM) & 0.15 \(\pm\) 0.03 & 0.17 \(\pm\) 0.03 & 0.12 \(\pm\) 0.04 & 0.13 \(\pm\) 0.03 & 0.15 \(\pm\) 0.03 & 0.18 \(\pm\) 0.02 \\
w/o Data Aug. & **0.32 \(\pm\) 0.04** & 0.26 \(\pm\) 0.04 & 0.26 \(\pm\) 0.03 & 0.27 \(\pm\) 0.02 & 0.19 \(\pm\) 0.05 & 0.23 \(\pm\) 0.02 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Similarity scores on held-out test sets (higher is better) based on an improved BERTScore that accounts for ground-truth variance, from Yi et al. (2020). Across all tasks, CORGI outperforms both randomly sampled ID feedback and a nearest neighbors baseline.
steering task, we observe that CORGI significantly outperforms **Ground Truth**. One potential hypothesis is that preferences capture important aspects of corrections beyond accuracy, including clarity, constructiveness, and tone. Generated samples from CORGI are often concise and formal, while human corrections exhibit more variety. For example, the most common human annotation that evaluators did _not_ select in the steering task was _"right hand down, route south"_, which may be less clear than the generated sample for the same comparison (_"glide gracefully to the left"_). Finally, we provide pair-wise comparison results on feedback from CORGI when directly compared with **Ground Truth** and **Nearest Neighbors** feedback in Appendix A.5.
### Learning from Feedback
Our final human evaluation directly measures the degree to which CORGI helps reduce the discrepancy between student \(\mathcal{S}\) and expert \(\mathcal{E}\) performance in the drawing task. We design a teaching interface, shown in Appendix A.6, where users are given three chances to draw a provided stimulus and match a hidden expert trajectory \(\tau_{\mathcal{E}}\). The only information users receive is corrections corresponding to their trajectory \(\tau_{\mathcal{S}}\) and a numerical score calculated as the mean squared error between \(\tau_{\mathcal{S}}\) and the expert trajectory \(\tau_{\mathcal{E}}\). We then measure the change in student error between the first and third trials.
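The numerical score shown to users can be sketched as below, under the assumption that student and expert drawings are resampled to equal-length 2D trajectories before comparison.

```python
# Minimal sketch of the interface score: mean squared error between the student's
# drawing and the hidden expert trajectory, assuming both are resampled to the
# same length beforehand.
import numpy as np

def drawing_error(tau_s, tau_e):
    return float(np.mean((np.asarray(tau_s) - np.asarray(tau_e)) ** 2))
```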
We assign 20 users to a control group where corrections are randomly sampled from data within the same domain, 20 users to a control group where no corrections are provided, and 20 users to the experiment group, who receive corrective feedback from CORGI. While users who received random feedback (**-0.17 \(\pm\) 1.16**) and no feedback (**-0.20 \(\pm\) 1.01**) both on average _decreased_ in performance, users provided feedback from CORGI actually _improved_, with an average score difference of **1.84 \(\pm\) 0.7**. A larger sample size may be needed to observe a stronger effect (we observe \(p<0.1\) using Welch's t-test with multiple-hypothesis correction, after verifying the normality assumption, with a medium effect size of Cohen's \(d=0.52\)). However, we provide further results showing that feedback from CORGI also outperforms a baseline with only visual feedback, and covers a diverse set of topics such as size (_"make it all a bit bigger"_) and edge straightness, in the Appendix.
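The reported significance test can be sketched as below; the pooled-standard-deviation form of Cohen's d is one common convention, assumed here since the exact computation is not spelled out.

```python
# Minimal sketch of the analysis: Welch's t-test on per-user score improvements,
# plus Cohen's d with a pooled standard deviation (an assumed convention).
import numpy as np
from scipy.stats import ttest_ind

def compare_groups(corgi_deltas, control_deltas):
    t, p = ttest_ind(corgi_deltas, control_deltas, equal_var=False)  # Welch's test
    pooled_sd = np.sqrt((np.var(corgi_deltas, ddof=1)
                         + np.var(control_deltas, ddof=1)) / 2.0)
    d = (np.mean(corgi_deltas) - np.mean(control_deltas)) / pooled_sd
    return t, p, d
```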
Overall, our results show that CORGI can generate corrective feedback for novel student trajectories across a diverse set of control tasks that not only outperforms baselines in automatic evaluation, but is also preferred by human raters and helps learners improve at a physical control task. One appealing aspect of CORGI is the ability to avoid fine-tuning the underlying LM. This allows us to retain the rich and expressive encoding of language the LM has learned, enabling several possible directions for future work that we discuss next.
## 5 Limitations & Future Directions
As our work is a first step towards building a model capable of generating natural language corrections for physical control tasks, there are a few limitations and important directions for future work. First, one important aspect of corrective feedback is _tone_: language with positive encouragement may lead to different student learning outcomes than more terse feedback, and future work could consider adding information about the student (e.g. age, personality) as an additional control for CORGI.
Another limitation is that CORGI does not generate feedback with domain-specific references - future work could consider integration of corrections from CORGI with domain-specific approaches (Schrum et al., 2022). Additionally, while CORGI only provides corrections over the entire trajectory, many control tasks involve complex sequences of actions that combine many different sub-tasks, or skills. Future work could consider learning how to jointly break down student trajectories into different sub-components, and then generating corresponding feedback for each part.
Finally, as described in Appendix A.2, a key assumption of our work is the need for an expert reference trajectory used to provide feedback. In practice, there may be many expert
Figure 3: Across all three tasks, users are more likely to prefer feedback generated by CORGI over random corrections than feedback from the random control and nearest-neighbors baselines. For steering, feedback from CORGI also outperforms ground-truth corrections, which may be due to the high-variance human annotations. Asterisk (*) marks a statistically significant difference (\(p<0.05\)) from CORGI.
ways to perform a physical control task, which expert-specific systems may fail to capture. While CORGI can flexibly take any expert trajectory as input, its performance is limited by the diversity of expert trajectories it saw during training, and we believe enabling CORGI to generate appropriate corrections for a diverse range of expert behaviors in a data-efficient manner is an important next step.
Beyond these limitations, because CORGI can take any student and expert trajectory as input, potential misuse includes a malicious agent leveraging CORGI repeatedly to generate corrections that actually guide a student towards harmful behavior (e.g. physical actions that harm the body). An interesting avenue for future work is creating a mechanism that can detect whether an expert trajectory is plausible and safe for a human to perform under domain-specific constraints.
## 6 Acknowledgements
We thank all reviewers for their valuable feedback. We acknowledge support from Point72, Ford, AFOSR, and NSF Awards #2218760, #2132847, and #2006388. MS was also supported by the NSF GRFP under DGE-1656518.
|
2305.11320 | Parameter-Efficient Learning for Text-to-Speech Accent Adaptation | This paper presents a parameter-efficient learning (PEL) method to develop
low-resource accent adaptation for text-to-speech (TTS). A resource-efficient
adaptation from a frozen pre-trained TTS model is developed by using only 0.8\%
to 1.2\% of the original trainable parameters to achieve competitive performance
in voice synthesis. Motivated by a theoretical foundation of optimal transport
(OT), this study carries out PEL for TTS where an auxiliary unsupervised loss
based on OT is introduced, in addition to the supervised training loss, to
maximize the difference between the pre-trained source domain and the (unseen)
target domain. Further, we leverage this unsupervised loss refinement to boost
system performance via either sliced Wasserstein distance or maximum mean
discrepancy. The merit of this work is demonstrated by developing PEL solutions
based on residual adapter learning and model reprogramming, evaluated on
Mandarin accent adaptation. Experiment results show that the proposed methods
can achieve competitive naturalness with parameter-efficient decoder
fine-tuning, and that the auxiliary unsupervised loss improves model performance
empirically. | Li-Jen Yang, Chao-Han Huck Yang, Jen-Tzung Chien | 2023-05-18T22:02:59Z | http://arxiv.org/abs/2305.11320v1 | # Parameter-Efficient Learning for Text-to-Speech Accent Adaptation
###### Abstract
This paper presents a parameter-efficient learning (PEL) method to develop low-resource accent adaptation for text-to-speech (TTS). A resource-efficient adaptation from a frozen pre-trained TTS model is developed by using only 0.8% to 1.2% of the original trainable parameters to achieve competitive performance in voice synthesis. Motivated by a theoretical foundation of optimal transport (OT), this study carries out PEL for TTS where an auxiliary unsupervised loss based on OT is introduced, in addition to the supervised training loss, to maximize the difference between the pre-trained source domain and the (unseen) target domain. Further, we leverage this unsupervised loss refinement to boost system performance via either sliced Wasserstein distance or maximum mean discrepancy. The merit of this work is demonstrated by developing PEL solutions based on residual adapter learning and model reprogramming, evaluated on Mandarin accent adaptation. Experiment results show that the proposed methods can achieve competitive naturalness with parameter-efficient decoder fine-tuning, and that the auxiliary unsupervised loss improves model performance empirically.
Li-Jen Yang\({}^{\star}\), Chao-Han Huck Yang\({}^{\dagger}\), Jen-Tzung Chien\({}^{\star}\)

\({}^{\star}\)National Yang Ming Chiao Tung University, Taiwan
\({}^{\dagger}\)Georgia Institute of Technology, USA
[email protected], [email protected], [email protected]
**Index Terms**: Parameter-efficient learning, optimal transport, text-to-speech, accent adaptation, pre-trained model
## 1 Introduction
Large-scale pre-trained acoustic models [1, 2] and language models [3], the so-called foundation models [4], have been emerging due to the rapid development of efficient computation hardware and self-supervised learning. Recently, generative diffusion models [5, 6, 7] have achieved dominant performance across different tasks. Both pre-trained foundation models and generative diffusion models require powerful computation hardware and long training times. Therefore, parameter-efficient adaptation plays an important role in many practical applications when utilizing a pre-trained model for a low-resource downstream task. Approaches such as prompt tuning [8], residual adapters [9], and model reprogramming [10, 11, 12] have been developed for parameter-efficient learning (PEL) [13], where several benefits are commonly pursued. First, the training time is reduced by freezing the backbone model and only adapting the domain-specific parameters. Second, generalization to a low-resource and out-of-distribution target domain is improved. In [11, 14], fine-tuning the whole model was shown to distort the pre-trained features for an out-of-distribution target task. Updating a small portion of a pre-trained model, or freezing the entire backbone model and updating only the extra weights, was presented as lightweight fine-tuning that generalizes the representation under distribution shift. Third, the backbone model is reused [8, 15] by methods that only add additional weights, arrange task-specific prompts, or reprogram input layers for different tasks. Several recent works have introduced adapter learning and model reprogramming in speech-related applications. In [16], a pre-trained English automatic speech recognition (ASR) model was repurposed as a multilingual ASR by adding reprogramming layers. In [17], residual adapters were added to a backbone model for speaker adaptation in text-to-speech (TTS), ensuring the naturalness of synthesized speech through the use of only a few controllable parameters.
Meanwhile, accent adaptation [18, 19, 20] is a practical issue for TTS systems, which must synthesize speech in the presence of pronunciation variations or accent shifts across speakers of different ages from different regions. For instance, accent adaptation is an essential step when conducting English accent transfer among different countries, including the United States, the United Kingdom, and Australia. For Chinese spoken language, similar challenges [21] have been reported because of perceptual differences between the Mainland Chinese accent (zh-CN) and the Taiwanese Mandarin accent (zh-TW) in voice quality and intelligibility. One important task in TTS is to adapt additional layers and weights for accent adaptation based on a frozen pre-trained TTS backbone. This study addresses the emerging challenge [21] of data sparseness for the Taiwanese Mandarin accent in accent adaptation by utilizing a pre-trained TTS model trained on a large corpus of Chinese speech with a Mainland Chinese accent. A direct way to tackle this problem is to fine-tune the whole pre-trained model [22], but the computation cost is high due to the large number of parameters. Accordingly, this paper presents and investigates several parameter-efficient methods for accent adaptation of low-resource spoken language. To our knowledge, this is the first study to explore a model reprogramming scheme for TTS. An additional scheme that learns residual adapters [23, 17], previously used for speaker adaptation in TTS, is also evaluated. The main contributions of this paper are summarized as follows. First, two PEL methods for accent adaptation utilizing a pre-trained TTS model are presented. Second, a novel PEL method based on model reprogramming for accent adaptation is proposed. The input-based reprogramming for TTS is exploited and can be applied for model re-deployment [24]. Third, a model regularization based on optimal transport is developed to improve TTS performance by characterizing the latent feature distance.
## 2 Background Survey
### TTS by Utilizing Pre-Trained Model
Many prior works on speaker adaptation [17, 18] or accent adaptation [22] were developed by conducting knowledge transfer based on a pre-trained multi-speaker TTS model. A simple
method is to fine-tune or adapt the entire pre-trained model to a target speaker. This method can synthesize speech with high naturalness and strong speaker characteristics. However, the drawback [13] is the scaling issue: adapting so many parameters incurs a high computational cost. In [25], AdaSpeech was proposed as a parameter-efficient TTS for a new speaker by additionally performing conditional layer normalization. In [17], the residual adapter scheme was shown to be an effective approach to speaker adaptation by adapting the prosody features of a TTS model from the source to the target domain through the adapter layers. This study addresses parameter-efficient methods, e.g., adapter learning, for accent adaptation and further explores a new PEL method based on model reprogramming, implemented for Chinese accent adaptation.
### Parameter-Efficient Learning
There are three major approaches to parameter-efficient learning: input prompting, adapter learning, and model reprogramming. In [3], the generative pre-trained transformer-3 (GPT-3) was proposed, adopting prompts to guide the learned model toward generating the related response. In [8, 26], the bidirectional encoder representations from transformers (BERT) model was utilized to perform domain adaptation. By concatenating trainable task-specific prompts with the input text sequence, the gap between pre-training tasks and downstream tasks was bridged. In [17, 27, 28], a sub-module of the transformer layer using an adapter or residual adapter was added and adjusted so as to obtain remarkable performance on a target task. In addition, model reprogramming, or adversarial reprogramming, was proposed to reprogram input data by introducing trainable layers that translate new inputs into the source domain, so that the pre-trained model can be used directly. For example, in [11], Voice2Series was proposed to transform an input time series so that a large acoustic model could be utilized to obtain competitive results on various time-series classification tasks. This paper is motivated to develop various parameter-efficient methods for accent adaptation in a low-resource TTS setting.
## 3 Low-Resource Accent Adaptation
Considering the success of parameter-efficient learning in different tasks and domains, this paper presents parameter-efficient learning and model regularization for low-resource accent adaptation in a TTS system, where a backbone based on the _conformer-fastspeech2_ [29] model from ESPnet [30] is utilized. The system architecture is shown in Figure 1. The speech spectrogram is synthesized from the phoneme input through a stack of components consisting of a phoneme embedding, an encoder, a variance adapter, and a Mel decoder, where additional position embeddings and _x-vectors_ [31] are added during stack processing. Our proposed model consists of two parts, parameter learning and parameter regularization, which are individually addressed in the following sections.
### Parameter-Efficient Accent Adaptation
First, parameter-efficient learning is performed by reshaping the architecture of the Mel decoder through three types of layers while the remaining components of the TTS system are frozen. The input space and latent space of the Mel decoder are re-organized by merging with the input reprogramming layer, the latent adapter layer, and the latent reprogramming layer, which are shown in Figure 1. The number of controllable parameters due to the add-on layers is very limited relative to the whole model architecture. Briefly speaking, the input reprogramming layer aims to reduce the cost of model re-deployment via a prompt-like scheme. The latent reprogramming layer is implemented as a separate reprogramming layer appended in the latent space of the Mel decoder.
#### 3.1.1 Input reprogramming layer
Input reprogramming was first proposed in [11, 32] and has recently been used as a prompt-tuning scheme to redeploy endpoint models for speech processing tasks [24, 15]. Traditionally, it is popular to carry out domain adaptation by fine-tuning the entire model or only a portion of it. With the fine-tuning approach, the pre-trained model must be re-tuned every time a new task is present. The computational cost becomes severe when a large-scale pre-trained model such as CLIP [33], GPT-3 [3], or Wav2Vec2 [34] is utilized. Parameter-efficient learning is required to handle this issue [15, 24]. This study introduces the input reprogramming layer in conjunction with the Mel decoder of a frozen TTS model. This treatment aims to address the challenge of re-deploying accent voices under a fixed Mel decoder. The composite layer is a trainable feature extractor \(\mathcal{H}_{\theta}\), stacking a linear feedforward layer and a 1-dimensional convolutional layer. The input reprogramming function \(\mathcal{R}_{\theta}\) with the decoder input \(z\) is yielded as a residual calculation \(\mathcal{R}_{\theta}(z)=z+\mathcal{H}_{\theta}(z)=z^{\prime}\), where \(z^{\prime}\) represents the reprogrammed decoder input that is fed into the pre-trained endpoint model, and \(\theta\) contains the only trainable parameters of the entire TTS model, updated during back-propagation.
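For concreteness, a minimal PyTorch sketch of such a residual reprogramming layer is given below; the hidden width, kernel size, and exact layer ordering are assumptions consistent with the description above (a linear feedforward layer stacked with a 1-D convolution), not the authors' exact implementation:

```python
import torch.nn as nn

class InputReprogramming(nn.Module):
    """Residual input reprogramming: R(z) = z + H(z), where H stacks a
    linear feedforward layer and a 1-D convolution (dims are assumed)."""
    def __init__(self, feat_dim=384, hidden_dim=96, kernel_size=3):
        super().__init__()
        self.linear = nn.Linear(feat_dim, hidden_dim)
        self.conv = nn.Conv1d(hidden_dim, feat_dim, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, z):            # z: (batch, time, feat_dim)
        h = self.linear(z)           # (batch, time, hidden_dim)
        h = self.conv(h.transpose(1, 2)).transpose(1, 2)
        return z + h                 # reprogrammed decoder input z'
```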
#### 3.1.2 Latent adapter layer
Next, the frozen TTS backbone model is utilized by configuring an adapter layer or reprogramming layer in the latent space of the Mel decoder, whose outputs are finally used to synthesize the speech spectrogram. Basically, the adapter layer, or module, is formed of one linear down-projection layer and one linear up-projection layer used to extract bottleneck features, and a residual connection performs additive feature learning to retain the knowledge of the adapter input. There are \(N\) blocks of decoder layer and adapter layer, which are stacked to form the Mel
Figure 1: System architecture for parameter-efficient learning using the conformer-fastspeech2 backbone. Three kinds of layers are configured in the input and latent spaces of the Mel decoder.
decoder. Only the adapter layers are fine-tuned; the other layers are frozen.
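A minimal sketch of such a bottleneck adapter in PyTorch could look as follows; the nonlinearity between the down- and up-projections is an assumption, as the text only specifies the two linear layers and the residual connection:

```python
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Bottleneck adapter: linear down-projection, nonlinearity, linear
    up-projection, plus a residual connection (r = 96 as in Section 4)."""
    def __init__(self, feat_dim=384, bottleneck=96):
        super().__init__()
        self.down = nn.Linear(feat_dim, bottleneck)
        self.up = nn.Linear(bottleneck, feat_dim)
        self.act = nn.ReLU()   # assumed nonlinearity

    def forward(self, h):
        return h + self.up(self.act(self.down(h)))
```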
#### 3.1.3 Latent reprogramming layer
Another latent configuration of the Mel decoder is to add a reprogramming layer in each block to enrich the optimization procedure of the TTS model. Given a frozen decoder layer \(\mathcal{F}_{\Theta}^{i}\) in each block \(i\), the latent features \(h^{i}\) of the \(i\)-th decoder layer are calculated and then fed into the latent reprogramming layer \(\mathcal{R}_{\theta}^{i}\) or the latent adapter layer \(\mathcal{A}_{\theta}^{i}\) to perform feature reprogramming or adaptation, respectively, instead of taking the latent feature \(h^{i}\) directly as the input of the \((i+1)\)-th decoder layer
\[\begin{split}\underbrace{\mathcal{R}_{\theta_{i}}^{i}(h^{i})}_{i\text{-th latent reprogramming}}&\rightarrow(h^{i})^{\prime}\rightarrow\underbrace{\mathcal{F}_{\Theta}^{i+1}((h^{i})^{\prime})}_{(i+1)\text{-th frozen decoder with trainable feature}}\\ \underbrace{\mathcal{A}_{\theta_{i}}^{i}(h^{i})}_{i\text{-th latent adapter}}&\rightarrow(h^{i})^{\prime}\rightarrow\underbrace{\mathcal{F}_{\Theta}^{i+1}((h^{i})^{\prime})}_{(i+1)\text{-th frozen decoder with trainable feature}}\end{split} \tag{1}\]
where \(\Theta\) represents the set of non-trainable parameters across decoder layers, and \(\theta_{i}\) denotes the parameters of the \(i\)-th trainable feature generator for PEL, using either a reprogramming layer or an adapter layer.
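In code, Eq. (1) amounts to interleaving a small trainable layer between frozen decoder blocks. A sketch of this wiring, reusing the `ResidualAdapter` above and treating `decoder_layers` as the frozen conformer decoder blocks (the class and argument names are illustrative):

```python
import torch.nn as nn

class PELMelDecoder(nn.Module):
    """Frozen decoder blocks with a trainable adapter (or reprogramming)
    layer after each block, following Eq. (1)."""
    def __init__(self, decoder_layers, feat_dim=384):
        super().__init__()
        self.blocks = nn.ModuleList(decoder_layers)
        for p in self.blocks.parameters():
            p.requires_grad = False              # Theta stays frozen
        self.pel = nn.ModuleList(
            ResidualAdapter(feat_dim) for _ in decoder_layers)

    def forward(self, z):
        h = z
        for block, pel_layer in zip(self.blocks, self.pel):
            h = pel_layer(block(h))              # (h^i)' feeds the next block
        return h
```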
### Parameter Regularized Accent Adaptation
In addition to parameter-efficient learning for accent adaptation to a low-resource target domain, this study further introduces a parameter regularization scheme for domain adaptation. In particular, an auxiliary unsupervised loss based on optimal transport [35] is merged for model regularization. This consideration is based on an observation about the distance between the latent features of the source (Mainland Chinese) and target (Taiwanese Mandarin) accents. As shown in Figure 2, the latent feature distance before and after the reprogramming layer is illustrated with two metrics, the sliced Wasserstein distance (SWD) [35] and the maximum mean discrepancy (MMD) [36]. The latent features in accent adaptation are evaluated. Both metrics measure the discrepancy between two probability distributions \(\mu\) and \(\nu\). These measures belong to the family of integral probability metrics (IPMs) [37], which measure the optimal transport in the form
\[d_{\mathcal{F}}(\mu,\nu)=\sup_{f\in\mathcal{F}}\Big{(}\int fd\mu-\int fd\nu \Big{)} \tag{2}\]
where \(\mathcal{F}\) is a class of measurement functions. As noted in [38], if \(f\) is selected as a 1-Lipschitz function, SWD is a simple realization of the IPM in Eq. (2) based on the Euclidean distance. If \(f\) is set as a kernel function, MMD is a realization of the IPM. In this evaluation, it is found that both SWD and MMD under the proposed PEL increase after latent reprogramming along the learning epochs. There is a marked increase after 20 epochs. MMD converges better than SWD.
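For reference, minimal PyTorch implementations of the two metrics are sketched below: a Monte-Carlo sliced approximation of SWD and a biased RBF-kernel estimator of MMD. The number of projections and the kernel bandwidth are assumptions, and equal sample sizes are assumed for the SWD estimate:

```python
import torch

def sliced_wasserstein(u, v, n_proj=50):
    """SWD between equal-sized samples u, v of shape (n, d)."""
    proj = torch.randn(u.shape[1], n_proj)
    proj = proj / proj.norm(dim=0, keepdim=True)   # random unit directions
    pu, _ = torch.sort(u @ proj, dim=0)            # sorted 1-D projections
    pv, _ = torch.sort(v @ proj, dim=0)
    return (pu - pv).abs().mean()                  # average 1-D Wasserstein-1

def mmd_rbf(u, v, sigma=1.0):
    """Biased MMD^2 estimate with a Gaussian (RBF) kernel."""
    k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(u, u).mean() + k(v, v).mean() - 2 * k(u, v).mean()
```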
To highlight this optimal transport phenomenon during fine-tuning, a regularization term is designed and merged as an auxiliary training objective to measure the distance between the source feature \(h_{s}\) and the reprogrammed target feature \(\mathcal{R}_{\theta}(h_{t})\)
\[\mathcal{L}_{\text{ot}}(h_{t},h_{s};\theta)=-d\left(\mathcal{R}_{\theta}(h_{t}),h_{s}\right) \tag{3}\]
where \(\mathcal{R}_{\theta}\) acts as either the input reprogramming layer or the latent reprogramming layer, and \(d\) is a distance metric, either SWD or MMD. For the adapter, the same distance in Eq. (3) is used, but with the adaptation function \(\mathcal{A}_{\theta}\). As a result, the total learning objective, consisting of the regression loss for synthesized speech and the optimal transport loss due to reprogramming, is constructed for parameter-regularized learning
\[\mathcal{L}=\mathcal{L}_{\text{mae}}(\widehat{\mathbf{y}},\mathbf{y};\theta)+\mathcal{L}_{\text{ot}}(h_{t},h_{s};\theta) \tag{4}\]
where \(\mathcal{L}_{\text{mae}}\) represents the mean absolute error (MAE) between the predicted spectrogram \(\widehat{\mathbf{y}}\) from the Mel decoder and the ground-truth spectrogram \(\mathbf{y}\) from the training speech. This MAE loss is seen as the supervised regression loss. Importantly, the optimal transport loss \(\mathcal{L}_{\text{ot}}\) is calculated as the negative distance measure; namely, a penalized regularization [36] is introduced to ensure separation between the source feature \(h_{s}\) and the reprogrammed target feature \(\mathcal{R}_{\theta}(h_{t})\) or adapted target feature \(\mathcal{A}_{\theta}(h_{t})\). In the implementation, a warm-up strategy [39] was performed by gradually increasing the coefficient of the regularization term from zero at the start of training. This treatment helps to stabilize the model. Algorithm 1 shows the training procedure with optimal transport as a penalized regularization.
```
Input: fine-tuning data \(\{\mathbf{x},\mathbf{y}\}\), source domain feature \(h_{s}\), total training steps \(T\), hyperparameter \(K\)
Output: parameter \(\theta\) of the PEL method
while train steps less than \(T\) do
    calculate outputs \(\widehat{\mathbf{y}}\) and \(h_{t}\) given inputs \(\mathbf{x}\)
    if train steps less than \(K\) then
        compute loss \(\mathcal{L}=\mathcal{L}_{\text{mae}}(\widehat{\mathbf{y}},\mathbf{y};\theta)\)
    else
        compute loss \(\mathcal{L}=\mathcal{L}_{\text{mae}}(\widehat{\mathbf{y}},\mathbf{y};\theta)+\mathcal{L}_{\text{ot}}(h_{t},h_{s};\theta)\)
    update parameter \(\theta\) using \(\nabla_{\theta}\mathcal{L}\)
return \(\theta\)
```
**Algorithm 1** Optimal transport regularized training procedure
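A sketch of the corresponding training-step loss, reusing `mmd_rbf` from above as the distance \(d\); the hard switch at step \(K\) follows Algorithm 1, whereas the warm-up described in the text would instead ramp the regularization coefficient up gradually from zero:

```python
def pel_loss(step, mel_pred, mel_true, h_t, h_s, K=300):
    """Supervised MAE plus the optimal transport regularizer of Eq. (4)."""
    l_mae = (mel_pred - mel_true).abs().mean()
    if step < K:
        return l_mae                    # warm-up: supervised loss only
    l_ot = -mmd_rbf(h_t, h_s)           # L_ot = -d(R(h_t), h_s), Eq. (3)
    return l_mae + l_ot                 # total loss of Eq. (4)
```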
## 4 Experiments
### Experimental settings
We chose AISHELL3 [40], which consists of around 85 hours of emotion-neutral recordings delivered by 218 native Mandarin Chinese speakers, as the pre-training dataset to build a wide-coverage source-accent acoustic model. Our purpose is to leverage the power of the pre-trained model and perform accent transfer from zh-CN to zh-TW. To make sure the voice is clean and suitable for TTS, we collected a 40-minute Taiwanese-accent corpus from a single female speaker in a quiet space. The recording transcripts include jokes, movie introductions, and travel guides.
For model configuration, we employ the same pre-trained _Conformer-Fastspeech2_ backbone trained on the AISHELL3 dataset and further use x-vectors [41] as speaker embeddings to better leverage speaker attributes. The entire model has 71M parameters, including 4 conformer layers
Figure 2: Latent feature distance between two accents before/after reprogramming via SWD (left) and MMD (right).
in both the encoder and decoder, with the latent feature dimension set to 384; it was trained using the Adam optimizer with a Transformer learning-rate schedule for 500k steps. We further use parallel-wavegan [42] as the vocoder to transform Mel features into waveforms. For the PEL settings, we set the bottleneck dimension of the adapter to \(r=96\) and insert adapters between the 4 conformer-decoder layers, and we keep the settings consistent for input reprogramming and latent reprogramming, where the Conv-1D feature extractor uses a hidden feature dimension of 96. The training setting is the same as in the pre-training stage, but the number of training steps is set to 20k and the step at which the auxiliary loss is added is set to 300. In this work, we experiment with three settings: a) input reprogramming, b) the latent adapter, and c) the combination of input reprogramming and latent reprogramming. We conduct the evaluation with an objective method, the Mel cepstral distortion (MCD), to quantify the distortion between two sequences of Mel-frequency cepstral coefficients. We further evaluate the character error rate (CER) with pre-trained automatic speech recognition (ASR) models. We conduct experiments with two alternative ASR models trained on the zh-CN and zh-TW corpora of Common Voice [43], with CER baselines provided by HuggingFace of 0.19 and 0.10, respectively. We expect the synthetic Taiwanese Mandarin accent speech to have a lower CER on the corresponding pre-trained ASR model but a poor result on the pre-trained zh-CN ASR model. Following CHiVE-BERT [44], we conduct a subjective mean opinion score (MOS) evaluation on naturalness and accent quality (AQ). The accent quality is used to determine whether the voice sample came from a native accent speaker. We ask raters to score on a 5-point Likert scale (1: Bad, 2: Poor, 3: Fair, 4: Good, 5: Excellent) for these two MOS evaluations.
### Experiment results
We present the experiments on accent transfer from zh-CN to zh-TW. Table 1 lists the results of MCD and human evaluations. We conduct experiments on three PEL settings, including the adapter, input reprogramming, and the combination of input reprogramming and latent reprogramming. Besides, we run the fine-tuning (FT) and decoder fine-tuning methods as baselines. By fine-tuning the whole backbone, we obtain the best values on the MCD and MOS evaluations. Moreover, we find that all the proposed methods show competitive results against the full fine-tuning strategy and even outperform the decoder fine-tuning method. Comparing the parameter-efficient methods, input reprogramming achieves acceptable results with only 0.6% of the total parameters being trainable. The adapter and the joint reprogramming (IR+LR) achieve better results than the input-based methods but require more trainable parameters. We further show the results with the auxiliary SWD/MMD loss to highlight the effect of the optimal transport perspective. Clearly, the optimal transport viewpoint aids the model in producing natural speech with good accent quality.1
Footnote 1: Code and audio samples are available at [https://github.com/TTS-Research/PEL-TTS](https://github.com/TTS-Research/PEL-TTS)
To show the effect of accent transfer, we produce synthetic Taiwanese Mandarin accent speech under various PEL settings and compare the CER when testing on different pre-trained ASR models. The results are shown in Table 2. As expected, the synthetic speech cannot be well recognized by the pre-trained zh-CN model because of the large domain difference, while in-domain testing with the pre-trained zh-TW model achieves a lower CER. One notable finding is that the fine-tuning method has a higher CER, which indicates that naturalness does not guarantee ASR performance. Observing the CER on the zh-TW accent pre-trained model, the error rate is reduced, and the auxiliary MMD loss achieves a better result than the SWD loss when utilizing latent parameter-efficient learning methods.
## 5 Conclusions
We introduced parameter-efficient methods for text-to-speech accent adaptation via model reprogramming and residual adapters. Benefiting from input-level parameter-efficient learning, i.e., input reprogramming, the backbone can be repeatedly re-deployed by only replacing the reprogramming layer. Furthermore, latent parameter-efficient learning, including adapter learning and latent reprogramming, shows its effect by tuning the latent features and improves the performance compared to the input reprogramming method. By leveraging the concept of optimal transport, we designed an unsupervised auxiliary loss using the SWD and MMD distance metrics to strengthen the tendency observed in Figure 2, and experiments show that the auxiliary loss indeed helps the model produce speech with naturalness and higher accent similarity. Since this work first introduces model reprogramming to text-to-speech and shows its effectiveness on accent adaptation, it would be interesting to apply reprogramming schemes to other cases, such as cross-lingual adaptation, by leveraging a well-trained TTS model.
**Acknowledgments** The authors would like to express their gratitude to Heiga Zen and Bo Li from Google for providing helpful insights and discussion on our draft.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
**Methods** & \multicolumn{3}{c|}{**ASR Models CER (\(\downarrow\))**} \\ \cline{2-4} & zh-TW & zh-CN & Diff. \\ \hline FT & 0.202 & 0.408 & 0.206 \\ Decoder FT & 0.187 & 0.317 & 0.130 \\ \hline \hline Input Reprogram & 0.210 & 0.343 & 0.133 \\ w/ SWD & 0.215 & 0.346 & 0.131 \\ w/ MMD & 0.240 & 0.371 & 0.131 \\ \hline Latent adapter & 0.177 & 0.344 & 0.167 \\ w/ SWD & 0.185 & 0.317 & 0.132 \\ w/ MMD & 0.177 & 0.362 & 0.185 \\ \hline Input + latent Reprogram & 0.176 & 0.373 & 0.197 \\ w/ SWD & 0.224 & 0.389 & 0.165 \\ w/ MMD & 0.179 & 0.331 & 0.152 \\ \hline \end{tabular}
\end{table}
Table 2: ASR evaluation in terms of character error rate (CER) for synthetic speech under different parameter-efficient settings.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
**Method** & **MCD (\(\downarrow\))** & **Naturalness (\(\uparrow\))** & **AQ (\(\uparrow\))** & **Params** \\ \hline Ground Truth & N/A & \(4.83\pm 0.38\) & N/A & N/A \\ FT & \(7.64\pm 0.76\) & \(4.40\pm 0.70\) & \(4.55\pm 0.71\) & 100\% \\ Decoder FT & \(8.06\pm 0.87\) & \(4.00\pm 0.95\) & \(4.17\pm 0.92\) & 46.8\% \\ \hline \hline IR & \(8.07\pm 0.82\) & \(3.65\pm 0.85\) & \(3.85\pm 0.82\) & \\ IR w/ SWD & \(8.03\pm 0.65\) & \(3.70\pm 0.81\) & \(3.98\pm 0.85\) & 0.6\% \\ IR w/ MMD & \(8.03\pm 0.69\) & \(3.70\pm 0.84\) & \(3.98\pm 0.88\) & \\ \hline LA & \(7.99\pm 0.89\) & \(3.77\pm 0.91\) & \(4.03\pm 0.76\) & \\ LA w/ SWD & \(7.93\pm 0.78\) & \(3.88\pm 0.87\) & \(4.12\pm 0.81\) & 0.8\% \\ LA w/ MMD & \(\mathbf{7.90\pm 0.81}\) & \(3.73\pm 0.84\) & \(4.05\pm 0.86\) & \\ \hline IR+LR & \(7.86\pm 0.83\) & \(3.67\pm 0.91\) & \(3.92\pm 0.88\) & \\ IR+LR w/ SWD & \(7.81\pm 0.80\) & \(3.50\pm 0.97\) & \(3.90\pm 1.09\) & 1.2\% \\ IR+LR w/ MMD & \(\mathbf{7.79\pm 0.78}\) & \(\mathbf{3.75\pm 0.80}\) & \(\mathbf{3.95\pm 0.89}\) & \\ \hline \end{tabular}
\end{table}
Table 1: Objective and Subjective evaluation under different PEL methods with SWD/MMD auxiliary loss, including input reprogram (IR), latent reprogram (LR), and latent adapter (LA). |
2310.01385 | Feature-Driven Strategies for Trading Wind Power and Hydrogen | This paper develops a feature-driven model for hybrid power plants, enabling
them to exploit available contextual information such as historical forecasts
of wind power, and make optimal wind power and hydrogen trading decisions in
the day-ahead stage. For that, we develop different variations of
feature-driven linear policies, including a variation where policies depend on
price domains, resulting in a price-quantity bidding curve. In addition, we
propose a real-time adjustment strategy for hydrogen production. Our numerical
results show that the final profit obtained from our proposed feature-driven
trading mechanism in the day-ahead stage together with the real-time adjustment
strategy is very close to that in an ideal benchmark with perfect information. | Emil Helgren, Jalal Kazempour, Lesia Mitridati | 2023-10-02T17:45:11Z | http://arxiv.org/abs/2310.01385v2 | # Feature-Driven Strategies for Trading Wind Power and Hydrogen
###### Abstract
This paper develops a feature-driven model for hybrid power plants, enabling them to exploit available contextual information such as historical forecasts of day-ahead prices and wind power, and make optimal wind power and hydrogen trading decisions in the day-ahead stage. For that, we develop different variations of feature-driven linear policies. In addition, we propose a real-time adjustment strategy for hydrogen production. Our numerical results show that the final profit obtained from our proposed feature-driven trading mechanism in the day-ahead stage together with the real-time adjustment strategy is very close to that in an ideal benchmark with perfect information.
Hybrid power plants, trading decisions, feature-driven model, linear policies, real-time adjustment strategy.
## I Introduction
Hybrid power plants, comprising an electrolyzer and wind turbines, are expected to be installed at scale in many countries, such as Denmark [1]. The plant operator, aiming to maximize their profit, can either allocate the entire generated wind power to hydrogen production, sell all or part of the generated wind power to the grid, or buy power from the grid to produce more hydrogen. The plant operator should make informed trading decisions for power and hydrogen in a forward stage, when neither the day-ahead and balancing prices nor the true wind power generation have been realized. This requires developing a trading strategy, which is the focus of this paper.
There are many papers in the literature that develop trading strategies for wind power producers (wind power as the sole product) under price and wind uncertainty. Among others, [2] solves a newsvendor problem, [3] develops a two-stage stochastic programming, [4] devises a range of robust optimization models, and finally [5] develops a distributionally robust model. While all these methods outperform the deterministic counterpart, they require (_i_) knowledge of probabilistic forecasting, (_ii_) generating a set of scenarios, an uncertainty set, or a family of probability distributions, and then (_iii_) developing a stochastic optimization model for decision making. The trading problem is even more challenging for hybrid power plants, where hydrogen is also a product in addition to wind power. Although one may expect large companies to have great expertise in making trading decisions under uncertainty, it might be more challenging for a small-scale hybrid power plant operator. Therefore, the aim of this paper is to develop a _pragmatic_ trading approach, which outperforms the deterministic model by enabling the hybrid power plant to learn from historical data, without the need for the plant to use probabilistic forecasting and complex stochastic solutions.
Aligned with [6], we propose a prescriptive analytics framework for hybrid power plants, where forecasting and decision-making steps are integrated. In this data-driven approach, we exploit contextual information, so-called _features_, such as historical (deterministic) forecasts of wind power and day-ahead electricity prices. In the _training stage_, the hybrid power plant operator learns feature-driven _policies_ based on historical features and uncertainty realizations. In the _decision-making stage_, the learned policies and available features lead to the trading decisions. A few papers in the literature, e.g., [7, 8] and [9], develop feature-driven trading models for renewable power producers. To the best of our knowledge, this is the first paper that develops such a model for hybrid power plants trading both wind power and hydrogen, which is a more complicated problem. Besides, this paper extends these previous works by investigating various model architectures, including price- and time-dependent policies, and feature vectors. In addition, we propose a pragmatic rule-based adjustment strategy for real-time hydrogen production. Using an out-of-sample simulation, we show how the resulting profit from feature-driven trading in the day-ahead stage and the adjustment strategy in real time is very close to that in an ideal benchmark (oracle).
The rest of the paper is organized as follows. Section II explains the decision-making framework. Section III details the feature-driven trading strategy in the day-ahead stage. Section IV presents the proposed real-time adjustment strategy. Section V provides numerical results. Finally, Section VI concludes the paper.
## II Decision-Making Framework
We consider an electrolyzer and a wind turbine, together forming a behind-the-meter hybrid power plant. The wind power generation can be exported to the grid or directed to the electrolyzer to produce hydrogen with a constant efficiency coefficient. The electrolyzer also has the possibility to buy power directly from the grid. The produced hydrogen is sold through a bilateral contract at a fixed hydrogen price. As part of the contract, there is a minimum amount of hydrogen that should be produced on a daily basis. The hydrogen production thus needs to be scheduled for the whole day in order to ensure that this minimum production requirement is fulfilled. Any real-time adjustment should likewise account for the imposed hydrogen production quota.
The hybrid power plant aims to maximize the total profit from electricity trading and hydrogen sales. This leads to two
stages for decision making:
_Day-ahead decision making_: Given the available (deterministic) wind power forecast, the hybrid power plant must decide how much electricity to trade (buy or sell) in the day-ahead market and how to schedule the operation of the electrolyzer for each hour of the following day, depending on the hourly day-ahead market prices and the fixed hydrogen price. Recall that the minimum daily hydrogen production requirement needs to be accounted for. In Nord Pool, all these decisions for every hour \(t\in\mathcal{T}\) of day D should be made before noon of day D\(-\)1.
_Real-time decision making_: The hybrid power plant must settle imbalances arising from deviations in wind power production (compared to the day-ahead schedule) in the balancing market. Therefore, at the time of delivery, i.e., in every hour \(t\) of day D, the plant must decide how to adjust the schedule of the electrolyzer based on the realized wind power production and electricity balancing prices, while ensuring the minimum daily hydrogen production requirement is fulfilled.
## III Day-Ahead Bidding
We first define a feature-driven model for making power and hydrogen trading decisions in the day-ahead stage. Next, we explain the feature vectors and then elaborate on the architecture of the linear policies. Finally, we provide an optimization model for training the proposed feature-driven model.
### _Model Definition_
The main challenge for the hybrid power plant in the day-ahead stage is to account for the uncertainty in the day-ahead and balancing electricity prices and in the wind power production.
Aiming to develop a feature-driven trading strategy in the day-ahead stage, we first define the vector of available features with \(N\) elements corresponding to hour \(t\) as
\[\textbf{X}_{t}=\begin{bmatrix}X_{t}^{1},...,X_{t}^{N}\end{bmatrix}\ \ \forall t\in \mathcal{T}. \tag{1}\]
The features are the pieces of contextual information for hour \(t\) of the following day that the plant has access to when making day-ahead decisions. An example of an available feature for the hybrid power plant at the day-ahead stage is the deterministic forecast of wind power generation.
Let variable \(p_{t}^{\rm DA}\) denote the electricity traded (bought or sold) in the day-ahead market for hour \(t\) of the next day. In addition, variable \(p_{t}^{\rm H}\) is the electricity consumption schedule of the electrolyzer for the same hour. We exploit linear decision rules [10], so called _linear policies_, to define \(p_{t}^{\rm DA}\) and \(p_{t}^{\rm H}\) as a function of features \(\textbf{X}_{t}\), such that
\[p_{t}^{\rm DA}=\textbf{q}^{\rm DA}\ \textbf{X}_{t}^{\top} \forall t\in\mathcal{T} \tag{2a}\] \[p_{t}^{\rm H}=\textbf{q}^{\rm H}\ \textbf{X}_{t}^{\top} \forall t\in\mathcal{T}, \tag{2b}\]
where \((.)^{\top}\) is the transpose operator. Vectors \(\textbf{q}^{\rm DA}\in\mathbb{R}^{N}\) and \(\textbf{q}^{\rm H}\in\mathbb{R}^{N}\) are the policies, so called **q**-policies, to be learned from historical data in the training stage.
If the model is designed and trained appropriately, \(\textbf{q}^{\rm DA}\) and \(\textbf{q}^{\rm H}\) will reflect relations between the input features and the decision variables \(p_{t}^{\rm DA}\) and \(p_{t}^{\rm H}\) that persist into the future, such that the model will produce well-performing trading decisions for new feature vectors.
_Feature Vectors_: The day-ahead market allows for simultaneously placing multiple independent price-quantity bids \(\{\lambda_{b},p_{b}\}_{b=1,...,B}\) for each hour of the following day. Therefore, we can model the electricity traded in the day-ahead market \(p_{t}^{\rm DA}\) as a function of the day-ahead price at hour \(t\). This is achieved by introducing an unknown price variable \(\lambda\!\in\!\mathbb{R}\) in the feature vector \(\textbf{X}_{t}\). Since all other values in the feature vector are known at the time of decisions, this results in \(p_{t}^{\rm DA}\) being a linear function of this unknown variable \(\lambda\). In practice, the day-ahead price-quantity bids are then derived by discretizing this linear function and approximating it as a stepwise function with arbitrary step size, as illustrated in Fig. 1. Each step \(b\) of the resulting stepwise function will represent one pair of price-quantity bids \(\{\lambda_{b},p_{b}\}_{b=1,...,B}\). Since the electricity traded in the day-ahead market and the power consumption \(p_{t}^{\rm H}\) of the electrolyzer are interdependent, \(p_{t}^{\rm H}\) is also expressed as a linear function of the day-ahead electricity price at hour \(t\).
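As an illustration, the discretization of the learned linear policy into stepwise price-quantity bids could be carried out as in the following NumPy sketch; the price grid and the clipping bounds (the trading limits of the plant) are assumptions, and the function names are illustrative:

```python
import numpy as np

def stepwise_bids(q_da, x_known, price_grid, p_h_max, p_w_max):
    """Evaluate p_DA(lambda) = q_da @ [x_known, lambda, 1] on a price grid
    and clip to the tradable range, yielding price-quantity bid pairs."""
    bids = []
    for lam in price_grid:
        x_t = np.concatenate([x_known, [lam, 1.0]])
        p_da = float(q_da @ x_t)
        bids.append((lam, float(np.clip(p_da, -p_h_max, p_w_max))))
    return bids

# e.g., one hour's bid curve on a 10 EUR/MWh grid (illustrative numbers):
# stepwise_bids(q_da, x_known, np.arange(0.0, 201.0, 10.0), 10.0, 50.0)
```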
All feature vectors investigated in this paper have the form \(\mathbf{X}_{t}=\begin{bmatrix}\tilde{\mathbf{X}}_{t},\lambda,1\end{bmatrix}\), where the constant feature \(1\) provides an intercept and \(\tilde{\mathbf{X}}_{t}\in\mathbb{R}^{N-2}\) is the vector of remaining features. The remaining features can include any relevant data that are expected to be related to the uncertain parameters, i.e., the wind production (e.g., weather forecasts) and the day-ahead and balancing prices (e.g., electricity price and demand forecasts and imbalance forecasts). Note that the plant usually has access to an extensive database, including _historical_ values of these features and corresponding _realized_ values of the uncertain parameters.
_Architecture of Linear Policies_: Several approaches can be used regarding the architecture of the **q**-policies. The simplest but restrictive architecture, so-called _general architecture_, considers constant policies \(\textbf{q}^{\rm DA}\in\mathbb{R}^{N}\) and \(\textbf{q}^{\rm H}\in\mathbb{R}^{N}\) for every hour \(t\).
It is widely acknowledged that daily patterns exist in the day-ahead prices. Accounting for these patterns could remarkably increase the performance of the model. This can be achieved by introducing an _hourly architecture_, where the **q**-policies in (2) are specific to the hour of the day and denoted as \(\textbf{q}^{\rm DA}_{h_{t}}\in\mathbb{R}^{N}\) and \(\textbf{q}^{\rm H}_{h_{t}}\in\mathbb{R}^{N}\) for \(h_{t}=t\mod 24\), with \(\mod\) representing the modulo operator. This hourly architecture allows the model to learn daily patterns. However,
it results in \(24\) times as many \(\mathbf{q}\)-policies to train as in the general architecture, and thus requires a larger training dataset. The trade-off between these factors will be empirically studied later in our case study.
Besides, in order to capture more complex and non-linear dependencies between the features \(\mathbf{X}_{t}\) and decision variables \(p_{t}^{\mathrm{DA}}\) and \(p_{t}^{\mathrm{H}}\), multiple sets of model parameters can be introduced, each covering one domain of the features. In particular, we introduce \(M\) price domains \(\mathcal{D}=\{\mathcal{D}_{1},...,\mathcal{D}_{M}\}\), each defined as a range of prices \(\mathcal{D}_{i}=\left[\lambda_{i-1},\lambda_{i}\right]\) with \(\lambda_{i-1}<\lambda_{i}\) for all \(i\in\{1,...,M\}\). Over each price domain, we define \(\mathbf{q}\)-policies as \(\mathbf{q}_{i}^{\mathrm{DA}}\) and \(\mathbf{q}_{i}^{\mathrm{H}}\). As a result, the decision variables \(p_{t}^{\mathrm{DA}}\) and \(p_{t}^{\mathrm{H}}\) are expressed as piece-wise linear functions of the day-ahead prices, such that
\[p_{t}^{\mathrm{DA}}=\mathbf{q}_{i}^{\mathrm{DA}}\ \mathbf{X}_{t}^{\top} \text{if }\lambda\in\mathcal{D}_{i}, \forall t\in\mathcal{T} \tag{3a}\] \[p_{t}^{\mathrm{H}}=\mathbf{q}_{i}^{\mathrm{H}}\ \mathbf{X}_{t}^{\top} \text{if }\lambda\in\mathcal{D}_{i}, \forall t\in\mathcal{T}. \tag{3b}\]
Note that a trade-off between complexity and flexibility can be achieved if suitable domains are chosen based on statistical characteristics of the data. For example, a natural threshold at which it might be beneficial to divide the model into different trading strategies is where the day-ahead electricity price equals the hydrogen price \(\rho^{\rm H}\lambda^{\rm H}\).
### _Optimization Model for Training Stage_
In the training stage, we determine adequate values of the \(\mathbf{q}\)-policies using a batch learning mechanism. The training of the \(\mathbf{q}\)-policies can be represented as an optimization problem in which the profit of the plant is maximized over a _historical_ dataset containing values of the feature vectors along with the corresponding realizations of the uncertain parameters for all \(t\in\mathcal{T}^{\mathrm{hist}}\). This optimization problem, formulated as a mixed-integer linear program (MILP), reads as
\[\max_{\Omega} \sum_{t\in\mathcal{T}^{\mathrm{hist}}}\left[\lambda_{t}^{\mathrm{ DA}}\mathbf{q}^{\mathrm{DA}}\mathbf{X}_{t}^{\top}+\lambda^{\mathrm{H}}\rho^{ \mathrm{H}}\mathbf{q}^{\mathrm{H}}\mathbf{X}_{t}^{\top}+\lambda_{t}^{\mathrm{ UP}}o_{t}-\lambda_{t}^{\mathrm{DW}}u_{t}\right]\] (4a) s.t. \[o_{t}-u_{t}=P_{t}^{\mathrm{W}}-\mathbf{q}^{\mathrm{DA}}\mathbf{X }_{t}^{\top}-\mathbf{q}^{\mathrm{H}}\mathbf{X}_{t}^{\top} \forall t\in\mathcal{T}^{\mathrm{hist}} \tag{4b}\] \[0\leq o_{t}\leq M(1-b_{t}) \forall t\in\mathcal{T}^{\mathrm{hist}}\] (4c) \[0\leq u_{t}\leq Mb_{t} \forall t\in\mathcal{T}^{\mathrm{hist}}\] (4d) \[0\leq\mathbf{q}^{\mathrm{H}}\mathbf{X}_{t}^{\top}\leq\overline{ P}^{\mathrm{H}} \forall t\in\mathcal{T}^{\mathrm{hist}}\] (4e) \[-\overline{P}^{\mathrm{H}}\leq\mathbf{q}^{\mathrm{DA}}\mathbf{X }_{t}^{\top}\leq\overline{P}^{\mathrm{W}} \forall t\in\mathcal{T}^{\mathrm{hist}}\] (4f) \[\sum_{t=24(d-1)+1}^{24d}\rho^{\mathrm{H}}\mathbf{q}^{\mathrm{H}} \mathbf{X}_{t}^{\top}\geq\underline{H} \forall d\in D^{\mathrm{hist}}, \tag{4g}\]
where the set of variables \(\Omega=\{\mathbf{q}^{\mathrm{DA}}\), \(\mathbf{q}^{\mathrm{H}}\), \(o_{t}\), \(u_{t}\), \(b_{t}\}\)\(\forall t\in\mathcal{T}^{\mathrm{hist}}\) includes the \(\mathbf{q}\)-policies \(\mathbf{q}^{\mathrm{DA}}\) and \(\mathbf{q}^{\mathrm{H}}\), as well as the real-time decisions, including the over-production \(o_{t}\) and under-production \(u_{t}\) settled in real time, and the auxiliary binary variable \(b_{t}\in\{0,1\}\) indicating the state of over- or under-production. Note that the \(\mathbf{q}\)-policies in (4) can take index \(h_{t}\) and/or index \(i\), as discussed in Section III-A.
The objective function (4a) maximizes the total profit of the hybrid power plant over the historical dataset and includes four terms. The first term \(\lambda_{t}^{\mathrm{DA}}\mathbf{q}^{\mathrm{DA}}\mathbf{X}_{t}^{\top}\) corresponds to the revenue/cost from electricity trading in the day-ahead market, where \(\lambda_{t}^{\mathrm{DA}}\in\mathbb{R}\) is the historical day-ahead price. Note that \(\mathbf{q}^{\mathrm{DA}}\mathbf{X}_{t}^{\top}\), reflecting the power quantity, can be positive or negative, indicating whether the plant sells or buys power. The second term \(\lambda^{\mathrm{H}}\rho^{\mathrm{H}}\mathbf{q}^{\mathrm{H}}\mathbf{X}_{t}^{\top}\) is the revenue from hydrogen sales, where \(\lambda^{\mathrm{H}}\in\mathbb{R}_{+}\), in €/kg, is the fixed hydrogen price. In addition, \(\rho^{\mathrm{H}}\in\mathbb{R}_{+}\), in kg/MWh, is the constant power-to-hydrogen efficiency of the electrolyzer, whereas \(\mathbf{q}^{\mathrm{H}}\mathbf{X}_{t}^{\top}\) reflects the power consumed by the electrolyzer. Finally, the third and fourth terms in (4a) refer to the settlement of over- and under-production, respectively, in the balancing market. Note that \(\lambda_{t}^{\mathrm{UP}}\in\mathbb{R}\) is the upward regulation price, which is lower than or equal to the day-ahead price \(\lambda_{t}^{\mathrm{DA}}\), reflecting a lost opportunity cost for over-production. In addition, \(\lambda_{t}^{\mathrm{DW}}\in\mathbb{R}\) is the downward regulation price, which is higher than or equal to the day-ahead price \(\lambda_{t}^{\mathrm{DA}}\), reflecting a penalty cost for under-production.
Constraint (4b) defines the power imbalance \(o_{t}-u_{t}\) to be settled in real time as the difference between the realized wind power generation \(P_{t}^{\mathrm{W}}\) and the power scheduled in the day-ahead stage. The set of disjunctive constraints (4c)-(4d), with \(M\) a large enough positive constant, enforces over- and under-production to not happen simultaneously at each hour \(t\). If \(b_{t}=1\), then (4c) enforces the over-production \(o_{t}\) to be zero. On the other hand, if \(b_{t}=0\), then (4d) sets the under-production \(u_{t}\) to zero. Constraint (4e) enforces the power consumption of the electrolyzer to lie within zero and its capacity \(\overline{P}^{\mathrm{H}}\). Note that the lower bound being set to zero means that the electrolyzer cannot operate as a fuel cell, converting the hydrogen back to electricity. In order to avoid settlement costs, the power traded in the day-ahead market is restricted by (4f) to lie within the minus consumption capacity of the electrolyzer and the nominal capacity of the wind turbine \(\overline{P}^{\mathrm{W}}\). Finally, the minimum daily hydrogen production requirement \(\underline{H}\) is enforced by (4g) for each day \(d\in D^{\mathrm{hist}}\) in the historical dataset. Note that, in this training problem, the realized electricity traded in the day-ahead market and the electricity consumption of the electrolyzer are evaluated directly by applying the linear policies in (2) to the values of the realized day-ahead prices.
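A compact sketch of this training problem with an off-the-shelf modeling layer is given below. It assumes the historical features (with the realized day-ahead price in the \(\lambda\) column and a constant-1 column), prices, and wind realizations are stored as NumPy arrays; it is a direct transcription of (4) for the general architecture, not the authors' code, and requires a MILP-capable solver:

```python
import cvxpy as cp

def train_policies(X, lam_da, lam_up, lam_dw, P_w,
                   lam_h, rho_h, P_h_max, P_w_max, H_min, M=1e3):
    """Solve problem (4); X is (T, N) and T must be a multiple of 24."""
    T, N = X.shape
    q_da, q_h = cp.Variable(N), cp.Variable(N)
    o = cp.Variable(T, nonneg=True)              # over-production
    u = cp.Variable(T, nonneg=True)              # under-production
    b = cp.Variable(T, boolean=True)
    p_da, p_h = X @ q_da, X @ q_h

    cons = [o - u == P_w - p_da - p_h,           # (4b)
            o <= M * (1 - b), u <= M * b,        # (4c)-(4d)
            p_h >= 0, p_h <= P_h_max,            # (4e)
            p_da >= -P_h_max, p_da <= P_w_max]   # (4f)
    for d in range(T // 24):                     # (4g): daily hydrogen quota
        cons.append(rho_h * cp.sum(p_h[24 * d:24 * (d + 1)]) >= H_min)

    profit = (lam_da @ p_da + lam_h * rho_h * cp.sum(p_h)
              + lam_up @ o - lam_dw @ u)         # (4a)
    cp.Problem(cp.Maximize(profit), cons).solve(solver=cp.GLPK_MI)
    return q_da.value, q_h.value
```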
### _Decision-Making Stage and Retraining_
For each day of the testing period, the trained \(\mathbf{q}\)-policies are applied to a new feature vector \(\tilde{\mathbf{X}}_{t}\) for each hour \(t\) of the following day, to compute the linear functions representing the electricity traded in the day-ahead market and consumed by the electrolyzer as functions of the future day-ahead prices. These functions are then discretized, and stepwise price-quantity bids are placed in the day-ahead market. The realized day-ahead price will then determine the electricity traded in the day-ahead market and consumed by the electrolyzer.
Once the uncertain parameters, i.e., day-ahead prices, balancing prices, and wind power production, are realized, the new feature vector and the associated realizations are added to the historical dataset. Retraining the model considering more recent historical data points is thus performed at regular intervals. One can use a sliding window approach in order to
keep the length of the training dataset constant, as illustrated in Fig. 2. Note that, at each retraining interval, the blue part of the window represents the previous testing data points recently added to the training dataset. The length of this window is a hyperparameter of the model that should be selected appropriately, as further discussed in our numerical analysis.
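A sliding-window retraining loop of this kind can be as simple as the following sketch, where the window length and retraining interval are the hyperparameters discussed above and `train_policies` refers to the earlier sketch of problem (4); the specific sizes are illustrative:

```python
def sliding_window_retraining(data, window_hours=24 * 90, step_hours=24 * 7):
    """Yield fixed-length training slices, dropping the oldest hours as
    newly realized hours are appended (cf. Fig. 2)."""
    for start in range(0, len(data) - window_hours + 1, step_hours):
        yield data[start:start + window_hours]   # pass each slice to train_policies
```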
## IV Real-Time Adjustments
We assume the electrolyzer is subject to negligible ramping constraints. Therefore, the hybrid power plant is able to adjust the hydrogen production schedule in real time based on the realized wind power production and balancing prices. This section introduces a rule-based adjustment algorithm for the real-time operation of the electrolyzer that accounts for the minimum daily hydrogen production requirement.
### _Decision Rule for a Single Hour_
In each hour, the decision rule depends on whether the balancing prices are higher or lower than the hydrogen price. Hereafter, to make electricity and hydrogen prices comparable in terms of €/MWh, we refer to \(\rho^{\mathrm{H}}\lambda^{\mathrm{H}}\) as the hydrogen price. The balancing price is not known until after the balancing settlement; however, it is assumed that it can be approximated to a sufficient degree for the current hour by predicting the system status [11] and observing the intraday market.
We assume that the balancing market follows a dual-pricing mechanism, implying that \(\lambda_{t}^{\mathrm{UP}}\leq\lambda_{t}^{\mathrm{DA}}\leq\lambda_{t}^{ \mathrm{DW}}\) for each hour \(t\in\mathcal{T}\). The following three situations can thus occur regarding the relation between the hydrogen price and balancing prices: (_i_) \(\rho^{\mathrm{H}}\lambda^{\mathrm{H}}\leq\lambda_{t}^{\mathrm{UP}}\leq \lambda_{t}^{\mathrm{DW}}\); (_ii_) \(\lambda_{t}^{\mathrm{UP}}\leq\lambda_{t}^{\mathrm{DW}}\leq\rho^{\mathrm{H}} \lambda^{\mathrm{H}}\); and (_iii_) \(\lambda_{t}^{\mathrm{UP}}\leq\rho^{\mathrm{H}}\lambda^{\mathrm{H}}\leq\lambda_{t }^{\mathrm{DW}}\). In the first situation where both up and down balancing prices are higher than the hydrogen price, producing hydrogen would result in less revenue than what is compensated for over-production in the balancing market. This means that the producer would benefit from adjusting the electrolyzer's consumption down and exporting as much electricity to the grid as possible. In the second situation where both balancing prices are lower than the hydrogen price, producing hydrogen results in higher revenue than the cost incurred from under-production in the balancing market. This means that the hybrid power plant would benefit from adjusting the electrolyzer's consumption up and importing as much electricity from the grid as possible. In the final situation, over-production results in lower revenues than hydrogen production, and under-production results in higher costs than revenues from hydrogen production. In this situation, the electrolyzer should be adjusted in order to minimize any deviation from the day-ahead market bid. This decision rule that maximizes the profit for a single hour can be formulated as a piece-wise linear function \(\Pi(.)\) that returns the adjusted electricity consumption level of the electrolyzer, such that
\[\Pi\big{(}\delta_{t},\lambda^{\mathrm{H}},\lambda_{t}^{\mathrm{UP}},\lambda_{t }^{\mathrm{DW}}\big{)}=\begin{cases}0&\text{if }\lambda_{t}^{\mathrm{UP}}>\rho^{ \mathrm{H}}\lambda^{\mathrm{H}}\\ \overline{P}^{\mathrm{H}}&\text{if }\lambda_{t}^{\mathrm{DW}}<\rho^{\mathrm{H}} \lambda^{\mathrm{H}}\\ \alpha^{\mathrm{H}}[\delta_{t}]&\text{otherwise,}\end{cases} \tag{5}\]
where \(\delta_{t}=P_{t}^{\mathrm{W}}-p_{t}^{\mathrm{DA}}\) is the surplus/deficit of wind power production compared to the day-ahead market bid, and \(\alpha^{\mathrm{H}}[\cdot]\) ensures that the electrolyzer production is feasible:
\[\alpha^{\mathrm{H}}\big{[}\delta_{t}\big{]}=\begin{cases}0&\text{if }\delta_{t}<0\\ \overline{P}^{\mathrm{H}}&\text{if }\delta_{t}>\overline{P}^{\mathrm{H}}\\ \delta_{t}&\text{otherwise.}\end{cases} \tag{6}\]
Adjusting the electricity consumption of the electrolyzer according to the decision rule (5) will maximize the generated revenues for a single hour, when the estimated balancing price is the only source of uncertainty.
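A direct transcription of the rule in (5)-(6) is sketched below; here `lam_h_eq` stands for the converted hydrogen price \(\rho^{\mathrm{H}}\lambda^{\mathrm{H}}\) in €/MWh, and `delta` is the wind surplus/deficit \(\delta_{t}=P_{t}^{\mathrm{W}}-p_{t}^{\mathrm{DA}}\):

```python
def hourly_set_point(delta, lam_h_eq, lam_up, lam_dw, p_h_max):
    """Profit-maximizing electrolyzer consumption for a single hour,
    following Eqs. (5)-(6)."""
    if lam_up > lam_h_eq:                  # balancing revenue beats hydrogen
        return 0.0
    if lam_dw < lam_h_eq:                  # hydrogen beats the imbalance cost
        return p_h_max
    return min(max(delta, 0.0), p_h_max)   # absorb only the wind surplus
```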
### _Real-time Adjustment Algorithm_
Implementing the hourly decision rule (5) independently for each single hour of the optimization period would not guarantee that the minimum daily hydrogen production requirement \(\underline{H}\) is satisfied. Instead, we introduce an algorithm which only adjusts the electrolyzer's schedule according to the hourly decision rule in a given hour \(t\) if this requirement can still be met during the day. Therefore, the electrolyzer's consumption can always be adjusted upward from the day-ahead schedule at any given hour. However, in order to perform a downward adjustment in a given hour \(t\) of the day \(d\), the adjusted hydrogen production in this hour \(\rho^{\mathrm{H}}p_{t}^{\mathrm{adj}}\), plus the cumulative realized production in previous hours \(\rho^{\mathrm{H}}p_{t}^{\mathrm{real}}=\sum_{i=24(d-1)+1}^{t-1}\rho^{\mathrm{ H}}p_{i}^{\mathrm{adj}}\), plus the cumulative production schedule for the remaining hours of the day \(\sum_{i=t+1}^{24d}\rho^{\mathrm{H}}p_{i}^{\mathrm{H}}\) must be higher than the minimum daily hydrogen production requirement \(\underline{H}\). For example, let us consider a hybrid power plant with a certain minimum daily hydrogen production requirement, equivalent to the consumption of \(15\) MWh. If the realized cumulative hydrogen production at hour \(15\) is equal to \(10\) MWh and the cumulative planned hydrogen production from hour \(15\) to \(23\) is equal to \(7\) MWh, then the electrolyzer has a scheduled hydrogen surplus of \(2\) MWh, which can be adjusted down at hour \(15\). If the hourly decision rule (5) outputs a lower set-point for the electrolyzer than its schedule for this hour, the electrolyzer can then be turned down as much as the surplus allows. On the contrary, if the realized cumulative hydrogen production is only equal to \(8\) MWh, then the electrolyzer has no scheduled hydrogen surplus and cannot be adjusted down in this hour, regardless of the outputs of the hourly decision rule (5). The complete adjustment algorithm is provided in Algorithm 1.
```
Input: \(\rho^{\mathrm{H}}\), \(\lambda^{\mathrm{H}}\), minimum daily requirement \(\underline{H}\), day-ahead schedule \(\{p_{t}^{\mathrm{DA}},p_{t}^{\mathrm{H}}\}\), realized wind \(P_{t}^{\mathrm{W}}\), estimated balancing prices \(\lambda_{t}^{\mathrm{UP}},\lambda_{t}^{\mathrm{DW}}\)
Output: adjusted electrolyzer set-points \(p_{t}^{\mathrm{adj}}\) for all hours \(t\) of day \(d\)
for each hour \(t\) of day \(d\) do
    \(\delta_{t}\leftarrow P_{t}^{\mathrm{W}}-p_{t}^{\mathrm{DA}}\)
    \(p_{t}^{\mathrm{cand}}\leftarrow\Pi\big(\delta_{t},\lambda^{\mathrm{H}},\lambda_{t}^{\mathrm{UP}},\lambda_{t}^{\mathrm{DW}}\big)\)    (hourly decision rule (5))
    if \(p_{t}^{\mathrm{cand}}\geq p_{t}^{\mathrm{H}}\) then    (upward adjustment, always allowed)
        \(p_{t}^{\mathrm{adj}}\leftarrow p_{t}^{\mathrm{cand}}\)
    else    (downward adjustment, limited by the scheduled surplus)
        surplus \(\leftarrow p_{t}^{\mathrm{real}}+\sum_{i=t}^{24d}p_{i}^{\mathrm{H}}-\underline{H}/\rho^{\mathrm{H}}\)
        \(p_{t}^{\mathrm{adj}}\leftarrow\max\big(p_{t}^{\mathrm{cand}},\;p_{t}^{\mathrm{H}}-\max(\text{surplus},0)\big)\)
    update the cumulative realized consumption \(p_{t+1}^{\mathrm{real}}\) using \(p_{t}^{\mathrm{adj}}\)
return \(\{p_{t}^{\mathrm{adj}}\}\)
```
## V Numerical Results

This section presents numerical results for the proposed feature-driven day-ahead trading strategies and real-time adjustment. All source codes are available in [12].
### _Case Study Data_
We use historical data for the year \(2019\) for training, and the year \(2020\) for testing, ensuring that the models are evaluated through all seasonal variations. All price data is publicly available on ENTSO-e [13], along with production data for specific wind farms, and aggregated forecasts of offshore/onshore wind production in the Danish bidding zones DK1 and DK2. The production from the wind farm in Roedsand, Denmark is used to model the realized wind production of the hybrid power plant.
Realistic forecasts of day-ahead price and wind power production have been provided by Siemens Gamesa Renewable Energy for five consecutive months. Two years of synthetic forecasts are created for both price and wind production, by fitting the statistical characteristics of the forecast error of Siemens Gamesa forecasts to a theoretical distribution, and then sampling from this distribution to generate new forecast errors with the same characteristics. Fig. 3 provides a histogram of both original and generated forecast errors for both day-ahead market price and wind production forecasts.
Forecasts are produced by sampling forecast error values from the fitted distributions, and then adding these errors to the realized production data to create a forecast. The entire training and testing datasets thus consist of realized day-ahead prices, upward and downward balancing prices, and production for the wind farm in Roedsand, and forecasts for day-ahead prices and wind production, for \(2019\) and \(2020\).
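As an illustration of this procedure, the sketch below fits a parametric distribution to the provider's forecast errors and resamples it to generate synthetic forecasts; the Student's t family and all names are our assumptions, since the text does not state which theoretical distribution was fitted.

```python
import numpy as np
from scipy import stats

def synthetic_forecasts(realized, provider_forecast, provider_realized, seed=0):
    """Build synthetic forecasts by adding resampled forecast errors
    (fitted to the provider's error distribution) to realized data."""
    errors = provider_forecast - provider_realized
    # Fit a heavy-tailed Student's t to the observed forecast errors.
    df, loc, scale = stats.t.fit(errors)
    rng = np.random.default_rng(seed)
    new_errors = stats.t.rvs(df, loc=loc, scale=scale,
                             size=len(realized), random_state=rng)
    return np.asarray(realized) + new_errors
```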
### _Models to be Compared_
This numerical analysis compares several trained models with various combinations of (_i_) linear policy architectures, as introduced in Section III-A, and (_ii_) feature vectors. These trained models are compared to a deterministic (**Det.**) optimization approach as a base case, and a benchmark with perfect information (**Hindsight**). In addition, we explore the impact of the length of the training period, ranging between \(1\) and \(12\) months, on the outcomes of different models.
We first compare models with **General Architecture (GA)** and **Hourly Architecture (HA)** for **q**-policies. Recall that **q**-policies take index \(t\) in the **HA** model, which is not the case in the **GA** model. Next, we compare the **GA** and **HA** in the case where a single set of policies is defined over all values of prices, and a case where the policies are split into multiple **Price Domains (PD)**. When considering multiple price domains, we refer to models as **GA+PD** and **HA+PD**. Three price domains are chosen: one below the hydrogen price \(\rho^{\rm H}\lambda^{\rm H}\), one above the \(90\)th percentile of realized day-ahead prices in the training data, and one in between. The hydrogen price is a natural threshold to separate the trading decisions because producing hydrogen is necessarily more profitable than selling power for day-ahead prices below the hydrogen price. The \(90\)th percentile is chosen to reduce the influence of extreme outliers on the **q**-policies.
Fig. 3: Distribution of day-ahead market price (**right**) and wind power generation (**left**) forecast errors.

Additionally, we investigate three different types of feature vectors \(\mathbf{X}_{t}\). Firstly, the simplest **Reduced Feature (RF)** vector contains only the deterministic wind production forecast \(\hat{P}_{t}^{\rm W}\), such that \(\mathbf{X}_{t}^{\rm RF}=\left[\hat{P}_{t}^{\rm W},\lambda,1\right]\). Secondly, the **Augmented Feature (AF)** vector includes additional features, namely the aggregated onshore/offshore wind production forecasts in the two bidding zones DK1 and DK2, released by the Danish transmission system operator, Energinet, before the time of bidding, such that \(\mathbf{X}_{t}^{\rm AF}=\left[\tilde{\mathbf{X}}_{t}^{\rm AF},\lambda,1\right]\) with \(\tilde{\mathbf{X}}_{t}^{\rm AF}=\left[\hat{P}_{t}^{\rm W},\hat{P}_{t}^{\rm on,DK1},\hat{P}_{t}^{\rm on,DK2},\hat{P}_{t}^{\rm off,DK1},\hat{P}_{t}^{\rm off,DK2}\right]\). This augmented feature vector may capture additional information related to the uncertainty sources, at the cost of training additional **q**-policies and requiring a larger amount of training data. Increasing the length of the historical dataset raises issues of stationarity of the environment, in particular of electricity prices. Therefore, a so-called **Forecast Model (FM)** feature vector is introduced, which accounts for the same additional features \(\tilde{\mathbf{X}}_{t}^{\rm AF}\) while minimizing the added complexity of the model. This is achieved by first training a separate feature-driven forecast model that provides a single improved wind power generation forecast feature \(\tilde{X}_{t}^{\mathrm{FM}}=\tilde{\mathbf{q}}^{\mathrm{FM}}\big{[}\tilde{\mathbf{X}}_{t}^{\mathrm{AF}},1\big{]}^{\top}\in\mathbb{R}\), where \(\tilde{\mathbf{q}}^{\mathrm{FM}}\) represents a vector of policies trained on a historical dataset. As the environment related to wind power generation is expected to be stationary to a large extent, this feature-driven forecast model can be trained on a longer historical dataset. Then, the **FM** feature vector is defined as \(\mathbf{X}_{t}^{\mathrm{FM}}=\big{[}\tilde{X}_{t}^{\mathrm{FM}},\lambda,1\big{]}\).
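A minimal sketch of how such a feature-driven bid could be evaluated at decision time is given below; the dictionary-of-policies layout, the threshold handling, and all names are our assumptions, as the exact policy form is defined in Section III-A.

```python
import numpy as np

def ha_pd_bid(q, X_t, t, lam_hat, lam_h_equiv, lam_90):
    """Evaluate p_t = q . X_t for the HA+PD model: one policy vector per
    (hour, price domain), with the domain picked from the day-ahead price
    forecast lam_hat and the two thresholds described above."""
    if lam_hat < lam_h_equiv:        # below the hydrogen-equivalent price
        domain = 0
    elif lam_hat > lam_90:           # above the 90th percentile of prices
        domain = 2
    else:                            # in between
        domain = 1
    return float(np.dot(q[(t, domain)], X_t))
```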
### _Results_
The results presented are the profits generated by each model from electricity trades in the day-ahead market, hydrogen production, and settlements in real time, evaluated ex-post for the entire year \(2020\).
Fig. 4(a) illustrates the comparison between the different policy architectures, showing the best-performing feature vector and training period for each architecture. This plot shows that introducing multiple price domains radically increases the performance of both types of architectures, resulting in the **HA+PD** model outperforming the **GA+PD** model when price domains are included. Additionally, we note that the training period yielding the best results for the **GA** model is \(5\) months, and the model performed within \(99\)% of this performance after only \(3\) months. With a training period of \(3\) months, the **GA** model generates around \(5\)% higher revenues than the **HA** model. This indicates that for time periods with a more non-stationary environment than the one used in this study, or with limited available historical data, the **GA** model might be a more appealing choice. Since Fig. 4(a) shows that **HA+PD** is the model that outperforms the others, we further analyze this model in the rest of this section.
Fig. 4(b) depicts the performance of **HA+PD** for different feature vectors and varying lengths of the training dataset, from \(1\) to \(12\) months. First, we observe that the improvement of the model from having access to a training dataset longer than \(7\) months is limited, irrespective of the feature vector chosen. For example, the profit of models trained on a \(12\)-month dataset is only \(1\)-\(2\)% higher than that of models trained on a \(7\)-month dataset. The second observation is that the model with the **FM** feature vector setting usually exhibits better performance than the ones with the **AF** and **RF** feature vector settings, especially when the length of the training dataset is lower than \(12\) months. However, these three models show a similar performance when the plant uses a dataset of \(12\) months. The third observation is the performance of our models in comparison to the deterministic model, where the hybrid power plant makes day-ahead bidding decisions by using deterministic forecasts only. We notice that, by using a training dataset of at least \(4\) months, our model outperforms the deterministic model. For example, when using the \(12\)-month dataset, the plant makes around \(5.9\%\) more profit by using the **HA+PD** model with any of the three feature vector settings. The final observation corresponds to the comparison of the **HA+PD** model with respect to the hindsight model, in which the plant has perfect information on the prices and wind generation. We observe that the profit earned by our proposed **HA+PD** model is only \(3.7\%\) lower than the profit in hindsight, whereas the gap is \(9.8\%\) for the deterministic model. For the rest of this section, we further analyze the best-performing model, i.e., the **HA+PD** model with a \(12\)-month training dataset and the **AF** feature vector setting.
Fig. 4(c) depicts the profit across various models before and after real-time adjustments. Obviously, the hindsight model does not need any real-time adjustment. The _final_ profit, i.e., the profit from the day-ahead stage plus the real-time settlement, earned by the deterministic (**Det.**) model and the proposed **HA+PD** model is computed using the rule-based adjustment strategy proposed in Section IV-B. This profit is compared to the final profit obtained by the **HA+PD** model with an _optimal adjustment_, implemented as an optimization problem over \(24\) hours with perfect foresight, which provides an upper bound on the benefits of any adjustment policy. We observe that the proposed real-time adjustment strategy achieves a higher increase in the profit of the deterministic model compared to that of the **HA+PD** model. Indeed, due to a less efficient day-ahead scheduling, the deterministic model has more potential for improvement in the real-time stage than the **HA+PD** model. However, the final profit of the deterministic model is still slightly lower than that of the proposed **HA+PD** model. Yet, it is important to note that this result relies on having access to accurate upward and downward balancing price forecasts for the real-time adjustment strategy, which is not a straightforward task. As the final profits of the deterministic model are highly dependent on the performance of the real-time adjustment policy, in the absence of accurate price forecasts, the deterministic model risks achieving significantly lower final profits than the **HA+PD** model. Additionally, we observe that the proposed rule-based adjustment strategy achieves almost as much profit as the optimal adjustment one, under the assumption of accurate price forecasts. This shows that, owing to the proposed real-time adjustment strategy, the final profit of the **HA+PD** model gets even closer to the profit in hindsight, making it an even more attractive model.

Fig. 4: Ex-post profit of the hybrid power plant with (a) four policy architectures (**GA**, **HA**, **GA+PD**, **HA+PD**); (b) the **HA+PD** model for various training dataset lengths and available feature vectors (**AF**, **RF**, **FM**), compared to the **Deterministic** and **Hindsight** models; and (c) the **HA+PD** model before and after the real-time adjustment, compared to the deterministic (**Det.**) and **Hindsight** model, and the **Optimal Adjustment** strategy.
Finally, Fig. 5 provides two examples of feature-driven bidding strategies in the day-ahead market, obtained by the **HA+PD** model. The left plot shows the bidding strategy in a representative hour during which the hybrid power plant sells power to the grid. In this hour, the plant sells the first \(10\) MW at price \(\rho^{\mathrm{H}}\lambda^{\mathrm{H}}\) and the rest at the \(90\)th percentile of the realized day-ahead prices in the training data (\(\lambda^{90\%}\)). The right plot shows a representative hour during which the hybrid power plant buys power from the grid. The first \(10\) MW are bought at price \(\lambda^{90\%}\) and the rest at price \(\rho^{\mathrm{H}}\lambda^{\mathrm{H}}\).
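For concreteness, these stepwise curves can be written as short price-quantity lists; the numeric values below are illustrative assumptions, not values read from the figure.

```python
# Illustrative price-quantity bid curves (price in EUR/MWh, quantity in MW).
lam_h_equiv, lam_90, p_total = 40.0, 95.0, 25.0   # assumed numbers
# Selling hour: first 10 MW at the hydrogen-equivalent price, rest at lam_90.
sell_curve = [(lam_h_equiv, 10.0), (lam_90, p_total - 10.0)]
# Buying hour: first 10 MW at lam_90, rest at the hydrogen-equivalent price.
buy_curve = [(lam_90, 10.0), (lam_h_equiv, p_total - 10.0)]
```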
## VI Conclusion
This paper explains how hybrid power plants with co-located wind turbines and an electrolyzer can implement efficient feature-driven models to learn from historical data and make informed day-ahead trading decisions. The proposed feature-driven models derive trading (selling or buying) decisions in the day-ahead electricity market, as well as a hydrogen production schedule for the next day, fulfilling the minimum daily hydrogen quota. This is a pragmatic solution, which properly accounts for wind power and price uncertainty without the need to generate probabilistic forecasts or solve complex stochastic optimization problems. Our numerical analysis shows that the proposed feature-driven models outperform the deterministic model and require less adjustment in real time. In addition, they result in a final profit that is close to that in hindsight.
For future work, it is of interest to explore the performance of the model in a non-stationary environment. This could be tackled by developing online decision-making methods, similar to the one proposed in [14] for wind power trading only. It is also interesting to include more complex physical characteristics of the electrolyzer, such as non-linear efficiency and degradation curves, and add more relevant assets to the hybrid power plant, such as battery and hydrogen storage.
## Acknowledgement
We would like to thank Siemens Gamesa Renewable Energy for providing us with historical forecast data. We also thank the Danish Energy Technology Development and Demonstration Programme (EUDP) for supporting this research through the HOMEY project (grant number: 64021-7010).
|
2305.05355 | Turning Privacy-preserving Mechanisms against Federated Learning | Recently, researchers have successfully employed Graph Neural Networks (GNNs)
to build enhanced recommender systems due to their capability to learn patterns
from the interaction between involved entities. In addition, previous studies
have investigated federated learning as the main solution to enable a native
privacy-preserving mechanism for the construction of global GNN models without
collecting sensitive data into a single computation unit. Still, privacy issues
may arise as the analysis of local model updates produced by the federated
clients can return information related to sensitive local data. For this
reason, experts proposed solutions that combine federated learning with
Differential Privacy strategies and community-driven approaches, which involve
combining data from neighbor clients to make the individual local updates less
dependent on local sensitive data. In this paper, we identify a crucial
security flaw in such a configuration, and we design an attack capable of
deceiving state-of-the-art defenses for federated learning. The proposed attack
includes two operating modes, the first one focusing on convergence inhibition
(Adversarial Mode), and the second one aiming at building a deceptive rating
injection on the global federated model (Backdoor Mode). The experimental
results show the effectiveness of our attack in both its modes, returning on
average 60% performance detriment in all the tests on Adversarial Mode and
fully effective backdoors in 93% of cases for the tests performed on Backdoor
Mode. | Marco Arazzi, Mauro Conti, Antonino Nocera, Stjepan Picek | 2023-05-09T11:43:31Z | http://arxiv.org/abs/2305.05355v1 | # Turning Privacy-preserving Mechanisms against Federated Learning
###### Abstract.
Recently, researchers have successfully employed Graph Neural Networks (GNNs) to build enhanced recommender systems due to their capability to learn patterns from the interaction between involved entities. In addition, previous studies have investigated federated learning as the main solution to enable a native privacy-preserving mechanism for the construction of global GNN models without collecting sensitive data into a single computation unit. Still, privacy issues may arise as the analysis of local model updates produced by the federated clients can return information related to sensitive local data. For this reason, experts proposed solutions that combine federated learning with Differential Privacy strategies and community-driven approaches, which involve combining data from neighbor clients to make the individual local updates less dependent on local sensitive data.
In this paper, we identify a crucial security flaw in such a configuration, and we design an attack capable of deceiving state-of-the-art defenses for federated learning. The proposed attack includes two operating modes, the first one focusing on convergence inhibition (_Adversarial Mode_), and the second one aiming at building a deceptive rating injection on the global federated model (_Backdoor Mode_). The experimental results show the effectiveness of our attack in both its modes, returning on average 60% performance detriment in all the tests on Adversarial Mode and fully effective backdoors in 93% of cases for the tests performed on Backdoor Mode.
Federated Learning, Graph Neural Network, Model Poisoning, Privacy, Recommender Systems
However, despite its distributed design representing a native privacy solution, researchers have shown that federated learning is vulnerable to attacks. In particular, an adversary could infer sensitive information related to the original private data of local clients based on the local model variations recorded during consecutive learning epochs (Beng et al., 2015; Li et al., 2017; Li et al., 2018). To address this vulnerability, several recent studies have combined federated learning with Differential Privacy techniques (Li et al., 2018). This ensures that the ratings assigned by users to items cannot be inferred by analyzing the local model updates of consecutive epochs. However, as stated above, social information can also be included to face the cold-start issue, thus adding a higher complexity level to the whole solution. Leveraging this information, some authors have proposed to exploit the social nature of the underlying scenario to create an additional collaborative privacy-preserving mechanism (Li et al., 2018; Li et al., 2019). In practice, the idea underlying these strategies is to augment the training of the local models with information derived from the surrounding social neighborhood so that the produced updates will not depend only on local sensitive data. Interestingly, as shown in (Li et al., 2018), such an augmentation mechanism not only addresses the privacy concerns discussed above but ultimately leads to improved general performance of the global model.
The proposal described in this paper starts from these recent research efforts. Our intuition is that although the additional social collaborative solutions can help both in improving the performance of the considered systems and in building strong privacy-preserving approaches, this paradigm can be maliciously exploited to craft very impactful cyber attacks. In general, the decentralized nature of federated learning makes it a very interesting target environment for attackers. Indeed, each involved client, as well as the aggregating server, can become a potential adversary to the system (Beng et al., 2015; Li et al., 2018; Li et al., 2018). For this reason, the research community has developed several countermeasures and advanced protection solutions that can be successfully exploited to protect this complex environment (Beng et al., 2015; Li et al., 2018; Li et al., 2018). However, by analyzing the behavior of the most recent defenses, we can see that the main strategy adopted therein is basically to detect and isolate from the system any action that differs from the average behavior of the community composing the federated scenario. By contrast, the collaborative strategy introduced by the novel privacy-preserving mechanisms tries to protect the local contributions of single clients. To do so, these strategies suitably combine local updates with those of the surrounding community members. From an attacker's point of view, this configuration can become an opportunity to spread the attack to the surrounding neighbors, thus obtaining a novel threat possibly capable of even deceiving the state-of-the-art protections.
In this paper, we leverage this intuition to design a novel AI-based attack strategy for a scenario characterized by a social recommender system equipped with the privacy protection measures introduced above. Borrowing some ideas from the related literature (Beng et al., 2015), we include two modes for the attack in our design, namely: a convergence inhibition strategy (_Adversarial Mode_) and a deceptive rating injection solution (_Backdoor Mode_). More precisely, we implemented our proposal by focusing on the system described in (Li et al., 2018), in which a GNN model is trained with a federated learning approach to build a social recommender system. To achieve a strong privacy protection level, the target system includes both a Local Differential Privacy module and a community-based mechanism, according to which pseudo-items derived from the community are included in the local model training. We argue that, although the attack described in this paper is specifically tailored to the features of such a system, the underlying intuition and methodology can be generalized to other similar scenarios. The contributions of this paper can be summarized as follows:
* We identify the main vulnerabilities of community-based privacy protection mechanisms for federated learning, focusing on approaches targeting Graph Neural Networks as underlying deep learning models.
* To deceive state-of-the-art security solutions for federated learning, we propose a model poisoning attack leveraging the features of the considered scenario.
* We adapt our attack to work in two modes: _Adversarial Mode_ aiming at inhibiting the convergence of the federated learning model, and _Backdoor Mode_ focusing on the creation of a backdoor in the learned model.
* To assess the performance of our attack, we adopt the Root Mean Squared Error, the Mean Absolute Error, and a newly defined metric, called _Favorable Case Rate_ specific to estimate the success rate of our backdoor attack against the regressor that powers the recommender system.
* We test the effectiveness of our attack against a real-life recommender system based on the approach of (Li et al., 2018). Moreover, we carried out an experimental campaign leveraging three very popular datasets for recommender systems. The obtained results show that our attack can cause very strong effects in both operating modes. In particular, in _Adversarial Mode_, it is capable of causing a 60% detriment in the performance of the target GNN model, on average, whereas, in _Backdoor Mode_, it allows the construction of fully effective backdoors in about 93% of cases, also in the presence of the most recent federated learning defenses.
The remainder of this paper is organized as follows. In Section 2, we describe some background concepts related to our reference scenario. Section 3 describes the system model and the intuition underlying our attack. The technical details of our attack are discussed in Section 4. In Section 5, we report the experiments carried out to assess the effectiveness of our attack. The related literature is surveyed in Section 6. Finally, in Section 7, we draw conclusions and discuss potential future work directions.
## 2. Background
This section is devoted to the description of background concepts for our study. In particular, we begin by introducing existing federated learning solutions that focus on privacy-preserving applications, with particular emphasis on recommender systems based on Graph Neural Networks. After that, we describe model poisoning attacks in this context and introduce the most popular and effective countermeasures.
### Privacy-preserving Federated Learning
Federated learning exploits decentralized parties, which own private sets of data to build global models through the suitable aggregation of learning information derived from the local training of
individual models. This infrastructure ensures the construction of global models without sharing data between the involved parties. In any case, this scenario opens possible threats to the privacy of involved actors, including the possibility of inferring the private original data based on the model updates during the training phase or by observing the output produced subsequently.
In this context, solutions like Local Differential Privacy (LDP) (Fan et al., 2015) allow basic protection of the privacy of the federated learning clients by limiting the influence of the single datasets. Generally speaking, Local Differential Privacy achieves privacy protection by norm clipping and adding noise to the updates of the local models from the clients. Some effective solutions apply Local Differential Privacy by adding Gaussian or Laplacian noise (Kang et al., 2016):
\[\tilde{g}^{c}=clip(g^{c},\gamma)+Laplacian(0,\lambda), \tag{1}\]
where \(g^{c}\) are the updates of a client \(c\in C\), \(C\) is the set of clients, \(\gamma\) is the clipping limit, and \(\lambda\) is the scale of the Laplacian noise. As an example, in the approach of (Kang et al., 2016), a Graph Convolutional Neural Network is trained in a federated way, and the privacy of the clients is preserved by using a Local Differential Privacy solution. Specifically, the involved clients protect their real updates from a potentially malicious data aggregator by providing a perturbed version of their updates that is not meaningful individually, but that guarantees the same training capability as the real ones when aggregated with the other contributions. In addition, the authors also proposed a simple but effective Graph Convolutional Layer called \(K\)-Prop. This layer aggregates messages from an extended neighborhood set, which includes neighbor nodes at most \(K\) hops away. In this way, the proposed solution not only enhances client privacy by adding noise derived from real data but also improves the robustness of the global model because it is trained on an augmented dataset.
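A numpy sketch of this clip-and-noise step of Eq. (1), with our own function and variable names, is:

```python
import numpy as np

def ldp_update(g, gamma, lam, rng):
    """Clip the local update to L2-norm gamma, then add zero-mean
    Laplacian noise with scale lam (a sketch of Eq. (1))."""
    g = np.asarray(g, dtype=float)
    g_clipped = g * min(1.0, gamma / (np.linalg.norm(g) + 1e-12))
    return g_clipped + rng.laplace(0.0, lam, size=g.shape)

# Example: perturb a flattened gradient before uploading it.
rng = np.random.default_rng(0)
noisy = ldp_update(np.ones(8), gamma=1.0, lam=0.1, rng=rng)
```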
### Graph Neural Networks-based Recommender Systems
By introducing links between users, social recommender systems compensate for the data sparsity problem. As typically done in Social Network Analysis, a very promising strategy in this setting is to model data through graphs, and then, ad-hoc Deep Learning algorithms, such as Graph Neural Networks, can be adapted to identify complex recommendation patterns. Practically speaking, Graph Neural Networks are used to learn user and item embeddings from the graph to predict additional links between them. Works like the one proposed by Fan et al. (Fan et al., 2015) exploit Graph Neural Networks, particularly Graph Attention Networks, to learn the embeddings of users and items for recommendation purposes. In particular, this paper showed how using a GNN as an underlying model for a recommender system can be effective and efficient. The advantage of such models is the ability to aggregate high-order structural information that is important for learning embeddings from users and items. Of course, due to the sensitivity of involved training data, this type of solution could also be implemented through a federated learning approach, in which data concerning the links between users and items remain locally private. For instance, Wu et al. (Wu et al., 2017) proposed a federated learning approach to build a recommender system based on a GNN model collectively trained with highly decentralized user data. This solution builds a robust model while preserving the privacy of the involved parties via Local Differential Privacy and user graph expansion, obtained by randomly sampling items from the neighbors.
### Model Poisoning on Federated Learning
Due to its decentralized nature, federated learning introduces important security issues in scenarios where the involved clients cannot be assumed to be honest. In such a case, local model updates can be orchestrated by attackers to cause a detriment in the global model performance or, even worse, to drive the model behavior maliciously. As described in Section 2.4, to overcome these flaws, the aggregator entity of the federated learning solution can apply different robust aggregation strategies to limit the impact of such attacks. These defense methods are, typically, Byzantine-robust algorithms that filter possibly malicious updates returned by the clients using statistical approaches.
For instance, a baseline strategy could be to exclude gradient updates too distant from the mean (outside the interval confidence) of the distribution of the updates of all the clients. However, the recent scientific literature has demonstrated that these methods are still vulnerable to model poisoning attacks.
In this setting, Baruch et al. (Baruch et al., 2016) proposed one of the most well-known attacks trying to circumvent these defense strategies. There, the authors defined two attack variations, namely _Convergence Prevention_ and _Backdooring_. In the first version of the attack, the attacker controls a small set of clients and tries to perturb their updates, within a statistically admissible range, with the objective of preventing the convergence of the model. Gradients are perturbed by finding a deviation range from the mean that cannot be detected by defense methods based on statistical heuristics. Specifically, the attack identifies the updates from local models with the maximum distance from the mean of the update distribution. Then it boosts this edge signal by replicating it in all the updates sent by the attacked clients. Instead, the second attack they proposed is a _backdoor_ attack in which the attacker poisons the model during the training phase to force the prediction of a specific target class against a controlled input pattern. In practice, the attacker seeks a range of parameters that, if attacked, force the model to produce the desired label. A successful configuration must not affect the model's performance on benign inputs. In our paper, we follow a similar strategy and design two different variants of our attack. In particular, our attack leverages the vulnerabilities introduced by the recent privacy-preserving techniques for GNN-based recommender systems trained through federated learning. As we show in our experiments, our attack proved to be more effective than the one presented in (Baruch et al., 2016) also against the defense mechanisms described in the next section.
Still, in this context, Fang et al. (Fang et al., 2015) proposed another relevant example of a model poisoning attack. In this case, the authors have defined two versions of the attack, the former referring to a situation in which the attacker has partial knowledge of the clients (i.e., the attacker knows only the controlled clients), and the latter, instead focusing on a condition in which the attacker has full knowledge of the federated learning scenario. In both cases, the attacker crafts compromised local updates by maximizing or
minimizing the parameters in such a way as to skew the global model in the reverse of the expected gradient direction; that is, the direction along which the global model would converge in a favorable situation.
### Defenses against Model Poisoning
According to the basic implementation of a federated learning solution, the global model training is obtained by aggregating the local model updates returned by the involved clients. However, as explained above, this strategy introduces many security issues in general scenarios where the clients cannot be assumed to be fully secure. Among the other security threats, model poisoning, either in the form of convergence prevention or backdooring, is, for sure, one of the most critical. For this reason, researchers have proposed several countermeasures over the years. In particular, Yin et al. (Yin et al., 2017) proposed an enhanced version of the basic gradient aggregation strategy called TrimmedMean. According to this solution, the server aggregates the gradients in the \(i\)-th position independently. Specifically, given the gradients of all the clients in the \(i\)-th position, the aggregator sorts them according to their distance from the median. Then, only the top-\(k\) parameters are considered benign, where \(k=n-m\), \(n\) is the number of clients, and \(m\) is the corrupted portion of them.
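A compact numpy sketch of this coordinate-wise rule, with our own names, is:

```python
import numpy as np

def trimmed_mean(updates, m):
    """TrimmedMean sketch: for each coordinate, average the n - m client
    values closest to the coordinate-wise median.
    updates: (n_clients, n_params) array; m: assumed number of attackers."""
    n = updates.shape[0]
    dist = np.abs(updates - np.median(updates, axis=0))
    order = np.argsort(dist, axis=0)       # per-coordinate ranking
    keep = order[: n - m, :]               # the n - m closest per coordinate
    cols = np.arange(updates.shape[1])
    return updates[keep, cols].mean(axis=0)
```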
Blanchard et al. (Blanchard et al., 2016) proposed a solution called Krum that updates the global model by choosing the best candidates between the gradients returned by the clients. The chosen gradients are those returned by the clients whose updates are the closest to the group of \(n-m-2\) presumably honest workers. The main intuition behind this approach is that, even if the selected updates are from malicious clients, they would still be close to the group of honest clients. According to this mechanism, all the outliers that differ significantly from the average will be discarded. Both TrimmedMean and Krum are designed to work in a scenario with up to \(m=(\frac{n}{2}+1)\) malicious clients.
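The selection rule of Krum can be sketched as follows, assuming each client's update is flattened into a vector:

```python
import numpy as np

def krum(updates, m):
    """Krum sketch: return the update whose summed squared distance to its
    n - m - 2 nearest neighbours is smallest."""
    n = len(updates)
    d2 = ((updates[:, None, :] - updates[None, :, :]) ** 2).sum(axis=-1)
    scores = []
    for i in range(n):
        neighbours = np.sort(np.delete(d2[i], i))
        scores.append(neighbours[: n - m - 2].sum())
    return updates[int(np.argmin(scores))]
```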
Recently, Nguyen et al. (Nguyen et al., 2019) proposed an advanced defense mechanism for _backdoor_ attacks, named FLAME, which combines a clustering algorithm with an adaptive differential privacy strategy. The workflow of FLAME consists of three main steps, namely: filtering, clipping, and noising. The objective of the first step is to filter malicious clients and select only those with the highest probability of being honest. To do so, the authors perform a clustering over the pairwise cosine similarity distances among the updates received from the clients using HDBSCAN. Specifically, they configured it to return a cluster that includes at least 50% of the batch of clients. With this setting, the candidate cluster will contain the majority of clients, and all the remaining updates, possibly poisoned, are marked as outliers. The second and third steps are dedicated to an adaptive differential privacy approach that estimates an effective clipping bound and a sufficient level of noise, such that the effect of the backdoor attack is removed while preserving the original performance of the model. The clipping bound should be dynamically adapted to the decreasing trend of the gradients' \(L_{2}-norm\). Clipping is performed by scaling the updates of the clients so that the \(L_{2}-norm\) of the updates becomes smaller than or equal to the chosen threshold. The clipped updates are then aggregated to obtain the new global model. The third step adds a certain amount of noise to the aggregated updates. This amount is determined by estimating a sensitivity value based on the distance between the clients' updates. In this way, the proposed strategy can override the contribution of the attack on the global model.
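The filtering step can be sketched as below, using the `hdbscan` package; the names are ours, and the clipping and noising steps are omitted for brevity.

```python
import numpy as np
import hdbscan

def flame_filter(updates):
    """FLAME step 1 (sketch): cluster clients on pairwise cosine distance
    and keep the majority cluster (at least 50% of the batch)."""
    norms = np.linalg.norm(updates, axis=1, keepdims=True)
    cosine = (updates @ updates.T) / (norms * norms.T + 1e-12)
    distance = (1.0 - cosine).astype(np.float64)
    labels = hdbscan.HDBSCAN(min_cluster_size=updates.shape[0] // 2 + 1,
                             metric="precomputed").fit_predict(distance)
    majority = np.bincount(labels[labels >= 0]).argmax()  # largest cluster
    return updates[labels == majority]
```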
Recently, Fung et al. (Fung et al., 2019) proposed another defense solution with the name of FoolsGold. In any iteration, FoolsGold adapts the learning rate of each client based on the similarity distance of the updates, also considering information derived from past iterations. To measure the distance between the updates, as done by FLAME, the cosine similarity is used. Poisoning attacks usually affect specific features of the model, which can be identified by measuring the magnitude of model parameters in the output layer of the global model. Hence, the malicious updates can be removed or re-weighted. Another key point of FoolsGold is the exploitation of the history of the previous updates. Indeed, as stated above, the similarity distance among the updates is computed by considering the current values returned by the clients and the values of the historical updates produced in the previous iterations. This additional feature allows more accurate identification of malicious attempts to corrupt the federated learning task.
## 3. System Model and Attack Intuition
This section is devoted to describing the reference scenario of our attack. In particular, in Section 3.1, we present the essential concepts and definitions necessary to understand the scenario. In Section 3.2, we describe the main characteristics that introduce important advantages to the referring scenario but, at the same time, can be exploited by an attacker to perform an even more powerful exploit.
### The System Model
The scenario for our approach is a privacy-aware social recommender system built through a federated learning solution. To make our strategy concrete, we focus on a recent solution in this setting proposed in (Kumar et al., 2019). It is worth observing that, although our approach makes explicit reference to such a scenario, the main feature we focus on is a common strategy of social systems. Indeed, in such contexts, collaboration is generally leveraged to obtain joint advantages among peers. Our strategy relies just on the fact that, if the common objective of the social system is to achieve privacy protection, such collaboration is typically "blind", and, even better, includes a Local Differential Privacy strategy, in such a way as to ensure non-disclosure of sensitive information. We argue that, if properly handled, this condition can be exploited to craft critical security threats for social scenarios.
With that said, our target scenario, proposed by Liu et al. (Liu et al., 2019) with the name FeSoG and shown in Figure 1, is a federated social recommendation system (FSRS) designed to predict users' ratings for items using a Graph Neural Network model. In this scenario, let \(U=\{u_{1},\ldots,u_{N}\}\) be the set of users and \(I=\{i_{1},\ldots,i_{M}\}\) be the set of items, where \(N=|U|\) and \(M=|I|\) are the number of users and items, respectively. FeSoG is composed of a set of clients \(C=\{c_{1},\ldots,c_{N}\}\) such that each client \(c_{n}\) is associated with a user \(u_{n}\). Due to this direct association, in the following, we shall use the terms user and client interchangeably.
The coordination of the federated training is delegated to a central unit, which receives the updated gradients from the clients and builds a global model by suitably aggregating them. By design, each
client owns a local graph that contains the first-order neighbors and the information about the items of interest for the corresponding user along with their ratings. Therefore, the local graph \(G_{n}\) of a client \(c_{n}\) consists of both user nodes and item nodes. \(G_{n}\) is characterized by two types of edge, namely the _user-item_ weighted edges, in which the weights represent the ratings assigned to the items by the corresponding users, and the _user-user_ edges denoting the interactions between users.
For each client, the set of rated items is denoted as \(I^{(n)}=\{i_{1},\ldots,i_{z}\}\), whereas the set of neighbors is denoted as \(U^{(n)}=\{u_{1},\ldots,u_{k}\}\). Users and items are associated with embeddings \(E_{u}\in\mathbb{R}^{d\times N}\) and \(E_{i}\in\mathbb{R}^{d\times M}\), respectively, where \(d\) is the dimension of the embeddings. A complete embedding table is maintained by the server, and the clients can request access to this table.
By downloading the complete embedding table, a client can access the embeddings of the users and items that are part of its local graph \(G_{n}\). Such embeddings are, then, used as input for the local GNN model and, in particular, a GAT layer, to learn the embedding of the user associated with the client and predict the item scores.
In particular, the client embedding is an aggregation of both the embeddings of its neighbors and the embeddings of the rated items. At this point, to predict the local item ratings for a specific user, the authors adopt a dot-product between the inferred user embedding and the item embeddings:
\[\hat{R}_{u_{n},i_{m}}=E_{u_{n}}\cdot E_{i_{m}}.\]
One of the specifics of FeSoG is the particular attention given to the privacy of the produced embeddings. In particular, two techniques are implemented to protect the updates of the local user-item gradients, namely: Local Differential Privacy and pseudo-item labeling. The Local Differential Privacy solution prevents the user's rating information from being inferred, given the gradients uploaded by a user during two consecutive steps. To protect the gradients, each client clips its updates based on their \(L_{2}-norm\) with a threshold \(\gamma\) and adds a zero-mean Laplacian noise to achieve privacy protection. The local differential privacy process is applied to the item embedding gradients \(g_{i}^{(n)}\), the user embedding gradients \(g_{u}^{(n)}\), and the model gradients \(g_{m}^{(n)}\) for each client \(c_{n}\). This process can be formalized as follows:
\[\hat{g}^{(n)}=clip(g^{(n)},\gamma)+Laplacian(0,\lambda\cdot mean(g^{(n)})), \tag{2}\]
where \(g^{(n)}=\{g_{i}^{(n)},g_{u}^{(n)},g_{m}^{(n)}\}\) is the combination of the gradients of the three different embeddings considered above. Observe that, because the involved gradients can be of different magnitudes, instead of applying a constant noise with strength \(\lambda\), a dynamic noise is applied by multiplying \(\lambda\) by the mean of the gradients themselves.
The second privacy-preserving technique introduced in this approach consists of the inclusion of pseudo-items in the training process of each local model. This enhances user privacy and, at the same time, improves the robustness of the aggregated global model. In practice, before the computation of the training loss on the local model, each client samples \(p\) items \(\hat{I}^{(n)}=\{\hat{i}_{1}^{(n)},\ldots,\hat{i}_{p}^{(n)}\}\) not already included in its local items. Of course, for these additional pseudo-items, only the corresponding embeddings are available to the client (through the embedding table available from the server). As for the corresponding ratings, a semi-supervised strategy is adopted, according to which the client uses its current local model to predict a rating for each pseudo-item. At this point, such pseudo-items are included in the local loss computation as follows:
\[L_{u_{n}}=\sqrt{\frac{\sum_{i_{m}\in I^{(n)}\cup\hat{I}^{(n)}}\left(R_{u_{n},i_{m}}-\hat{R}_{u_{n},i_{m}}\right)^{2}}{\left|I^{(n)}\cup\hat{I}^{(n)}\right|}}, \tag{3}\]
where the adopted loss is the Root Mean Squared Error between the predicted ratings \(\hat{R}_{ui}\) and the ground-truth rating scores \(R_{ui}\). The pseudo-item sampling provides additional rating information, similar to data augmentation, which, in addition to improving the protection against data leakage, enhances the robustness of the local model.
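A PyTorch sketch of this semi-supervised loss, combining the dot-product predictor above with Eq. (3) under our own names, is:

```python
import torch

def local_loss(e_user, item_emb, ratings, pseudo_emb):
    """Eq. (3) sketch: label the pseudo-items with the current model
    (no gradient), then compute the RMSE over real and pseudo items."""
    with torch.no_grad():
        pseudo_ratings = pseudo_emb @ e_user       # semi-supervised labels
    all_emb = torch.cat([item_emb, pseudo_emb])    # I(n) plus pseudo-items
    targets = torch.cat([ratings, pseudo_ratings])
    preds = all_emb @ e_user                       # dot-product prediction
    return torch.sqrt(((targets - preds) ** 2).mean())
```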
Figure 1. Main Scenario.
### Attack Intuition and Challenges
By design, the referring scenario introduces two main techniques that aim to improve the privacy protection of clients' data, while ensuring greater robustness of the global model built through federated learning. Among them, the design choice of including pseudo-items in the local embeddings of clients plays a crucial role. Indeed, as stated in Section 3.1, because the adopted pseudo-items are generated from real-data embeddings gathered from other clients, the introduced noise is informative and resembles a data augmentation solution. On the other hand, in the case in which the assumption of the trustworthiness of clients does not hold, such an approach could lead to exploit opportunities for attackers.
The goal of this paper is to demonstrate that by leveraging this privacy-preserving social collaboration mechanism, it is possible to design a very powerful poisoning attack. As will be shown in the experiments described in this paper, the social nature of such an attack allows the achievement of considerable performance also in the presence of cutting-edge defense solutions.
In more detail, the social mechanism of sampling pseudo-elements from peer clients to improve privacy protection opens the possibility of involving such peers in prearranged attacks and, therefore, of forcing them to unknowingly include poisoned elements in their local training process. Our attack aims, therefore, at performing a model poisoning by forging a malicious set of item embeddings. Our objective is to deceive the target recommender system and make it act as intended by the attacker, either by inhibiting the convergence of the underlying GNN model or by performing a _backdoor_ attack to force the system to predict specific ratings for items in relation to a target user. In such a context, not only pseudo-items can be exploited, but also the Local Differential Privacy strategy can play a key role in the attack process. As a matter of fact, many countermeasures, such as the one proposed by Nguyen et al. (Nguyen et al., 2010), make use of Differential Privacy to override or erase the contribution of an attack, thus filtering malicious gradient updates in a federated learning solution. In our case, the Local Differential Privacy module, which is included in the privacy-aware social recommender system, acts as a regulator of the attack so that the poisoned changes to the updates are as similar as possible to benign ones, while still guaranteeing the effectiveness of the attack.
## 4. Attack Description
This section is devoted to the design of an attack strategy against the target scenario introduced in the previous section, a schematic representation of which is shown in Figure 2.
Similarly to the work proposed by Baruch et al. (Baruch et al., 2012), our design includes two attack modes. The former aims at inhibiting the convergence of the aggregated model, attempting to significantly reduce its general performance. The latter, instead, focuses on a more refined model poisoning goal, which is the construction of a _backdoor_. In practice, it aims at forcing the model to predict specific ratings for items in relation to a target user. Both attacks try to exploit vulnerabilities exposed by the strategy adopted to enhance the privacy and the robustness of the federated learning model, as described in detail in Section 3.1.
In the next sections, we shall report all the details related to the two attack types mentioned above. In particular, the former is presented in Section 4.1, and the latter is described in Section 4.2.
### Adversarial Mode - Convergence Inhibition
As presented in Section 3.1, according to the target scenario, each client involved in the privacy-aware social recommender system can sample a set of \(p\) items, namely \(\hat{I}^{(n)}\), from the pool of other clients in their neighborhood (according to the graph underlying the GNN), and assign them a pseudo-label. This strategy allows them to add an _informative_ noise to their local updates, thus producing two important effects: a higher privacy protection level and improved robustness of the final model.
The intuition behind our attack is that an attacker can exploit such a community-driven privacy-preserving mechanism, based on the sampled item set \(\tilde{I}^{(n)}\), to poison the federated learning model. We assume that the adversary can control a set, even small, of clients, hereafter referred to as malicious clients. We argue that, by suitably crafting a poisoned item set, say \(\tilde{I}^{(n)}\), it might be possible to coerce the community around a malicious node to unwittingly participate in the attack, thus producing a hardly-detectable community attack.
To do so, instead of sampling the items randomly from the other users, a malicious client generates a set of fake embeddings \(\overline{E_{tf}}\in\mathbb{R}^{d\times N}\) with the same shape \(d\times N\) that would be obtained by sampling real items under normal conditions, hence corresponding to an implicit set of fake pseudo-items \(\tilde{I}^{(n)}\). In particular, to undermine the convergence of the federated learning model, our strategy works as follows: starting from random Gaussian noise, at each training epoch \(t\), the attacker trains the malicious embeddings \(\overline{E_{tf}}\) to maximize the loss of the global model. For this purpose, it uses the model parameters obtained from the server after the previous epoch \(t-1\). Then, it performs a gradient descent optimization on the local model, keeping all the parameters frozen with the exception of the malicious embeddings. In practice, to obtain effective malicious embeddings, a malicious client \(c_{n}\) associated with a user \(u_{n}\) pursues the following objective:
\[\min_{\overline{E_{tf}}}\left(-\sqrt{\frac{\sum_{i_{m}\in I^{(n)}\cup\tilde{I}^{(n)}}\left(R_{u_{n},i_{m}}-\hat{R}_{u_{n},i_{m}}\right)^{2}}{\left|I^{(n)}\cup\tilde{I}^{(n)}\right|}}\right),\]
where, once again, the ratings of the fake pseudo-items are derived through a semi-supervised approach, using the version of the local model obtained after epoch \(t-1\). Figure 3 shows a graphical representation of this strategy.
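To make the optimization above concrete, the following is a minimal PyTorch sketch of this crafting step, assuming a simplified dot-product rating head in place of the full GNN; the function and variable names (e.g., `user_emb_prev` for the epoch \(t-1\) parameters) are hypothetical stand-ins, not the actual implementation.

```python
import torch

def craft_malicious_embeddings(user_emb_prev, user_emb_cur, local_item_emb,
                               local_ratings, p=10, d=64, steps=20, lr=0.1):
    """Gradient-ascend p fake pseudo-item embeddings to maximize the local RMSE."""
    fake_emb = torch.randn(p, d, requires_grad=True)        # start from Gaussian noise
    # semi-supervised pseudo-labels from the frozen epoch (t-1) model
    fake_labels = (fake_emb @ user_emb_prev).detach()
    opt = torch.optim.SGD([fake_emb], lr=lr)
    for _ in range(steps):
        emb = torch.cat([local_item_emb, fake_emb])         # local items plus fake items
        targets = torch.cat([local_ratings, fake_labels])
        preds = emb @ user_emb_cur                          # dot-product rating head
        loss = -torch.sqrt(((targets - preds) ** 2).mean()) # negative RMSE: maximize loss
        opt.zero_grad()
        loss.backward()
        opt.step()                                          # all other parameters stay frozen
    return fake_emb.detach()
```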
Once the malicious fake pseudo-items have been crafted, the attacker trains the local model, as any other client in the scenario does, using the crafted embeddings \(\overline{E_{tf}}\) instead of the real embeddings \(E_{i}\) of the sampled items \(\tilde{I}^{(n)}\). It is worth observing that this strategy triggers a domino effect. Indeed, in doing so, the attacker poisons not only the updates of the local model but also the embeddings of the corresponding user, its neighbors, and the associated items. Moreover, the pseudo-item sampling task of the subsequent training epoch \((t+1)\) of the federated learning will also include the current malicious embeddings introduced by the attacker. This boosts the exploit even further by involving other clients as unaware but still effective attackers.
### Backdoor Mode - Deceptive Rating Injection
The objective of the second attack mode is to poison the federated learning model in such a way that, given a target user \(u_{t}\) and a set of target items \(I_{x}^{(n)}\) not belonging to the local item set of \(u_{t}\), a backdoor is created on the prediction of the ratings. In practice, the attacker performs a backdoor attack that forces the recommender system to predict, for the target user, a specific (false) rating for these items. Thus, the adversary can even force the recommender system to always or never propose a specific item to a user, based on the rating predicted by the model. To carry out this attack, all the malicious clients controlled by the attacker must agree on the same target user \(u_{t}\), the same set of items \(I_{x}^{(n)}\), and the target fake ratings to associate with them as a result of the poisoning action. In the considered scenario, the high-level objective of the attacker might be to force the inclusion (resp., exclusion) of the target items in the recommendation set.
To do so, instead of sampling a random set of pseudo-items \(\tilde{I}^{(n)}\), all the malicious clients use the same target set of pseudo-items \(I_{x}^{(n)}\) and include it in the training of their local models. As presented in Section 3.1, in our scenario, each client corresponding to a single user concatenates the embeddings of the local items (the items related to the underlying user) with the embeddings of the pseudo-items (sampled from the items related to the neighbor users) and then computes the corresponding ratings by combining them (i.e., applying a dot product) with the trained user embedding. To perform our attack, instead of concatenating the local items \(I^{(n)}\) with a set of pseudo-items \(\tilde{I}^{(n)}\) sampled at random, a malicious client \(c_{n}\) performs the following steps:
* First, it combines the embeddings of the local items \(I^{(n)}\) with the embedding of the underlying user \(u_{n}\) to obtain a rating prediction for such items according to \(u_{n}\).
* Then, it combines the embeddings of the target items \(I_{x}^{(n)}\) with the embedding of the target user \(u_{t}\) to obtain a rating prediction for the items of \(I_{x}^{(n)}\) according to \(u_{t}\).
* Finally, it concatenates all the predicted ratings (of both the local items and the target ones) and uses them to calculate the loss (see Eq. (3)), which is hence modified as follows: \[L_{u_{n}}=\sqrt{\frac{\sum_{i_{m}\in I^{(n)}}\left(R_{u_{n},i_{m}}-\hat{R}_{u_{n},i_{m}}\right)^{2}+\sum_{i_{f}\in I_{x}^{(n)}}\left(R_{u_{t},i_{f}}-\hat{R}_{u_{t},i_{f}}\right)^{2}}{\left|I^{(n)}\cup I_{x}^{(n)}\right|}}.\]
As for the last point above, the value of the ground-truth rating score \(R_{u_{t},i_{f}}\) of Eq. (3) for each target item \(i_{f}\) of \(I_{x}^{(n)}\) is forged by the attacker to obtain the desired effect on the final prediction (e.g., obtaining the maximum/minimum rating or setting it to a specific value). In this way, the backpropagation on the model will include both the real signal from the local graphs of the clients controlled by the attacker and the additional poisoned knowledge designed to control only the rating scores of the items of \(I_{x}^{(n)}\) for the target user \(u_{t}\). Figure 4 shows a representation of the steps described above, and a minimal code sketch follows.
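As a minimal sketch of the modified loss above, again with a dot-product rating head standing in for the full GNN, the computation could look as follows; `forged_ratings` holds the attacker-chosen ground-truth scores \(R_{u_{t},i_{f}}\), and all names are hypothetical.

```python
import torch

def backdoor_loss(user_emb, local_item_emb, local_ratings,
                  target_user_emb, target_item_emb, forged_ratings):
    local_preds = local_item_emb @ user_emb            # ratings of u_n's own items
    target_preds = target_item_emb @ target_user_emb   # ratings of I_x for the target u_t
    residuals = torch.cat([local_ratings - local_preds,
                           forged_ratings - target_preds])
    return torch.sqrt((residuals ** 2).mean())         # modified RMSE loss of Eq. (3)
```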
## 5. Attack Evaluation
In this section, we present the experiments carried out to assess the performance of both our attack modes on the reference scenario. In particular, in Section 5.1, we describe the reference testbeds for our experiments. Sections 5.2 and 5.3 are devoted to analyzing the results and performance of our attacks against different settings and defense mechanisms.
Figure 3. A schematic view of the proposed Convergence Prevention attack.
Figure 2. Attack Scenario.
### The Considered Testbeds
To assess the performance of our attack, we define some reference testbeds, including the adopted evaluation metrics and the underlying datasets. Moreover, we identify the experimental setup by selecting the most promising configurations to properly test our solution.
**Evaluation Metrics.** To evaluate the effectiveness of our attack, we adopt the Mean Absolute Error (MAE) and the Root Mean Squared Error (RMSE) and compare the performance of the target scenario under normal conditions and under our attack. For both metrics, smaller values indicate better performance. For our _Convergence Prevention_ attack, the exploit is successful when both metrics return higher values for the underlying GNN model (the deep model at the basis of the reference social recommender system) than in a condition with no attacks.
As for the second attack type proposed in this paper, a successful backdoor must not affect the general performance of the target GNN model. Moreover, to further assess the effectiveness of the obtained backdoor, we define a metric called _Favorable Case Rate_ (\(FCR\)). As visible in Algorithm 1, this metric returns the percentage of target items whose residuals are lower than the standard error of the estimate computed on good items. The objective of this metric is to assess whether the error produced by the model on the target items, with respect to the rating value intended by the attacker, is comparable to the average baseline error obtained for good items (we require this error to be even lower than the average before declaring an attack success). Indeed, such a condition implies that the built backdoor successfully changes the behavior of the attacked model, forcing it to predict, for the target items, the ratings imposed by the attacker.
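A minimal NumPy sketch of our reading of this metric follows, approximating the standard error of estimate by the RMSE on good (benign) items; the exact formulation is given in Algorithm 1, and the names below are hypothetical.

```python
import numpy as np

def favorable_case_rate(benign_true, benign_pred, target_forged, target_pred):
    """Percentage of target items whose residual w.r.t. the forged rating
    stays below the standard error of estimate on benign items."""
    see = np.sqrt(np.mean((benign_true - benign_pred) ** 2))  # baseline error level
    residuals = np.abs(target_forged - target_pred)           # error on target items
    return 100.0 * np.mean(residuals < see)                   # FCR in percent
```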
**Datasets.** To validate our proposal, we adopt the same datasets used in (Krishna et al., 2017) to test the performance of the reference social recommender system (see Section 3.1). In particular, we use three popular recommender system datasets, namely Ciao (Zhou et al., 2019), Epinions (Zhou et al., 2019; Li et al., 2020; Li et al., 2020), and Filmtrust (Fil et al., 2020). Ciao and Epinions were collected by crawling shopping websites, and both are characterized by items rated with integers in the interval \((0,5)\) and by social trust links among users. Similarly, Filmtrust is composed of a set of users connected by trust links and a set of items, each associated with a rating score in the interval \((1,8)\).
For our experiments, each user of the above datasets is associated with a client, and the corresponding local graph is generated using the items that they have rated and the users with whom they have trust links (to build their neighborhood). The statistics of the obtained datasets are reported in Table 1.
**Experimental Setup.** The reference datasets are randomly split into three subsets: a training set (\(60\%\)), a validation set (\(20\%\)), and a testing set (\(20\%\)). The validation set is used to evaluate the performance of the model during the training phase. In our configuration, the policy for early stopping of the training, the learning rate, the initialization of the embeddings, and the strength of the Laplacian noise are set as proposed in the reference scenario originally described in (Krishna et al., 2017). Specifically, the training process is stopped when the model does not improve on the validation set for more than 5 successive validation steps. When the training phase is completed, the model is evaluated on the testing set. For the backdoor mode of our attack, at each validation step, we also assess the effectiveness of the attack on the target items. For all our experiments, the learning rate of the model is set to \(0.01\), and the embeddings are initialized with a standard Gaussian distribution. Moreover, the gradient clipping threshold is set to \(0.3\), and the strength of the Laplacian noise is set to \(0.1\). Finally, we tested our attack with different numbers of sampled items, specifically \(\{10,\ 20,\ 30,\ 40,\ 50,\ 100\}\), and different percentages of attackers, namely \(\{10\%,\ 20\%,\ 30\%,\ 40\%,\ 50\%\}\).
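For concreteness, a minimal sketch of the Local Differential Privacy step with the configuration above (clipping threshold \(0.3\), Laplacian noise strength \(0.1\)) could look as follows; per-entry value clipping is assumed here as one common choice, not necessarily the exact variant of the reference implementation.

```python
import torch

def local_dp(grad, clip=0.3, noise_strength=0.1):
    """Clip a local gradient update and add Laplacian noise before upload."""
    grad = grad.clamp(-clip, clip)                    # bound each gradient entry
    noise = torch.distributions.Laplace(0.0, noise_strength).sample(grad.shape)
    return grad + noise                               # privatized update sent to the server
```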
### Results: Adversarial Mode
In this section, we analyze the performance of our attack in Convergence Prevention mode against the scenario introduced in Section 3.1 (_Main Scenario_, for short). In our experiment, as an initial configuration, we set the percentage of attackers to \(30\%\) of the total number of clients and the maximum number of pseudo-items
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
Dataset & Ciao & Epinions & Filmtrust \\ \hline
Users & 7,317 & 18,009 & 824 \\
Items & 104,975 & 28,124 & 1,957 \\ \hline
\# of ratings & 283,302 & 762,938 & 186,622 \\
Rating density & 0.0369\% & 0.0162\% & 1.0911\% \\ \hline
\# of social connections & 111,781 & 395,530 & 1,853 \\
Social connection density & 0.2988\% & 0.1089\% & 0.2462\% \\ \hline
\end{tabular}
\end{table}
Table 1. Statistics of the reference datasets
Figure 4. A schematic view of the proposed Backdoor attack.
sampled equal to 10. Moreover, as commonly done in this context, we also consider different protection configurations based on the most common and effective federated learning defenses, namely Krum, Trimmed Mean, FoolsGold, and FLAME (see Section 2.4 for background on these defenses). Moreover, to provide a comparison baseline for assessing the effectiveness of our solution, we report: _(i)_ the basic performance of the considered GNN model without the additional privacy-aware social mechanism proposed in (Kumar et al., 2019) based on pseudo-items (_Baseline Scenario_); _(ii)_ the performance obtained in the same configuration when the system is attacked by a reference state-of-the-art attack, i.e., the Little Is Enough (LIE) attack (Section 2.3); _(iii)_ the performance obtained by the complete solution of (Kumar et al., 2019) in the absence of attacks; and _(iv)_ the performance obtained in the same configuration when the attack on the pseudo-items is performed using a naive strategy based on the generation of Gaussian noise. The results on the three datasets introduced above are reported in Table 2.
By analyzing this table, it is possible to see that our attack significantly decreases the performance of the GNN model, with a performance reduction spanning from 39% to 76% with respect to the scenario in the absence of attacks. This result is even more striking when we consider that, in the Baseline Scenario, the state-of-the-art LIE attack produces a maximum performance penalty of 10.2%. The obtained result also confirms that the use of community-derived pseudo-items and, in general, of collaborative strategies to achieve privacy protection improves the robustness of the federated learning model (as originally shown in (Kumar et al., 2019)) but, at the same time, provides an adversary with the means to perform a possibly stronger attack. As presented in Section 4.1, our attack crafts the embeddings of the pseudo-items by maximizing the loss of the model at each epoch. To assess the reasoning behind our strategy, in Table 2, we also report the results obtained by a basic attack in which, instead of learning optimal embeddings at each iteration, the embeddings are initialized with Gaussian noise. As we can clearly see from this table, the attack on pseudo-items using Gaussian noise does not affect the performance of the model, thus confirming that only an AI-driven attack can suitably exploit this scenario.
As a final remark on these first results, we observe that our attack proved to be resistant to all the different countermeasures we considered. In fact, as expected, the use of Local Differential Privacy constrains the adversary, allowing only a controlled impact of the attack on the gradients, which thus remain quite similar to benign ones and, therefore, very hard to detect. The underlying assumption of the aforementioned defenses is that such a limited impact on the gradients would, in principle, completely prevent the effectiveness of the attack. However, the additional community-based privacy solution of the attacked scenario provides an opportunity to boost this malicious signal.
To confirm our intuition, in Figure 5, we show the variation of the performance metrics of the GNN model during the training phase. We can see that, at the very beginning of the training phase, the performance of the federated model on the validation set, with and without attacks, is almost identical. As the training continues, the difference between the normal and the attacked model increases, reaching high values by the end of the training. Indeed, after the first epoch, the clients surrounding the nodes controlled by the attacker begin to sample the malicious pseudo-items forged by them, thus permanently poisoning their local models. Such a mechanism continues, epoch by epoch, expanding the malicious signal to a growing neighborhood. In the end, all the poisoned clients contribute to the attack, boosting the negligible original signal produced by the attacker.
To deepen the analysis of this aspect, we tested our solution with both different percentages of malicious clients and several configurations of the number of pseudo-items sampled by the clients.
In particular, in Figure 6, we show the impact of our attack on the performance of the federated learning GNN model underlying the _Main Scenario_ with a percentage of clients controlled by the attacker spanning from 10% to 50%.
As expected, this figure shows that an increasing number of malicious clients causes an approximately linear detriment to the model performance. However, the variation is not very steep and is sometimes almost flat, proving that the attack strength does not depend only on the number of controlled malicious clients. In Figure 7, we show the variation of the model performance for different numbers of sampled pseudo-items (i.e., \(\{10,\ 20,\ 30,\ 40,\ 50,\ 100\}\)). Here, we can see that changing the number of sampled items does not significantly affect the performance of the attack. This indicates that, at least for the considered datasets, a small number of pseudo-items is enough to spread the malicious payload to a sufficiently large set of clients, which will then unknowingly act as additional collaborators of the attacker.
Figure 5. Performance of the federated learning model on the validation set with and without our attack.
Figure 6. Performance of the federated learning model with different percentages of malicious clients.
### Results: Backdoor Mode
This section is devoted to presenting the results of the experiments carried out to validate the performance of the Backdoor Mode of our attack (see Section 4.2 for details).
In this experiment, we randomly selected a target user \(u_{t}\) from the set of users of each of our datasets and randomly sampled groups of 10 items from the whole item pool, excluding those already belonging to the local graph of \(u_{t}\). At this point, we carried out our attack on the reference scenario to force the system to learn a backdoor for this set of items so that, for the single user \(u_{t}\), the ratings associated with these items are controlled by the attacker. In the scenario, we again included the state-of-the-art defense mechanisms for federated learning presented in Section 5.2. To measure the effectiveness of our backdoor attack in this setting, we used the _FCR_ metric defined in Section 5.1. This metric estimates how close the ratings of the selected items predicted for the target user are to the values chosen by the attacker. To return a reliable estimation, it also considers the general error of the regressor (the standard error of estimate), so as to purge the evaluation from wrong predictions merely related to the accuracy of the model. The results of this experiment are reported in Table 3.
As visible in this table, our attack achieves an average _FCR_ score higher than 80%, with a maximum of 100%, against all the defenses. Another important result is that, as expected, the performance (assessed with the RMSE and MAE metrics) of the model on benign items is preserved for all three datasets.
To have a ground truth against which to compare the obtained results, we also measured the _FCR_ score in the case of no attack, so as to exclude any success case related to the data distribution rather than to the attack effect. As we can see from the results, the maximum _FCR_ value in the absence of an attack (a situation in which, by chance, the real ratings are in line with the attacker's selection) is around 20% on average, thus showing, once more, the effectiveness of our attack.
### Evaluation on a Real Recommender System
As a final experiment, we test our Backdoor Mode attack against a real-life recommender system. To do so, we first designed a recommender system on top of the GNN-based model described in the previous sections. Such a model includes the embeddings of users and items according to their interactions, which are described in the reference datasets of this paper (see Section 5.1). Moreover, as stated in Section 3.1, given the embeddings of a user and an item, an estimate of the rating that the given user would assign to the target item can be obtained through the dot product between their embeddings. With this information, it is possible to build a recommender system that suggests an item to a user if the estimated rating, according to the strategy above, is higher than a recommendation threshold \(\delta\). A possible strategy to set a value for \(\delta\) is to consider that, usually, an item is recommendable to a user if its estimated rating is close to the upper bound of the rating range (i.e., it is higher than the median value of the range). As such, \(\delta\) should be a fraction of the rating score range (e.g., for a maximum rating score equal to 10, \(\delta=0.5\) indicates that recommendable items must have a rating score higher than half of the maximum rating score, that is, a rating higher than 5).
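A minimal sketch of this thresholded recommendation rule, under the dot-product rating head described above, is the following (names hypothetical):

```python
import numpy as np

def recommend(user_emb, item_embs, delta=0.5, max_rating=5.0):
    """Return indices of items whose estimated rating exceeds delta * max_rating."""
    scores = item_embs @ user_emb                 # dot-product rating estimates
    return np.flatnonzero(scores > delta * max_rating)
```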
The objective of this experiment is to demonstrate that our backdoor attack can force a recommender system to suggest any item to a target user (even those that would normally receive a minimum rating score). Of course, it can also be used in the opposite direction, that is, to force the removal of a good item from the set of recommendable ones for a target user.
To properly configure our test, we started by selecting a target user and training the model in a safe configuration without attacks. Then, using the trained model, we estimated the rating of all the available items in relation to the target user. After this, we sorted them and created a ranking of items for the target user. As stated above, the goal of the attacker can be either to force the recommendation of a specific item to a target user or to remove a good item from the user's recommendation list. In both cases, we considered the worst-case situation, in which the specific item originally has an extremely low rating (for the former objective) or an extremely high one (for the latter). To obtain this configuration, for the former objective we selected the bottom 10 items of the ranking above, and for the latter we selected the top 10 items as targets. At this point, in our experiment, we tested the effectiveness of our
\begin{table}
\begin{tabular}{|l|l|l|c|c|c|c|c|c|} \hline
\multirow{2}{*}{Scenario} & \multirow{2}{*}{Attack} & \multirow{2}{*}{Defense} & \multicolumn{2}{c|}{Filmtrust} & \multicolumn{2}{c|}{Ciao} & \multicolumn{2}{c|}{Epinions} \\ \cline{4-9}
 & & & RMSE & MAE & RMSE & MAE & RMSE & MAE \\ \hline
Baseline Scenario & None & None & 2.19 & 1.60 & 2.54 & 1.87 & 2.17 & 1.52 \\ \hline
Baseline Scenario & LIE & None & 2.37 & 1.69 & 2.80 & 2.04 & 2.36 & 1.66 \\ \hline
Main Scenario & None & None & 2.08 & 1.56 & 2.18 & 1.55 & 1.79 & 1.35 \\ \hline
Main Scenario & Gaussian Noise & None & 2.06 & 1.57 & 2.20 & 1.59 & 1.78 & 1.36 \\ \hline \hline
Main Scenario & Our attack (Adversarial Mode) & FoolsGold & 3.21 & 2.69 & 3.07 & 2.45 & 2.79 & 2.51 \\ \hline
Main Scenario & Our attack (Adversarial Mode) & FLAME & 3.01 & 2.30 & 3.95 & 2.45 & 2.69 & 2.34 \\ \hline
Main Scenario & Our attack (Adversarial Mode) & Krum & 3.03 & 2.44 & 3.02 & 2.42 & 2.71 & 2.35 \\ \hline
Main Scenario & Our attack (Adversarial Mode) & Trimmed Mean & 3.22 & 2.60 & 3.00 & 2.42 & 2.66 & 2.31 \\ \hline \hline
\multicolumn{3}{|l|}{\textbf{Average Performance Detriment}} & \textbf{-50\%} & \textbf{-40\%} & \textbf{-39\%} & \textbf{-57\%} & \textbf{-51\%} & \\ \hline
\end{tabular}
\end{table}
Table 2. Results of the convergence inhibition attack
Figure 7. Performance of the federated learning model under our attack with different numbers of sampled pseudo-items per client.
Backdoor Mode attack against the above-introduced recommender system with different values of the recommendation threshold. In particular, to measure the obtained attack performance, we started with the former objective and counted the percentage of attacked items whose rating was higher than the recommendation threshold \(\delta\). In this case, we considered different values of \(\delta\), namely \(\{0.5, 0.6, 0.7, 0.8, 0.9\}\), implying ratings for the recommendable items that are always above the median of the rating range and up to a value very close to the upper bound (i.e., \(\delta=0.9\)). For the latter objective, instead, we defined an additional negative threshold, called \(\gamma\), to evaluate the attack strength. The objective of this second threshold is the exact opposite of \(\delta\), that is, to verify the percentage of items whose rating is lower than this negative threshold. Of course, the lower the negative threshold, the harder the attack goal. Also in this case, \(\gamma\leq\delta\) is obtained as a fraction of the maximum possible rating; in particular, we set it to \(\{0.1, 0.2, 0.3, 0.4, 0.5\}\). We report the obtained results in Figure 8.
The first row of this figure shows the attack performance for the first objective, whereas the second row concerns the performance obtained for the second attack objective. By analyzing this figure, we can see that, for both the Ciao and Epinions datasets, our attack is successful with all the possible threshold configurations for both objectives. As for the Filmtrust dataset, we can notice how the performance of our attack degrades to 30% in the edge cases (i.e., the cases in which \(\delta\) is equal to 0.9, for the first objective, and \(\gamma\) is equal to 0.1, for the second objective), while preserving its full effectiveness for the other threshold configurations. This behavior may be due to the fact that this dataset contains fewer items than the others, thus increasing the probability that a single item is sampled by multiple clients. In this way, the contribution of the attack can be partially overwritten by the benign clients' updates, which implies a slight reduction of the attack performance.
## 6. Related Work
Federated recommender systems are becoming popular due to regulations on the data and privacy of users, such as the GDPR in the European Union (Hil et al., 2017; Wang et al., 2018). This solution allows social media platforms to build effective recommender systems that produce high-quality suggestions while preserving the privacy of the final user. However, this kind of collaborative strategy might be affected by malicious users taking part in the training of the federated recommender system (Hil et al., 2017; Wang et al., 2018; Wang et al., 2018).
Christakopoulou et al. (Hil et al., 2017) proposed to use a Generative Adversarial Network (GAN) that generates fake users to be injected during the federated training to control the top-\(K\) recommendations produced by the target recommender system. The proposed solution is designed to preserve the main characteristics of the data, thus ensuring unnoticeable changes. Generative Adversarial Networks can not only be used to attack systems in an adversarial way, but are also effective in stealing private information from other users. An example of this has been proposed by Hitaj et al. (Hitaj et al., 2017), in which the attacker runs the collaborative learning algorithm and reconstructs sensitive information stored on the victim's device. The attacker also influences the training process, inducing the victim to disclose more detailed information.
The conventional poisoning attacks on recommender systems, known as shilling attacks (Hil et al., 2017), are not targeted to a specific type of recommender system. Therefore, the performance that they can achieve is sub-optimal compared to an attack targeted at a specific recommender system. Fang et al. (Fang et al., 2018) proposed a series of techniques that optimize the attack to be more effective and achieve better performance than general shilling attacks. Wu et al. (Wu et al., 2018) proposed another optimized attack on recommender systems. In this work, the authors proposed to use globally hardest sampling as
Figure 8. The recommender system recommends an item if the rating is higher than the given threshold \(\delta\). As a first possible objective, the attacker tries to force the model to predict an item of minimum rating as an item of maximum rating; we have a success when the rating exceeds a recommendation threshold \(\delta\) (e.g., \(\delta=0.5\): rating \(>\delta\cdot max\_rating\)). As a second objective, the attacker tries to remove a good item from the list of recommendable items; the goal is hence to reduce the rating of a target item below a negative threshold \(\gamma\leq\delta\), and we count the percentage of items with ratings lower than \(\gamma\) (e.g., \(\gamma=0.4\): rating \(<\gamma\cdot max\_rating\)). The worst-case scenario is changing the rating score of an item from the minimum to the maximum value and vice versa.
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|c|c|c|c|c|} \hline
\multirow{2}{*}{Attack} & \multirow{2}{*}{Defense} & \multicolumn{3}{c|}{Filmtrust} & \multicolumn{3}{c|}{Ciao} & \multicolumn{3}{c|}{Epinions} \\ \cline{3-11}
 & & RMSE & MAE & FCR & RMSE & MAE & FCR & RMSE & MAE & FCR \\ \hline
No Attack & None & 2.06 & 1.56 & 20\% & 2.16 & 1.56 & 20\% & 1.79 & 1.36 & 30\% \\ \hline
Our Attack (Backdoor Mode) & FoolsGold & 2.07 & 1.55 & 80\% & 2.19 & 1.56 & 100\% & 1.78 & 1.34 & 100\% \\ \hline
Our Attack (Backdoor Mode) & FLAME & 2.05 & 1.57 & 80\% & 2.18 & 1.55 & 100\% & 1.79 & 1.39 & 100\% \\ \hline
Our Attack (Backdoor Mode) & Krum & 2.03 & 1.54 & 80\% & 2.15 & 1.54 & 100\% & 1.79 & 1.34 & 100\% \\ \hline
Our Attack (Backdoor Mode) & Trimmed Mean & 2.05 & 1.56 & 80\% & 2.19 & 1.56 & 100\% & 1.79 & 1.34 & 100\% \\ \hline
\end{tabular}
\end{table}
Table 3. Results of the deceptive rating injection attack
a poisoning technique. In particular, they retrieve pseudo "hardest positive samples" that are farthest from the user embeddings to replace the original positive samples. The obtained gradients significantly impact the model convergence while being difficult for the server to perceive as malicious updates. Fang et al. (Fang et al., 2019) presented a poisoning attack optimized for graph-based recommender systems, like the attack we are proposing. In more detail, in this poisoning attack, the authors' goal is to deceive the graph-based recommender system, making it promote a target item to as many users as possible by injecting fake users that give fake ratings to a subset of the items.
The superior ability of graph neural networks to learn from graph-structured data makes them ideal for recommender systems (Wang et al., 2019). Considering this, Nguyen et al. (Nguyen et al., 2020) proposed an attack that leverages the representations of both items and users to learn an optimal attack on a surrogate model. The proposed framework, similar to the one described above, synthesizes new users and associated edges to be added to a heterogeneous graph between real users and items before feeding the poisoned graph as input for optimization. Graph-based recommender systems are also vulnerable to optimized backdoor attacks, such as the one proposed by Zheng et al. (Zheng et al., 2019). In particular, the authors designed a backdoor attack against link prediction that injects nodes and uses gradient information to generate optimized triggers, building a relationship between any two nodes in the graph to construct a general attack.
GNN models are also prone to attacks that target the privacy of the model and the data. These vulnerabilities can be exploited to infer group properties that are defined over the distribution of nodes and links, as proposed by Wang et al. (Wang et al., 2019). In particular, the authors designed six different attacks considering a comprehensive taxonomy of the threat model with various types of adversary knowledge. They analyzed the main factors that contribute to the success of group property inference attacks and found that it is possible to infer the existence of a target property by using the correlation between the property feature and a label in the target model. Duddu et al. (Duddu et al., 2019) designed three different attacks: the first infers whether a node was included in the training graph, the second recreates the target graph, and the third infers sensitive attributes of the graph. Considering attacks against the model instead, Zhang et al. (Zhang et al., 2019) proposed a property inference attack that aims to infer basic properties of the graph given its graph embeddings.
## 7. Conclusions and Future Work
In this paper, we described an AI-based attack against a scenario composed of a privacy-preserving social recommender system leveraging Graph Neural Networks and federated learning to produce item recommendations. Our attack design starts from an analysis of the security of recent approaches aiming at building such recommender systems, including Differential Privacy and community-based strategies to improve sensitive data protection in federated learning contexts. As a matter of fact, although, by design, one of the main features of federated learning is privacy protection, researchers have shown that, by analyzing the local model updates produced by federated clients, it is possible to infer sensitive information concerning the local datasets. For this reason, recent studies have included additional privacy protection strategies to face the above-mentioned issue. This is the case of recent investigations in the context of social recommender systems, in which federated learning and Graph Neural Networks are adopted to build a predictive model that estimates item ratings to be fed to an underlying recommendation engine. In such a scenario, some authors have proposed combining Differential Privacy modules with novel privacy-preserving strategies based on the main characteristics of the underlying scenario. Indeed, in the context of social recommender systems, user interactions play a crucial role; this additional information allows the identification of communities of users related to each other. Leveraging these communities, each client can augment the training of its local model with knowledge derived from the other community members, thus creating an additional separation between the local updates and the sensitive training data. However, our intuition is that, if properly exploited, these additional privacy-preserving mechanisms can be used to produce a high-impact model poisoning attack against federated learning.
In this paper, we demonstrated this concept by designing a novel AI-based model poisoning attack with two operating modes, namely an _Adversarial Mode_, producing a convergence inhibition effect, and a _Backdoor Mode_, creating a deceptive rating injection attack on the federated model. We tested our solution against the target social recommender system proposed by (Duddu et al., 2019) in a federated learning scenario equipped with the most effective state-of-the-art defenses. The experimental results have shown that our attack is effective in all the considered cases. Moreover, to further show the significance of our achievements, we built a real-life recommender system to demonstrate that, with our attack operating in Backdoor Mode, an adversary can fully control the recommendations produced for specific target users.
The proposal described in this paper must not be considered conclusive. Indeed, to demonstrate the general validity of our method, we plan to extend our investigation by adapting the proposed attack strategy to other possible scenarios. Moreover, the vulnerability we discovered is based on the collaborative nature of some privacy-preserving approaches for federated learning. For this reason, we intend to work on designing possible extensions of existing defenses to cope with the identified flaw. Finally, we made explicit reference to a horizontal federated learning scenario. In the future, we plan to extend our research to vertical federated learning. Of course, due to the specificities of this variant, a thorough investigation must be carried out to understand how our attack methodology can be adapted to it.
|
2302.11387 | Switchable-magnetisation planar probe MFM sensor | We present an alternative switching-magnetization magnetic force microscopy
(SM-MFM) method using planar tip-on-chip probes. Unlike traditional
needle-like tips, the planar probe approach integrates a microdevice near the
tip apex with dedicated functionality. Its 1 mm x 1 mm planar surface paves the
way for freedom in ultra thin-film engineering and micro-/nano-tailoring for
application-oriented tip functionalization. Here, we form a microscale current
pathway near the tip end to control tip magnetisation. The chip like probe or
planar probe, was applied to study the complex magnetic behaviour of epitaxial
transition metal oxide perovskite LaMnO3, which was previously shown to behave
as complex material with domains associated with superpara-, antiferro- and
ferromagnetism. To this end we successfully imaged an inhomogeneous
distribution of weak ferromagnetic islands with a resolution better than 10 nm. | Michael Verhage, Tunç H. Çiftçi, Michiel Reul, Tamar Cromwijk, Thijs J. N. van Stralen, Bert Koopmans, Oleg Kurnosikov, Kees Flipse | 2023-02-22T14:02:50Z | http://arxiv.org/abs/2302.11387v1 | # Switchable-magnetisation planar probe MFM sensor
###### Abstract
We present an alternative switching-magnetization magnetic force microscopy (SM-MFM) method using planar tip-on-chip probes. Unlike traditional needle-like tips, the planar probe approach integrates a microdevice with dedicated functionality near the tip apex. Its 1 mm \(\times\) 1 mm planar surface paves the way for freedom in ultra-thin-film engineering and micro-/nano-tailoring for application-oriented tip functionalization. Here, we form a microscale current pathway near the tip end to control the tip magnetisation. The chip-like probe, or planar probe, was applied to study the complex magnetic behaviour of epitaxial transition metal oxide perovskite LaMnO\({}_{3}\), which was previously shown to behave as a complex material with domains associated with superpara-, antiferro- and ferromagnetism. To this end, we successfully imaged an inhomogeneous distribution of weak ferromagnetic islands with a resolution better than 10 nm.
## Introduction
Magnetic force microscopy (MFM) is a widespread method in fundamental surface studies and nanoscale technological applications, offering a lateral resolution down to tens of nanometers and pN force sensitivity [1, 2]. The working principle of MFM relies on the force interaction between the tip's magnetic stray field and a sample's spatially varying magnetic textures. By utilizing this magnetic force interaction at the nanoscale, MFM covers a wide operational range, from characterization to manipulation of magnetic objects [3, 4, 5].
Despite its extensive use, conventional MFM reaches its capability limits mainly in the lateral resolution of imaging materials with weak or time-varying magnetization. For instance, low-coercivity, weakly ferromagnetic (FM), or superparamagnetic (SP) structures generally require an external magnetic field to saturate [6]; without it, the magnetic force on the tip is weak or may be undetectable within the bandwidth of the MFM. Hence, nonmagnetic interactions, such as those of electrostatic origin, can mask the magnetic signal [7, 8, 9]. To obtain the pure magnetic signal of nanoscale weak-FM or SP textures, such as isolated islands, an MFM variant called switching-magnetization force microscopy (SM-FM) [10, 11] or controlled-magnetization MFM (CM-MFM) [8] stands out by extracting such signals from the detected force [9, 11, 12]. Beyond traditional MFM, SM-FM measures a relative force change due to controlled altering of the magnetic state of the tip or the sample (or both). Only the magnetic interaction is sensitive to the relative magnetic polarities between the tip and the sample and can thus be isolated.
The need for an SM-FM imaging technique capable of imaging weak FM or SP islands with a resolution beyond \(10\,\mathrm{nm}\) arises in the study of epitaxial complex oxide perovskites such as LaMnO\({}_{3}\) (LMO\({}_{3}\)). Wang _et al._[13] have shown that epitaxial LMO\({}_{3}\) reveals an abrupt transition from an antiferromagnetic (AF) state to a ferromagnetic (FM) one depending on the thickness of the LMO\({}_{3}\) layer. The magnetic transition occurred at a film thickness of 5 atomic unit cells (u.c.). Furthermore, Anahory _et al._[14] observed inhomogeneously distributed SP islands besides the FM domains, with the former only detected following an
applied in-plane magnetic field of variable strength. Both groups used a scanning SQUID microscope (SSM), albeit with different lateral resolutions, to image the stray-field distribution of the LMO\({}_{3}\) sample. However, the SSM imaging performed by Anahory _et al._[14] could not go beyond a resolution of 100 nm, so the SP island size could only be indirectly inferred to lie between 10 nm and 20 nm.
To solve the problem of the limited imaging resolution of traditional MFM, we carefully designed a new type of SM-FM sensor. To this end, we designed a magnetic tip with a stray field of several hundred mT, strong enough to saturate the magnetic textures. The tip is realised by forming an oriented single-domain state near the tip apex [15]. Traditional needle-like MFM tips generally generate only up to a few tens of mT of stray field [2, 16]. With this approach, the weak FM domains of LMO\({}_{3}\) are simultaneously saturated and profiled for imaging by the same tip. The tip's stray field decays rapidly away from the tip; hence, by changing the height of the tip with respect to the sample surface, the weak magnetic textures can be actively saturated.
We demonstrate that our sensor is capable of imaging magnetic textures of LMO\({}_{3}\) with a resolution beyond 10 nm. For this, we present a new approach combining planar chip-like probes [17, 18, 19] with highly sensitive tuning-fork force sensors, which we call the switching-magnetization planar probe (SM-PP), illustrated in Figure 1a. This method provides an on-chip reorientable tip magnetization, with no external magnetic field required, acting as a switchable magnetic force sensor.
## Results and Discussion
The working principle of the SM-PP relies on switching the tip from a multi-domain state to a poled single domain via an internally generated Oersted field (\(H_{p}\)) within a planar chip-like probe, as illustrated in Figures 1a, b and d. Initially, the magnetic layer on the tip is in a multi-domain state with a closed flux loop (Figure 1d). The direction of this flux may be irregular, and hence inappropriate for perturbing weak FM islands. The planar probe
Figure 1: **Planar probe with switchable tip magnetization (SM-PP).** (**a**) Illustration of the SM-PP sensor, with electrical contacts for force sensing and for sending current pulses \(I_{p}\) to the tip apex. The tip is formed by a tip-on-chip called the planar probe (PP). (**b**) SEM image of the PP with a sharp tip apex formed by cleaving a Si wafer. Sending a current pulse generates an Oersted field (\(\vec{H_{p}}\)) within the metallic film to orient the tip magnetization into a single-domain state. The current pathway is created by a FIB-milled bridge. (**c**) The metallic film with two main layers: the current-carrying Pt layer and the ferromagnetic Co layer. The polarity of \(I_{p}\) determines the direction of the Oersted field (\(\vec{H_{p}}\)), which alters (reverses) the direction of magnetization of the Co film. (**d**) The multi-domain state can be poled into an oriented single domain by a controlled current pulse. (**e**) Schematic side view of the SM-PP showing the mass-retuned tuning-fork prong and a lateral view of the surrounding tip stray field \(\vec{B}\). The planar probe is placed at a 45\({}^{\circ}\) angle to the prong. (**f**) Kerr microscopy image showing a poled magnetic tip domain after sending a single \(I_{p}\). The dark contrast at the bottom of the tip demonstrates the single domain with magnetization \(\vec{M_{\rm tip}}\). The white scale bar equals 5 μm.
design used in this study has a bi-metallic structure of thin-film components: the current-carrying layer and the ferromagnetic layer, as depicted in Figures 1b and c. By sending an electrical pulse (\(I_{p}\)) through the current-carrying layer along a designated electrical pathway (called the bridge) near the tip apex, we generate an Oersted field of controlled magnitude and well-defined direction, which penetrates the ferromagnetic layer. This action leads to a single-domain state of the tip apex with a preferable orientation.
The planar probe is formed by cleaving a silicon wafer into a small \(1\,\mathrm{mm}^{2}\) square piece with a \(90\lx@math@degree\) tip apex [17, 20]. The cleaved corner, i.e. the tip apex, naturally increases the flux density, enhancing the tip stray field compared to a needle-like MFM tip. This magnetic field strength and distribution are discussed later on. The single-domain state of the tip can be used to probe weak FM domains. To this end, we used a \(30\,\mathrm{nm}\) Pt film for the current-carrying layer and a \(15\,\mathrm{nm}\) Co film for the ferromagnetic layer, placed on top of the planar probe. Detailed fabrication procedures for the film and planar probe are given in Supplementary S1.
Contrary to traditional passive needle-like MFM tips, we can orient the SM-PP multi-domain state into a single domain with only a single current pulse, as often as needed to combat transient tip demagnetisation, a known issue in MFM. The resulting tip domain after sending a current pulse is illustrated in Figure 1d. As a result, we can obtain consistently oriented tip domains, resulting in a predictable stray field in the tip vicinity, as indicated by \(\vec{B}\) in Figure 1e. Figure 1e illustrates the side view of the SM-PP with the tip stray field distribution being predominantly out-of-plane from the sample's perspective. In Supplementary S4 we discuss in depth the tip stray field distribution derived from a numerical study. Finally, Figure 1f shows a Kerr microscopy image of the SM-PP after a current pulse of sufficient amplitude has been sent. A poled tip domain is formed, as observed from the dark contrast near the apex, highlighted within the dashed circle.
We attached this functionalized planar probe to a mass-retuned [20] quartz tuning fork (QTF) force sensor with integrated electrical access to the probe for the current pulse \(I_{p}\), as schematically illustrated in Figure 1a. QTFs have been successfully used before for MFM [21]
and are easily integrated in a UHV scanning probe microscope. The retuned-tuning-fork approach significantly improves the load capacity of the QTF sensor. As widespread AFM and MFM applications have previously shown, once the added mass exceeds several tens of μg [22], which is far below the mass of a chip-like probe, the oscillation's Q-factor drops from the original 40 thousand to only several hundred. This reduction in Q-factor results in a large loss in force-sensing capability [20, 23]. With such a degraded Q-factor, we would be unable to use the planar probe for imaging the magnetic fields of \(\mathrm{LMO}_{3}\). To this end, the retuned-tuning-fork approach compensates for the mass unbalancing caused by planar-probe attachment and recovers the sensitivity. As a result, we are able to restore the Q-factor to over \(2\times 10^{4}\) at room temperature in ultra-high vacuum (UHV) [18, 20]. In Supplementary S3 we further discuss the need for a high \(Q\).
A similar Q-factor degradation, down to only a few hundred [18], can arise from the additional wires connecting the current-control signal for pulsing, which is why dedicated electrical contacts to the tip are integrated on the tuning fork itself [24]. We solve the mass imbalance by mass retuning [23] the QTF, as described in our previous work [20], utilising readily available electrodes on the tuning fork.
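As a back-of-the-envelope illustration of why probe mass matters, the prong can be modelled as a simple harmonic oscillator with \(f=\frac{1}{2\pi}\sqrt{k/m_{\mathrm{eff}}}\); the stiffness, effective mass, and probe mass in the sketch below are assumed order-of-magnitude values for a 32.768 kHz QTF and a 1 mm\({}^{2}\) Si chip, not measured parameters of our sensor.

```python
import math

k = 1800.0       # N/m, assumed prong stiffness of a 32.768 kHz QTF
f0 = 32768.0     # Hz, bare resonance frequency
m_eff = k / (2 * math.pi * f0) ** 2      # effective prong mass, about 40 ug

m_probe = 0.7e-6  # kg, ~1x1x0.3 mm^3 Si chip, far heavier than m_eff
f_loaded = (1 / (2 * math.pi)) * math.sqrt(k / (m_eff + m_probe))
print(f"bare: {f0:.0f} Hz, loaded: {f_loaded:.0f} Hz")
# Loading one prong this heavily detunes and unbalances the fork, collapsing Q;
# mass retuning compensates the imbalance so both prongs oscillate symmetrically
# again and the high Q-factor is recovered.
```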
To extract the magnetic signal of weak FM islands, the SM-PP needs to switch to a fully oriented \(M_{\mathrm{tip}}\) near the tip apex, starting from a multi-domain state. We turned to finite-element modeling (FEM) with COMSOL [7] to simulate the generated Oersted field within the bridge and assess the \(I_{p}\) magnitude required for tip magnetisation control. Furthermore, we can investigate the thermal response of the tip due to Joule heating.
Figure 2 presents the numerical and experimental validation of the magnetic switching of the SM-PP tip. The simulations cover bridge widths \(d\) in the range from \(50\,\mathrm{nm}\) to \(7\,\)μm and \(I_{p}\) values from \(10\,\mathrm{mA}\) to \(200\,\mathrm{mA}\). The pulse duration is \(500\,\mathrm{ns}\). See Supplementary S4 for details on the simulations. First, Figure 2a shows the calculated spatial field components of \(\vec{H_{\mathrm{p}}}(\vec{r})\) for a \(5\,\)μm bridge under application of \(I_{p}=150\,\mathrm{mA}\). The
Figure 2: **Numerical and experimental validation of the magnetic switch of the SM-PP tip.** (**a**) Numerical calculations of the \(\vec{H}_{\rm p}(\vec{r})\) magnetic field components (\(B_{x}\), \(B_{y}\), and \(B_{z}\)) for a 150 mA pulse. The bridge is 5 μm wide. (**b**) Numerical switching behaviour of the SM-PP, covering bridge widths \(d\) from 50 nm to 7 μm, presented as a phase diagram. The colors indicate a switch between two oppositely poled single-domain states (green), domain fluctuations (orange), or no switch (red). (**c**), (**f**) Simulated single-domain formation of the tip for inverted \(I_{p}\) polarity. (**d**)-(**h**) Kerr microscopy results show domain-orientation switching after inverting the \(I_{p}\) polarity. The vertical component of the altered magnetization (indicated in blue and yellow) is visible near the tip end in **d** and **g**. The horizontal component lies mostly beside the tip end (**e**, **h**).
in-plane field components \(B_{x}\) and \(B_{y}\) of \(\vec{H}_{\mathrm{p}}(\vec{r})\) follow the bridge structure. This indicates that the two symmetric sides of the tip experience opposing field directions, as is evident from the current-flow pathway. Near the tip apex, \(B_{x}\) and \(B_{y}\) are relatively small, since the current density is lowest there (between the white lines of Figure 2a). A strong out-of-plane component \(B_{z}\) (Figure 2a) is only observed at the boundary of the tip and bridge, but it is of little importance with respect to the in-plane magnetisation of the Co film. At just 1 μm away from the tip apex, above the upper white line, the in-plane magnetic field is larger than \(10\,\mathrm{mT}\), which implies the nucleation of oriented in-plane Co domains. In Supplementary S1 the magnetisation response of the Co film is given.
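As a hedged order-of-magnitude check of this field level, far from its edges a thin conducting strip of width \(w\) carrying a current \(I\) can be approximated as a current sheet with surface density \(K=I/w\), giving an in-plane field \(B=\mu_{0}K/2\) on either side; the values below follow the 5 μm bridge and 150 mA pulse, and this sheet approximation is ours, not the FEM model itself.

```python
import math

mu0 = 4e-7 * math.pi    # vacuum permeability (T*m/A)
I = 150e-3              # A, pulse amplitude
w = 5e-6                # m, bridge width
K = I / w               # surface current density (A/m)
B = mu0 * K / 2         # in-plane field just above/below the current sheet
print(f"B ~ {B * 1e3:.1f} mT")   # ~18.8 mT, consistent with the >10 mT from the FEM
```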
Next, \(\vec{H}_{\mathrm{p}}(\vec{r})\) is used as an input parameter within MuMax3 to calculate the magnetisation response (switch vs. no switch) of the Co film at the tip apex as a function of the bridge width \(d\) and \(I_{\mathrm{p}}\). In Figure 2b the color scale represents three different states of the tip magnetization after applying \(I_{p}\). Green means the tip-end domain shows a \(180\,\mathrm{\SIUnitSymbolDegree}\) reversal, i.e. a full switch. Orange represents an observed modification, or a limited rotation by less than \(180\,\mathrm{\SIUnitSymbolDegree}\), of the tip domain. Red implies that the magnetization remained identical to the pre-pulse orientation. The results show an increase of a few tens of mA in the critical current level for bridge widths from \(50\,\mathrm{nm}\) up to 1 μm, as given in Figure 2b. For bridge widths greater than 1 μm, the critical current shows a larger increase.
Although the nanometer scale of the bridge can be achieved with various types of lithography, in our experiments we used focused ion beam (FIB) milling. This resulted in bridges on the micrometer scale and, as the simulation results show, we require a current magnitude on the order of \(10^{2}\,\mathrm{mA}\). Supplementary S1 discusses the FIB fabrication in further detail. When we simulate a current pulse with \(130\,\mathrm{mA}\) amplitude and \(500\,\mathrm{ns}\) duration for a bridge of 5 μm, the tip magnetization changes fully according to the pulse polarity, as shown in Figures 2c and f. The notion of a tip switch at values below \(130\,\mathrm{mA}\) is important, especially for micrometer-scale bridges, because it significantly limits the Joule heating, as we discuss later.
Based on the simulation results, a Kerr microscopy experiment was performed to validate the magnetic switching. The Kerr microscopy experimental details are given in Supplementary S2. Gray/black tones in Figures 2d, e, g, and h represent Co domains that preserve their initial orientation after the pulse is applied. False-colored areas represent the response of the Co domains to \(I_{p}\). Figures 2d and g indicate the orientation in the vertical direction, expressed by the blue-to-yellow color scale. Figures 2e and h show the domain orientation in the horizontal direction, given by the pink-to-green color scale. The reversed domains are mainly confined to the bridge region, as only there is the current density sufficient to induce Co domain reversal. Along the length of the FIB bridge, the domains are inverted (pink and green), closely following the numerical simulations of Figures 2c and f and validating the realisation of the SM-PP.
After applying \(I_{p}\), the temperature increase should not be excessive, i.e. above \(100\,\mathrm{K}\), because that would hamper operation in UHV and degrade the tip's metallic layers. Examples of tip degradation, and solutions via the metallic-layer composition to prevent it, are
Figure 3: **Numerical study of the thermal response.** (**a**) On the right, the calculated current density across the bridge for a current pulse of \(150\,\mathrm{mA}\). The current density increases up to \(6\times 10^{11}\,\mathrm{A}\,\mathrm{m}^{-2}\) at the narrowest section of the bridge. On the left, the corresponding temperature profile. (**b**) The current pulse \(I_{p}\) has the form of an asymmetric double sigmoidal function, plotted as the black curve, with a FWHM of \(160\,\mathrm{ns}\) and a peak value of \(150\,\mathrm{mA}\). The transient temperature response (red curve) shows a rapid decay of the temperature, highlighting the efficient thermal dissipation of the bridge and ensuring mechanical stability.
discussed in Supplementary S1. Hence, we modelled the (transient) temperature response of the tip for \(I_{p}=150\,\)mA as an upper limit of the thermal increase. Figure 3a compares the simulated spatial current density across the bridge for an \(I_{p}\) of \(160\,\)ns with the thermal profile. As expected, the current density is highest at the narrowest section of the metallic film and is on the order of \(10^{11}\,\)A m\({}^{-2}\). Yet, the maximum temperature increase is only \(50\,\)K, which means operation in UHV is possible and Joule-heating damage to the metallic films is minimized. We experimentally pulsed several tips tens of times and no degradation was observed. The transient heating response was also simulated, with the results given in Figure 3b. Here, a \(160\,\)ns asymmetric double sigmoidal pulse (see Supplementary S4 for pulse details) is simulated. The temperature decreases quickly, within a microsecond, due to the efficient thermal dissipation of the Si substrate. We also studied the effect of the substrate capping material, i.e. Si coated with SiO\({}_{2}\) or MgO, on this thermal dissipation; the results are discussed in Supplementary S4.
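For reference, an asymmetric double sigmoidal pulse of the kind used in the transient simulation can be written as the product of a rising and a falling sigmoid; the rise and fall widths below are assumptions chosen to give roughly the \(160\,\mathrm{ns}\) FWHM and \(150\,\mathrm{mA}\) peak quoted above, not the exact simulation parameters.

```python
import numpy as np

def asym_double_sigmoid(t, amp=0.15, t0=250e-9, fwhm=160e-9,
                        w_rise=20e-9, w_fall=40e-9):
    """Current pulse I_p(t): sigmoid rise times complementary sigmoid fall."""
    rise = 1.0 / (1.0 + np.exp(-(t - t0 + fwhm / 2) / w_rise))
    fall = 1.0 / (1.0 + np.exp(-(t - t0 - fwhm / 2) / w_fall))
    return amp * rise * (1.0 - fall)   # peaks near amp (A) around t0

t = np.linspace(0, 1e-6, 2001)         # 1 us window, time in seconds
pulse = asym_double_sigmoid(t)         # current in A versus time
```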
To conclude the first part of this work: the design, fabrication and optimisation of the SM-PP provide a sensor with a high Q-factor. Combined with the current-controlled tip magnetization, it enables the study of the magnetic surface textures of LMO\({}_{3}\)[13, 14]. In the second part of this work, we apply the SM-PP sensor to saturate and image the weak FM islands of epitaxial LMO\({}_{3}\), as well as the AF domains in which the former are embedded.
The magnetic texture of a 6 u.c. LMO\({}_{3}\) on STO\({}_{3}\) sample was imaged with our MFM operating both below (\(T=100\,\)K) and above (\(T=300\,\)K) LMO\({}_{3}\)'s \(T_{c}=115\,\)K [13]. The first aim was to identify the AF and weak-FM texture distribution across the surface. Secondly, the SM-PP is able to magnetize magnetic islands through the tip's oriented stray field exceeding \(300\,\)mT (see Supplementary S4), and hence the size of the magnetic islands, with a lateral scale between 10 and \(20\,\)nm [14], can be observed. The same SM-PP sensor was used for all imaging, with frequency-modulation (FM) feedback. The scanning parameters were kept constant throughout all the measurements; see Supplementary S6 for methods and
experimental details.
First, we scanned at a temperature of 100 K and imaged a plateau of the LMO\({}_{3}\)/STO\({}_{3}\) stepped surface. The 90\(\times\)90 nm\({}^{2}\) topography image is given in Figure 4a and demonstrates an LMO film RMS roughness \(S_{q}\) of 32 pm. The surface of LMO\({}_{3}\) can show roughness variations of up to 1 u.c., inhomogeneously distributed across the stepped surface, which is a known surface feature of manganites [25]. The lateral resolution in topography is limited
Figure 4: **MFM images obtained with the SM-PP sensor on 6 u.c. LMO\({}_{3}\)/STO.** (**a**) Topographic image of LMO\({}_{3}\). (**b**) Multi-domain tip state MFM measurement at 100 K showing no magnetic contrast. (**c**) The SM-PP tip is magnetised into a single domain. MFM imaging reveals spatially inhomogeneous magnetic contrast at 100 K. (**d**) Typical force-distance (F-z) spectroscopy and damping (voltage) signal taken at the red areas of **c**. The F-z spectroscopy shows a sudden kink in the attractive regime, as highlighted with the blue arrow. The orange arrow indicates short-range vdW forces. The damping signal (red line), taken simultaneously, shows a sudden change in dissipation, as indicated with the red arrow. (**e**) F-z spectroscopy and damping signal taken at a blue spot of **c**, showing a significant reduction in sudden dissipation and force changes at the blue and red arrows compared to **d**. (**f**, **g**) Magnetic features observed with a poled tip. The tip’s stray field induces local magnetic domain perturbations (streaks), as indicated with the black circles and arrows. The forward and backward scans are compared. In all images the black scale bar equals 30 nm.
by the relatively large oscillation amplitude of 10 nm used for detecting the long-range magnetic force. Ideally, one would use a small amplitude for high-resolution topography and a large amplitude for lift-mode magnetic imaging [21]. We leave this for future work, as this approach of consecutively switching the amplitude currently introduces large drift in our setup.
After obtaining the local topography, we switched to MFM. The MFM signal was first acquired with a multi-domain, closed-flux tip, where negligible (out-of-plane oriented) stray field should interact with the sample's magnetic domains. Indeed, Figure 4b shows that no MFM signal could be measured above the noise level of 1.5 mHz. The lack of topography cross-talk in the lift-mode image also demonstrates that the lateral variation of the electrostatic force is negligible.
Following this, the tip was pulsed with a 160 mA current pulse for 205 ns, which aligned the tip domain in the downward direction, as indicated schematically in Figure 4c and confirmed beforehand with Kerr microscopy. For safety, the tip was retracted by 0.5 μm from the surface during the pulse, which can give rise to a lateral drift of around 10 nm in our SPM at these temperatures. With the magnetically oriented tip, we continued MFM imaging at 100 K and observed a complex landscape of magnetic textures across the scanned area, as given in Figure 4c. The smallest magnetic objects are highlighted with red circles in Figure 4c and correspond to the strongest attractive magnetic forces. These features have, on average, a diameter of 10 nm. We attribute these areas to stray-field-induced magnetised domains. Given their nominal size, it is very likely that they correspond to the weak FM textures [14], even at 100 K.
Performing force-distance (F-z) spectroscopy on the red islands of Figure 4c revealed complex behaviour and provides more evidence for weak FM properties. The tip was retracted up to 20 nm above the surface and then lowered until a notable repulsive force was observed. The frequency shift \(df\) was measured during spectroscopy. Evidence of the tip-stray-field-induced magnetic alignment is given in Figure 4d. We observe four distinct regimes. First, we note a long-range attractive force between 20 nm and 8 nm, which can be assigned to long-
range electrostatic forces. At around 7.5 nm, a sudden negative change in frequency (force) is observed, as indicated with a blue arrow. We attribute this to the significant increase of the magnetic field experienced by the sample as the SM-PP tip approaches the weak FM domain and saturates it. At 3.4 nm, indicated with an orange arrow, the attractive van der Waals force region is noted. At very small tip-sample distances the frequency shift becomes positive, evidence of repulsive forces. We also measured the damping, a signature of energy loss via local magnetization changes of the weak FM islands [7]. In Figure 4d, the red curve shows a sudden rise in the damping, as indicated with the red arrow. Likely, at this distance the weak FM islands are magnetized periodically as the tip oscillates up and down. As a comparison, Figure 4e shows the same F-z spectroscopy experiment performed on the blue areas of Figure 4c. Less perturbation of the attractive force is noted, and no measurable dissipation change is observed, as highlighted with the colored arrows. We conjecture that these blue-colored areas are the antiferromagnetic domains.
Generally, the weak FM features (colored red) are embedded in magnetic labyrinth-like domains colored yellow/green in Figure 4c. These domains are continuous and spread across the surface. Furthermore, they exhibit a smaller attractive force than the weak FM areas. Due to the magnetization of LMO\({}_{3}\) induced by the tip's large stray field, no repulsive areas could be observed. Areas depicted in blue occur in two distinct forms. First, we note extended areas, highlighted with the dashed white line. Second, blue circular objects are noted, highlighted with the blue circle. These objects show very little attractive frequency shift. Hence, excluding electrostatic forces, as these are constant across the surface, these domains form an antiferromagnetic texture, corroborating the SSM observations of Wang _et al._[13] and Anahory _et al._[14].
Furthermore, we note that the tip stray field can induce local magnetic perturbations during real-space imaging. Comparing the forward and backward scans, Figures 4f-g, the areas highlighted with the black circle show a clear distinction between the two images. We conjecture that the field from the tip perturbed the local weak FM domains. This would
also be in agreement with the observation of streak-lines as indicated with the arrows.
In conclusion, the results provide strong evidence of the imaging capabilities of the magnetically controllable SM-PP tips for weak FM islands, with a resolution better than 10 nm. First, we achieved repeatable control over the magnetization at the SM-PP tip, with a consistently distributed domain state at the tip apex. Subsequently, we demonstrated imaging of the complex magnetic texture of the rare-earth metal oxide perovskite LMO\({}_{3}\), with nanometric identification of weak FM islands. For further investigation of LMO\({}_{3}\), the SM-PP can be employed for ultra-high-resolution imaging of the local u.c. variation in film thickness and its possible correlation with the weak FM islands. Furthermore, the integration of the SM-PP in a LHe cryostat would increase the Q-factor by another order of magnitude, significantly improving the signal-to-noise ratio. Finally, in future applications the SM-PP can be combined with scanning tunneling microscopy functionality, owing to the tip's metallic layers and the electrode accessibility of the tuning fork. This way, ultra-high lateral resolution imaging of conductive metal-oxide perovskites can be combined with measurements of the long-range MFM forces without the need to switch between different setups. This possibility opens up an approach to disentangle the relations between atomic-scale structure and long-range magnetic ordering of transition metal oxides for application in spintronic and catalytic devices. Considering its enhanced sensitivity, the widened scope of the tip-on-chip design can convert the MFM/AFM from a surface analysis tool with passive probes into a more sophisticated device with active, more complex probes for characterization, e.g. nitrogen-vacancy-center diamond tips as quantum sensors for detecting ultra-small magnetic fields [26, 27] or currents [28].
## Materials and Methods
### Planar probe fabrication
The metallic layers were sputtered on a thin 150 μm Si \(<100>\) wafer (intrinsic, UniversityWafer). The wafer was cleaved with a diamond scriber into 1 mm\({}^{2}\) pieces and inspected with an optical microscope. Following this, the tip apices were inspected and selected for a
radius below \(50\,\mathrm{nm}\) with a ZEISS-Sigma SEM. A FEI Nova600i SEM-FIB was used to fabricate the bridge structure by Ga-ion etching. Sequential beam currents of 0.05, 0.46 and \(2.8\,\mathrm{nA}\) were used; near the bridge, the smallest current prevents damage and increases the etching resolution. The acceleration voltage was \(30\,\mathrm{kV}\). The planar probe was placed onto the QTF (AB38T) prong with a minimal volume (less than \(100\,\mathrm{\SIUnitSymbolMicro L}\)) of UV-curable resin applied with a syringe needle. Silver paste was used to connect the electrical leads of the planar probe to those of the QTF. EPO-TEK 4410 was used to connect the wires from the QTF to a custom PEEK sensor holder. The detailed fabrication procedure is further outlined in Supplementary S1.
#### Kerr Microscopy
A Zeiss Axio Imager.D2m Kerr microscope was used with a \(50\times\) magnification lens, assembled by Evico Magnetics with a polariser/analyser pair and manual slit diaphragm. The setup was combined with a set of water cooled Helmholtz coils for magnetic moment alignment by in-plane magnetic fields with respect to the Co film orientation. Kerrlab software was used for data acquisition.
#### Numerical calculations
MuMax3 was employed to simulate the domain structure of a \(16\,\mathrm{nm}\) Co film. An exchange length of \(5\,\mathrm{nm}\) and a grid unit cell of \(4\times 4\)\(\mathrm{nm}^{2}\) were used. For the study of the thermal and magnetic properties of the SM-PP, COMSOL Multiphysics was used with the AC/DC module and the Heat Transfer module. Further numerical details are outlined in Supplementary S4.
#### Imaging in UHV
A Scienta Omicron VT-SPM setup was modified to carry two additional electrical contacts for pulsing the planar probe tip. The contacts are constructed from two gold-coated pogo pins placed on a custom PEEK holder on the scanning tube. A square-wave generator (Agilent 33120A) was connected to a custom MOSFET circuit to reduce the pulse down to several hundred ns. Coax cables were used to connect the function generator to the VT-SPM. Detailed imaging methods are outlined in Supplementary S5.
The authors thank W. Dijkstra for assistance in both the modification of the UHV-SPM and the fabrication of the custom pulse generator. Special thanks to H. Hilgenkamp of the University of Twente, Netherlands, for providing the 6 u.c. LaMnO\({}_{3}\) thin film on SrTiO\({}_{3}\). O. Kurnosikov acknowledges support from ANR-15-IDEX-04-LUE CAP-MAT and by the “FEDER-FSE Lorraine et Massif des Vosges 2014-2020” programme. Financial support from the Eindhoven University of Technology is acknowledged.
## Supplementary S1: Planar probe fabrication

The fabrication procedure of the SM-PP is schematically given in Figure 5. The use of a thin 150 μm Si wafer (intrinsic, from UniversityWafer) enables easy mechanical cleaving without large applied forces and simultaneously reduces the planar probe's mass. First, the wafer is diced into 20\(\times\)20 mm\({}^{2}\) pieces, as shown in Figure 5b. Single-crystal silicon (100) is known to cleave in atomically smooth planes [29]. By cleaving in two perpendicular directions, a nanometer-scale tip apex can be achieved. The cleaving results in square pieces of up to 1\(\times\)1 mm\({}^{2}\), and the tip apices are evaluated for their radius (sharpness) with SEM; see Figure 5e for a large-scale image. For integration into a sensor, tips with a radius below 50 nm were chosen for further fabrication steps; tips with larger radii were discarded. Because of the square shape of the cleaved planar probes, each piece offers up to 4 adequate tip apices. This makes many excellent tips with a small radius available, fabricated in a short amount of time.
With FIB milling, significant Ga-ion implantation occurred in the Si wafer, which electrically shorted
Figure 5: **Fabrication procedure of the switching-magnetisation planar probe.** (**a**) A 150 μm thin intrinsic (110) silicon wafer is diced into smaller pieces (**b**). (**c**) The pieces of wafer are sputtered with a 150 nm MgO layer for the electrical insulation needed to counteract the doping effect of Ga-ion implantation by FIB milling. Following this, the metallic layers are sequentially deposited by plasma sputtering deposition, as schematically drawn. (**d**) After film deposition, the wafer pieces are cleaved into small rectangular planar probes with 90\({}^{\circ}\) angles forming the SPM tip. (**e**) Finally, the probe is functionalised with a bridge fabricated by FIB milling. (**f**) SEM image of a FIB-fabricated bridge structure. (**g**) The magnetisation curve of the thin Co film on a planar probe, as measured with Kerr microscopy.
the milled trench and voided the bridge functionality. Hence, we resorted to growing an insulating spacer layer of SiO\({}_{2}\) or MgO between the metallic stack and the wafer. The integration of a microscale current pathway requires reducing thermally induced damage from excessive Joule heating. To enhance thermal management near the tip apex, a high-thermal-conductivity MgO spacer layer is sputtered on top of the silicon wafer. MgO has a sufficient thermal conductivity of about \(40\,\mathrm{W}\,\mathrm{m}^{-1}\,\mathrm{K}^{-1}\)[30]. Furthermore, MgO simultaneously provides the electrical insulation needed to reduce leakage currents. In Supplementary S4 we discuss the thermal dissipation behaviour of Si/MgO and Si/SiO\({}_{2}\) substrates after sending a current pulse through the bridge.
Next, the metallic multilayer was deposited. The planar probe metallic stack consists of the following structure; see Figure 5c. First, a \(4\,\mathrm{nm}\) tantalum (Ta) seed layer is grown to induce good mechanical adhesion of the subsequent metal layers to the MgO/Si substrate. The Ta seed layer also smooths the surface roughness of the MgO layer to some extent, which still results in a final RMS roughness of \(6\,\mathrm{nm}\). Such a roughness can actually be beneficial, as it can be expected that near the cleaved tip apex small nanometer-scale bumps form the nano-tip and reduce the van der Waals forces compared to a fully triangular structure, as these scale with the tip volume. The measured roughnesses of the SiO\({}_{2}\)- and MgO-layered films are given in Figure 6.
Subsequently, a \(30\,\mathrm{nm}\) current-carrying Pt layer is grown. This relatively thick Pt layer has the lowest film electrical resistance of the metallic stack; the majority of the current will flow through this layer. The ferromagnetic film is made of \(15\,\mathrm{nm}\) Co. Finally, the stack is capped with \(3\,\mathrm{nm}\) of Ta and \(3\,\mathrm{nm}\) of Pt to induce high mechanical rigidity of the tip apex and prevent native oxidation.
When using (native) SiO\({}_{2}\) as a spacer layer between the intrinsic silicon substrate and the metallic stack of the planar probe, the low thermal conductivity of SiO\({}_{2}\), only \(1\,\mathrm{W}\,\mathrm{m}^{-1}\,\mathrm{K}^{-1}\), limits the thermal durability of the device. This low thermal conductivity was found to be insufficient to prevent Joule heating damage to the bridge when using pulses above 80 mA, while limiting the current below 80 mA proved insufficient for full domain reversal in many devices with micrometer-wide bridges. Figures 7a, b and c show SEM images of FIB-fabricated tips with either a single straight trench or the crossed configuration. The tips have a nominal bridge width, measured from the end of the trench to the tip end, of (a) 12 μm, (b) 10 μm and (c) 8 μm. Figures 7e, f and g show optical microscope images of observed tip damage, highlighted with orange circles, after sequential 80 mA pulsing. In these images, the metallic films have clearly degraded through excessive Joule heating. The Joule heating is most intense where the bridge width is smallest, i.e. near the tip end, corresponding to the highest current density. Figures 7d and 7h show AFM topographic images of the FIB trench end after 2 pulses. Clearly, the metallic film shows first signs of degradation or "peel-back" before full layer destruction occurs. With the inclusion of MgO, which is directly sputtered on top of the silicon wafer, we observed no Joule-heating-induced damage; MgO has a much higher thermal conductivity of 40 W m\({}^{-1}\) K\({}^{-1}\) [30]. MgO devices pulsed over 25 times showed no degradation or change of the bridge resistance, even for pulses up to 250 mA.
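The benefit of MgO can also be rationalised with a back-of-the-envelope estimate of the thermal diffusion length \(L=\sqrt{\kappa t/(\rho c_{p})}\) within one pulse. In the sketch below, the thermal conductivities are the values quoted above, while the densities and heat capacities are textbook values we assume for the estimate:

```python
import math

# Thermal diffusion length L = sqrt(alpha * t_pulse), alpha = kappa/(rho*c_p).
# kappa values are those quoted in the text; rho and c_p are textbook values
# assumed by us for this estimate.
t_pulse = 160e-9  # pulse duration [s]
materials = {
    "SiO2": {"kappa": 1.0,  "rho": 2200.0, "cp": 740.0},
    "MgO":  {"kappa": 40.0, "rho": 3580.0, "cp": 930.0},
}
for name, m in materials.items():
    alpha = m["kappa"] / (m["rho"] * m["cp"])   # thermal diffusivity [m^2/s]
    L = math.sqrt(alpha * t_pulse)
    print(f"{name}: alpha = {alpha:.1e} m^2/s, L(160 ns) ~ {L*1e9:.0f} nm")
# MgO spreads the heat ~4-5x farther within one pulse, consistent with the
# observed robustness of the MgO-buffered bridges.
```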
Figure 6: **Surface roughness of planar probe tips measured with AFM.** (**a**) A topographic AFM image of a SiO\({}_{2}\)-layered planar probe covered with the multi-layer metallic stack. The roughness is around 400 pm. (**b**) A planar probe surface with a 100 nm MgO layer instead of SiO\({}_{2}\). The surface roughness is larger than in (**a**), around 6 nm. The black arrows point to the FIB-milled trenches.
### Supplementary S2: Kerr microscopy
The Kerr microscopy setup (Zeiss Axio Imager.D2m with a 50\(\times\) magnification lens, polariser/analyser pair, manual slit diaphragm, and water-cooled Helmholtz coils) is described in the Materials and Methods. The slit diaphragm makes it possible to filter the light and select the magnetisation direction to visualise (horizontal or vertical moment sensitivity). A custom-made sample holder was fabricated with integrated electrical wiring. The holder offered three degrees of positioning freedom, needed to position the SM-PP below the Kerr lens. A custom pulsing circuit was used to apply current pulses of 50 mA to 300 mA for 150 ns to 500 ns. By ramping up the current in consecutive pulses, the domain switching threshold was found.
It was found that fabricating a single straight FIB trench does not always result in stable magnetic domain reversal, with an example given in Supplementary Figure 8b. For these straight-trench devices, a stable domain was observed only after several consecutive current pulses. Even then, the domain does not extend across the complete bridge region, as indicated with the black arrows. With a single-trench structure, only two nucleation sites are
Figure 7: **SEM and AFM images of Joule-heating-induced bridge damage with a SiO\({}_{2}\) layer.** (**a**) - (**c**) SEM images of pristine tips with FIB-fabricated bridges. (**e**) - (**g**) The same tips after consecutive current pulsing at 80 mA, showing film degradation. (**d**) and (**h**) AFM topography shows signs of film degradation and "peel-back" from excessive heating.
formed, near the very tip apex where the current density is highest (orange arrows). Although our numerical model suggests that it is possible to fully switch the domain with a single pulse, we attribute the fact that some devices need more pulses to the FIB bridge not always being placed perfectly along the mirror-symmetric axis of the probe. Hence, the current distribution is not equal along the bridge, which could hamper full domain reversal near the tip apex. In our numerical models we have always assumed fully symmetric structures.
To remedy this problem, a smaller perpendicular trench was FIB-milled near the original trench, close to the tip apex; see Figure 8c. This design adds an extra nucleation site, indicated with orange arrows, in close proximity to the bridge end. It was observed that this enables the formation of a very stable domain that can be reliably reversed with a single pulse in all fabricated tips (more than 15 tips). Figure 8c shows gray-scale Kerr microscope images focused on the crossed bridge design. The Kerr sensitivity was selected to correspond to the in-plane orientation of the domain. Thus, the double-crossed-trench solution overcomes the non-perfect mirror-symmetric alignment of the FIB trench and induces multiple nucleation
Figure 8: **Kerr microscopy.** (**a**) Setup to measure the tip magnetisation of a SM-PP device. The SM-PP is placed on a movable holder to align the tip with the lens of the microscope. (**b**) and (**c**) show SEM images of FIB-fabricated tips with either a straight trench or a crossed configuration. The black arrows highlight the pulsed Co tip domain, which is more stable with the crossed structure. The orange arrows point to the bridge sections of smallest diameter; these regions have the highest current density under pulsing.
sites across the bridge.
### Supplementary S3: Note on the QTF function
The large mass of the planar probe would decrease the Q-factor \(Q\) below a few hundred, with \(Q\) strongly influencing the sensitivity of the force sensor [24]. Hence, mass retuning of the QTF is performed by counterbalancing the prong carrying the planar probe to offset the added mass of the oversized tip. This effectively restores \(Q\) to its pristine value [20]. Enhancing \(Q\) is vital for the SM-PP to function, for two reasons. First, the frequency noise in frequency-modulation (FM) AFM scales inversely with \(Q\)[24]. Hence, a higher \(Q\) results in the larger signal-to-noise ratio needed to measure the small forces associated with magnetic stray-field gradient sensing [24].
Second, due to the mentioned separation of the electrode layout, the actual force-to-voltage conversion signal, generated by the piezoelectric effect of the quartz tuning fork, is measured in the upper prong of the QTF. This prong essentially measures the tip-sample force indirectly and works most efficiently if the two oscillating prongs operate in the anti-phase mode with little dissipation in the connecting node [20]. A low \(Q\) would lead to insufficient coupling between the two oscillating prongs and hence impede the force sensing. This is fundamentally different from the qPlus sensor, where only one prong oscillates and hence the force sensing is limited to this prong [24].
Using the retuned tuning-fork approach, we explore the possibility of using high-spring-constant \(k\) (\(\sim 10^{4}\,\mathrm{N}\,\mathrm{m}^{-1}\)) force sensors, remarkably stiffer than the qPlus (\(1500\,\mathrm{N}\,\mathrm{m}^{-1}\)). For small force gradients, the frequency shift obeys \(\delta\omega/\omega=-1/(2k)\cdot dF/dz\)[24], so the magnetic signals of interest correspond to frequency shifts down to several tens of mHz. Hence, the tip needs to scan only a few nanometers above the sample surface to measure the magnetic stray-field gradients above the noise level of approximately \(1.5\,\mathrm{mHz}\). Because only force gradients above this limit are detected, signals from areas beyond the dimensions of the tip are not picked up, and hence the resolution can be increased
significantly, as needed for imaging the nanometer-scale SP islands of LMO\({}_{3}\). Our SM-PP realises a \(Q\) above 20 000 at room temperature in ultra-high vacuum (UHV), which results in a noise floor of 1.5 mHz. For MFM imaging of nanometer-sized SP textures, the high \(k\) and large \(Q\) are beneficial, as forces coming from areas not directly beneath the tip are not picked up by the SM-PP because \(df/dz\) decreases rapidly below the noise floor.
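As a concrete illustration of these numbers, the minimal sketch below evaluates the smallest detectable force gradient from the quoted noise floor; the \(32.768\,\mathrm{kHz}\) resonance frequency is our assumption for a standard quartz tuning fork and is not stated in the text:

```python
# Sensor parameters; f0 = 32.768 kHz is the nominal quartz tuning-fork
# resonance and is an assumption, not a value quoted in the text.
f0 = 32768.0       # resonance frequency [Hz] (assumed)
k = 1.0e4          # spring constant [N/m], as quoted for the SM-PP
df_noise = 1.5e-3  # frequency noise floor [Hz], as quoted

# Small-gradient FM-AFM relation: |df/f0| = (1/2k) * dF/dz, so the
# minimum detectable force gradient follows from the noise floor.
dFdz_min = 2.0 * k * df_noise / f0
print(f"Minimum detectable force gradient: {dFdz_min*1e3:.2f} mN/m")
# ~0.9 mN/m with these assumptions
```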
## Supplementary S4: Numerical calculations of thermal and magnetic properties
### Influence of current pulse characteristics on Joule heating
The (transient) temperature response of the SM-PP bridge was numerically studied with COMSOL Multiphysics. First, we address the current pulse shape.
In our SPM setup, the capacitance of the cables connected to the SM-PP affects the pulse shape, resulting in an asymmetric shape deviating from the square current pulse input. We simulate this curve in COMSOL using the asymmetric double sigmoidal function, which has the following functional form:
\[I(t)=\frac{A}{1+\exp\left(-\dfrac{t-t_{c}+w_{1}/2}{w_{2}}\right)}\left[1-\frac{1}{1+\exp\left(-\dfrac{t-t_{c}-w_{1}/2}{w_{3}}\right)}\right] \tag{1}\]
Here, \(w_{1}\) defines the width of the pulse, \(w_{2}\) and \(w_{3}\) together define the asymmetry of the pulse, \(t_{c}\) is the centre of the pulse, and \(A\) is proportional to the amplitude of the current pulse \(I_{0}\). We fit Eq. 1 to the measured pulse, from which we obtain the full width at half maximum (FWHM) and the pulse amplitude \(I_{0}\).
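A minimal sketch of this fitting step is given below; the synthetic 'measured' pulse and the scipy-based fit are ours and only illustrate the procedure, not the actual acquired data:

```python
import numpy as np
from scipy.optimize import curve_fit

def asym_double_sigmoid(t, A, t_c, w1, w2, w3):
    """Asymmetric double sigmoidal pulse, Eq. (1)."""
    rise = 1.0 / (1.0 + np.exp(-(t - t_c + w1 / 2) / w2))
    fall = 1.0 - 1.0 / (1.0 + np.exp(-(t - t_c - w1 / 2) / w3))
    return A * rise * fall

# Synthetic "measured" pulse (~150 mA peak) on a ns time base, with noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1000, 2001)               # time [ns]
true = (0.20, 300.0, 140.0, 20.0, 60.0)      # A [A], t_c, w1, w2, w3 [ns]
i_meas = asym_double_sigmoid(t, *true) + rng.normal(0, 2e-3, t.size)

popt, _ = curve_fit(asym_double_sigmoid, t, i_meas,
                    p0=(0.1, 250.0, 100.0, 10.0, 10.0))

# Amplitude and FWHM from the fitted curve, via half-maximum crossings.
fit = asym_double_sigmoid(t, *popt)
I0 = fit.max()
above = t[fit >= I0 / 2]
print(f"I0 = {I0*1e3:.0f} mA, FWHM = {above[-1] - above[0]:.0f} ns")
```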
Heat transport under ultra-high vacuum (UHV) conditions is modelled via heat conduction in COMSOL. Thermal dissipation occurs via conduction within the planar probe and via radiative emission. The environment is set to room temperature, as is the case for the Scienta Omicron VT-SPM, which lacks a cryostat; only the sample is cooled. The radiative
Figure 9: **Temperature response of the SM-PP for different pulse characteristics and spacer layer materials.** (**a**) Increase in temperature for different current densities. Even for very high current densities exceeding \(10^{12}\,\mathrm{A}\,\mathrm{m}^{-2}\), the temperature barely exceeds \(400\,\mathrm{K}\). (**b**) The temperature magnitude strongly depends on the pulse length between \(300\) and \(900\,\mathrm{ns}\). (**c**) Operation in UHV at \(77\,\mathrm{K}\) should be possible, since only a marginal increase in temperature is observed for a pulse sufficient to change the magnetisation of the SM-PP. (**d**, **e**) Simulation of the temperature response of MgO vs. SiO\({}_{2}\) spacer layers. The pulse shape is varied between an asymmetric double sigmoid (**d**) and a square pulse (**e**), highlighting the need to keep the time at maximum current as short as possible to reduce thermal heating.
thermal dissipation is set by the surface emissivity \(\varepsilon\). We used the following values for the materials of the functionalised planar probe: \(\varepsilon_{\mathrm{Si}}=0.6\), \(\varepsilon_{\mathrm{Pt}}=0.04\), \(\varepsilon_{\mathrm{SiO_{2}}}=0.8\) and \(\varepsilon_{\mathrm{MgO}}=0.5\). The other metals, Ta and Co, are neglected.
First, we consider the maximum current through the bridge that does not overheat it. To this end, we varied the current density \(J\) in the planar probe, which is related to the total current \(I_{p}\) via \(J=\frac{I_{p}}{d_{Pt}d}\), where \(d_{Pt}\) is the thickness of the main platinum layer and \(d\) is the width of the bridge gap (\(5\,\mathrm{\SIUnitSymbolMicro m}\)). The situation we consider is the asymmetric pulse from Equation 1 with a current peak of \(150\,\mathrm{mA}\), which amounts to a current density of \(6\times 10^{11}\,\mathrm{A}\,\mathrm{m}^{-2}\). We vary the current density from \(10^{11}\,\mathrm{A}\,\mathrm{m}^{-2}\) to \(10^{12}\,\mathrm{A}\,\mathrm{m}^{-2}\), in steps of \(0.5\times 10^{11}\,\mathrm{A}\,\mathrm{m}^{-2}\). The results are shown in Figure 9a: the temperature increases by barely \(20\,\mathrm{K}\) for the smallest current densities, and by approximately \(115\,\mathrm{K}\) for the highest current density.
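For orientation, the snippet below evaluates \(J\) for a 150 mA peak; whether the current is carried by the 30 nm Pt layer alone or spread over the full \(\sim\)55 nm metallic stack is our assumption, made to illustrate how sensitive the estimate is to the adopted conducting thickness:

```python
# Sketch of the current-density estimate J = I_p / (d * t), with d the
# 5 um bridge width and t the conducting thickness. The two thickness
# choices below are our assumptions for illustration.
I_p = 150e-3   # peak current [A]
d = 5e-6       # bridge width [m]
for label, t in [("30 nm Pt layer only", 30e-9),
                 ("full ~55 nm metal stack", 55e-9)]:
    J = I_p / (d * t)
    print(f"{label}: J = {J:.1e} A/m^2")
# The quoted 6e11 A/m^2 is closest to the full-stack estimate.
```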
To characterize the temperature increase with respect to the current pulse length, we simulate three current pulses with different durations: 300, 600 and \(900\,\mathrm{ns}\) FWHM. The pulses are indicated in Figure 9b with dark solid lines. The resulting temperature evolution at the probe's tip end is shown in Figure 9b with dotted red lines. Indeed, the temperature increases with the duration of the pulse: by about \(45\,\mathrm{K}\) in the case of a short \(160\,\mathrm{ns}\) pulse, up to an increase of over \(100\,\mathrm{K}\) for the \(900\,\mathrm{ns}\) pulse. This large difference illustrates that the current pulse duration has a major impact on the temperature increase, so the current pulses need to be as short as possible to prevent overheating the probe.
We also investigated the thermal aspects of the planar probe when it is cooled down to liquid nitrogen temperature (\(77\,\mathrm{K}\)). The values of the surface emissivity are set to the same values as at \(293\,\mathrm{K}\). The temperature evolution of the probe's tip at \(77\,\mathrm{K}\) is shown in Figure 9c. Evidently, the temperature of the probe increases by only \(35\,\mathrm{K}\), which would make operation in a cryostat possible.
Finally, we discuss the thermal characteristics of a SiO\({}_{2}\) layer. As discussed in Supplementary S1, the inclusion of SiO\({}_{2}\) as a spacer layer to reduce electrical shorting caused by FIB milling turned out to negatively impact the thermal dissipation, eventually leading to bridge damage after several pulses above 80 mA. Numerical calculations for MgO and SiO\({}_{2}\) are shown in Figure 9d. These plots show a temperature increase to a maximum above 400 K when SiO\({}_{2}\) is used. These values were calculated with a pulse of Eq. 1 with a FWHM of 160 ns. In the case of a 230 nm MgO layer, the temperature increase is less than 40 K, supporting the experimentally observed stability of MgO as a spacer layer. For comparison, we have also used a block pulse of 160 ns with a magnitude of 150 mA, with the thermal response shown in Figure 9e: much higher temperature peaks are observed, exceeding 500 K. Hence, the time spent at the peak current needs to be minimized to only a few ns to prevent damage to the tip, and a block pulse is not suitable.
#### MuMax3 calculation of tip domain orientation
MuMax3 [31] was used to simulate a 16 nm Co film, with an exchange length of 5 nm and a grid unit cell of 4\(\times\)4 nm\({}^{2}\). The following values are taken to simulate the cobalt: saturation magnetisation \(M_{sat}=1.4\) MA m\({}^{-1}\), exchange constant \(A_{ex}=16\) pJ m\({}^{-1}\), and a \(1^{st}\)-order uniaxial anisotropy constant \(Ku_{1}\) of 0.72 MJ m\({}^{-3}\).
#### Note on the magnetic stray field computation
To study the magnetic properties of the SM-PP, specifically the distribution of its tip magnetic field components \(B_{x}\), \(B_{y}\) and \(B_{z}\) and the field magnitude, a 3D COMSOL FEM model was made. The SM-PP is modelled as a 15 nm ferromagnetic Co layer with a magnetization of \(M=1400\) kA m\({}^{-1}\). In the main text we showed the stray field of the SM-PP in the in-plane direction of the tip as it extends outwards, when placed at a certain height \(z\) above a flat sample. In the calculations, the SM-PP and sample are placed in an air medium. Figure 10 presents the modeled stray field results.
Figures 10a and b show the field distribution at two different cross-sections: in the \(xy\)
plane, and parallel to the probe surface, respectively. It is important to note that the probe length is much larger than the tip-sample distance \(z\). However, for computational reasons the probe was made smaller ("cut off"), which gives rise to an additional curvature of the field near the upper side of the probe. Near the tip, where the calculations were performed, the effect of this curvature has been taken into account in the final calculations. Figure 10c presents the calculated \(B_{x}\), \(B_{y}\) and \(B_{z}\) as a function of the tip-sample distance \(z\).
We calculated the spatial distribution of the tip's stray field \(\vec{B}\) across the sample surface. Figure 11 shows the lateral distribution of \(B_{x}\), \(B_{y}\) and \(B_{z}\) at a tip-sample distance of 1 nm. We expect this to be the closest tip-sample distance during oscillation; at the lowest point of the oscillation with respect to the sample surface, the stray field perturbation should be strongest. The \(B_{x}\) and \(B_{y}\) components in Figures 11a and 11b show a sign inversion close to the (0,0) point, where the tip apex is positioned, as expected from the symmetry of the planar probe. The magnitude of \(\vec{B}\) can become quite large, up to 100 mT. At (0,0) the field is minimal for \(B_{y}\), whereas for \(B_{x}\) a sizable field of up to 80 mT is observed directly below the tip. For a fabricated planar probe, the tip is never fully symmetric, nor are the bridge shape and its position on the tip apex; the in-plane values are hence expected to be larger below the tip during experiments. The out-of-plane component \(B_{z}\) is strongest at
Figure 10: **COMSOL-calculated magnetic field distribution around the tip**. (**a**), (**b**) Arrow plot of the magnetic field distribution. Color bar gives the absolute strength in mT. (**c**) Magnetic field components \(B_{x}\), \(B_{y}\) and \(B_{z}\) as a function of tip-sample distance \(z\).
(0,0), as given in Figure 11c. Its magnitude can reach over 300 mT. For measuring LMO\({}_{3}\), having both an in-plane and an out-of-plane field is beneficial, as the SM-PP in-plane field magnetises the SP islands, while \(B_{z}\) is used to read out the stray signal. This is similar to the SSM experiments of Anahory _et al._[14] and supports our work on observing the complex SP islands.
We also examined the dependence of the magnetic field magnitude \(|B|\) and its distribution on \(z\), between 5 nm and 20 nm. The spatial distribution of \(|B|\) is given in Figures 11d and 11e. Evidently, \(|B|\) increases by almost a factor of 3 as \(z\) decreases down to 5 nm. This may explain why we observe a sudden change (kink) in the F-z spectroscopy in Figure 4 of the main text: the SP islands are magnetized and a sudden change in the magnitude of the attractive force occurs.
Finally, we examined the effect of the direction of the Co magnetisation along the planar probe symmetry axis on the tip stray field magnitude and distribution. Figure 12 shows
Figure 11: **Distribution of the SM-PP tip stray field on a sample surface in the xy plane.** (**a**)-(**c**) Numerically calculated stray field components across the sample surface placed at z = 0 nm, with the tip apex positioned at 1 nm. (**d**), (**e**) Numerically calculated stray field and its spatial distribution at different tip heights \(z\).
the calculated \(|B|\) at a fixed \(z=0.5\,\)nm. Here, the relative Co magnetisation directions on both sides of the symmetry axis are depicted in blue and red, with the angle between the two directions indicated as \(\theta\). Two main conclusions can be derived from these results: first, the magnitude of the magnetic field strength is relatively independent of \(\theta\); second, the shape of the magnetic field distribution on the surface changes only slightly. In conclusion, changing the symmetry of the Co magnetisation along the planar probe's vertical symmetry axis does not alter the final \(|B|\) distribution much. Hence it is reasonable to assume that \(|B|\) depends mainly on the tip-sample distance \(z\).
## Supplementary S5: Methods of the SM-PP MFM experiment
### \(\text{LMO}_{3}\) sample in UHV
The 6 u.c. \(\text{LMO}_{3}/\text{STO}_{3}\) sample was exposed to ambient air prior to measurement; hence we expect a thin layer of contaminants to be present on the \(\text{LMO}_{3}\) surface before loading into
Figure 12: **Distribution of the SM-PP tip stray field on a sample surface in the xy plane, depending on the off-axis orientation of the in-plane magnetisation of the Co layer.**
UHV. Gentle heating to 100\({}^{\circ}\)C in UHV removed the water adsorbates. Higher temperatures were not used, to prevent oxygen diffusion, which likely alters the stoichiometry and magnetic behaviour [32, 33]. However, we cannot rule out that small stoichiometric changes have occurred (also over time) or vary between sample growths. This variable offers possibilities for future investigation of the relationship between stoichiometry and the magnetic textures.
### FM-MFM imaging and data visualisation
To measure the long-range MFM signal, the tip was retracted by 5 nm and followed the previously obtained topography, the so-called lift mode [1]. For measuring the long-range magnetic gradient force \(dF_{m}/dz\), the oscillation amplitude \(A\) was set to 10 nm.
A Scienta Omicron VT-SPM was used with a custom tip holder and a low-temperature sample holder for enhanced thermal conduction. Imaging was performed by locking the amplitude and tracking the frequency shift with a phase-locked loop (PLL). Care was taken to keep all scanning parameters constant, with optimized feedback settings for the phase, amplitude and frequency. The PLL bandwidth was set to 384 Hz. After a topography line-scan was obtained, the tip was lifted by the predefined lift height. Forward and backward scans were compared to verify repeatability and exclude artifacts. We imaged with a speed of 39 nm s\({}^{-1}\) at a resolution of 256\(\times\)256 pixels. Gwyddion software was used for the AFM and MFM data plotting and analysis. We used plane projection to level the images, and line alignment in the vertical direction. Due to thermal drift induced by the thermal gradient between the cold sample and the room-temperature SM-PP (as dictated by the VT-SPM design), a small eigenmode frequency shift of 50 mHz was observed during the prolonged imaging; hence we renormalized the MFM frequency shift to the first few line-scans. This effect had no noticeable influence on the MFM lateral dimensions. Finally, both the sample and the tip are grounded. Future work will translate the SM-PP to a cryostat to reduce thermal gradients, which would also significantly increase \(Q\).
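A minimal sketch of this post-processing chain (plane levelling plus row alignment, which also absorbs the slow drift) on synthetic data is given below; it mimics the Gwyddion steps but is not the actual analysis code:

```python
import numpy as np

def plane_level(img):
    """Subtract a least-squares plane fit (Gwyddion-style plane levelling)."""
    ny, nx = img.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    G = np.column_stack([x.ravel(), y.ravel(), np.ones(img.size)])
    coeff, *_ = np.linalg.lstsq(G, img.ravel(), rcond=None)
    return img - (G @ coeff).reshape(ny, nx)

def align_rows(img):
    """Remove per-row median offsets (line alignment along the slow axis);
    this also absorbs a slow, monotonic frequency drift between line-scans."""
    return img - np.median(img, axis=1, keepdims=True)

# Synthetic 256x256 frequency-shift map [Hz]: tilt + drift + 1.5 mHz noise.
rng = np.random.default_rng(1)
rows = np.arange(256)[:, None]
cols = np.arange(256)[None, :]
data = 2e-4 * rows + 1e-4 * cols + rng.normal(0, 1.5e-3, (256, 256))
flat = align_rows(plane_level(data))
print(f"RMS before: {data.std()*1e3:.2f} mHz, after: {flat.std()*1e3:.2f} mHz")
```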
## Supplementary S6: Variable temperature MFM
|
2307.14421 | On the likelihoods of finding very metal-poor (and old) stars in the
Milky Way's disc, bulge, and halo | Recent observational studies have uncovered a small number of very metal-poor
stars with cold kinematics in the Galactic disc and bulge. However, their
origins remain enigmatic. We select a total of 138 Milky Way (MW) analogs from
the TNG50 cosmological simulation based on their $z=0$ properties: disky
morphology, stellar mass, and local environment. In order to make more
predictive statements for the MW, we further limit the spatial volume coverage
of stellar populations in galaxies to that targeted by the upcoming 4MOST
high-resolution survey of the Galactic disc and bulge. We find that across all
galaxies, $\sim$20 per cent of very metal-poor (${\rm [Fe/H]} < -2$) stars
belong to the disk, with some analogs reaching 30 per cent. About 50$\pm$10 per
cent of the VMP disc stars are, on average, older than 12.5 Gyr and
$\sim$70$\pm$10 per cent come from accreted satellites. A large fraction of the
VMP stars belong to the halo ($\sim$70 per cent) and have a median age of 12 Gyr. Our
results with the TNG50 cosmological simulation confirm earlier findings with
simulations of fewer individual galaxies, and suggest that the stellar disc of
the Milky Way is very likely to host significant amounts of very- and
extremely-metal-poor stars that, although mostly of ex situ origin, can also
form in situ, reinforcing the idea of the existence of a primordial Galactic
disc. | Diego Sotillo-Ramos, Maria Bergemann, Jennifer K. S. Friske, Annalisa Pillepich | 2023-07-26T18:00:02Z | http://arxiv.org/abs/2307.14421v1 | On the likelihoods of finding very metal-poor (and old) stars in the Milky Way's disc, bulge, and halo
###### Abstract
Recent observational studies have uncovered a small number of very metal-poor stars with cold kinematics in the Galactic disc and bulge. However, their origins remain enigmatic. We select a total of 138 Milky Way (MW) analogs from the TNG50 cosmological simulation based on their \(z=0\) properties: disky morphology, stellar mass, and local environment. In order to make more predictive statements for the MW, we further limit the spatial volume coverage of stellar populations in galaxies to that targeted by the upcoming 4MOST high-resolution survey of the Galactic disc and bulge. We find that across all galaxies, \(\sim\)20 per cent of very metal-poor (\({\rm[Fe/H]}<-2\)) stars belong to the disk, with some analogs reaching 30 per cent. About 50\(\pm\)10 per cent of the VMP disc stars are, on average, older than 12.5 Gyr and \(\sim\)70\(\pm\)10 per cent come from accreted satellites. A large fraction of the VMP stars belong to the halo (\(\sim\)70 per cent) and have a median age of 12 Gyr. Our results with the TNG50 cosmological simulation confirm earlier findings with simulations of fewer individual galaxies, and suggest that the stellar disc of the Milky Way is very likely to host significant amounts of very- and extremely-metal-poor stars that, although mostly of ex situ origin, can also form in situ, reinforcing the idea of the existence of a primordial Galactic disc.
keywords: galaxies: spiral - galaxies: interactions - galaxies: structure - Galaxy: disc - Galaxy: structure - Galaxy: evolution - methods: numerical
## 1 Introduction
One of the most exciting questions in modern observational astrophysics is the existence of primordial, so-called Population III stars (e.g. Beers et al., 1985; Beers, 2000; Christlieb et al., 2002; Sneden et al., 2003; Frebel et al., 2005; Frebel and Norris, 2015). Despite decades of theoretical and observational research, no such objects have been discovered yet, although their successors have been identified and it has become possible to link their properties to enrichment in individual stellar explosions (e.g. Keller et al., 2014; Howes et al., 2015; Takahashi et al., 2018; Ji et al., 2020; Yong, 2020; Hansen et al., 2020; Skuladottir et al., 2021; Lagae et al., 2023).
In this work, we explore the possibility that the Milky Way galaxy may host a primordial disc, that is, qualitatively, an old in situ disc formed at \(z\gtrsim 2\) out of stars born from the rotationally-supported pristine gas. In other words, the question is whether very metal-poor stars1 could be hiding in the Galactic disc, in addition to their established association with the Galactic halo (e.g. Schorck et al., 2009; Youakim et al., 2020; Bonifacio et al., 2021) and the bulge (e.g. Schlaufman and Casey, 2014; Howes et al., 2016; Reggiani et al., 2020). Owing to the overall progression of cosmic chemical enrichment, one expects that more metal-poor stars form in larger systems at earlier times or in smaller systems at later times, and their presence would be intricately linked to the hierarchical growth of galaxies (White and Springel, 2000). From a theoretical point of view (e.g. Searle and Zinn, 1978; Salvadori et al., 2010), and as also confirmed by recent cosmological simulations, such as FIRE (Hopkins et al., 2014), APOSTLE (Sawala et al., 2016; Fattahi et al., 2016), and TNG50 (Nelson et al., 2019; Pillepich et al., 2019), it is expected that most metal-poor (MP) stellar populations follow isotropic distributions of orbits and are therefore preferentially confined to the spheroidal components, bulge and halo (e.g. Chen et al., 2023, for TNG50). This is expected because such stars either formed ex situ and were accreted, through the progressive stripping of smaller satellites, to form mostly the halo; or they formed in situ at early stages, when there was no rotationally supported component, or in the primordial disc on orbits later heated by mergers (Starkenburg et al., 2017; El-Badry et al., 2018; Chen et al., 2023).
Footnote 1: Following Beers and Christlieb (2005), stars are defined as _metal-poor_ (\({\rm[Fe/H]}<-1\)), _very metal-poor_ (\({\rm[Fe/H]}<-2\)), _extremely metal-poor_ (\({\rm[Fe/H]}<-3\)), _ultra metal-poor_ (\({\rm[Fe/H]}<-4\)), and _hyper metal-poor_ (\({\rm[Fe/H]}<-5\)); currently these categories are being extended to \({\rm[Fe/H]}<-10\).
Observational searches have yielded over \(10^{5}\) stars with metallicity [Fe/H] \(\lesssim-2\) (Bonifacio et al., 2000; Li et al., 2018; Chiti et al., 2021; Huang et al., 2022; Andrae et al., 2023), representing all morphological components of the Galaxy, including the bulge (Howes et al., 2015; Koch et al., 2016; Reggiani et al., 2020), the halo (Hayes et al., 2018; Limberg et al., 2021), and the disc (Di Matteo et al., 2020; Sestito et al., 2020; Carter et al., 2021; Fernandez-Alvar et al., 2021). With estimated fractions of 25 - 30 per cent (Sestito et al., 2019, 2020), very metal-poor (VMP) stars with disky orbits are, perhaps surprisingly, not rare and are confined close to the Galaxy's midplane (see also Venn et al., 2020; Di Matteo et al., 2020). There is evidence for these systems being preferentially on prograde orbits (Carter et al., 2021; Carollo et al., 2023). However, the origin of the kinematic asymmetry
is currently debated. Santistevan et al. (2021), using FIRE-2 simulations (Hopkins et al., 2018), confirm the preference for prograde orbits for the UMP disky stars and a prograde-to-retrograde ratio of \(\sim 2:1\), associating the rotational bias with a single major merger event. Sestito et al. (2021) use 5 Milky Way (MW) analogs from the NIHAO-UHD project (Buck et al., 2020) to show that \(\rm[Fe/H]<-2.5\) stars in retrograde disc orbits were accreted in the first billion years of the galaxy formation, whereas the prograde subpopulation was mostly accreted at later stages.
In this work, we use the TNG50 cosmological simulation, the highest-resolution run of the IllustrisTNG project (Naiman et al., 2018; Marinacci et al., 2018; Pillepich et al., 2018; Nelson et al., 2018; Springel et al., 2018), to assess the fraction of metal-poor stars expected in the different Galactic morphological components. We follow up on the analysis by Chen et al. (2023), who analysed extremely metal-poor stars in TNG50 MW- and M31-like galaxies. Differently from the latter study, here we aim to a) quantify the presence and origin of stars across a wider range of metallicity levels; b) put a special focus on the Galactic disc; and c) estimate the statistics and properties of VMP, EMP and UMP stars by including their [Mg/Fe] ratios, ages and origin (in or ex situ). Crucially, we show where these stars are distributed in simulated MW analogues by using the nominal spatial selection informed by the upcoming 4MOST high-resolution disc and bulge survey (Bensby et al., 2019), in order to provide predictions for the detectability of VMP stars in next-generation observational programs. Compared to previous works based on zoom-in simulations, we increase the MW-analog sample size by a factor of \(\sim\)10-20.
The paper is organised as follows: in Sec. 2 we describe the cosmological simulation TNG50 and how the MW analogs are selected. We describe in Sec. 3 the populations of metal-poor stars across morphological components, exploring additional properties such as their ages, [Mg/Fe] abundances and origin. We discuss possible implications for the current understanding of the origin and composition of the Galaxy's disc in Sec. 4.
## 2 Methods
In this paper, we focus on MW-like galaxies realized within the cosmological simulation TNG50 (Nelson et al., 2019; Pillepich et al., 2019). For details on the simulation, we refer to the latter papers, and here provide a brief account of the main properties of the galaxies.
The simulation comprises a periodic cubic volume with a side length of 51.7 comoving Mpc and contains \(2160^{3}\) dark matter (DM) particles and an equal initial number of gas cells. The DM particles have a uniform mass of \(m_{\rm DM}=4.5\times 10^{5}\) M\({}_{\odot}\), while the gas cells (and stellar particles) have an average (initial) mass of \(m_{\rm baryon}=8.5\times 10^{4}\) M\({}_{\odot}\). The star formation follows Springel and Hernquist (2003): the gas is transformed into star particles stochastically when the density exceeds \(n_{\rm H}=0.1\,\rm cm^{-3}\), on time scales chosen to reproduce the Kennicutt-Schmidt relation (Kennicutt, 1989). Stellar particles represent stellar populations that are born at the same time and follow the initial mass distribution of Chabrier (2003). Detailed information about all the particles, subhaloes, and haloes is stored in 100 snapshots. The (sub)haloes at different snapshots, i.e. across cosmic times, are linked via the SubLink (Rodriguez-Gomez et al., 2015) and LHaloTree (Springel et al., 2005) algorithms, so that the assembly histories of galaxies are available. In this paper, we use the baryonic version of SubLink, and the main progenitor of a galaxy is the one with the most massive history. We will also identify _in situ_ stars as the ones formed in the main progenitor; accreted stars will be referred to as _ex situ_ (as per Rodriguez-Gomez et al., 2016).
From the TNG50 simulation box, which returns hundreds of massive galaxies at \(z=0\), Pillepich et al. (2023) identify the 198 most suitable counterparts to the MW and M31 based on their \(z=0\) properties: galaxy stellar mass, stellar diskiness, and environment. This galaxy sample has been previously used and extensively detailed in terms of its stellar content by Engler et al. (2021); Sotillo-Ramos et al. (2022); Engler et al. (2023); Chen et al. (2023). With an additional cut in galaxy stellar mass (\(10^{10.5-10.9}\) M\({}_{\odot}\)), in this paper we identify the 138 best MW analogs from TNG50 at \(z=0\). We note that this selection does not impose any constraints on the evolutionary paths of galaxies, nor a priori on the detailed structural and chemical properties of the stellar disc and bulge.
In Fig. 1 we show the stellar-light composite face-on and edge-on images of one MW-like galaxy from the TNG50 sample at \(z=0\). This simulated analog has a disc scale-length and thin and thick disc scale-heights compatible with the current estimates for the Galaxy (see Sotillo-Ramos et al., 2023, for more details on the calculations and the MW reference values). We overlay the positions of the VMP stellar particles with white contours, which show that a high fraction lies within a few kpc of the midplane, as is the case for the Galaxy.
Figure 1: Stellar-light composite image of one MW-like galaxy from the TNG50 simulation in face-on and edge-on projections, among 138 MW analogs. Spatial scales are given in units of kpc. For this example (Subhalo ID at \(z=0\): 535774), the disc scale-length and the thin and thick disc scale-heights are compatible with the current estimates for the Galaxy (e.g. Bland-Hawthorn and Gerhard, 2016). Blue contours trace the stellar surface density. White contours trace the surface density of VMP stars. The side histogram shows the vertical distribution of VMP stars. In all galaxies in the sample, most of the VMP stars are concentrated very close to the midplane.
### Morphological decomposition of MW analogues
There are many methods to decompose a galaxy into its stellar morphological components. Recent methodologies applied to simulated galaxies have been described by Du et al. (2019); Gargiulo et al. (2022); Zhu et al. (2022); they are based on earlier works by e.g. Abadi et al. (2003); Domenech-Moral et al. (2012); Obreja et al. (2018) and combine structural and kinematical information. In this paper, we choose the approach of Zhu et al. (2022), which is based on the orbit circularity \(\epsilon_{\rm z}\)2 (as defined by Sotillo-Ramos et al. 2022) and the galactocentric distance \(r_{\rm s}\) of the stars. In brief, the four main stellar components are defined as follows (see also Chen et al. 2023):
Footnote 2: \(\epsilon_{\rm z}=j_{\rm z}/j_{\rm circ}\), with \(j_{\rm z}\) the specific angular momentum of the star in the direction perpendicular to the galactic disk, and \(j_{\rm circ}\) the specific angular momentum of a star at the same radius on a circular orbit, i.e., \(j_{\rm circ}=r\,v_{\rm circ}\), with \(v_{\rm circ}=\sqrt{\mathrm{GM}(\leq r)/r}\) the circular velocity of the galaxy at the considered radius.
* Cold disk: \(\epsilon_{\rm z}>0.7\) and \(r_{\rm cut}<r_{\rm s}<r_{\rm disk}\)
* Warm disk: \(0.5<\epsilon_{\rm z}<0.7\) and \(r_{\rm cut}<r_{\rm s}<r_{\rm disk}\)
* Bulge: \(r_{\rm s}<r_{\rm cut}\)
* Stellar halo: \(\epsilon_{\rm z}>0.5\) and \(r_{\rm disk}<r_{\rm s}<r_{\rm halo}\), or \(\epsilon_{\rm z}<0.5\) and \(r_{\rm cut}<r_{\rm s}<r_{\rm halo}\), with \(r_{\rm cut}=3.5\) kpc, \(r_{\rm disk}=6\times r_{\rm d}\) (with \(r_{\rm d}\) the exponential scale-length of the stellar disk, as measured by Sotillo-Ramos et al. 2022), and \(r_{\rm halo}=300\) kpc the maximum galactocentric distance to which we consider the stellar halo to extend: see also fig. 4 in Chen et al. (2023) for a visual depiction of the components in the \(\epsilon_{\rm z}-r_{\rm s}\) plane, and the classification sketch below. 'Cold' and 'warm' discs are similar, but _not_ a priori equivalent, to the geometrically defined 'thin' and 'thick' discs based on fitting the vertical stellar density profiles (e.g. Gilmore & Reid 1983).
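A minimal sketch of this decomposition, under our reading that the bulge collects all stars with \(r_{\rm s}<r_{\rm cut}\), is given below; the mock circularities, radii and the assumed \(r_{\rm d}=3\) kpc are for illustration only:

```python
import numpy as np

def classify_components(eps_z, r, r_disc, r_cut=3.5, r_halo=300.0):
    """Assign star particles to bulge / cold disc / warm disc / stellar halo
    following the circularity-radius cuts above (radii in kpc)."""
    comp = np.full(eps_z.shape, "stellar halo", dtype=object)
    comp[r < r_cut] = "bulge"                      # our reading: r_s < r_cut
    in_disc = (r >= r_cut) & (r < r_disc)
    comp[in_disc & (eps_z > 0.7)] = "cold disc"
    comp[in_disc & (eps_z > 0.5) & (eps_z <= 0.7)] = "warm disc"
    comp[r >= r_halo] = "beyond halo"              # outside the adopted extent
    return comp

# Example with mock circularities and radii for 10^5 star particles.
rng = np.random.default_rng(42)
eps = np.clip(rng.normal(0.6, 0.35, 100_000), -1.3, 1.3)
rad = rng.exponential(6.0, 100_000)
labels = classify_components(eps, rad, r_disc=6 * 3.0)  # assume r_d = 3 kpc
names, counts = np.unique(labels, return_counts=True)
print({n: round(c / labels.size, 3) for n, c in zip(names, counts)})
```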
In the majority of the TNG50 MW analogues (96 of 138), the cold disc is the most massive component (by galaxy selection), with median values of \(\sim 1-3\times 10^{10}\) M\({}_{\odot}\), increasing with galaxy stellar mass. There is also a significant galaxy-to-galaxy variation, of more than half an order of magnitude, in the mass of all morphological components. The warm disc is the least massive component, one order of magnitude less massive than the cold disk. The bulge is in most cases the second most massive component, except for some galaxies at the high-mass end, where the stellar halo is more massive.
In relative terms, the cold disc represents \(\sim\)60 per cent of the total stellar mass for the less massive MW analogs, decreasing to \(\sim\)40 per cent for the most massive ones in the sample. The relative contribution of the other components does not change significantly with galaxy stellar mass. This dominance of the disc is related to the selection of the galaxies: it is in good agreement with the analysis of tens of edge-on spiral galaxies in the local universe by Comeron et al. (2014) and, considering the D/T fraction, with the properties of MW analogs from the NIHAO simulations by Obreja et al. (2018).
In this paper, we focus on a qualitative comparison with the expected properties of stars to be observed within the 4MIDABLE-HR survey (Bensby et al. 2019) that will be carried out at the 4MOST facility (de Jong et al. 2019). This survey will provide a coherent, homogeneous characterisation of a very large number (over 3 million) of stars in the Galactic disc and bulge, including their detailed metallicities, abundances, ages, and kinematics. To mimic the spatial coverage of 4MIDABLE-HR, we keep only the simulated stellar particles within 5.5 kpc in heliocentric distance of a fiducial 'Sun', placed at a random azimuth at 8 kpc from the galaxy center. We note that the 4MIDABLE-HR survey selects targets based on apparent magnitude but, owing to instrumental limits and the complex observational strategy, in practice most stars to be observed (with the exception of selected fields) will be confined within the given spatial volume: this lends credibility to our procedure, despite its simplicity. It should also be noted that, with this spatial cut, the bulge component is represented only by its most external stars. Finally, we have checked that the results of the paper are qualitatively the same as if we had placed the fiducial 'Sun' not at a fixed 8 kpc distance but at 4 times the disc scale-length of each galaxy, to account for the diversity in galaxy sizes (see e.g. Figure 13 of Pillepich et al. 2023).
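A minimal sketch of this volume cut on mock galactocentric coordinates is given below; the exponential disc parameters are illustrative assumptions, not TNG50 values:

```python
import numpy as np

def fourmost_like_mask(pos, r_sun=8.0, d_max=5.5, phi=None, rng=None):
    """Boolean mask for star particles within d_max kpc of a fiducial 'Sun'
    placed at galactocentric radius r_sun (kpc) in the disc midplane, at a
    random azimuth. `pos` are galactocentric coordinates (N, 3) in kpc,
    with the disc in the x-y plane."""
    rng = rng or np.random.default_rng()
    phi = rng.uniform(0, 2 * np.pi) if phi is None else phi
    sun = np.array([r_sun * np.cos(phi), r_sun * np.sin(phi), 0.0])
    return np.linalg.norm(pos - sun, axis=1) < d_max

# Example: fraction of a mock stellar disc that survives the cut.
rng = np.random.default_rng(3)
N = 200_000
R = rng.exponential(3.0, N)              # exponential disc, r_d = 3 kpc
theta = rng.uniform(0, 2 * np.pi, N)
z = rng.laplace(0, 0.3, N)               # ~0.3 kpc scale height (assumed)
pos = np.column_stack([R * np.cos(theta), R * np.sin(theta), z])
mask = fourmost_like_mask(pos, rng=rng)
print(f"Fraction inside 4MIDABLE-HR-like volume: {mask.mean():.3f}")
```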
In the next section, we explore in detail the temporal, chemical, and evolutionary properties of these four main Galactic components, and analyse their distributions by focusing on the metal-poor and old populations.
## 3 Results
We begin with the analysis of the temporal (i.e. of the stellar ages) and chemical properties of the components in the simulated galaxies, and then proceed with the assessment of the statistical properties of the distributions in the volume that will be accessible to next-generation spectroscopic surveys of the Galaxy, such as with 4MOST (Bensby et al. 2019; Chiappini et al. 2019; Christlieb et al. 2019), WEAVE (Dalton et al. 2012, 2016), and MOONS (Gonzalez et al. 2020). As described in the previous section, the focus here is on the 4MOST 4MIDABLE-HR survey and we refer the reader to the science case for more details on its scope and strategy (Bensby et al. 2019).
Figure 2: Metallicity distributions of stars in TNG50 MW-like galaxies, grouped by their respective morphological component: cold disc, warm disc, bulge and stellar halo, from left to right. The top panels quantify the stellar mass fractions of each component relative to the total stellar mass; the bottom ones are the metallicity distributions in each morphological component, in stellar mass. The solid lines represent the medians, and the shaded areas and errorbars represent inter-percentile ranges across the galaxy sample: 16 to 84 and 2 to 98, respectively. The vertical red shaded bands highlight metallicities [Fe/H]\(\leq-2\), i.e., VMP stars. We remind the reader that the distributions are shown for the characteristic observable volume fraction of the Galaxy, as it will be ‘seen’ by the 4MIDABLE-HR survey on 4MOST (see Section 2.1). Specifically, we apply a volume cut of 5.5 kpc in heliocentric distance, where the fiducial ‘Sun’ is placed at 8 kpc in the simulated galaxy.
### Trends with metallicity
Fig. 2 shows the metallicity distribution functions (MDFs) for all four Galactic components of all TNG50 MW-like galaxies described in Section 2. The distributions include a cut on the volume to mimic the spatial coverage of 4MIDABLE-HR, as detailed in the previous Section. The solid lines represent the medians across galaxies. Shaded areas and error bars represent inter-percentile ranges, also across the studied galaxy sample: the 16th to 84th and 2nd to 98th percentiles, respectively. The cold disc, warm disc, bulge and halo are represented with blue, green, red and orange lines, from left to right. The top row shows the stellar mass fraction per component, that is, the mass of stars in the component relative to the total stellar mass.
In most TNG50 MW analogues, the majority of Sun-like, [Fe/H]\(\sim 0\) stars reside in the kinematically cold 'thin' disc, albeit with a galaxy-to-galaxy scatter of \(\sim 20\) per cent of the total number of stars in the disc. The bulge is the second most populated component when no cut is applied; we note that the apparent difference in the total stellar mass in the bulge and in the thin disc is caused by the application of the fiducial volume cut to account for the survey selection of 4MIDABLE-HR. In general, just a small, although non-negligible (at the level of 20 per cent), fraction of the solar-metallicity stars can be found in the halo or the warm disc, although we note a significant galaxy-to-galaxy variation.
For stars with [Fe/H] \(\lesssim-1\) the trend reverses. As expected, in the vast majority of galaxies, most of these low-metallicity stars reside in the stellar halo, with a median fraction of \(\sim 60\) per cent. However, and this is one of the most interesting results, the fraction of VMP stars in the cold disc component still reaches up to \(\sim 20\) per cent in the typical galaxy. In fact, in some MW-like galaxies, as many as 40 per cent of the stars with [Fe/H]\(\sim-2\) follow cold disky orbits. For progressively lower metallicity values, the trends change slope. Most of the stars with metallicity values [Fe/H] \(\lesssim-2\) across all galaxies can be found in the stellar halo, with a median fraction of \(\sim 80\) per cent. For the cold (thin) disc, the median values are around \(\sim 15\) per cent, but we consistently find a significant (\(\sim\)25 per cent) number of galaxies where the fraction of VMP stars is \(\geq 25\) per cent. These results are qualitatively and quantitatively very similar if we place the 'Sun' at four times the disc length: the median values change only minimally across metallicity and component, although the scatter is in all cases larger.
In summary, we find that large fractions of ultra, extremely and very metal-poor stars are present in all morphological components of the simulated Milky Way analogues. Most intriguingly, their MDFs suggest that such stars should be abundantly present in the cold disc, typically referred to as the thin disc. This finding confirms recent reports of kinematically cold VMP stars in the literature (Sestito et al., 2019; Di Matteo et al., 2020), with our results indicating that the fraction of such stars in the Galactic disc could be even higher than the current observational evidence suggests.
### Trends with stellar age
In order to understand the temporal history of metal-poor populations, in Fig. 3 we show the age distribution functions of the halo, bulge and disc components in the simulated galaxies. In the top row, we normalise the fraction of stars per age bin in each population to the total number of stars in all populations. In the second row, as in Fig. 2, we also provide the stellar mass in each component per age bin, and in the bottom row we show the corresponding mass of each stellar population for metal-poor stars with [Fe/H] \(<-1\).
The bulge and stellar halo appear as the oldest components, followed by the warm disc, with the cold disc as the youngest component, although each of these populations shows a significant temporal extent spanning the entire range of ages up to \(\sim 13\) Gyr, with only a mild dependence on the galaxy. Even in the cold disc, a non-negligible fraction of stars, \(\sim 20\) per cent, have ages greater than 10 Gyr, and the cold discs of some MW analogues stand out with fractions as large as 50 per cent. There is no galaxy in our TNG50 sample that does not host an old cold disc. On the one hand, the properties of these distributions are consistent with what observers would usually describe as the canonical formation picture of the MW (Freeman and Bland-Hawthorn, 2002). On the other hand, the extended star formation histories of all components, especially that of the disc and (to a lesser extent) of the bulge, are striking and indicate that the Milky Way may host a primordial disc, which we explore in more detail in Sec. 3.3.
In the bottom row of Fig. 3, we show the age distributions for the metal-poor populations ([Fe/H] \(<-1\)) of the discs, the bulge, and the stellar halo. Here we do not apply the heliocentric cut, in order to retain a sufficiently large number of EMP and UMP stars. It is clear, and expected, that the age distribution of each component exhibits a trend toward older ages with decreasing metallicity (compare the middle and bottom panels of Fig. 3; Bergemann et al. 2014). Specifically, the median age of all four components is now skewed towards ages \(\geq 8\) Gyr, and the mode values peak at \(\gtrsim 12\) Gyr for all galaxies and components. Cold disc stars with [Fe/H]\(\approx-1\) have an age distribution that closely resembles that of the warm disc and the halo (with median ages of \(\sim\)10 to 11 Gyr), whereas the bulge appears to be made of the oldest population, as consistently seen in all MW analogues. We find very similar distributions for VMP, EMP and UMP stars. We also note, and this will be discussed in more detail in the next section, that generally, halo stars of all metallicities are on average younger than the stars in the bulge (Fig. 4).
Figure 3: Same as Fig. 2, but for the distribution of stellar ages in the morphological components. _Top_: Stellar mass fraction per component to total stellar mass. _Middle_: Stellar mass per component. _Bottom_: Mass of MP stars per component. The vertical dashed lines represent the median age of the stars in the component (within the volumetric cut), across all galaxies.
Finally, by exploring the median [Mg/Fe] abundance ratios (Fig. 4), we also find a strong dependence of the distributions on metallicity. In line with observational evidence (e.g. Bensby et al., 2014; Bergemann et al., 2017; Nissen and Gustafsson, 2018), [Mg/Fe] increases as the [Fe/H] values decrease (from the top to the bottom panels). At [Fe/H]\(\sim 0\), the cold disc has the lowest [Mg/Fe]. For metal-poor stars the bulge exhibits, on average, slightly higher (by \(\sim 0.05\) dex) values of [Mg/Fe]. Such a systematic difference for the bulge is indeed observed in the Milky Way (Rich and Origlia, 2005; Cunha and Smith, 2006; Ryde et al., 2010; Rich et al., 2012), but see also Jonsson et al. (2017) and Griffith et al. (2021), who advocate smaller chemical differences between the disc and the bulge (with the caveat that the different works use different definitions for the morphological components). Interestingly, we also find an increasingly large scatter of [Mg/Fe] ratios for \(-4\lesssim\) [Fe/H] \(\lesssim-3\), which is consistent with the observational distributions of EMP stars by Howes et al. (2016), although their data suggest a higher scatter also for stars with [Fe/H] \(\lesssim-2.5\). At higher metallicities, [Fe/H] \(\gtrsim-2\), the stellar halo and both disc components show similar trends of progressively declining [Mg/Fe] values. However, we emphasize that owing to very large differences in the stellar mass in each metallicity bin, it is unlikely that one would observe many low-\(\alpha\) halo stars or high-\(\alpha\) cold disc stars.
### Origins of metal-poor stars
The detailed properties of the stellar particles in the TNG50 simulation allow the identification of their origin, in particular, whether they formed in the main galaxy or whether they were accreted. Fig. 4 shows that as with the age and [Mg/Fe] abundance, there are clear trends that depend on the stellar metallicity. More metal-poor stellar populations have, on average, a higher ex situ fraction. This applies to all Galactic components, although the halo, on average, has a comparatively higher ex situ fraction for all metallicities. These distributions are qualitatively consistent with observations, e.g. Conroy et al. (2019) have shown that the accreted fraction increases with decreasing metallicity for the Milky Way halo stars.
Kinematically cold and warm discs are dominated by the ex situ component for metallicities below [Fe/H] \(\lesssim-2\). Yet, interestingly, even at the lowest metallicities a non-negligible fraction of stars in these components has an in situ origin. The median ex situ fraction values for the discs are, in these cases, \(\approx 65\), \(\approx 75\) and \(\approx 100\) per cent, for metallicities [Fe/H] of \(-2\), \(-3\), and \(-4\), respectively. However, importantly, we also find some MW analogs (Fig. 4, bottom row, right) where 10 to 30 per cent of the UMP disc stars were formed in situ and, given the age of these stars, most likely in a primordial disc. This finding is not in contradiction with Ruchti et al. (2015). They report no strong evidence for an accreted disc component; however, their observed sample is dominated by targets with [Fe/H] \(\gtrsim-1\), and only a very small fraction of their stars are more metal-poor than [Fe/H] \(\sim-2\), which, as we see in Fig. 4, represents the transition from the in situ to the ex situ dominated regime. Indeed, TNG50 suggests that most of the stars with [Fe/H]\(\gtrsim-1\) have a strongly in situ dominated origin, regardless of their parent Galactic component. However, for some of the analogs (\(\sim\)10 per cent), the accreted disc fraction can also be significant (\(\gtrsim\)15 per cent). The properties of early galactic discs, such as their alignment with respect to the present-day orientation, are complex and will be the subject of another paper. This has also been addressed, e.g., in Belokurov and Kravtsov (2022). However, here we note that, from a quick inspection of the angular momenta, we find that at ancient times galaxies show a large spread of angular momentum vectors, close to a uniform distribution. This could be expected if stars formed in a chaotic way, with multiple gas inflows from many random directions and mergers that potentially destroy and heat the primordial discs and bring stars onto randomly distributed orbits. We cannot detect any preferred angle for the orientation of the primordial disc, but the alignment steadily increases as the galaxies evolve and approach the present day, \(z=0\).
## 4 Summary and conclusions
We have used the cosmological magneto-hydrodynamic galaxy simulation TNG50 to explore the fraction of very metal-poor stars, [Fe/H] \(\lesssim-2\), in Milky Way-like galaxies. The selection of galaxies follows our detailed previous work presented in Pillepich et al. (2023). We furthermore apply observationally motivated limits to the theoretical distributions, aiming to understand which fraction of the metal-poor stars would be observable in the Galactic disc, bulge, and halo with next-generation facilities, such as the 4MIDABLE-HR survey on 4MOST (Bensby et al., 2019).
Through the statistical analysis of the stellar populations of simulated galaxies, specifically their metallicities, ages, and [Mg/Fe] ratios, we find that metal-poor stars are common in all morphological components of the MW analogues. As expected, the stellar halo is the component primarily hosting VMP stars (see also Chen et al., 2023). However, we find that in the cold 'thin' discs of TNG50 MW-like galaxies, the fraction of VMP, EMP, and UMP stars is typically \(\approx 20\) per cent of the total number of stars, and in some MW-like galaxies stars with [Fe/H] \(\approx-3\) reach up to 50 per cent. Most of these low-metallicity stars are formed ex situ. The temporal properties of these populations suggest that all galaxy components, i.e. the cold (thin) disc, the warm (thick) disc, the halo, and the bulge, have very extended evolutionary histories with ages reaching \(\gtrsim 13\) Gyr. This suggests that the Galaxy could host a primordial cold disc, even though a significant fraction of the metal-poor stars on disc orbits originated in small satellites and were subsequently accreted.
Figure 4: Mass fraction per component, stellar age, [Mg/Fe] and ex situ fraction distributions, for stellar samples with different values of [Fe/H]. We show the distributions of the median values across all 138 TNG50 MW-like galaxies, weighted by stellar mass, and do not apply a heliocentric cut.
The large unbiased sample of MW analogs of the TNG50 simulation confirms that metal-poor stars can help unveil the first steps of the formation of the Galaxy. Contrary to what has been largely expected thus far, it is very likely that many of these stars follow cold disc orbits. We hence recommend that current and future Galactic surveys should target not only the stellar halo, but also the disc for the search of the most metal-poor old stars, which will be equivalent to exploring the regime of redshifts \(z\sim 2\) to \(>5\).
## Acknowledgements
We are grateful to the anonymous referee for the insightful suggestions that helped to improve the work. We acknowledge Timothy C. Beers and Anirudh Chiti for valuable discussions. DSR, AP, and MB acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 138713538 - SFB 881 ("The Milky Way System", subprojects A01, A05, A06, A10). MB is supported through the Lise Meitner grant from the Max Planck Society. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 949173). IF acknowledges support from University College London's Graduate Research Scholarships and the MPIA visitor programme. The TNG50 simulation was realized with compute time granted by the Gauss Centre for Supercomputing (GCS), under the GCS Large-Scale Project GCS-DWAR (2016; PIs Nelson/Pillepich) on the GCS share of the supercomputer Hazel Hen at the High Performance Computing Center Stuttgart (HLRS). This work benefited from a workshop supported by the National Science Foundation under Grant No. OISE-1927130 (IReNA), the Kavli Institute for Cosmological Physics, and the University of Chicago Data Science Institute.
## Data Availability
Data directly related to this publication and its figures are available on request from the corresponding author. The IllustrisTNG simulations, including TNG50, are publicly available and accessible at www.tng-project.org/data (Nelson et al., 2019). A special data release is also available for the TNG50 Milky Way and Andromeda like galaxies, as per Pillepich et al. (2023).
|
2304.02713 | NUMSnet: Nested-U Multi-class Segmentation network for 3D Medical Image Stacks | Semantic segmentation for medical 3D image stacks enables accurate volumetric reconstructions, computer-aided diagnostics and follow-up treatment planning. In this work, we present a novel variant of the Unet model called the NUMSnet that transmits pixel neighborhood features across scans through nested layers to achieve accurate multi-class semantic segmentations with minimal training data. We analyze the semantic segmentation performance of the NUMSnet model in comparison with several Unet model variants to segment 3-7 regions of interest using only 10% of images for training per Lung-CT and Heart-CT volumetric image stacks. The proposed NUMSnet model achieves up to 20% improvement in segmentation recall, with 4-9% improvement in Dice scores for Lung-CT stacks and 2.5-10% improvement in Dice scores for Heart-CT stacks, when compared to the Unet++ model. The NUMSnet model needs to be trained with ordered images around the central scan of each volumetric stack. Propagation of image feature information from the 6 nested layers of the Unet++ model is found to have better computational and segmentation performance than propagation of all up-sampling layers in a Unet++ model. The NUMSnet model achieves comparable segmentation performances to existing works, while being trained on as low as 5\% of the training images. Also, transfer learning allows faster convergence of the NUMSnet model for multi-class semantic segmentation from pathology in Lung-CT images to cardiac segmentations in Heart-CT stacks. Thus, the proposed model can standardize multi-class semantic segmentation on a variety of volumetric image stacks with a minimal training dataset. This can significantly reduce the cost, time and inter-observer variabilities associated with computer-aided detections and treatment. | Sohini Roychowdhury | 2023-04-05T19:16:29Z | http://arxiv.org/abs/2304.02713v1 | # NUMSnet: Nested-U Multi-class Segmentation network for 3D Medical Image Stacks
###### Abstract
Semantic segmentation for medical 3D image stacks enables accurate volumetric reconstructions, computer-aided diagnostics and follow-up treatment planning. In this work, we present a novel variant of the Unet model called the NUMSnet that transmits pixel neighborhood features across scans through nested layers to achieve accurate multi-class semantic segmentations with minimal training data. We analyze the semantic segmentation performance of the NUMSnet model in comparison with several Unet model variants to segment 3-7 regions of interest using only 10% of images for training per Lung-CT and Heart-CT volumetric image stacks. The proposed NUMSnet model achieves up to 20% improvement in segmentation recall, with 4-9% improvement in Dice scores for Lung-CT stacks and 2.5-10% improvement in Dice scores for Heart-CT stacks, when compared to the Unet++ model. The NUMSnet model needs to be trained with ordered images around the central scan of each volumetric stack. Propagation of image feature information from the 6 nested layers of the Unet++ model is found to have better computational and segmentation performance than propagation of all up-sampling layers in a Unet++ model. The NUMSnet model achieves comparable segmentation performances to existing works, while being trained on as low as 5% of the training images. Also, transfer learning allows faster convergence of the NUMSnet model for multi-class semantic segmentation from pathology in Lung-CT images to cardiac segmentations in Heart-CT stacks. Thus, the proposed model can standardize multi-class semantic segmentation on a variety of volumetric image stacks with a minimal training dataset. This can significantly reduce the cost, time and inter-observer variabilities associated with computer-aided detections and treatment.
semantic segmentation; multi-class; 3D image stacks; region of interest; Dice score; Unet; CT images; overfitting
## I Introduction
Multi-class semantic segmentation of regions of interest (ROIs) from medical 3D image stacks of CT or MRI images is important for diagnostic pathogenesis and for pre-procedural planning tasks. Performing such segmentations manually can be both costly and time intensive [1]. Additionally, the manual segmentation process suffers from inter-observer variability, where two medical practitioners may disagree on the exact locations of the ROIs [1]. In such situations, traditional deep learning approaches for segmentation have been used largely to augment the human effort needed to isolate the ROIs [2]. The idea is for a limited number of images to be manually annotated, followed by training a deep learning model on the hand-annotated data and generating standardized segmentations across all future image frames. The challenge here is that most deep learning approaches are data hungry and require large volumes of initial annotated data to yield standardized ROIs, especially if there are many ROIs, or ROIs with varying sizes. Also, medical 3D image stacks often exhibit variable pixel resolution and noise across imaging equipment, which impedes the extensibility of automated deep learning solutions to other image stacks. Several existing works for medical image semantic segmentation perform binary segmentations per image [3][4], or two-stage multi-class segmentations for image stacks [5]. In this work, we present a novel single-stage variant of the popular Unet model that contains additional nested layers to capture the spatial neighborhood characteristics and propagate them across image scans for accurate multi-class semantic segmentation performances with a minimal training set of images.
Unet models and their variants have been the preferred deep learning models in the medical image processing and detection domains due to their low computational complexity. This allows training from a few hundred images, as opposed to the thousands of annotated images typically needed in other imaging domains such as autonomous driving and augmented reality [6][7]. Thus, the lack of high volumes of quality annotated data and the inter-observer variability [8] make Unet and its variant models the preferable methods for ROI segmentation and computer-aided detection. The proposed Unet variant model, called the NUMSnet, propagates image features across scans, which results in faster network convergence with few training images for volumetric medical image stacks. The NUMSnet requires training images in order, but not necessarily subsequent images in a sequence. The training set is shown in Fig. 1 by the image stack [\(T_{0}\) to \(T_{n}\)]. Once trained, the test set of images can be ordered or randomized per stack, as represented by sets \(S_{m}\) and \(S_{m^{\prime}}\) in Fig. 1.
In this paper we present a novel multi-class semantic segmentation model that propagates layer features across consecutive scans to achieve accurate segmentation with only 10% of frames per 3D image stack. We investigate three main analytical questions towards multi-class semantic segmentation in 3D medical image stacks. 1) Does transmission of image features from some of the layers of a Unet variant model enhance semantic segmentation performance for multi-class segmentation tasks? 2) Is the order of training and test frames significant to segmentation tasks for 3D volumes? 3) How many layers should be optimally propagated to ensure model optimality while working with sparse training data? The key contributions of this work are as follows.
1. A novel multi-scan semantic segmentation model that propagates feature-level information from a few nested layers across ordered scans to enable feature learning from as few as 10% of annotated images per 3D medical image stack.
2. Transfer learning performance analysis of the proposed model, with respect to existing Unet variants, on multiple CT image stacks, from Lung-CT (thoracic region) scans to Heart-CT scans. The NUMSnet model achieves up to 20% improvement in segmentation recall and 4-9% improvement in Dice scores for multi-class semantic segmentations across image stacks.
3. Identification of optimally located minimal training images per volumetric stack for multi-class semantic segmentation.
4. Identification of the optimal number of layers that can be transmitted across scans to prevent model over-fitting for segmentation of up to 7 ROIs with variable shapes and sizes.
This paper is organized as follows. Existing literature and related works are reviewed in Section II. The datasets under analysis and the NUMSnet model are explained in Section III. The experiments and results are shown in Section IV. The conclusions are in Section V and relevant discussions and limiting conditions are presented in Section VI.
## II Related work
Deep learning models have become heavily popular for computer-aided detections in the past decade, superseding signal processing methods [9][10]. This is primarily due to the ability of deep learning models to automatically learn features that are indicative of an ROI if a significant volume of annotated data is provided. Signal processing models, on the other hand, rely on hand-crafted features that may lead to faulty detections due to the high variabilities across imaging modalities, storage and transmission formats. The prior work in [11] demonstrates a two-path CNN model that can take filtered Lung-CT images followed by fuzzy c-means clustering to segment the opacity per Lung-CT image. While such feature-based works have low data dependence, the models often do not scale across datasets.
the foreground into the various ROIs. Another work in [15] implements a deeply supervised 3D Unet model with a multi-branch residual network and deep feature fusion, along with a focal loss, to achieve improved segmentations for smaller ROIs. Other variants of multi-Unet models, such as the work in [17], implement Unet models trained at different resolutions, i.e., one Unet model trained on images of dimensions [256x256], another trained at resolution [512x512], and so on, for lung segmentation. However, these methods require significantly larger volumes of annotated data to train the multiple Unet models or to perform 3D volumetric convolutions.
Other recent works in [2] and [18] have applied variations to the Unet model to achieve segmentation of opacity and lung regions from chest-CT scans to aid COVID-19 detections. Also, in [19], the Inf-Net and Semi-Inf-Net models are presented, which achieve binary segmentation performances for lung opacity detection with Dice scores in the range of 0.74-0.76. Most of these existing methods require several hundred annotated training images across scans and patients and can efficiently be trained for binary semantic segmentation tasks. In this work, we present a novel Unet model variant that propagates deep learning layer information across scans, thereby achieving superior multi-class semantic segmentation performances to most existing methods while being trained on fewer than a hundred annotated scans overall.
Some of the well-known Unet model variants used in the medical imaging domain are the wide Unet (wUnet) and Nested Unet (Unet++) [4]. While a typical Unet model of depth 5 has filter kernel widths [32,64,128,256,512] at model depths 1 through 5, the wUnet model has filter kernel widths [35,70,140,280,560] at model depths 1 through 5. Thus, the wUnet has more parameters and can thereby enhance segmentation performances when compared to Unet. The Unet++ model, on the other hand, generates dense connections with nested up-sampled layers to further enhance the performances of semantic segmentation, as presented in [20][21]. In this work, we propose an enhanced Unet++ architecture called the NUMSnet, where the features from the nested up-sampled layers are transmitted across scans for increased attention to smaller regions of interest (such as opacity in Lung-CT images). This layer propagation across scans enables multi-class semantic segmentation with only 10% of annotated images per 3D volume stack.
## III Materials and Methods
### _Data: Lung-CT and Heart-CT Stacks_
In this work we analyze two kinds of single-plane volumetric CT image stacks. The first category of Lung-CT image stacks is collected from the Italian Society of Medical and Interventional Radiology. The first Lung-CT (Lung-med) volumetric stack [22] contains 829 images from a single 3D image stack with [512x512]-dimension images; 373 out of these 829 scans are annotated. The second dataset (Lung-rad) contains 9 axial-volume chest CT scans with 39-418 images per stack. All Lung-CT images are annotated for 3 ROIs, namely: ground-glass opacity (GGO), consolidations and the Lung region as foreground, and can be downloaded from [23].
The second category of Heart-CT image datasets is from the MICCAI 2017 Multi-Modality Whole Heart Segmentation (MM-WHS) challenge [5][15], from which we select the first 10 training CT image stacks of the heart region for analysis. This dataset contains coronal volumetric stacks with 116-358 images per volume and multi-class semantic segmentation annotations for up to 7 heart-specific ROIs represented by label pixel values [205, 420, 500, 550, 600, 820, 850], respectively. These pixel regions represent the [left ventricle blood cavity (LV), myocardium of the left ventricle (Myo), right ventricle blood cavity (RV), left atrium blood cavity (LA), right atrium blood cavity (RA), ascending aorta (AA) and the pulmonary artery (PA)], respectively. It is noteworthy that for the Heart-CT dataset only 10-15% of the images per stack contain the annotated ROIs. Thus, at the time of ordered training dataset selection, it is ensured that at least 50% of the training samples contain annotations. Some examples of the Lung-CT and Heart-CT images and their respective annotations are shown in Fig. 2.
Each image from the data stacks under analysis here is pre-processed for the Unet and variant models. First, each input image is resized to [256x256x1] for ease of processing. Next, the resized image \(I\) is re-scaled to the range [0,1], thereby resulting in image \(I^{\prime}\), using min-max normalization as shown in (1), where \(min_{I}\) and \(max_{I}\) refer to the minimum and maximum pixel values in \(I\). This is followed by the generation of multi-dimensional label vectors [256x256x\(d\)] per image, where \(d\) represents the number of classes that each pixel can be classified into. These label vectors are generated as binary images per class. For example, the Heart-CT stack images contain up to 7 different annotated regions, each depicted by a certain pixel value \(pix_{i},\forall i=[1:7]\). Thus, the ground-truth label vector (\(G^{\prime}\)) generated per image contains 7 planes, where each plane \(G^{\prime}_{i}\) is generated as a binary mask from the label masks (\(G\)) as shown in (2). This process defines the ground-truth \(G^{\prime}\) such that the Unet decision-making function (\(f_{i}\)) analyzes whether each pixel belongs to a particular class \(i\) or not. Finally, the output is a \(d\)-dimensional binary image (\(P\)) where each image plane (\(P_{i}\)) is thresholded at pixel value \(\tau=0.5\) as shown in (3).
\[I^{\prime}=\frac{I-min_{I}}{max_{I}-min_{I}}, \tag{1}\] \[\forall i\in[1:d],\;G^{\prime}_{i}=[G==pix_{i}], \tag{2}\] \[P_{i}=[f_{i}(I^{\prime})>\tau]. \tag{3}\]
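The pre-processing in (1)-(3) amounts to a few numpy operations. The following is a minimal sketch (resizing to [256x256] is omitted for brevity; the Heart-CT label values are used as the example class list, and the function names are ours):

```python
import numpy as np

HEART_LABELS = [205, 420, 500, 550, 600, 820, 850]  # pixel value per ROI

def preprocess(I, label_mask, label_values=HEART_LABELS):
    """Min-max normalise the image (Eq. 1) and build the d-plane
    binary ground-truth stack G' (Eq. 2)."""
    I = I.astype(np.float32)
    I_norm = (I - I.min()) / (I.max() - I.min())
    G = np.stack([(label_mask == p).astype(np.float32)
                  for p in label_values], axis=-1)   # shape [H, W, d]
    return I_norm, G

def binarise_prediction(f_out, tau=0.5):
    """Threshold the per-class network output P (Eq. 3)."""
    return (f_out > tau).astype(np.uint8)
```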
Once the datasets are pre-processed, the next step is to separate the data stacks into training, validation and test sets. There are two ways in which the training/validation/test sets are sampled per volume stack. For the first, random sampling method, 10% of the scans per volume are randomly selected in ascending order for training, 1% of the remaining images are randomly selected for validation, and all remaining images are used for testing. The second, sequential sampling method starts from a reference scan in the volumetric stack. This reference scan can either be the first or the middle scan in the stack. We sample 10% of the total number of images in the stack, starting from the reference scan in sequence, and these become the training set of images. From the remaining images, 1% can be randomly selected for validation, while all remaining scans are test images in sequence. Using these methods, we generate training sets of size [82x256x256x1], [84x256x256x1] and [363x256x256x1] for the Lung-med, Lung-rad and Heart-CT stacks, respectively.
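A minimal sketch of the two sampling schemes described above is given below (index bookkeeping only; in our experiments the selection additionally ensures that enough training scans contain annotations, as noted earlier in this section):

```python
import numpy as np

def split_stack(n_scans, frac_train=0.10, frac_val=0.01,
                mode="mid_seq", rng=np.random.default_rng(0)):
    """Return (train, val, test) scan indices for one volumetric stack.

    mode: 'random'   - 10% of scans, randomly chosen, kept in order
          'init_seq' - 10% of scans in sequence from the first scan
          'mid_seq'  - 10% of scans in sequence from the middle scan
    """
    n_train = max(1, int(frac_train * n_scans))
    idx = np.arange(n_scans)
    if mode == "random":
        train = np.sort(rng.choice(idx, n_train, replace=False))
    elif mode == "init_seq":
        train = idx[:n_train]
    else:  # 'mid_seq'
        start = n_scans // 2
        train = idx[start:start + n_train]
    rest = np.setdiff1d(idx, train)
    n_val = max(1, int(frac_val * n_scans))
    val = np.sort(rng.choice(rest, n_val, replace=False))
    test = np.setdiff1d(rest, val)
    return train, val, test
```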
### _Unet Model and Variants_
To date, Unet variants such as the wUnet, Vnet and Unet++ models have been applied to improve foreground segmentation precision for small regions of interest, as shown in [4][15]. It is noteworthy that for binary segmentation tasks, the relative variation in performances across such Unet model variants remains less significant [24]. However, to improve multi-class semantic segmentations, we propose a variant of the Unet++ or Nested-Unet model from [4]. One major difference between the Unet++ and the traditional Unet model is the presence of nested layers that combine the convolved and pooled layers with the up-sampled transposed convolutional layers at the same level. Thus, for a Unet with a depth of 5, a Unet++ model results in 6 additional nested layers, shown as [X(1,2), X(1,3), X(1,4), X(2,2), X(2,3), X(3,2)] in Fig. 3. These additional layers increase signal strengths at each depth level and amplify the segmentation decisions around boundary regions of ROIs [4].
The process of semantic segmentation using Unet for a single-plane gray-scale medical image proceeds as follows. The input image \(I^{\prime}\) is subjected to 2D convolutions and pooling in layers [X(1,1), X(2,1), X(3,1), X(4,1)], respectively. The output of each layer is an image with half the input dimensions but additional feature planes. For example, the input to layer X(1,1) is the image of size [256x256x1], while the output has dimensions [128x128x32], due to convolution with a [3x3] kernel with width 32 and max-pooling with a [2x2] kernel. Thus, at the end of the fifth layer (X(5,1)), a feature vector of size [16x16x512] is generated. At this point, transposed convolutions in 2D are performed with kernel size [2x2] to up-sample the images from the previous layer. One key consideration for all the up-sampling layers is that, to promote better distinction between foreground pixels (scaled value 1) and background pixels (scaled value 0), the images/features from the same depth are concatenated, followed by the transposed convolutions. For example, at depth 4, the output from layer X(4,1) is concatenated with the up-sampled image from layer X(5,1), resulting in image features of dimension [32x32x512] that are then subjected to convolutions in layer X(4,2), thereby resulting in image features of dimension [32x32x256]. This up-sampling, concatenation and convolution process continues till the output of layer X(1,5) is an image (\(P\)) with dimensions [256x256x\(d\)], \(d\) being the number of planes corresponding to ROIs.

Fig. 2: Examples of multi-class segmentation datasets used in this work. Row 1: Lung-med dataset, Row 2: Lung-rad dataset. For Row 1 and Row 2 the regional color coding is as follows. Blue: Lung region, Red: GGO, Green: consolidation regions. Row 3: Heart-CT dataset. The ROIs are color coded as follows. Red plane: label pixels 205 and 420. Blue plane: label pixels 500 and 550. Green plane: label pixels 600, 820 and 850.
The Unet++ model, on the other hand, was developed to enhance the boundary regions of relatively small ROIs by introducing nested up-sampling layers at each depth level, as shown in Fig. 3. For example, the input to layer X(1,2) has the size [256x256x32], being an output from the X(1,1) layer, which is then concatenated with the transposed convolved output of layer X(2,1) of the same dimensions. The layer X(1,2) then performs convolutions to generate a [256x256x32] image feature as output, which then feeds into layer X(1,3). This process continues across all the nested up-sampling layers.
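A minimal Keras sketch of a single nested node such as X(1,2) is shown below, assuming the functional API; the kernel counts and activations are illustrative and may differ from the released configuration:

```python
from tensorflow.keras import layers

def conv_block(x, width):
    # two 3x3 convolutions, as in each Unet/Unet++ node
    x = layers.Conv2D(width, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(width, 3, padding="same", activation="relu")(x)

def nested_node(x_same_depth, x_deeper, width):
    """E.g. X(1,2): up-sample the deeper node X(2,1), concatenate with
    the same-depth node X(1,1), and convolve back to 'width' planes."""
    up = layers.Conv2DTranspose(width, 2, strides=2, padding="same")(x_deeper)
    x = layers.concatenate([x_same_depth, up])
    return conv_block(x, width)
```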
The primary parameters that need to be tuned to ensure an optimally trained Unet or variant model are the following: data augmentation methods, batch size, loss function, learning rate and the reported metric per epoch. In this work, we apply image data augmentation using the tensorflow keras library by augmenting images randomly to ensure rotation range, width shift range, height shift range and shear range of 0.2, respectively, and zoom range of [0.8,1] per image. Since the training dataset has few samples, we implement a training batch size of 5 for the Lung-CT images and a batch size of 10 for the Heart-CT images. It is noteworthy that batch size should scale with the number of detection classes; thus, we use additional images per batch for the Heart-CT stack. For all the Unet and variant models we use the Adam optimizer with a learning rate of \(10^{-3}\). Finally, the metrics under analysis are shown in (4)-(7), based on the work in [25]. For each image with \(l\) pixels and \(d\) image planes for the ground-truth (\(G^{\prime}_{i}\)), the intersection over union (\(IoU\)) or Jaccard metric in (4) represents the average fraction of correctly identified ROI pixels. The Dice coefficient \(Dice\) in (5) further amplifies the fraction of correctly classified foreground pixels. Precision (\(Pr\)) in (6) and recall (\(Re\)) in (7) denote the average fraction of correctly detected ROI pixels per predicted image and per ground-truth image plane, respectively.
\[IoU=\sum_{i=1}^{d}\sum_{j=1}^{l}\frac{|P_{i}(j)\cap G^{\prime}_{i}(j)|}{|P_{i}(j)\cup G^{\prime}_{i}(j)|}, \tag{4}\]
\[Dice=\sum_{i=1}^{d}\sum_{j=1}^{l}\frac{2|P_{i}(j)\cap G^{\prime}_{i}(j)|+1}{P_{i}(j)+G^{\prime}_{i}(j)+1}, \tag{5}\]
\[Pr=\sum_{i=1}^{d}\sum_{j=1}^{l}\frac{P_{i}(j)\cap G^{\prime}_{i}(j)}{P_{i}(j)}, \tag{6}\]
\[Re=\sum_{i=1}^{d}\sum_{j=1}^{l}\frac{P_{i}(j)\cap G^{\prime}_{i}(j)}{G^{\prime }_{i}(j)}. \tag{7}\]
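Equations (4)-(7) translate directly into numpy. A minimal per-image sketch, using the same +1 smoothing as Eq. (5) and assuming non-empty binary masks, is:

```python
import numpy as np

def segmentation_metrics(P, G):
    """Multi-class metrics of Eqs. (4)-(7) for one image.

    P, G : binary arrays of shape [H, W, d] (prediction, ground truth).
    """
    inter = np.sum(P * G)                                 # |P and G'|
    union = np.sum(np.clip(P + G, 0, 1))                  # |P or G'|
    iou = inter / union                                   # Eq. (4)
    dice = (2 * inter + 1) / (np.sum(P) + np.sum(G) + 1)  # Eq. (5)
    precision = inter / np.sum(P)                         # Eq. (6)
    recall = inter / np.sum(G)                            # Eq. (7)
    return iou, dice, precision, recall
```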
The loss functions under analysis are shown in (8)-(10). The Dice coefficient loss (\(DL\)) in (8) is the negative of the Dice coefficient, so minimizing it ensures that the average fraction of correctly detected foreground regions increases per epoch. The binary cross-entropy loss (\(BCL\)) in (9) is a standard entropy-based measure that decreases as the predictions and ground truth become more alike. Finally, the binary cross-entropy Dice loss (\(BDL\)) in (10) is a combination of BCL and DL based on the work in [4].

Fig. 3: Example of a Unet++ model for depth 5. The blue layers correspond to convolved and pooled layers. The green layers correspond to merged transposed convolutions followed by convolution outcomes from the same depth layers. The 6 additional nested layers, color coded (purple, cyan, red, grey, orange, dark blue) and corresponding to [X(1,2), X(1,3), X(1,4), X(2,2), X(2,3), X(3,2)], respectively, contain spatial pixel-neighborhood information that can be transmitted temporally across images/scans for increased accuracy of semantic segmentation.
\[DL=-Dice, \tag{8}\]
\[BCL=-\sum_{i=1}^{d}\sum_{j=1}^{l}[G^{\prime}_{i}(j)\log(P_{i}(j))], \tag{9}\]
\[BDL=\frac{BCL}{2}+DL. \tag{10}\]
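A minimal tf.keras sketch of the BDL in (10), together with the compile settings stated above (Adam optimizer, learning rate \(10^{-3}\)), is given below; the smoothing constant mirrors the +1 terms of Eq. (5):

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def dice_coef(y_true, y_pred, smooth=1.0):
    inter = K.sum(y_true * y_pred)
    return (2.0 * inter + smooth) / (K.sum(y_true) + K.sum(y_pred) + smooth)

def bce_dice_loss(y_true, y_pred):
    """BDL of Eq. (10): BCL/2 + DL, with DL = -Dice (Eq. 8)."""
    bce = tf.keras.losses.binary_crossentropy(y_true, y_pred)
    return 0.5 * K.mean(bce) - dice_coef(y_true, y_pred)

# model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
#               loss=bce_dice_loss, metrics=[dice_coef])
```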
Finally, we analyze the loss function curves per epoch using the deep-supervision feature of the Unet++ model [4] in Fig. 4. Here, we assess convergence rates for outputs at each depth level. From Fig. 4, we observe that the convergence of outputs from depths 1 and 2 (layer X(1,5) and the resized output of X(2,4)) is relatively similar and better than the loss curves for depth 4 (the resized output of layer X(4,2)). This implies that as the transposed convolutions move further away from the dense feature layer X(5,1), additional local feature-level information gets added to the semantic segmentation output. Thus, for a _well-trained_ Unet++ model, the initial transposed convolution layers closer to the global feature layer X(5,1) bring less value to the semantic segmentation task when compared to the layers farther away from X(5,1), i.e., layers X(1,2), X(1,3), X(1,4). This variation in loss curves at the different depth levels, based on the work in [26], demonstrates the importance of the additional up-sampling nested layers towards the final multi-class segmented image.
### _The NUMSnet Model_
In recent years, Unet models and their variants have been trained and implemented for specific binary segmentation tasks, such as COVID screening using Lung-CT images [22]. While the Unet and variant models are efficient at segmentations per scan, segmenting volume stacks needs further intervention wherein pixel neighborhood information can be transmitted to the next ordered scan, thereby allowing better resolution of semantic segmentations by training on a few images. The NUMSnet model is a 3D extension of the Unet++ model, wherein the outcomes of the nested Unet++ layers from each image are transmitted to the next scan. For each subsequent scan, the transmitted layer image features from the previous scan are concatenated and convolved with the same-layer equivalent of the current image features, followed by the regular convolution, pooling and up-sampling operations of the remaining layers, as explained in the previous subsection. For example, the output from layer X\({}^{n}\)(1,2) of the \(n\)-th training image, with dimensions [256x256x32], is concatenated with the output of layer X\({}^{n+1}\)(1,2) of the very next training image and convolved with a [3x3] kernel to result in a [256x256x32]-dimension image/feature output for the layer X(1,2) of training image \(n+1\). Similarly, the outputs of the other nested layers X(1,3), X(1,4), X(2,2), X(2,3), X(3,2) are propagated to the next ordered scan. For an optimal NUMSnet model, we apply batch normalization to encoder layers only and dropout at layers X(4,1), X(5,1) only. 1 Also, the widths of the kernels per depth layer for the NUMSnet model are [35,70,140,280,560], similar to those of the wUnet model. This process of transmitting and concatenating layer-specific features with those of the subsequent ordered images generates finer boundary-condition outcomes. This variation of the Unet++ model to generate the NUMSnet model is shown in Fig. 5. The additional layers generated in this process are shown in the model diagrams in the Appendix section, Fig. 9.
Footnote 1: Github Code available at [https://github.com/sohiniroych/NUMSnet](https://github.com/sohiniroych/NUMSnet)
The NUMSnet model processes the 3D medical image stacks as follows. For the first image in the training stack, the nested layer outcomes are convolved with themselves due to the lack of previous-scan image features. Next, as the training progresses, the 6 nested layer outputs and the model layer states are collected for each scan and propagated to the next scan. The weights and biases of the model neurons back-propagate to minimize the loss function, and this process continues per training batch and epoch. It is noteworthy that while the training samples are ordered, the test samples may be out of order, starting at the other end of the stack or starting at a new volumetric stack. In the testing phase, the nested layer outputs and model layer weights and biases are collected per test image and passed to the next image. Once the NUMSnet model is optimally trained, out-of-order scans in test stacks do not significantly impact the segmentation outcomes. All other parameters, including data augmentation, loss functions, batch size, compiler, learning rate and reported metrics, are kept similar to those of the Unet model and variants to realize the segmentation enhancements per epoch.
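A minimal sketch of the cross-scan propagation for one nested node is given below (illustrative Keras-style code; the released implementation may differ in detail):

```python
from tensorflow.keras import layers

def propagate_node(curr_feat, prev_feat, width):
    """Merge the nested-node output of the previous ordered scan
    (prev_feat) with that of the current scan (curr_feat) and
    convolve back to 'width' planes, e.g. for node X(1,2)."""
    if prev_feat is None:          # first scan in the stack:
        prev_feat = curr_feat      # the node is merged with itself
    x = layers.concatenate([curr_feat, prev_feat])
    return layers.Conv2D(width, 3, padding="same", activation="relu")(x)
```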
The primary improvement in semantic segmentation capability for a volume stack introduced by NUMSnet over the Unet [3] and Unet++ [4] models is the additional skip connections across scans that magnify pixel neighborhood features across scans. For medical images, the relative variation in pixel neighborhoods is significantly smaller than in regular camera-acquired images, such as those for autonomous driving or satellite imagery [6]. Thus, the feature-level propagation across scans enhances the decision making around boundary regions, especially for smaller ROIs. However, the additional nested layer concatenation introduces a higher number of parameters in the Unet-variant models, which leads to slower training times and higher GPU memory requirements for model training. In this work, we use an Nvidia RTX 3070 with 8GB of GPU RAM on an Ubuntu laptop and the tensorflow/keras libraries to train and test the volume segmentation performances. In instances where models have a high number of parameters, keeping a small batch size of 5-10 ensures optimal model training. As an estimate of the model computational complexity, the number of trainable parameters in Unet is 7.7 million, which increases to 9 million parameters in the Unet++ model and 11.71 million in the NUMSnet model.
The NUMSnet model has two key hyper-parameters. First, the relative location of the training scans in the 3D volume stack impacts the training phase. Since layer information is transmitted to the subsequent ordered scans, selecting training scans that contain the ROIs in several subsequent scans is important. We analyze this sensitivity to training data location in a 3D stack by varying the location of the reference training frame from the beginning to the middle of the stack, followed by selecting the subsequent or randomly selected frames in order. For example, this ensures that in the Heart-CT stacks, if an aortic region is detected for the first time in a scan, the ROI first increases and then decreases in size as training progresses. The second hyper-parameter for the NUMSnet model is the number of up-sampling layer features that can be transmitted across scans. If all the 10 up-sampled layer features from layers [X(1,2), X(1,3), X(1,4), X(1,5), X(2,2), X(2,3), X(2,4), X(3,2), X(3,3), X(4,2)] from Fig. 5 are transmitted to the subsequent scans, this would incur higher computational complexity (14.5 million trainable parameters). We analyze the segmentation performance using this NUMSnet model variant (called NUMS-all), where the outcomes of all 10 up-sampling layers are transmitted. The primary reason for transmitting only up-sampling layers is that up-sampling generates image feature expansion based on pixel neighborhood estimates. Thus, added information during the up-sampling process further aids the foreground versus background decision-making process per image plane.
## IV Experiments and Results
In this work, we analyze the performance of the Unet model and variants for multi-class semantic segmentation on volumetric scans using only 10% of the annotated data for training. To analyze the importance of nested layer propagation across subsequent images, we perform four sets of experiments. First, we comparatively analyze the segmentation performance per ROI for the NUMSnet when compared to the Unet [3] model and its variants [4] for the Lung-CT image stacks. Second, we analyze the sensitivity of the NUMSnet model to the relative position and selection of training data for random ordered sampling versus sequential sampling from the beginning or middle of the volumetric stack. Third, we analyze the semantic segmentation performance of the NUMSnet model when only nested layer features are transmitted versus when all up-sampling layer features are transmitted (NUMS-all). Fourth, we assess the semantic segmentation capability of NUMSnet in comparison with Unet variants for transfer learning of weights and biases from segmenting 3 ROIs (in Lung-CT stacks) to segmenting 7 ROIs (in Heart-CT stacks).

Fig. 4: Example of loss functions per depth layer in the Unet++ model using the deep-supervision feature on the Lung-med training dataset. The resized image outcome from X(4,2) achieves lower segmentation resolution when compared to the outcome from X(1,5). Thus, nested layers enhance local boundary-region-specific features for segmentation.

Fig. 5: The proposed NUMSnet that propagates the image features from the 6 nested layers across scans. The outcome of each nested layer is concatenated and convolved with the equivalent layer of the subsequent ordered image in the 3D stack.
### _Multi-class Segmentation Performance of Unet Variants_
For any multi-class semantic segmentation model, it is important to assess the computational complexity introduced by additional layers, in terms of the number of trainable parameters, jointly with the semantic segmentation performances. Table I shows the variations in the number of trainable and non-trainable parameters for all the Unet variants analyzed in this work. Here, we find that Unet is the fastest model, while NUMS-all has almost twice the number of trainable parameters when compared to Unet. Also, the NUMSnet model is preferable to NUMS-all with regard to computational complexity, as it has a lower chance of overfitting [27].
Next, we analyze the multi-class semantic segmentation performances of the NUMSnet and Unet model variants. In Table II, the semantic segmentation performances for the 3 regions of interest in the Lung-med dataset, averaged across 5 random ordered training dataset selections, are presented. Here, we observe that the performance of Lung segmentation is the best and similar across all the Unet variants, with Dice scores ranging between 83-94%. This is intuitive since the lung is the largest region that is annotated in most images. The Unet and variant models preferentially extract this ROI with minimal training data. However, for segmentation of opacity (GGO) and consolidation (Con) regions, the NUMSnet model has the highest \(Re\), and its average Dice scores are 3-9% better than those of the Unet++ model. Some examples of Unet and variant model segmentations are shown in Fig. 6. Here, we observe that for small as well as large ROIs, the NUMSnet has better segmentation resolution when compared to all other Unet variants.
For all the Unet variants under analysis, the number of epochs is 60 and the optimal loss function is the BDL, with the Dice coefficient as the reported metric. We observe poor convergence with the DL loss function since the large lung regions get weighted more by the DL, thereby resulting in high accuracy for Lung segmentation but poor performances for the GGO and consolidation segmentations.
Next, we analyze the segmentation performances on the smaller Lung-CT stacks from Radiopaedia (Lung-rad) in Table III. Here, we observe that Unet++ has the best performance for segmenting consolidations, while NUMSnet has the best performance for segmenting GGO and Lung regions, with 3-5% improved Dice coefficients for the GGO and Lung regions, respectively, over the Unet++ model. Selected good and bad segmentations on this dataset are shown in Fig. 7. Here we observe that the Lung region is well detected by all the Unet model variants, but the Unet misclassifies the GGO as consolidation (in row 2, red regions are predicted as green), while the NUMSnet under-predicts the GGO regions. The reason for the lower performance on the Lung-rad stacks when compared to the Lung-med stack is that fewer frames in sequence are available for training per stack. Thus, for denser volumetric stacks the NUMSnet has better multi-class segmentation performance than for shorter stacks with few images.
### _Sensitivity to Training Data_
In this experiment, we modify the training dataset sequence and observe the segmentation performance variations. We comparatively analyze the performances for three sets of variations in training and test sequences. The first set comprises a training dataset that starts from the first scan in the image stack as the reference image, followed by sequential extraction of 10% of images per stack for training. All remaining images in sequence are considered test samples, while 1% of the test samples are withheld for hyper-parameterization as a validation dataset. This is called the \(Initial,Seq\) set. The second set comprises training images that start from the middle scan per 3D stack. 10% of the subsequent scans can randomly be selected, while maintaining the order of images, to generate the training sequence. All remaining images are test data, with 1% of images randomly removed as a validation dataset. This is called the \(Mid,Rand\) set. The third set starts the training images from the middle scan per stack and selects 10% of frames in a sequence as training data. All remaining images are test data, with 1% of images separated for validation tasks. This is called the \(Mid,Seq\) set. The variations in multi-class semantic segmentations for the Lung-med and Lung-rad scans for all three training/test sets are shown in Table IV.
Here, we observe that the segmentation performances for the \(Initial,Seq\) train/test set are consistently worse than for the training sets that begin at the middle of each volume stack. This is intuitive since the initial scans often contain no annotations or minimal ROIs, being a precursor to the intended ROIs. Thus, using the \(Initial,Seq\) training dataset, the NUMSnet model does not learn enough to discern the small ROIs in this stack. Also, we observe that the performances of the \(Mid,Rand\) and \(Mid,Seq\) training sets are similar for the Lung-med stack. Besides, we observe a 10% improvement in \(Pr\) and \(Dice\) for \(Mid,Seq\) over \(Mid,Rand\) for GGO segmentations only. Thus, selecting training images in the middle of 3D stacks with random ordered selection is important for training a multi-class NUMSnet model.

Fig. 6: Example of Lung CT segmentation by the Unet variant models. Row 1 represents poor segmentation results. Row 2 represents good segmentation results, since the major ROI is the Lung. The color coding is as follows. Blue: Lung regions, Red: GGO regions, Green: Consolidation regions, Magenta: Over-detection of Consolidation regions.

Fig. 7: Example of Lung CT segmentation by the Unet variant models. Row 1: Best case detections, Row 2: Worst case detections. The color coding is as follows. Blue: Lung regions, Red: GGO regions, Green: Consolidation regions.
### _Performance Analysis for NUMSnet variants_
In the third experiment, we analyze the number of up-sampling layers that should be propagated to subsequent training scans for optimal multi-class segmentation tasks per volume. In Table V, we analyze the segmentation performances of NUMS-all for the Lung-CT stacks, where all 10 up-sampling layers are transmitted. Comparing the Dice scores for the Lung-med stack for NUMS-all with those of NUMSnet in Table II, we observe that NUMS-all improves segmentation \(Pr\), but the overall segmentation performances for GGO, Con and Lung are similar. However, for the Lung-rad stacks, comparing Table V and Table III, we observe an 8% improvement in consolidation segmentation using NUMS-all. However, given that NUMS-all has higher computational complexity without significant improvement in overall segmentation performances, the NUMSnet model can be considered superior to NUMS-all while training with limited images.
### _Transfer Learning for Heart-CT Images_
Finally, we analyze the transfer learning capabilities of pre-trained Unet and variant models from Lung-CT to the Heart-CT stacks. The trained models from the Lung-med image stack are saved and all layers before the final layer are unfrozen, to be retrained on the Heart-CT dataset. The only difference in the Unet and variant models between the Lung-CT and the Heart-CT image sets is the final number of classes in the last layer X(1,5). Re-using the weights and biases of all other layers provides a warm start to the model and aids faster convergence of the loss functions while training with randomly selected ordered training images. For this experiment, the performance of each Unet variant in segmenting regions with label pixel values [205, 420, 500, 550, 600, 820, 850] is represented by the model name and [pix\({}_{205}\), pix\({}_{420}\), pix\({}_{500}\), pix\({}_{550}\), pix\({}_{600}\), pix\({}_{820}\), pix\({}_{850}\)], respectively, in Table VI. Here, we observe that NUMSnet has superior segmentation performances for the smaller ROIs with pixel values [500, 550, 600, 820, 850], with 2-10% improvements in Dice scores for these regions over the Unet++ model. Thus, the NUMSnet model aids transfer learning across anatomical image stacks and across label types, and yields higher precision for smaller ROIs. Some examples of good and average segmentations using the Unet model variants on the Heart-CT stack are shown in Fig. 8. Here, we observe significant variations for smaller ROIs across the Unet model variants.
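A minimal tf.keras sketch of this warm start is given below; the file name and layer indexing are hypothetical assumptions, and only the final classification layer is replaced to output 7 planes:

```python
import tensorflow as tf

# load the model trained on Lung-CT (d = 3 output planes)
base = tf.keras.models.load_model("numsnet_lung.h5", compile=False)

# assume the penultimate layer holds the pre-classification features;
# replace only the final layer X(1,5) with a 7-class head
x = base.layers[-2].output
out = tf.keras.layers.Conv2D(7, 1, activation="sigmoid", name="heart_out")(x)
model = tf.keras.Model(base.input, out)

# all remaining layers stay trainable ('unfrozen') for fine-tuning
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy")
```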
## V Conclusion
In this work we present a novel NUMSnet model, a variation of the Unet++ model [4], specifically for multi-class semantic segmentation in 3D medical image stacks using only 10% of the images per stack, selected randomly in an ordered manner around the central scan of the 3D stacks. The novelty of this model lies in the temporal transmission of spatial pixel-neighborhood feature information across scans through the nested layers. The proposed model achieves 3-9% improvements in Dice scores over Unet++ and other Unet model variants for segmenting 3-7 ROIs per volumetric stack.
The comparative performance of the NUMSnet model with existing works that train deep learning models on larger training datasets is shown in Table VII. Here, we observe that the proposed NUMSnet model achieves comparable or improved semantic segmentation performances across a variety of anatomical CT image stacks with only a fraction of the training images. This demonstrates the importance of nested layer transmission for enhanced boundary segmentations, especially for relatively small ROIs. For the Lung-CT stacks, the work of Voulodimos et al. [28] introduced a few-shot method using a Unet backbone for GGO segmentation only; while this method achieved high precision and accuracy, it had low recall and Dice coefficients. Also, for the same dataset, the work of Saood et al. [2] used a small fraction of images for training and achieved better binary segmentation performances than multi-class segmentation performances. It is noteworthy that no existing works have benchmarked segmentation performances for the Lung-rad image stacks. For the Heart-CT stacks, most works trained on 20 CT stacks and tested on another 20 stacks for high precision of segmentation per ROI. In this work, we apply a model pre-trained on Lung-CT and fine-tune it on 4.6% of all Heart-CT images to obtain similar segmentation performances.
Additionally, in this work, we analyze a variety of sampling methods to optimally select the minimal 10% training set. We conclude that random selection of ordered scans is the optimal mechanism to select a minimal training set. Further, we analyzed the optimal number of up-sampling layers that should be transmitted for the best semantic segmentation performance. Here, we conclude that the nested layers from a Unet++ model are significant for transmission, while transmitting additional up-sampling layers increases the overall computational complexity of the NUMSnet model without significantly contributing to segmentation performance for sparse training image sets.
Finally, we assess the transfer-learning capabilities of the NUMSnet model that is pre-trained on Lung-CT stacks and fine-tuned on Heart-CT images. We conclude that the NUMSnet model aids transfer learning for similar medical image modalities even if the number of classes and ROIs change significantly. This aligns with the works in [29][30], which demonstrate that pre-trained models from one medical image modality scale to other medical image stacks. Future work can be directed towards extending the NUMSnet model to additional medical image modalities such as X-rays, OCT and MRI stacks.
## VI Discussion
One key limiting condition for semantic segmentation using the Unet model and its variants is when scans include written text on them. These irregularities can interfere with the segmentation of the outermost ROIs. In such situations, an overall mask can be generated centered around all the ROI regions and superimposed on the original image before passing it to the Unet and variant models, thus eliminating the written-text region for enhanced classification performance. Another alternative for reliable end-to-end segmentation in these cases, if enough annotated images are available, is to train two Unet or variant models: the first Unet variant model detects the foreground region, and the second Unet model segments the ROIs, as shown in [5].

Fig. 8: Examples of Heart CT segmentation by the Unet variant models. Row 1: Good segmentations. Row 2: Average segmentations. For Row 2 we observe the variations in the small ROI on the left corner of the image across the Unet variants.
It is noteworthy that the single-stage Unet model and its variants are easily trainable with few annotated images and typically do not overfit. However, for high-resolution images such as whole-slide images (WSI), where the dimensions of the medical images are a few thousand pixels per side, resizing such images to smaller dimensions to fit a Unet model or its variants may result in poor segmentation [31]. In such scenarios, splitting the images into smaller patches followed by training the Unet model and variants can improve segmentation performance, as shown in [28].
One key consideration for multi-class segmentation using Unet variant models is the disparity between ROI sizes, which can significantly impact the training stages when only a few annotated training images are available. For example, in the Lung-CT image stacks, the lung regions are relatively larger than the GGO and consolidation areas, because of which using few training images and a Dice-coefficient loss over hundreds of epochs can bias the model to segment the lung region only. This occurs since the relative variation in pixel neighborhoods for larger ROIs is smaller than for the pixel neighborhoods in smaller ROIs. In such situations, it is crucial to ensure that more training images are selected that have the smaller ROIs annotated, and that the Unet variant models are run for about 40-60 epochs with region-sensitive loss functions.
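As a concrete example of such a region-sensitive loss, the sketch below shows a class-weighted Dice loss; this is an assumption about one reasonable form such a loss can take, not the exact loss used in the paper:

```python
import torch

def weighted_dice_loss(probs, onehot_target, class_weights, eps=1e-6):
    """Region-sensitive Dice loss: up-weighting small ROIs (e.g. GGO,
    consolidation) counteracts the bias toward the large lung region.

    probs:          (B, C, H, W) softmax probabilities
    onehot_target:  (B, C, H, W) one-hot ground truth
    class_weights:  (C,) larger values for smaller ROIs (tuned per dataset)
    """
    dims = (0, 2, 3)  # sum over batch and spatial axes, keep classes
    intersection = (probs * onehot_target).sum(dims)
    cardinality = probs.sum(dims) + onehot_target.sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)
    w = class_weights / class_weights.sum()
    return 1.0 - (w * dice_per_class).sum()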
Finally, for transfer-learning applications, full-image network weights transfer better than those of Unet model variants trained on image patches, such as in [28]. Future efforts can be directed towards the transfer-learning capabilities of the proposed NUMSnet model on WSI and patch image sets.
The proposed NUMSnet model layers and interconnections are shown in Fig. 9. The layer interconnections from the NUMS-all model are shown in Fig. 10.
|
2306.14708 | A Simple and Effective Baseline for Attentional Generative Adversarial
Networks | Synthesising a text-to-image model of high-quality images by guiding the
generative model through the Text description is an innovative and challenging
task. In recent years, AttnGAN based on the Attention mechanism to guide GAN
training has been proposed, SD-GAN, which adopts a self-distillation technique
to improve the performance of the generator and the quality of image
generation, and Stack-GAN++, which gradually improves the details and quality
of the image by stacking multiple generators and discriminators. However, this
series of improvements to GANs all carry a certain amount of redundancy, which
affects the generation performance and complexity. We use
the popular simple and effective idea (1) to remove redundancy structure and
improve the backbone network of AttnGAN. (2) to integrate and reconstruct
multiple losses of DAMSM. Our improvements have significantly improved the
model size and training efficiency while ensuring that the model's performance
is unchanged and finally proposed our SEAttnGAN. Code is available at
https://github.com/jmyissb/SEAttnGAN. | Mingyu Jin, Chong Zhang, Qinkai Yu, Haochen Xue, Xiaobo Jin, Xi Yang | 2023-06-26T13:55:57Z | http://arxiv.org/abs/2306.14708v2 | # A Simple and Effective Baseline for Attentional Generative Adversarial Networks
###### Abstract
Synthesising high-quality images from text descriptions by guiding a generative model is an innovative and challenging task. In recent years, AttnGAN, which guides GAN training with an attention mechanism, has been proposed, along with SD-GAN, which adopts a self-distillation technique to improve the performance of the generator and the quality of image generation, and StackGAN++, which gradually improves the details and quality of the image by stacking multiple generators and discriminators. However, this series of improvements to GANs all carry a certain amount of redundancy, which affects generation performance and complexity. We use the popular simple-and-effective idea (1) to remove redundant structure and improve the backbone network of AttnGAN, and (2) to integrate and reconstruct the multiple losses of DAMSM (Deep Attentional Multimodal Similarity Model). Our improvements significantly reduce the model size and training cost while keeping the model's performance unchanged, and we finally propose our **SEAttnGAN**. Code is available at [https://github.com/jmyissb/SEAttnGAN](https://github.com/jmyissb/SEAttnGAN).
Keywords: Text-to-image, AttnGAN, Simple and effective, Redundancy, SEAttnGAN
## 1 Introduction
Text-to-image technology, at the intersection of computer vision and natural language processing, converts textual descriptions into visual representations. It finds applications in art creation, virtual scene generation, advertising design, game development, etc. [11][7][12][6][9]. Within the field of image generation, an influential framework is the Generative Adversarial Network (GAN). By employing adversarial learning, wherein the generator and discriminator are
trained simultaneously, the quality of the generated images progressively improves. GAN-based text-to-image models have rapidly evolved since the introduction of Text descriptions as supervised signals, demonstrated by GAN-INT-CLS [23]. Notable advancements include the Adversarial What-Where Network (GAWWN) with its scene layout estimator [24], the Pixel Recurrent Neural Networks (PixelCNN) that model image pixel distributions using CNNs [30], and the Stacked Generative Adversarial Networks (StackGAN and StackGAN-v2) that enhance image quality through multi-stage models or multiple generators and discriminators [35][36]. Furthermore, the Semantics Disentangling GAN (SD-GAN) addresses latent space factor confusion with a self-distillation mechanism [33].
To improve text-to-image model performance, the Attentional Generative Adversarial Network (AttnGAN) introduces the attention mechanism and the deep attentional multimodal similarity model into the generation process, which enables AttnGAN to capture word-level fine-grained information in the sentence, allowing it to synthesise fine-grained details in different subregions of an image by focusing on the relevant words in the natural language description. However, the disadvantages of AttnGAN are also apparent: AttnGAN contains multiple attention models and multiple groups of baseline networks, which makes its model structure complex and parameter-heavy and may lead to high computational complexity. To solve these problems,
* We propose our optimized simplified version, called Simple and Effective Attentional Generative Adversarial Networks (SEAttnGAN). Our work improves the structure of AttnGAN and optimises its number of parameters, while its performance remains at the same level as before optimisation.
* Additionally, to achieve the corresponding effect even after model simplification and parameter reduction, we introduce semantics into the model. Our model can use a semantic contrastive loss as auxiliary information to guide the GAN generator to more accurately generate images that match the text description.
To solve the problem of the large and complex AttnGAN model, our work proposes a simplified AttnGAN that performs similarly to the original with a significantly reduced structure and parameter count. Even in the CUB dataset generation test, using one-tenth the number of parameters of AttnGAN (our SEAttnGAN, 26.37M; AttnGAN, 230M), we surpass the image generation quality of AttnGAN, and the time consumption is much lower than that of AttnGAN, which strongly supports our motivation.
The rest of the paper is organized as follows: in Section 2, we review previous related work on the text-to-image task; Section 3 gives a detailed introduction to our method and baseline; in Section 4, our algorithm is compared with other algorithms qualitatively and quantitatively. Finally, we summarize our algorithm and possible future research directions.
## 2 Related Work
GANs [8] have the remarkable capability to generate realistic images that are indistinguishable from real ones. Dong et al. later proposed a method to generate realistic images using only natural-language written descriptions, where they changed the structure of the GAN by training a style encoder network to invert the generator network [4].
While current methods can generate images with novel properties, they do not preserve the text-independent details of the original image. To address this issue, Nam et al. propose TAGAN (Text-Adaptive Generative Adversarial Network) to generate semantically manipulated images while preserving text-independent details [20]. The key idea of PPGN (Plug & Play Generative Networks) [21] is to feed random noise vectors into generative models and combine them with optimization techniques to guide the generative process. LAPGAN (Laplacian Generative Adversarial Network) [2] is a generative model framework combining Laplacian pyramid decomposition and GANs, aiming to generate high-resolution images from coarse to fine in a multi-scale progressive manner. MirrorGAN [22] achieves text-to-image conversion by learning fine-grained correspondences, in which reverse information propagation during generation promotes consistency between text and images, and the discriminator provides an additional supervisory signal during the reverse generation process. In text-to-image synthesis, conditional GANs have been widely used, where a generator takes a text description as input and generates an image [19] that matches the description. Recent studies have shown that conditioning GANs on different factors such as object attributes, styles, and layouts can improve the quality and diversity of generated images.
AttnGAN is a representative attention-based model [32] that improves the quality and relevance of generated images by focusing on important parts of textual descriptions. These mechanisms have been used to guide generators to notice connections between words and phrases and use them to create corresponding images. A multimodal GAN [1] has been developed to generate multiple images corresponding to a single textual description; these diverse high-quality images can provide different interpretations of the same textual description. The VAE (Variational Autoencoder) is a type of generative model that has been explored for text-to-image synthesis [14]. In VAE-based text-to-image synthesis, the model learns a latent-space representation of the input text descriptions, which can be used to generate images. One of the primary advantages of VAE-based models is their interpretability, but there is still room for improvement in generating photo-realistic images that match the input textual description. PixelCNN is a generative model that generates images pixel by pixel, where each pixel is conditioned on the previous pixels [30]. The key idea behind PixelCNN is to use a convolutional neural network (CNN) to model the distribution of image pixels. PixelCNN has been shown to be effective at generating high-quality images, and it has also been used for text-to-image synthesis, where the image is generated conditioned on an input text description.
GAN-INT-CLS is the first application study that adds textual descriptions (i.e., sentence embedding vectors) as supervised signals to image generation [23]. The main goal of GAWWN is to estimate the scene layout of an image while generating it; GAWWN improves text-to-image performance by introducing a Scene Layout Estimator in the image generation process [24]. StackGAN (Stacked Generative Adversarial Networks) [35] gradually generates high-resolution images through a multi-stage generative model, which aims to solve the challenge traditional GAN models face in generating detail-rich images. StackGAN-v2 [36] improves and extends StackGAN by adding multiple generator and discriminator networks, introducing a super-resolution network, and introducing a triangular loss function and a classification loss function. SD-GAN mainly solves the confounding problem of traditional GAN models in representing factors in the latent space [18][33], and introduces a self-disentanglement learning mechanism to train the generator and discriminator networks [37], aiding the training of the generator through "self-distillation" by augmenting the Distiller.
## 3 Method
Figure 1: Overview architecture of AttnGAN
### Background: AttnGAN
AttnGAN [32] achieves fine-grained text-to-image generation through attention-driven multi-stage refinement, which focuses on related words to synthesize some details in different regions of the image.
The AttnGAN model consists of \(m\) generators (\(G_{0},G_{1},...,G_{m-1}\)), each of which generates an image as follows:
\[h_{0}=F_{0}(z,F^{ca}(T)) \tag{1}\] \[h_{i}=F_{i}(h_{i-1},F^{attn}(T,h_{i-1})) \tag{2}\] \[\hat{x}_{i}=G_{i}(h_{i}) \tag{3}\]
where \(i=1,2,\cdots,m-1\), \(z\) is a noise variable following a Gaussian distribution, \(T\) is a matrix of word vector representations, and \(F^{ca}\) uses conditional augmentation to encode each word vector in \(T\). The attention network \(F^{attn}\) will generate a weighted representation of the word vector according to the relationship between the image vectors and the text vectors.
The attention network \(F^{attn}\) takes word features and image features as input: 1) it maps word features to the image feature space; 2) it generates a new representation of the word features according to the correlation between word features and image features, which is a linear weighted combination of the image features. Specifically, with \(\hat{T}=UT\),
\[\alpha_{i,j}=\frac{\exp(\hat{t}_{i}^{T}h_{j})}{\sum_{k=1}^{m}\exp(\hat{t}_{i}^ {T}h_{k})},\quad c_{i}=\sum_{j=1}^{m}\alpha_{ij}h_{j}, \tag{4}\]
where \(U\) is the transformation matrix, which transforms word vectors \(t_{i}\) into the semantic space of image features \(h_{i}\).
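For illustration, the attention of Eq. 4 can be sketched in a few lines of PyTorch; shapes and variable names here are our own, not taken from the AttnGAN code:

```python
import torch
import torch.nn.functional as F

def word_image_attention(words, img_feats, U):
    """Attention of Eq. (4): each word queries the image feature map.

    words:     (B, n_words, d_word)  word embeddings T
    img_feats: (B, n_pix, d_img)     flattened image features h_j
    U:         (d_word, d_img)       projection into image feature space
    Returns c: (B, n_words, d_img)   per-word context vectors
    """
    t_hat = words @ U                                     # T_hat = U T
    scores = torch.bmm(t_hat, img_feats.transpose(1, 2))  # t_hat_i^T h_j
    alpha = F.softmax(scores, dim=-1)                     # softmax over pixels
    return torch.bmm(alpha, img_feats)                    # c_i = sum_j a_ij h_j
```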
The AttnGAN model jointly optimizes a generative adversarial loss and a deep attentional multimodal similarity loss. The generative adversarial loss contains conditional and unconditional likelihood losses, where the unconditional loss determines whether an image is real or fake, and the conditional loss determines whether an image and a sentence match. DAMSM learns two neural networks that map image sub-regions and sentence words into a common semantic space to compute a fine-grained loss for image generation.
### Our Proposed Method
In AttnGAN, multiple generators generate images of increasing scale, and an attention mechanism is used for each generator. In order to reduce the complexity of the model, we replace the multiple generators with up-sampling modules that output ever-larger feature maps and use attention only on feature maps at a single scale; to align with the word embedding vectors, we insert multiple down-sampling modules, as shown in Fig. 2.
In this paper, we simplify the network structure of AttnGAN. For the input of the generator, we mix sentence features with noise. After data augmentation, the input is first reshaped through a fully connected layer. Then a series of Up-Block layers is applied to extract image features. The Up-Block layer is composed of an upsample layer, a residual block, and DF-Blocks that fuse the text and image features during the generation process. The resulting image features enter the attention layer together with the text features. Finally, the image features are re-extracted by one more Up-Block layer and converted to an image by a convolution layer. The detailed architecture is shown in Fig. 2.
We simplified the existing DFBlock structure and use a new DFBlock consisting of a two-layer linear MLP with a ReLU-activated hidden layer to further extract sentence features, mapping them from a 256-dimensional space to a 64-dimensional space to obtain a new feature representation. The new structure is lightweight, reducing the number of model parameters, simplifying the computation, and accelerating training. The detailed architecture is shown in Fig. 3.
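A minimal sketch of this simplified DFBlock is given below; the hidden width of 128 is an assumption, as the text only specifies the 256-to-64 mapping:

```python
import torch.nn as nn

class SimplifiedDFBlock(nn.Module):
    """Sketch of the lightweight DFBlock described above: a two-layer
    MLP with a ReLU hidden layer that maps the 256-d sentence feature
    to a 64-d representation used to condition image features.
    """
    def __init__(self, in_dim=256, hidden_dim=128, out_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, sent_feat):
        return self.mlp(sent_feat)  # (B, 256) -> (B, 64)
```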
The discriminator converts images into features through a series of DownBlocks. The sentence vector is then replicated and concatenated with the image features. By distinguishing generated images from real samples, the discriminator promotes synthesized images with higher quality and text-image semantic consistency. The word features are used at the word level to compute a loss that measures the similarity between image and text.

Figure 2: Architecture of SEAttnGAN. FC: fully connected layer. UPBlock: residual block + DFBlock. DownBlock: downsample + residual block
The text encoder is a bi-directional Long Short-Term Memory (LSTM) network, and we directly use the pre-trained model provided by AttnGAN.
The image encoder is a Convolutional Neural Network (CNN) that maps images to semantic vectors; we directly use the Inception-v3 model pretrained on ImageNet. To generate realistic images with multiple levels (i.e., sentence level and word level) of conditions, the loss function of our network is defined as:
\[\mathcal{L}=\mathcal{L}_{G}+\gamma\mathcal{L}_{D} \tag{5}\]
where
\[\mathcal{L}_{G}=-E_{G(z)\sim p_{g}}[D(G(z),t)] \tag{6}\] \[\mathcal{L}_{D}=\lambda\mathcal{L}^{s}+(1-\lambda)\mathcal{L}^{w}.\]
Here \(\gamma\) and \(\lambda\) are hyperparameters balancing the terms of the equation. We define these two loss functions as follows.
\[\begin{split}\mathcal{L}^{s}=&-E_{x\sim P_{r}}[\min(0,-1+D(x,t))]\\ &-(1/2)E_{G(z)\sim P_{g}}[\min(0,-1-D(G(z),t))]\\ &-(1/2)E_{x\sim P_{mis}}[\min(0,-1-D(x,t))]\\ &+kE_{x\sim P_{r}}[(\|\nabla_{x}D(x,t)\|+\|\nabla_{t}D(x,t)\|)^{p}]\end{split} \tag{7}\]
where \(z\) is the noise vector; \(t\) is the sentence vector; \(P_{g},P_{r},P_{mis}\) represent the synthetic, real, and mismatching data distributions, respectively; and \(k\) and \(p\) are gradient-penalty hyperparameters.
\[\mathcal{L}^{w}=-\sum_{i=1}^{M}\log\frac{\exp(\mu\cdot sim(c_{i},t_{i}))}{\sum_{j=1}^{M}\exp(\mu\cdot sim(c_{i},t_{j}))} \tag{8}\]
Figure 3: (1) The architecture of the DFBlock: a 2-layer linear MLP applied to the tensor from the previous layers. (2) The architecture of our model's loss function.
where
\[c_{i}=\sum_{j=0}^{n-1}\alpha_{j}x_{j},\quad\alpha_{j}=\frac{\exp(\mu_{1}s_{i,j})}{\sum_{k=0}^{n-1}\exp(\mu_{1}s_{i,k})} \tag{9}\]
\(s\) is the matrix product of the entire image feature and word feature, defined by \(t^{T}x\). \(\mu\) and \(\mu_{1}\) are smoothing factors determined by experiments. In a batch of sentences, only \(t_{i}\) matches the image \(x_{i}\), and all other \(M-1\) sentences are treated as mismatching descriptions. \(sim(c_{i},t_{i})\) is the cosine similarity between \(c_{i}\) and \(t_{i}\), which equals \(\frac{c_{i}^{T}t_{i}}{||c_{i}||\,||t_{i}||}\). Based on experiments on the validation set, we set the hyperparameters in this section as: \(\gamma=5\), \(\lambda=0.2\), \(\mu=5\), \(\mu_{1}=10\). The detailed architecture is shown in Fig. 3.
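The word-level loss of Eq. 8 is, in effect, a contrastive cross-entropy over the batch of matching pairs; a minimal PyTorch sketch (variable names are our own) is:

```python
import torch
import torch.nn.functional as F

def word_level_loss(c, t, mu=5.0):
    """Contrastive word-level loss of Eq. (8): within a batch of M
    (context, sentence) pairs, only the matching pair is positive.

    c, t: (M, d) context vectors c_i and matching text vectors t_i
    """
    c = F.normalize(c, dim=-1)
    t = F.normalize(t, dim=-1)
    sims = mu * (c @ t.T)                    # (M, M) scaled cosine similarities
    targets = torch.arange(c.size(0), device=c.device)
    # Cross-entropy with the diagonal as targets equals -sum_i log softmax_i,i.
    return F.cross_entropy(sims, targets, reduction='sum')
```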
## 4 Experiment
**Datasets.** We selected two datasets: CUB Bird [31] and COCO [16]. The CUB dataset contains 11,788 bird images of 200 species, each with 10 text descriptions. The COCO dataset has 80,000 images for training and 40,000 images for testing, each with five text descriptions.
**Experiment settings.** Both the generator and discriminator of our SEAttnGAN use the Adam optimizer. The generator uses the parameters \(\beta_{1}=0\), \(\beta_{2}=0.9\) and a learning rate of 0.0001 [13], while the discriminator uses \(\beta_{1}=0.0\), \(\beta_{2}=0.9\) and a learning rate of 0.0004.
**Evaluation metrics.** We choose IS (Inception Score) and FID (Fréchet Inception Distance) to evaluate the quality of the generated images. IS [27] measures the KL divergence between the conditional label distribution predicted by a pre-trained Inception V3 model for each generated image and the marginal label distribution over all generated images; higher IS values indicate better diversity and authenticity of the generated images. It is not perfect, though, as it may not capture the quality of detail and realism of the resulting images. Therefore, when evaluating image generation models, it is usually necessary to combine other indicators and human subjective evaluation for a comprehensive assessment.
The Fréchet Inception Distance (FID) [10] is a metric used to measure the quality of image generation. It is based on the statistical feature difference between generated and real images: it first uses a pre-trained image classifier (such as the Inception V3 model) to extract feature representations of the real and generated images, and then uses the Fréchet distance, computed from the sample means and covariance matrices of the two feature sets, to measure the difference between the two feature distributions. The lower the FID value, the better the quality of the generated images and the closer the match to the real image distribution. The advantage of FID over other metrics is that it can capture more subtle differences in image quality and correlates well with human subjective evaluation. However, FID also has some limitations, such as requiring a pre-trained classifier and a large number of image samples.
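For reference, given the feature means and covariances, FID reduces to the closed-form expression \(\mathrm{FID}=\|\mu_{r}-\mu_{g}\|^{2}+\mathrm{Tr}(\Sigma_{r}+\Sigma_{g}-2(\Sigma_{r}\Sigma_{g})^{1/2})\), which can be sketched as:

```python
import numpy as np
from scipy import linalg

def fid(mu_r, sigma_r, mu_g, sigma_g):
    """Frechet Inception Distance between real and generated feature
    statistics (means and covariances of Inception-v3 activations)."""
    diff = mu_r - mu_g
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    if np.iscomplexobj(covmean):  # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    return diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean)
```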
### Quantitative Comparison
In this section, we compare our new method with baseline methods and several other methods, reporting quantitative metrics such as IS and FID on the CUB Bird [31] and COCO [16] datasets. In particular, we also compare the memory and time consumed by these methods during inference. During model training, some models use additional pre-trained models: SD-GAN [33] uses a third-party COCO pre-trained model [25], CPGAN [15] uses an additional pre-trained YOLO v3 [5], XMC-GAN [34] uses additional pre-trained VGG-19 [28] and BERT [3] models, DAE-GAN uses additional NLTK POS tags and manually designed rules, and TIME uses an additional two-dimensional positional encoding.
As shown in Table 1, our method achieves performance comparable to other methods on the IS and FID metrics for both datasets. In particular, our method outperforms the baseline model AttnGAN; for example, on CUB, our method achieves an FID of 15.03, which is better than AttnGAN's 23.98. In terms of inference time, our method runs about 37 times faster than the baseline (from 4.05 seconds to 0.11 seconds). Considering the memory footprint of the model, our method is much lower than other methods; compared with the baseline AttnGAN, it reduces the memory footprint to about 1/9 (26.37M vs. 230M). Our SEAttnGAN did not achieve an absolute advantage on COCO, which may be because the large variety of objects in the COCO dataset makes generative models difficult to train in a short time. Comparing the FID values on the CUB and COCO datasets, we also find that image generation on COCO is more difficult than on CUB. DF-GAN [29], which follows the same simple-and-effective idea, also significantly reduces model size; still, our SEAttnGAN outperforms DF-GAN in generated image quality (IS, FID) and generation time. Besides, compared with the original loss of DF-GAN, our new loss yields an outstanding improvement, as detailed in Table 2.
| Model | CUB IS ↑ | CUB FID ↓ | COCO FID ↓ | Memory ↓ | Inference Time ↓ |
| --- | --- | --- | --- | --- | --- |
| StackGAN [35] | 3.70 | - | - | - | - |
| StackGAN++ [36] | 3.84 | - | - | - | - |
| MirrorGAN [22] | 4.56 | 18.34 | 34.71 | - | - |
| SD-GAN [33] | 4.67 | - | - | 335M | 6.18s |
| DM-GAN [37] | 4.75 | 16.09 | 32.64 | 46M | 7.06s |
| CPGAN [15] | - | - | 55.80 | 318M | - |
| TIME [17] | **4.91** | **14.30** | 31.14 | 120M | - |
| XMC-GAN [34] | - | - | **9.30** | 166M | - |
| DF-GAN [29] | 4.46 | 18.23 | 41.83 | 46.79M | 3.71s |
| DAE-GAN [26] | 4.42 | 15.19 | 28.12 | 98M | - |
| AttnGAN [32] | 4.36 | 23.98 | 35.49 | 230M | 4.05s |
| **SEAttnGAN (ours)** | 4.32 | 15.03 | 34.78 | **26.37M** | **0.11s** |

Table 1: Comparison of IS, FID, memory footprint, and inference time on the CUB Bird and COCO datasets.
### Qualitative Comparison
We compare the visualization results of our model with three different models in Fig. 4: AttnGAN [32], DF-GAN and DM-GAN [37]. As can be seen from Fig. 4, the results of AttnGAN and DF-GAN are not as realistic as those of DM-GAN and our model: the difference between the bird and the background is less obvious, especially in the leg and tail areas, and the feathers on the bird's back lack clarity. The images generated by DM-GAN also lose part of the image content. In contrast, the images generated by our model exhibit a striking feature: a high degree of discrimination between the bird and the background. The pattern of the bird in our model is easy to discern. In addition, the background in our model's images is brighter than in those generated by the other three models.
Finally, we also observe from Fig. 4 that the images in the four left columns are distorted for all models on the COCO dataset, meaning that some image content is missing, duplicated, or folded. However, compared to the images generated by the other three models, the background in our model's images shows stronger brightness, adding to the visual appeal and overall aesthetics of the output.

| Model | CUB IS ↑ | CUB FID ↓ |
| --- | --- | --- |
| SEAttnGAN with original loss | 2.88 | 34.71 |
| SEAttnGAN with new loss | 4.29 | 15.64 |

Table 2: Comparison of IS and FID on the CUB Bird dataset for the original and new loss.

Figure 4: The architecture of the SEAttnGAN. FC: fully connected layer. UPBlock: upsample + residual block + DFBlock. DownBlock: downsample + residual block
## 5 Conclusion
Based on the results above, AttnGAN utilizes two attention models during the fine-grained image generation process, which appears unnecessary. After implementing a simple and effective targeted improvement, the model's size was significantly reduced without any significant decrease in performance indicators such as FID and IS compared to the original AttnGAN. Even on the CUB FID metric, we beat our improvement target AttnGAN with one-tenth of the model size, which is an exciting result. Our improvement involved eliminating one set of attention models and adding the upsample and residual blocks, resulting in better image quality. Additionally, our combination of loss functions for the three stages of SEAttnGAN has proven to be effective.
## 6 Future work
The limitation of our model is mainly reflected in its lower performance on the COCO dataset: it occasionally generates defective images, meaning the model must be run several more times to generate a correct image. Future work will focus on overcoming these shortcomings. First, the number of parameters in our model will be further minimised to keep the model simple and effective; we anticipate improved performance and more reliable image generation from the reduced complexity. We will also direct our attention towards enhancing the attention-based GAN model itself, exploring and implementing novel techniques to refine and optimise its architecture and enhance its ability to generate high-quality images. Eventually, the simple-and-effective philosophy can be extended to most models that are large but widely used by industry.
## 7 Authorship Statement
All authors contributed to the study's conception and design. Mingyu Jin and Chong Zhang completed the construction and debugging of the basic network framework of SEAttnGAN. Simple and effective improvements to AttnGAN ideas and improvements to the Attention model were completed by Chong Zhang. Loss function reconstruction and SEAttnGAN Mathematical interpretation by Qinkai Yu. Optimising the Attention Model combined with semantic information was done by Haochen Xue. All authors wrote the manuscript, and all authors commented on previous versions. Xiaobo Jin supervised this work and made a comprehensive revision and reconstruction of the work. All authors read and approved the final manuscript.
|
2305.03724 | DualCross: Cross-Modality Cross-Domain Adaptation for Monocular BEV
Perception | Closing the domain gap between training and deployment and incorporating
multiple sensor modalities are two challenging yet critical topics for
self-driving. Existing work only focuses on a single one of the above topics,
overlooking the simultaneous domain and modality shift which pervasively exists
in real-world scenarios. A model trained with multi-sensor data collected in
Europe may need to run in Asia with a subset of input sensors available. In
this work, we propose DualCross, a cross-modality cross-domain adaptation
framework to facilitate the learning of a more robust monocular bird's-eye-view
(BEV) perception model, which transfers the point cloud knowledge from a LiDAR
sensor in one domain during the training phase to the camera-only testing
scenario in a different domain. This work results in the first open analysis of
cross-domain cross-sensor perception and adaptation for monocular 3D tasks in
the wild. We benchmark our approach on large-scale datasets under a wide range
of domain shifts and show state-of-the-art results against various baselines. | Yunze Man, Liang-Yan Gui, Yu-Xiong Wang | 2023-05-05T17:58:45Z | http://arxiv.org/abs/2305.03724v2 | # DualCross: Cross-Modality Cross-Domain Adaptation for Monocular BEV Perception
###### Abstract
Closing the domain gap between training and deployment and incorporating multiple sensor modalities are two challenging yet critical topics for self-driving. Existing work only focuses on a single one of the above topics, overlooking the simultaneous domain and modality shift which pervasively exists in real-world scenarios. A model trained with multi-sensor data collected in Europe may need to run in Asia with a subset of input sensors available. In this work, we propose DualCross, a cross-modality cross-domain adaptation framework to facilitate the learning of a more robust monocular bird's-eye-view (BEV) perception model, which transfers the point cloud knowledge from a LiDAR sensor in one domain during the training phase to the camera-only testing scenario in a different domain. This work results in the first open analysis of cross-domain cross-sensor perception and adaptation for monocular 3D tasks in the wild. We benchmark our approach on large-scale datasets under a wide range of domain shifts and show state-of-the-art results against various baselines. Our project webpage is at [https://yunzeman.github.io/DualCross](https://yunzeman.github.io/DualCross).
## 1 Introduction
In recent years, multi-modality 3D perception has shown outstanding performance and robustness over its single-modality counterpart, achieving leading results for various 3D perception tasks [14, 23, 27, 31, 36, 42] on large-scale multi-sensor 3D datasets [3, 15, 34]. Despite the superiority in information coverage, the introduction of more sensor modalities also poses additional challenges to the perception system. On one hand, generalizing the model between datasets becomes hard, because each sensor has its unique properties, such as field-of-view (FoV) for cameras, density for LiDAR, _etc._ On the other hand, the operation of the model is conditioned on the presence and function of more sensors, making it hard to work on autonomous agents with less sensor types or under sensor failure scenarios.
More specifically, transferring knowledge among different data domains is still an open problem for autonomous agents in the wild. In the self-driving scenario, training the perception models offline in a source domain with annotation while deploying the model in a different target domain without annotation is very common in practice. As a result, a model has to consider the domain gap between source and target environments or datasets, which usually involves different running locations, different sensor specifications, different illumination and weather conditions, _etc_.
Meanwhile, in addition to domain shift, modality shift is another factor which challenges the successful deployment of models. The widely adopted assumption that all sensors are available during training, validation, and deployment time is not always true in reality. Due to the cost and efficiency trade-off, or sensor missing and failure, in many scenarios we can have fewer sensors available in the target domain during testing than what we have in the source domain during training. A typical scenario is having camera and LiDAR sensors in the large-scale training phase while only having cameras for testing, as shown in Figure 1. It is not clear how to facilitate the camera-only 3D inference with the help of a LiDAR sensor only in the source domain during training.
The challenges above raise an important question: _Can we achieve robust 3D perception under both domain shift and sensor modality shift?_ Existing methods either study cross-domain scenarios assuming consistent modality [10, 14, 17, 20, 22, 29, 32, 47, 49], or study cross-modality scenarios assuming the same domain during training and validation [6, 8, 11, 13, 18, 19, 46]. However, simultaneous domain and modality shift poses additional challenges of large domain discrepancy and exacerbates the ill-posed nature of 3D inference from monocular information due to the misaligned sensory data. As we will discuss in Sec. 3.2, our new problem setting requires a novel methodology for using LiDAR without increasing the domain discrepancy.
To tackle the above challenges, we propose DualCross, a cross-modality cross-domain adaptation framework for bird's-eye-view (BEV) perception. Our model addresses the monocular 3D perception task between different domains, and utilizes additional modalities in the source domain to facilitate the evaluation performance. Motivated by the fact that image and BEV frames are bridged with 3D representation, we first design an efficient backbone to perform 3D depth estimation followed by a BEV projection. Then, to learn from point clouds without explicitly taking them as model inputs, we propose an implicit learning strategy, which distills 3D knowledge from a LiDAR-Teacher to help the Camera-Student learn better 3D representation. Finally, in order to address the visual domain shift, we introduce adversarial learning on the student to align the features learned from source and target domains. Supervision from the teacher and feature discriminators are designed at multiple layers to ensure an effective knowledge transfer.
By considering the domain gap and effectively leveraging LiDAR point clouds in the source domain, our proposed method is able to work reliably in more complicated, uncommon, and even unseen environments. Our model achieves state-of-the-art performance in four very different domain shift settings. Extensive ablation studies are conducted to investigate the contribution of our proposed components, the robustness under different changes, and other design choices.
The main contributions of this paper are as follows. (1) We introduce mixed domain and modality mismatch, an overlooked but realistic problem setting in 3D domain adaptation in the wild, leading to a robust camera-only 3D model that works in complicated and dynamic scenarios with minimum sensors available. (2) We propose a novel LiDAR-Teacher and Camera-Student knowledge distillation model, which considerably outperforms state-of-the-art LiDAR supervision methods. (3) Extensive experiments in challenging domain shift settings demonstrate the capability of our methods in leveraging source domain point cloud information for accurate monocular 3D perception.
## 2 Related Work
Multi- and Cross-modality 3D Perception.Considerable research has examined leveraging signals from multiple modalities, especially images and point clouds, for 3D perception tasks. Early work [21] projects point clouds to the BEV frame and fuses 2D RGB features to generate proposals and regress bounding boxes. Later work [48, 51] explores deep fusion between points and images. Under the umbrella of the cross-modality setting, 2DPASS [46] transfers features learned from images to the LiDAR. BEVDepth [19] obtains reliable depth estimation by exploiting camera parameters with image features during training. More recently, a line of work explores knowledge distillation from one sensor to another for 3D object detection [6, 8, 11, 13, 18, 19, 46]. On the contrary, our method explores a more realistic yet challenging setting, where we use LiDAR data in one domain (Boston/Sunny/Daylight) during training to help the camera-only model during inference in another domain (Singapore/Rainy/Night). As a result, we analyze and improve the actual usefulness of additional sensors under domain shift settings.
Cross-domain 3D Perception.While extensive research has been conducted on domain adaptation for 2D tasks, the field of domain adaptation for 3D perception in the real world has received relatively less attention. Some prior work adapts depth estimation from synthetic to real image domains [17, 50]. Working on point clouds, Point-DAN [32] designs a multi-scale adaptation model for 3D classification. For 3D semantic segmentation, Squeeze-Seg [43] projects point clouds to the 2D view, while other work [10, 14, 29] leverages point clouds and images data together. Recent work [22, 47, 49] explores cross-domain 3D object detection from point clouds. SRDAN [49] employs adversarial learning to align the features between different domains. Although prior work [14, 20] explores various domain adaptation techniques for different sensor modalities, these methods only adopt the same modalities to learn
the domain shift between source and target data. In contrast, our approach achieves robust 3D perception in a more general scenario, where the model can perform accurate 3D inference in the target domain by adapting information encoded in source-exclusive modalities.
**3D Inference in Bird's-Eye-View Frame.** Inferring 3D scenes from the BEV perspective has recently received a large amount of interest due to its effectiveness. MonoLayout [24] estimates the layout of urban driving scenes from images in the BEV frame and uses an adversarial loss to enhance the learning of hidden objects. Another work [4] proposes to employ graphical representation and temporal aggregation for better inference in the driving scenarios using on-board cameras. Recently, using BEV representation to merge images from multiple camera sensors has become a popular approach [12, 26]. Following the monocular feature projection proposed by Orthographic Feature Transform (OFT) [33], Lift-Splat-Shoot [30] disentangles feature learning and depth inference by learning a depth distribution over pixels to convert camera image features into BEV. Unlike the above work performing BEV analysis in settings with more controlled premises, we are the first to explore cross-domain and cross-sensor settings, leading to a more robust and more realistic 3D inference methodology.
## 3 Approach
In this work, we consider the task of learning BEV representation of scenes with domain shift and modality mismatch. Specifically, the model will be given annotated LiDAR point clouds and camera images in the source domain, but only unannotated camera images in the target domain. And the model seeks to achieve highest performance on the unsupervised target domain. This problem setting is common and worthwhile, especially considering the existence of many existing public multi-modality datasets and the rise of many camera-only vehicle scenarios.
Formally, **for the source domain**, we are given _labeled_ data with \(N^{s}\) multi-modality samples, \(\mathcal{D}^{s}=\{(\mathbf{X}_{i}^{s},\mathbf{P}_{i}^{s},\mathbf{y}_{i}^{s})\}_{i=1}^{N^{ s}}\), where \(s\) represents the source domain. Here \(\mathbf{X}_{i}^{s}=\{\mathbf{x}_{ik}^{s}\}_{k=1}^{n}\) consists of \(n\) camera images \(\mathbf{x}_{ik}^{s}\in\mathbb{R}^{3\times H\times W}\). The number of cameras \(n\) can take any integer as small as one, depending on the dataset or cameras deployed on the vehicle. In addition, each camera image has an intrinsic matrix and an extrinsic matrix. \(\mathbf{P}_{i}^{s}\) is a point cloud containing multiple unordered points \(\mathbf{p}\in\mathbb{R}^{3}\) represented by 3D coordinate values. And label \(\mathbf{y}_{i}^{s}\) represents rasterized representation of the scenes in the BEV coordinate. **For the target domain**, we are given _unlabeled_ data with \(N^{t}\) image samples, \(\mathcal{D}^{t}=\{\mathbf{X}_{i}^{t}\}_{i=1}^{N^{t}}\), where \(t\) represents the target domain, and we want to estimate \(\{\mathbf{y}_{i}^{t}\}_{i=1}^{N^{t}}\), the BEV representation of the scenes in the target domain.
An overview of our method DualCross is illustrated in Figure 2. DualCross is designed to extract features from monocular images and project the features into the BEV frame (Section 3.1), using estimated or ground-truth 3D depth information. The model is composed of a LiDAR-Teacher and a Camera-Student (Section 3.2), where the teacher encodes how to learn better representations given point clouds and transfers that knowledge to the camera-only student using multi-level teacher-student supervision. To bridge the domain gap between the source and target domains, we leverage adversarial discriminators at different feature layers to align the distributions across the two domains in the Camera-Student model (Section 3.3). Finally, we describe the overall learning objective and loss designs (Section 3.4).
### Learning BEV from Images
In order to achieve 3D perception under the cross-modality setting, our first challenge is to unify the image coordinates, point cloud coordinates, and BEV coordinates into a joint space. We follow LSS [30] to transform the image features from perspective view into the BEV view. Specifically, we tackle this problem by constructing a 3D voxel representation of the scene for each input image. We discretize the depth axis into \(N_{d}\) bins and lift each pixel of the images into multiple voxels (frustums), where each voxel is represented by the 3D coordinate of its center location. For a given pixel \(\mathrm{px}=(h,w)\) on one of the camera image, it corresponds to a set of \(N_{d}\) voxels at different depth bins:
\[V_{\mathrm{px}}=\{v_{i}=M^{-1}[d_{i}h,d_{i}w,d_{i}]^{T}|i\in\{1,2,\cdots,N_{d} \}\}, \tag{1}\]
where \(M\) is the camera matrix and \(d_{i}\) is the depth of the \(i\)-th depth bin. The feature vector of each voxel \(v_{i}\) in \(V_{\mathrm{px}}\) is the base feature \(\mathbf{f}_{\mathrm{px}}\) of pixel \(\mathrm{px}\) scaled by the depth value \(\alpha_{i}\). More specifically, \(\mathbf{f}_{v_{i}}=\alpha_{i}\cdot\mathbf{f}_{\mathrm{px}}\) for \(v_{i}\in V_{\mathrm{px}}\), where the pixel feature \(\mathbf{f}_{\mathrm{px}}\) is extracted by an image encoder, and the depth value \(\alpha_{i}\) is obtained either from LiDAR point clouds (teacher model) or by estimation (student model). The computation of \(\alpha_{i}\) is introduced in Sec. 3.2.
After getting the feature for each of the voxels, we project the voxels onto the BEV and aggregate the features to get the BEV feature map. The BEV frame is rasterized into \((X,Y)\) 2D grids, and for each grid, its feature is constructed from the features of all the 3D voxels projected into it using mean pooling. This projection allows us to transform an arbitrary number of camera images into a unified BEV frame. Finally, we obtain an image-like BEV feature embedding, which is used to estimate the final representation using a convolutional neural network (CNN) decoder.
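The lift step can be written as a single outer product between the depth distribution and the pixel features; the sketch below illustrates this (shapes and names are our own):

```python
import torch

def lift_features(pix_feats, depth_dist):
    """Lift step of Sec. 3.1: each pixel feature is replicated along the
    depth axis and scaled by its predicted depth distribution.

    pix_feats:  (B, C, H, W)    base pixel features f_px
    depth_dist: (B, D, H, W)    per-pixel distribution over D depth bins
    Returns:    (B, D, C, H, W) frustum of 3D voxel features
    """
    # Outer product over the depth and channel axes.
    return depth_dist.unsqueeze(2) * pix_feats.unsqueeze(1)
```

The resulting frustum features are then scattered into the \((X,Y)\) BEV grid and mean-pooled per cell, as described above.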
This architecture design bridges the image and LiDAR modalities through an intermediate 3D voxelized representation. Hence, we can take LiDAR point clouds as input into the model to directly guide the BEV projection without having to change the overall pipeline. This further enables the distillation of knowledge from the point clouds to images using a teacher-student model.
### LiDAR-Teacher and Camera-Student
The co-existence of domain and modality gaps poses additional challenges to the adaptation task. Although the LiDAR sensor in the source domain provides 3D knowledge to the model, it also increases the discrepancy between the two domains, which hurts the model adaptation (as we will see in Sec. 4.2 and Table 4). Hence, the unique difficulty of our work lies in exploiting the _LiDAR point clouds_ during training to guide the camera model for better 3D estimation.
**Depth Supervision by Point Clouds.** The main advantage of point clouds over the image modality is the accurate 3D positional information coming from the depth measurement. Due to the lack of LiDAR during evaluation, we cannot use point clouds as direct input of the model. Hence, one alternative approach to using point clouds is to supervise the depth estimation in the model. As in Eq. 1, for each pixel, we calculate the features of its corresponding voxels by multiplying the pixel feature with a depth value \(\alpha_{i}\). We use another head to predict a depth distribution \(\mathbf{\alpha}_{\mathrm{px}}=\{\alpha_{1},\alpha_{2},\ldots,\alpha_{N_{d}}\}\) over \(N_{d}\) depth bins for each pixel \(\mathrm{px}\).
The ground-truth depth supervision for this estimation task is generated from LiDAR point clouds as follows. When projected to the image frame, the points corresponding to one pixel can fall into three cases. If the pixel has **(1) no point inside**: its ground-truth depth distribution is omitted; **(2) only one point inside**: the ground-truth depth distribution is a one-hot vector, with the value one in the voxel that the point lies in; **(3) multiple points inside**: the ground-truth depth distribution \(\alpha_{i}\) is calculated by counting the number of points in each depth bin and dividing by the total number of points in \(V_{\mathrm{px}}\): \(\alpha_{i}=\frac{\text{Number of points in depth bin }v_{i}}{\text{Total number of points in }V_{\mathrm{px}}}\).
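A minimal sketch of this ground-truth construction for a single pixel, assuming the 1-meter bins on the [4 m, 45 m] range used in our implementation (Sec. 4), is:

```python
import torch

def depth_gt_distribution(px_depths, d_min=4.0, d_max=45.0, n_bins=41):
    """Ground-truth depth distribution for one pixel (Sec. 3.2): a
    normalized histogram of the LiDAR point depths projected into it.
    A pixel with a single point yields a one-hot vector; a pixel with
    no points should be masked out of the loss by the caller.

    px_depths: (N,) depths of the points falling inside the pixel
    """
    hist = torch.histc(px_depths, bins=n_bins, min=d_min, max=d_max)
    return hist / hist.sum().clamp(min=1.0)
```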
Using a distribution-based depth representation effectively accounts for the ambiguity when objects of different depth occur in one pixel. This happens at the boundary of the objects, and becomes more severe during feature encoding processing when images get down-sampled and each pixel represents larger space. Moreover, a probabilistic depth representation considers uncertainty during depth estimation, and degenerates to pseudo-LiDAR methods [41] if the one-hot constraint is added.
**Learning from LiDAR-Teacher.** Despite being intuitive and straightforward, direct depth supervision is not optimal for two reasons. First, LiDAR supervision acts only on an intermediate feature layer, providing no supervision on the second half of the model. Also, while LiDAR provides accurate depth measurements, "depth estimation" is still different from our overall objective of BEV representation. Motivated by this, as shown in Figure 2, we propose to use a pretrained LiDAR oracle model to supervise the image model at the final BEV feature embedding, such that the supervision of LiDAR is provided to the whole model and aligns better with the final objective. We call the model using ground-truth point cloud information the "LiDAR-Teacher," and the model to be supervised the "Camera-Student." This boils down to a knowledge distillation problem where the 3D inference knowledge of the LiDAR-Teacher is distilled to the camera-only student. Note that the classic problem of "better teacher, worse student" [7, 25, 52] in knowledge distillation due to capacity mismatch does not exist in this model, because the LiDAR-Teacher and Camera-Student models in DualCross are almost identical.

Figure 2: **Overview of our DualCross framework**. DualCross includes three components. (1) **LiDAR-Teacher** uses voxelized LiDAR point clouds to transform the image features to the BEV frame. It provides essential knowledge on how to guide image learning given LiDAR information. (2) **Camera-Student** is supervised by the teacher model as well as the LiDAR ground truth. (3) **Discriminators** are used to align features from source and target domains.
Overall, this teacher-student mechanism allows the camera model to learn better 3D representation from the point clouds, leading to better LiDAR supervision at different stages, while still keeping the model image-centric for image-only inference.
### Cross-Domain Adaptation
Since the BEV annotations and the LiDAR ground truth are only available in the source data, the model will be heavily biased toward the source distribution during teacher-student supervision. Hence, we bridge the target and source domains using adversarial training. Specifically, we place one discriminator \(D_{1}\) at the BEV decoder CNN blocks and another, \(D_{2}\), at the image encoder CNN blocks, to align the features of the two domains by optimizing the discriminator losses. While the final-layer discriminator \(D_{1}\) is consistently useful for aligning features learned from the LiDAR-Teacher and the final ground truth, we find that the middle-layer discriminator \(D_{2}\) is very effective under certain domain gaps where images change greatly but LiDAR remains robust.
To achieve adversarial learning, given a feature encoder \(E\) and input sample \(X\), a domain discriminator \(D\) is used to discriminate whether the feature \(E(X)\) comes from the source domain or the target domain. The target and source domain samples are given the label \(d=1\) and \(d=0\), respectively. And \(D(E(X))\) outputs the probability of the sample \(X\) belonging to the target domain. Hence, the discriminator loss is formulated by a cross-entropy loss:
\[\mathcal{L}_{\mathrm{dis}}=d\log D(E(X))+(1-d)\log(1-D(E(X))). \tag{2}\]
Moreover, in order to learn domain-invariant features, our feature encoder \(E\) should try to extract features that fool the discriminator \(D\), while the discriminator \(D\) tries to distinguish the correct domain label of the samples. This adversarial strategy can be formulated as a "min-max" optimization problem: \(\mathcal{L}_{\mathrm{D}}=\min_{E}\max_{D}\mathcal{L}_{\mathrm{dis}}\). The "min-max" problem is solved with a Gradient Reverse Layer (GRL) [9], which reverses the gradient coming from the discriminator \(D\) to learn the domain-invariant encoder \(E\). The loss form is the same for both \(D_{1}\) and \(D_{2}\).
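For completeness, a standard PyTorch implementation of the GRL (identity in the forward pass, negated gradient in the backward pass) looks like the following sketch:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient Reversal Layer: identity in the forward pass, gradient
    multiplied by -lambda in the backward pass, so minimizing the
    discriminator loss simultaneously maximizes it w.r.t. the encoder.
    """
    @staticmethod
    def forward(ctx, x, lamb=1.0):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)
```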
### Full Objective and Inference
The overall objective of our model is composed of the supervision from the BEV ground truth, the LiDAR-Teacher, and the domain-alignment discriminators. Given the output rasterized BEV representation map \(\mathbf{y}\in\mathbb{R}^{X\times Y\times C}\), the ground truth (GT) loss term \(\mathcal{L}_{\mathrm{GT}}\) can be formulated as a cross-entropy loss between the estimated source-domain BEV map \(\mathbf{\tilde{y}}^{s}\) and the GT label \(\mathbf{y}^{s}\):
\[\mathcal{L}_{\mathrm{GT}}(\mathbf{\tilde{y}}^{s},\mathbf{y}^{s})=-\sum_{i=1}^{X}\sum_{j=1}^{Y}\sum_{k=1}^{C}y^{s}_{(i,j,k)}\log\tilde{y}^{s}_{(i,j,k)}. \tag{3}\]
The supervision from the LiDAR-Teacher is composed of a direct depth estimation loss \(\mathcal{L}_{\mathrm{dp}}\) and a teacher feature supervision \(\mathcal{L}_{\mathrm{T}}\). As described in Sec. 3.1, given the 3D depth volume \(\mathbf{\alpha}\in\mathbb{R}^{H\times W\times N_{d}}\), the direct depth supervision term \(\mathcal{L}_{\mathrm{dp}}\) is formulated as a cross-entropy loss between the estimated 3D depth distribution volume \(\mathbf{\tilde{\alpha}}^{s}\) in the source domain, and the GT depth volume \(\mathbf{\alpha}^{s}\) calculated from LiDAR point clouds as described in Sec. 3.2:
\[\mathcal{L}_{\mathrm{dp}}(\mathbf{\tilde{\alpha}}^{s},\mathbf{\alpha}^{s})=-\sum_{i=1}^{H}\sum_{j=1}^{W}\sum_{k=1}^{N_{d}}\alpha^{s}_{(i,j,k)}\log\tilde{\alpha}^{s}_{(i,j,k)}. \tag{4}\]
And for the LiDAR-Teacher feature supervision: \(\mathcal{L}_{\mathrm{T}}(\mathbf{F}^{\mathrm{te}},\mathbf{F}^{\mathrm{st}})=\mathcal{ L}_{2}(\mathbf{F}^{\mathrm{te}},\mathbf{F}^{\mathrm{st}})\) is an \(\mathcal{L}_{2}\) loss, where \(\mathbf{F}^{\mathrm{te}}\) and \(\mathbf{F}^{\mathrm{st}}\) are the feature maps of teacher and student models, respectively. Finally, the domain adaptation loss contains \(\mathcal{L}_{\mathrm{D}_{1}}\) and \(\mathcal{L}_{\mathrm{D}_{2}}\) with the form described in Eq. 2.
**The final objective** is formulated as a multi-task optimization problem:
\[\mathcal{L}_{\mathrm{DualCross}}=\mathcal{L}_{\mathrm{GT}}+\lambda_{\mathrm{T}}\mathcal{L}_{\mathrm{T}}+\lambda_{\mathrm{dp}}\mathcal{L}_{\mathrm{dp}}+\lambda_{\mathrm{D}_{1}}\mathcal{L}_{\mathrm{D}_{1}}+\lambda_{\mathrm{D}_{2}}\mathcal{L}_{\mathrm{D}_{2}}, \tag{5}\]
where \(\lambda_{\mathrm{T}}\), \(\lambda_{\mathrm{dp}}\), \(\lambda_{\mathrm{D}_{1}}\), and \(\lambda_{\mathrm{D}_{2}}\) are weights for the corresponding loss terms. The DualCross model is trained end-to-end using the loss term in Eq. 5. During inference, target samples are fed into the Camera-Student model to output the final BEV representation. More training details are provided in Sec. 4.
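Eq. 5 amounts to a weighted sum of the individual terms; a minimal sketch with the weights reported in Sec. 4 (the function name is illustrative):

```python
def dualcross_loss(l_gt, l_t, l_dp, l_d1, l_d2,
                   lam_t=1.0, lam_dp=0.05, lam_d1=0.1, lam_d2=0.01):
    """Eq. 5, with the default weights reported in Sec. 4."""
    return l_gt + lam_t * l_t + lam_dp * l_dp + lam_d1 * l_d1 + lam_d2 * l_d2
```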
## 4 Experiments
**Datasets and Domain Settings.** We evaluate DualCross with four unique domain shift settings constructed from two large-scale datasets, nuScenes [3] and Lyft [15], following existing LiDAR-based domain adaptation work, including SRDAN [49], ST3D [47], UDA3D [22], and xMUDA [14]. Specifically, for the _day-to-night_, _city-to-city_, and _dry-to-rain_ settings, we use the scene descriptions in the nuScenes dataset and filter them by keywords to split the dataset into corresponding subsets, creating the intra-class adaptation scenarios. For the _dataset-to-dataset_ setting, we use the official split of the nuScenes dataset, and the split provided in ST3D [47] for the Lyft dataset. All adaptation settings follow the assumption that the source has access to cameras and LiDAR sensors, while the target only has cameras. We use all six cameras provided by the nuScenes dataset. We also analyze surprising observations on cross-modality performance in the ablation study.
**Implementation Details.** Following [30], we use EfficientNet [35] pretrained on ImageNet as our image encoder backbone. Two heads are applied to estimate pixel features
and pixel-wise depth distribution from the \(8\times\) down-sampled feature map. The 3D feature maps are projected to the BEV frame using mean pooling. For the BEV decoder we use ResNet-18 as the backbone, and upsample the features learned from the first three meta-layers of ResNet to the final BEV output. The \(D_{1}\) and \(D_{2}\) domain discriminators are applied to the output feature layers of the EfficientNet and ResNet backbones, respectively. We use a lightweight discriminator architecture, which is composed of a global average pooling layer followed by two fully-connected layers, and outputs the domain label. For input, we resize and crop input images to size \(128\times 352\). For output, we consider a \(100\) meters \(\times 100\) meters range centered at the ego-vehicle, with the grid size set to \(0.5\) meters \(\times 0.5\) meters. The depth bin is set to \(1.0\) meter within the \(4.0\) meters to \(45.0\) meters range. The whole model is trained end-to-end, with \(\lambda_{\mathrm{T}}=1.0,\lambda_{\mathrm{dp}}=0.05,\lambda_{\mathrm{D_{1}}}=0.1,\lambda_{\mathrm{D_{2}}}=0.01\). We train DualCross using the Adam [16] optimizer with learning rate \(0.001\) and weight decay \(1e\)-7 for \(50\)K steps for the teacher model, and \(200\)K for the student model. We use horizontal flipping, random cropping, rotation, and color jittering augmentation during training. The whole model is implemented using the PyTorch framework [28].
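A sketch of such a discriminator, assuming 2D feature maps and a hidden width of 256 (both assumptions), could look as follows:

```python
import torch.nn as nn

class LightweightDiscriminator(nn.Module):
    """Global average pooling followed by two fully-connected layers,
    outputting a single domain logit."""

    def __init__(self, in_channels, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),        # (B, C, H, W) -> (B, C, 1, 1)
            nn.Flatten(),                   # -> (B, C)
            nn.Linear(in_channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),           # domain logit
        )

    def forward(self, feature_map):
        return self.net(feature_map)
```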
### BEV Segmentation Results and Comparisons
**Baselines.** We compare our method with state-of-the-art BEV 3D layout perception work MonoLayout [24], OFT [33], LSS [30], as well as other baseline methods in domain adaptation and cross-modality learning. _Wide-range Aug._ means using a wide range of random scaling augmentation which potentially includes the target domain scale. For _Vanilla DA_, we adapt camera-only DA-Faster [5] to our BEV perception setting. _Depth-Supv DA_ stands for depth-supervised domain adaptation. We use source domain LiDAR as ground truth to supervise the depth estimation during training, without LiDAR-Teacher supervision (only \(\mathcal{L}_{\mathrm{dp}}\) without \(\mathcal{L}_{\mathrm{T}}\)). _Input-fusion Teacher_ is an alternative way of designing the LiDAR-Teacher, where we directly fuse point \((x,y,z)\) coordinates into their corresponding image pixels as additional channels in the teacher model, similar to Pointpainting [36]. We use _DA_ and _CM_ to denote whether a model considers domain adaptation and cross-modality in design, respectively. Results are reported on vehicle, drivable roads, and lane marking classes using intersection-over-union (IoU).
**Day-to-Night Adaptation.** As shown in Table 1 on the left, we observe that our DualCross model achieves the best performance on all classes. We notice that the improvement under the Day \(\rightarrow\) Night setting is exceptionally high. This is because the initial domain gap between day and night scenarios is very large in the camera modality space. Moreover, the LiDAR sensor is robust under illumination changes, due to its active imaging mechanism as opposed to the camera's passive one. Thus, incorporating LiDAR point cloud information helps the model to learn a more robust, illumination-invariant representation from the image inputs.
**Dry-to-Rain Adaptation.** As shown in Table 1 on the right, under this setting we also observe that our DualCross model achieves the best performance on all classes.
\begin{table}
\begin{tabular}{l|c|c||c|c|c} \hline \hline
**Day \(\rightarrow\) Night** & _DA_ & _CM_ & Vehicle & Road & Lane \\ \hline MonoLayout [24] & ✗ & ✗ & 5.9 & 37.7 & 5.9 \\ OFT [33] & ✗ & ✗ & 6.6 & 40.5 & 6.0 \\ LSS [30] & ✗ & ✗ & 6.7 & 41.2 & 7.1 \\ \hline Wide-range Aug. & ✓ & ✗ & 10.3 & 46.0 & 10.4 \\ Vanilla DA & ✓ & ✗ & 11.2 & 48.8 & 11.1 \\ Depth-Supv DA & ✓ & ✓ & 15.7 & 50.5 & 14.2 \\ Input-fusion Teacher & ✓ & ✓ & 14.9 & 48.8 & 13.1 \\ \hline
**DualCross (ours)** & ✓ & ✓ & **17.0** & **51.8** & **16.9** \\ \hline \hline \end{tabular}
\end{table}
Table 1: DualCross leads to significant improvements under _day-to-night_ and _dry-to-rain_ domain shift settings. Numbers reported in IoU. _DA_ and _CM_ denote whether a model considers domain adaptation and cross-modality in design, respectively.
\begin{table}
\begin{tabular}{l|c|c||c|c|c} \hline \hline
**Dry \(\rightarrow\) Rain** & _DA_ & _CM_ & Vehicle & Road & Lane \\ \hline MonoLayout [24] & ✗ & ✗ & 20.6 & 68.7 & 13.1 \\ OFT [33] & ✗ & ✗ & 24.1 & 79.8 & 16.2 \\ LSS [30] & ✗ & ✗ & 27.8 & 71.0 & 16.8 \\ \hline Wide-range Aug. & ✓ & ✗ & 28.2 & 71.2 & 17.2 \\ Vanilla DA & ✓ & ✗ & 29.1 & 70.8 & 18.3 \\ Depth-Supv DA & ✓ & ✓ & **29.6** & 71.8 & 19.1 \\ Input-fusion Teacher & ✓ & ✓ & 29.5 & 71.0 & 18.8 \\ \hline
**DualCross (ours)** & ✓ & ✓ & **29.6** & **71.9** & **19.5** \\ \hline \hline \end{tabular}
\end{table}
Table 3: DualCross performs the best under _dataset-to-dataset_ domain gaps in IoU.
\begin{table}
\begin{tabular}{l|c|c||c||c} \hline \hline
**Boston \(\rightarrow\) Singapore** & _DA_ & _CM_ & Vehicle & Road & Lane \\ \hline MonoLayout [24] & ✗ & ✗ & 14.2 & 35.9 & 7.5 \\ OFT [33] & ✗ & ✗ & 16.8 & 37.9 & 9.6 \\ LSS [30] & ✗ & ✗ & 17.6 & 38.2 & 10.6 \\ \hline Wide-range Aug. & ✓ & ✗ & 17.9 & 40.5 & 12.4 \\ Vanilla DA & ✓ & ✗ & 13.0 & 31.4 & 9.1 \\ Depth-Supv DA & ✓ & ✓ & 19.0 & 42.8 & 14.9 \\ Input-fusion Teacher & ✓ & ✓ & 18.6 & 42.7 & 14.1 \\ \hline
**DualCross (ours)** & ✓ & ✓ & **20.5** & **43.1** & **15.6** \\ \hline \hline \end{tabular}
\end{table}
Table 2: DualCross performs the best under _city-to-city_ shift.
We notice that the improvement under the Dry \(\rightarrow\) Rain setting is not as big as in the previous setting. This is because the domain gap between dry and rain scenarios is not big in the image modality. Hence, baseline methods OFT and LSS are already able to obtain decent results even without domain adaptation. Furthermore, rainy weather is known to cause a large domain shift in the LiDAR modality [45]. As a result, the knowledge learned from source-exclusive LiDAR suffers from an unknown domain shift which hinders its usefulness. This can potentially cancel out the benefit of the 3D information learned from point clouds and explains the smaller improvement.
**Dataset-to-Dataset Adaptation.** As shown in Table 3, we also observe that our DualCross model achieves the best performance in the nuScenes \(\rightarrow\) Lyft setting. Following [30], because Lyft does not provide road segment and lane marking information in the HD map, we report results on the vehicle class. Compared with baselines with and without domain adaptation or cross-modality learning, our DualCross demonstrates superior performance in leveraging and adapting LiDAR information.
**City-to-City Adaptation.** As shown in Table 2, we observe that our DualCross model achieves the best performance on all classes for two inter-city transfer settings. Without domain adaptation, baseline approaches MonoLayout, OFT, and LSS all suffer from performance degradation. Direct depth supervision and alternative input-fusion teacher models do not bring as much improvement as DualCross. The results clearly demonstrate the effectiveness of our method by distilling and aligning the LiDAR information for cross-modality 3D BEV perception.
**Qualitative Results.** As shown in Figure 3, under the Day \(\rightarrow\) Night domain shift setting, our model achieves significantly better monocular 3D perception than other baselines. We observe that DualCross provides more clearly defined road boundaries and lane markings. The depth and size of the vehicles and the road on the right side are also predicted more accurately. DualCross only misses some vehicles that are hardly visible in camera due to occlusion and distance. Overall, the qualitative results validate the effectiveness of DualCross in closing the gap between data domains and leveraging point clouds information for better 3D inference.
### Analysis and Ablation Study
**Direct LiDAR Supervision Leads to Worse Performance.** One might naturally believe that introducing multiple sensors into the perception model is bound to increase its performance. Surprisingly, the experiments shown in Table 4 negate this naive intuition. When we introduce the LiDAR sensor in the source domain as depth supervision, the result decreases by 0.3. As we described in Sec. 3.2, the domain distribution divergence increases after introducing
Figure 3: **Qualitative Results in Day \(\rightarrow\) Night setting** (model is trained with daytime data, and validated with night data). We notice that DualCross performs significantly better than other baselines for vehicles, drivable roads, and lane marking classes. From **left** to **right**: (1) Vanilla adversarial learning; (2) LiDAR as depth supervision with adversarial learning; (3) our DualCross model; (4) Ground Truth. Best viewed in color.
the sensor-modality shift. As a result, we propose multiple components in DualCross to account for the visual and sensor domain shifts. Experiments show that while the wide augmentation strategy and adversarial discriminator both achieve better results than the baseline (\(11.2\) vs. \(6.7\) in IoU), our LiDAR-Teacher further boosts the result to \(17.0\) by leveraging effective LiDAR knowledge distillation.
**LiDAR Density & Comparison with Oracle Model.** As shown in Figure 4, we validate that our model achieves higher performance when denser LiDAR is available. This can be accomplished by grouping consecutive scans of LiDAR point clouds (from 1 to 5) into a single unit, to obtain a denser 3D representation of the scene. We observe that other cross-modality baselines, including the _Input-Fusion Teacher_ and _Depth-Supv_ models, cannot effectively leverage the LiDAR knowledge, even with dense point clouds available. We also compare our model with the LiDAR oracle model (where the target domain also has the LiDAR modality) and find that the gap between the upper bound result and the No-LiDAR baseline is significantly reduced. The remaining performance gap is caused by the unknown LiDAR domain gap, which we hope to further reduce in future work.
**Dealing with Mixed Domain Shift.** Another common but under-explored question we observe in the 3D domain adaptation setting is the mixed domain shift problem, where multiple types of gaps between source and target domains occur concurrently. For example, in the nuScenes dataset, the Boston data are collected exclusively during daytime, whereas the Singapore data encompass both day and night captures. This leads to a mixture of city-wise and lighting-wise domain shifts. As shown in Table 5, we find that directly leveraging domain adaptation in this scenario leads to worse performance than direct inference, because mixed domains in the target confuse the discriminator. Hence, we propose a progressive learning mechanism, where we first perform adaptation with city-wise data for \(100K\) steps, and then train the model on the full target domain dataset for another \(150K\) steps. This effectively alleviates the mixed domain shift problem, and helps DualCross achieve better results than the other baselines.
**Computational Complexity.** Table 6 summarizes the number of parameters and inference speed for prior baselines and our model. Our LiDAR-Teacher distillation and multi-level adversarial learning modules do not affect the inference efficiency of DualCross compared with the baseline. Our total number of parameters is 15M, and our inference speed is 33 frames per second (FPS) on a V100 GPU, which is on par with the baseline LSS [30]. The training time for our model is around 20 hours on 4\(\times\)V100 GPUs.
## 5 Conclusion
In this paper, we proposed DualCross to estimate 3D scene representations in BEV under domain shift and modality change. To achieve this, we construct a LiDAR-Teacher and distill knowledge from it into a Camera-Student via feature supervision, and we further align the feature spaces of the two domains using multi-stage adversarial learning. Results on large-scale datasets with various challenging domain gaps demonstrate the effectiveness of our approach, which marks a significant step towards robust 3D scene perception in the wild.
\begin{table}
\begin{tabular}{l c c} \hline \hline & \#Params (M) & Frames per second (FPS) \\ \hline \hline OFT [33] & 22 & 25 \\ LSS [30] & 14 & 35 \\
**DualCross (Ours)** & 15 & 33 \\ \hline \hline \end{tabular}
\end{table}
Table 6: DualCross achieves great perception results with efficient inference time compared with the baselines.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \hline Baseline & WA & AD & LS & LT & Results & _diff_ \\ \hline ✓ & & & & & 6.7 & \(0\) \\ \hline ✓ & & & ✓ & & 6.4 & \(-0.3\) \\ \hline ✓ & ✓ & & & & 10.3 & \(+3.6\) \\ \hline ✓ & ✓ & ✓ & & & 11.2 & \(+4.5\) \\ \hline ✓ & ✓ & ✓ & ✓ & & 15.7 & \(+9.0\) \\ \hline ✓ & ✓ & ✓ & ✓ & ✓ & **17.0** & \(+10.3\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Our proposed components all contribute to the final performance. We report results on the vehicle class under the _day-to-night_ domain gap in IoU. _WA, AD, LS, LT_ stand for Wide Augmentation, Adversarial Discriminators, LiDAR Supervision, and LiDAR-Teacher, respectively.
Figure 4: Results of DualCross improve as the number of LiDAR points increases.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline
**Mixed Domain Gap** & Vehicle & Road & Lane \\ \hline Direct Inference & 17.6 & 38.2 & 10.6 \\ Vanilla DA & 13.0 & 31.4 & 9.1 \\ Progressive DA & 18.8 & 41.5 & 13.2 \\
**DualCross (ours)** & **20.5** & **43.1** & **15.6** \\ \hline \hline \end{tabular}
\end{table}
Table 5: The proposed progressive learning strategy effectively addresses the challenge caused by the mixed domain gap scenario (_Boston-to-Singapore_ mixed with _day-to-night_) on nuScenes. |
2307.02243 | Power-up! What Can Generative Models Do for Human Computation Workflows? | We are amidst an explosion of artificial intelligence research, particularly
around large language models (LLMs). These models have a range of applications
across domains like medicine, finance, commonsense knowledge graphs, and
crowdsourcing. Investigation into LLMs as part of crowdsourcing workflows
remains an under-explored space. The crowdsourcing research community has
produced a body of work investigating workflows and methods for managing
complex tasks using hybrid human-AI methods. Within crowdsourcing, the role of
LLMs can be envisioned as akin to a cog in a larger wheel of workflows. From an
empirical standpoint, little is currently understood about how LLMs can improve
the effectiveness of crowdsourcing workflows and how such workflows can be
evaluated. In this work, we present a vision for exploring this gap from the
perspectives of various stakeholders involved in the crowdsourcing paradigm --
the task requesters, crowd workers, platforms, and end-users. We identify
junctures in typical crowdsourcing workflows at which the introduction of LLMs
can play a beneficial role and propose means to augment existing design
patterns for crowd work. | Garrett Allen, Gaole He, Ujwal Gadiraju | 2023-07-05T12:35:29Z | http://arxiv.org/abs/2307.02243v1 | # Power-up! What Can Generative Models Do for Human Computation Workflows?
###### Abstract.
We are amidst an explosion of artificial intelligence research, particularly around large language models (LLMs). These models have a range of applications across domains like medicine, finance, commonsense knowledge graphs, and crowdsourcing. Investigation into LLMs as part of crowdsourcing workflows remains an under-explored space. The crowdsourcing research community has produced a body of work investigating workflows and methods for managing complex tasks using hybrid human-AI methods. Within crowdsourcing, the role of LLMs can be envisioned as akin to a cog in a larger wheel of workflows. From an empirical standpoint, little is currently understood about how LLMs can improve the effectiveness of crowdsourcing workflows and how such workflows can be evaluated. In this work, we present a vision for exploring this gap from the perspectives of various stakeholders involved in the crowdsourcing paradigm -- the task requesters, crowd workers, platforms, and end-users. We identify junctures in typical crowdsourcing workflows at which the introduction of LLMs can play a beneficial role and propose means to augment existing design patterns for crowd work.
Keywords: crowdsourcing, generative AI, large language models, workflows, human computation
2308.03900 | Developability Approximation for Neural Implicits through Rank
Minimization | Developability refers to the process of creating a surface without any
tearing or shearing from a two-dimensional plane. It finds practical
applications in the fabrication industry. An essential characteristic of a
developable 3D surface is its zero Gaussian curvature, which means that either
one or both of the principal curvatures are zero. This paper introduces a
method for reconstructing an approximate developable surface from a neural
implicit surface. The central idea of our method involves incorporating a
regularization term that operates on the second-order derivatives of the neural
implicits, effectively promoting zero Gaussian curvature. Implicit surfaces
offer the advantage of smoother deformation with infinite resolution,
overcoming the high polygonal constraints of state-of-the-art methods using
discrete representations. We draw inspiration from the properties of surface
curvature and employ rank minimization techniques derived from compressed
sensing. Experimental results on both developable and non-developable surfaces,
including those affected by noise, validate the generalizability of our method. | Pratheba Selvaraju | 2023-08-07T20:23:39Z | http://arxiv.org/abs/2308.03900v3 | # Developability Approximation for Neural Implicits through Rank Minimization
###### Abstract
Developability refers to the process of creating a surface without any tearing or shearing from a two-dimensional plane. It finds practical applications in the fabrication industry. An essential characteristic of a developable 3D surface is its zero Gaussian curvature, which means that either one or both of the principal curvatures are zero. This paper introduces a method for reconstructing an approximate developable surface from a neural implicit surface. The central idea of our method involves incorporating a regularization term that operates on the second-order derivatives of the neural implicits, effectively promoting zero Gaussian curvature. Implicit surfaces offer the advantage of smoother deformation with infinite resolution, overcoming the high polygonal constraints of state-of-the-art methods using discrete representations. We draw inspiration from the properties of surface curvature and employ rank minimization techniques derived from compressed sensing. Experimental results on both developable and non-developable surfaces, including those affected by noise, validate the generalizability of our method.
## 1 Introduction
Developable surfaces have practical applications in digital fabrication, industrial design, architecture, and geometric surface abstraction. They are used in the automotive industry for car body panels, in industrial design for curved furniture elements, in computer graphics to simplify complex 3D shapes to enhance rendering performance, and in architecture for building facades to reduce material wastage. Research efforts are focused on developing computer-aided algorithms to identify these developable patches on a surface, aiming to minimize material wastage in industry applications during cutting and re-assembling processes. Several approaches exist for approximating developable shapes, employing optimization techniques [3, 5, 32, 35], shape wrapping techniques that utilize multiple developable patches [12, 13, 15] and methods for synthesizing developable surfaces [6]. Existing methods for reconstructing developable shapes often rely on fixed topology representations. In contrast, implicit surfaces offer advantages such as smooth interpolation, deformation capabilities, and the ability to handle topological changes naturally. In recent years, there has been extensive research in neural implicit surface representations for 3D reconstruction, from single shape reconstruction [4], to data-driven methods [14, 39], and generative modeling by deformation [42]. These approaches achieve high-detail surface reconstruction, and regularization techniques have been incorporated to encourage smoothness [1]. However, there is currently no implicit reconstruction method that specifically promotes surface developability. Our paper presents a novel approach where we introduce a regularization term into neural implicit formulations to deform the surface into approximately piecewise developable patches.
Our approach is driven by two key observations. Firstly, implicit surfaces offer the advantage of providing access to gradients and higher-order derivatives, allowing us to compute surface normals, curvature, and other surface properties without additional computations. In our method, we utilize the second-order derivatives of the implicit function with respect to the input coordinates, which are related to Gaussian curvature [31]. Secondly, surface developability necessitates zero Gaussian curvature [21]. To achieve this, we formulate the developability condition for implicits as a rank minimization problem, inspired by the work of Sellan et al. [32], who applied the concept to height fields derived from depth maps. We employ both Gaussian curvature minimization and implicit Hessian rank minimization, combining them in an objective that encourages both developability, which drives the Gaussian curvature toward zero, and shape fitting of the implicit to the input point cloud.
Our implicit-driven approach has key advantages over current discrete representation methods, which struggle with higher polygon counts. Discrete optimization-based methods suffer from deformation-induced topology changes that lead to inaccurate surface representations, whereas our approach allows for easier topological deformation while maintaining the shape approximation. Additionally, our method eliminates the need for specialized solvers by relying on a single regularizer weight to control the level of developability.
In summary, our main contribution is a novel approach that introduces a regularization term and an optimization procedure promoting developability in neural implicit surface reconstruction. Qualitative and quantitative evaluations in Section 5 demonstrate its effectiveness for complex topologies, its robustness to noise, and its superior shape preservation compared to existing methods.
## 2 Related Works
Our approach leverages implicit functions and employs second-order optimization techniques based on rank minimization to reconstruct an input point cloud into a piecewise developable surface. In this section, we provide a brief overview of prior research on developable surface reconstructions, and applications of rank minimization approximations.
Developable surface Approximation.Existing developable surface reconstruction methods often use discrete representations and rely on constrained optimization or patch-based wrapping. For instance, Stein et al. [35] achieve piece-wise developable surfaces through mesh vertex optimization. However, these methods assume noise-free inputs and may require manual tuning, potentially getting trapped in local minima due to non-convexity. Binninger, Verhoeven et al. [3] use Gaussian image thinning for developability with automatic crease development. However, increasing cone angles leads to significant shape deformation, deviating from the ground truth shape. Gavriil et al. [19] propose a similar approach using Gauss thinning. Sellan et al. [32] achieve developability with nuclear norm minimization but focus on planar height fields. Computation time in these methods scales with the polygon count of the mesh. Patch wrapping methods approximate a surface by fitting developable patches onto the input surface geometry. Verhoeven et al. [38] use planar quad strips aligned to principal curvatures for curved parts, while Rabinovich et al. [30] use orthogonal quad meshes to optimize for developable surfaces with constrained overlapping quad patches. Peternell et al. [22] fit a developable surface by estimating tangent planes through curve approximation from data points. Spline-based methods offer another discrete representation for smooth reconstruction. Tang et al. [6] use cubic spline developable patches projected onto the surface and merge them iteratively with proximity constraints. Leopoldseder et al. [34] employ interpolation of tangent planes with right circular cones for appropriate rulings. Gavriil et al. [19] propose B-spline surface optimization using Gauss thinning for developability. Rose et al. [18] define a 3D polyline boundary and generate a smooth discrete developable surface that interpolates this boundary. Additionally, Solomon et al. [16] introduce a flexible structure for developable surface representation.
On the other hand, our proposed method uses implicit functions and second-order optimization based on rank minimization to achieve piecewise developable surfaces from point cloud inputs.
Rank minimization.Stein et al. [27] propose minimizing the \(L_{1}\) norm of second derivatives to reconstruct piecewise planar surfaces. Sellan et al. [32] utilize nuclear norm minimization for reconstructing piecewise developable surfaces from input depth maps, but their method is restricted to heightfields. Liu et al. [11] employ rank minimization using the \(L_{1}\) norm to reconstruct surfaces in a cube-like style while preserving the original shape's content. Besides \(L_{1}\) norm minimization, log-determinant minimization is utilized as a rank approximation for distance matrices [24] and for subspace clustering [7]. \(L_{0}\) minimization is employed for basis selection in [9] and for image editing in [20]. Additionally, Oh et al. [37] use partial sum nuclear norm minimization for PCA.
## 3 Background
Implicit surface representation.In an implicit surface representation, a surface is defined as the zero level set of the implicit function. The implicit function takes as input the coordinates of a point \(\mathbf{p}\in\mathcal{R}^{3}\) in space and returns a
scalar value \(s=f(\mathbf{p})\) where \(s\in\mathcal{R}\). Iso-levels of the implicit function represent surfaces in \(\mathcal{R}^{3}\). Points on the surface are those where the implicit function evaluates to zero \(f(\mathbf{p})=0\). Points with negative scalar values \(f(\mathbf{p})<0\) correspond to the shape interior, while points with positive values \(f(\mathbf{p})>0\) represent the shape exterior.
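To make this sign convention concrete, here is a small sketch using an analytic sphere SDF; the function name and sample points are illustrative only.

```python
import torch

def sphere_sdf(p, radius=1.0):
    """Analytic signed distance to a sphere, following the convention above:
    negative inside, zero on the surface, positive outside."""
    return p.norm(dim=-1) - radius

pts = torch.tensor([[0.0, 0.0, 0.5],   # interior
                    [0.0, 0.0, 1.0],   # on the surface
                    [0.0, 0.0, 2.0]])  # exterior
print(sphere_sdf(pts))  # tensor([-0.5000,  0.0000,  1.0000])
```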
Gaussian curvature and developability.A developable surface is defined by its Gaussian curvature, which plays a key role in its characterization. Gaussian curvature measures the curvature of a surface at each point and is determined by the _principal curvatures_[21]. In the case of a developable surface, the Gaussian curvature is zero everywhere. This means that either one or both of the principal curvatures, \(K_{1}\) and \(K_{2}\), are zero. As a result, developable surfaces can be unrolled or flattened onto a plane without any distortion or stretching. The principal curvatures of a surface represent the extreme curvatures in different directions. They are determined by the maximum and minimum values of the _normal curvature_ at a given point on the surface. Normal curvature is the curvature of curves formed by the intersection of normal planes and the surface at that point. The direction with the maximum principal curvature (\(K_{1}\)) corresponds to the maximum normal curvature, while the direction with the minimum principal curvature (\(K_{2}\)) corresponds to the minimum normal curvature. For a deeper understanding of the differential geometry of surfaces and its relation to geometry processing applications, refer to the tutorial by Crane et al. [17].
Gaussian curvature of implicits.The Gaussian curvature, denoted as \(K\), of a surface represented by an implicit function \(f(\mathbf{p})\) can be defined via the first- and second-order derivatives of the implicit function with respect to the surface coordinates \(\mathbf{p}\in\mathcal{R}^{3}\). The first-order derivative, known as the gradient \(\nabla f(\mathbf{p})=(\frac{\partial f(\mathbf{p})}{\partial x},\frac{\partial f(\mathbf{p})}{\partial y},\frac{\partial f(\mathbf{p})}{\partial z})\), yields a 3D vector pointing in the direction of the surface normal. The second-order derivative, referred to as the Hessian \(H_{f}(\mathbf{p})\), provides information about the rate of change of the surface normal in different directions, which in turn determines the curvature.
\[\mathbf{H}_{f}(\mathbf{p})=\begin{bmatrix}\frac{\partial^{2}f(\mathbf{p})}{\partial x^{2}}&\frac{\partial^{2}f(\mathbf{p})}{\partial x\partial y}&\frac{\partial^{2}f(\mathbf{p})}{\partial x\partial z}\\ \frac{\partial^{2}f(\mathbf{p})}{\partial y\partial x}&\frac{\partial^{2}f(\mathbf{p})}{\partial y^{2}}&\frac{\partial^{2}f(\mathbf{p})}{\partial y\partial z}\\ \frac{\partial^{2}f(\mathbf{p})}{\partial z\partial x}&\frac{\partial^{2}f(\mathbf{p})}{\partial z\partial y}&\frac{\partial^{2}f(\mathbf{p})}{\partial z^{2}}\end{bmatrix} \tag{1}\]
To compute the Gaussian curvature at a specific point \(\mathbf{p}\) we employ the following procedure as outlined in the work by Goldman [31]:
\[K(\mathbf{p})=-\frac{\det(\hat{\mathbf{H}}_{f}(\mathbf{p}))}{\left|\nabla f(\mathbf{p})\right|^{4}},\ \text{where}\ \hat{\mathbf{H}}_{f}(\mathbf{p})=\begin{bmatrix}\mathbf{H}_{f}(\mathbf{p})&\nabla f(\mathbf{p})^{T}\\ \nabla f(\mathbf{p})&\mathbf{0}\end{bmatrix}, \tag{2}\]
For a smooth surface, zero Gaussian curvature at a point \(\mathbf{p}\) entails that \(K(\mathbf{p})=0\Leftrightarrow K_{1}\cdot K_{2}=0\Leftrightarrow K_{1}=0\) or \(K_{2}=0\). Since the Gaussian curvature is defined only at points where the gradient \(\nabla f(\mathbf{p})\) is non-zero, having zero Gaussian curvature means that the determinant of the \(4\times 4\) matrix in the numerator of Eq. 2 must be zero: \(\text{det}(\hat{\mathbf{H}}_{f}(\mathbf{p}))=0\). The determinant of the matrix \(\hat{\mathbf{H}}_{f}(\mathbf{p})\) can be expressed as follows [31]:
\[det(\hat{\mathbf{H}}_{f}(\mathbf{p}))=-\nabla f(\mathbf{p})\cdot\text{Cof}(H_ {f}(\mathbf{p}))\cdot\nabla f(\mathbf{p})^{T} \tag{3}\]
By utilizing the cofactor matrix property \(\text{Cof}(\mathbf{H}_{f}(\mathbf{p}))^{T}=det(\mathbf{H}_{f}(\mathbf{p}))\mathbf{H}_{f}(\mathbf{p})^{-1}\), we can minimize the rank of \(\mathbf{H}_{f}(\mathbf{p})\) rather than that of \(\hat{\mathbf{H}}_{f}(\mathbf{p})\).
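For concreteness, the following sketch evaluates Eq. 2 for an arbitrary twice-differentiable implicit using PyTorch's autograd; the helper name and the single-point (non-batched) formulation are illustrative assumptions, not the paper's implementation.

```python
import torch

def implicit_gaussian_curvature(f, p):
    """Eq. 2 at a single point p of shape (3,): K = -det(H_hat) / |grad f|^4,
    with gradient and Hessian obtained via automatic differentiation."""
    p = p.detach().requires_grad_(True)
    grad = torch.autograd.grad(f(p), p, create_graph=True)[0]           # (3,)
    hess = torch.stack([torch.autograd.grad(grad[i], p, retain_graph=True)[0]
                        for i in range(3)])                             # (3, 3)
    h_hat = torch.zeros(4, 4)                                           # bordered Hessian
    h_hat[:3, :3] = hess
    h_hat[:3, 3] = grad
    h_hat[3, :3] = grad
    return -torch.det(h_hat) / grad.norm() ** 4

# Sanity check: a unit sphere has Gaussian curvature 1 everywhere.
f = lambda q: q.norm() - 1.0
print(implicit_gaussian_curvature(f, torch.tensor([0.0, 0.0, 1.0])))    # ~1.0
```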
Norm minimization.Equating the determinant of an \(n\times n\) matrix \(\mathbf{X}\) to zero is essentially the same as ensuring that the matrix is not full rank, i.e., \(rank(\mathbf{X})<n\). Considering that the absolute value of the determinant of a square matrix equals the product of its singular values, i.e., \(\left|det(\mathbf{X})\right|=\prod_{i=1}^{n}\sigma_{i}\), the rank minimization can be framed as a problem of minimizing the \(L_{0}\) norm of the singular values. To prevent trivial solutions, the minimization process is subject to additional constraints, often expressed as a linear system \(\mathbf{AX}=\mathbf{B}\)[23, 32]:
\[\min_{\mathbf{AX}=\mathbf{B}}rank(\mathbf{X})\Leftrightarrow\min_{\mathbf{AX} =\mathbf{B}}\left\|\sigma(\mathbf{X})\right\|_{0} \tag{4}\]
, where the vector \(\sigma(\mathbf{X})\) stores the singular values of the matrix \(\mathbf{X}\). The \(L_{0}\) minimization problem is non-convex, non-differentiable, and generally intractable (NP-hard) [26]. However, since the Hessian of the implicit is only a \(3\times 3\) matrix, we can make use of a smoother approximation of the \(L_{0}\) minimization objective, similar to [8, 41]. Instead of employing the \(L_{0}\) cardinality approximation, we consider a relaxed alternative in the form of the nuclear norm \(\|\mathbf{X}\|_{*}\) (also known as the \(L_{1}\) norm of the singular values), which has been demonstrated to be a tight convex approximation to the rank function [25]:
\[\min_{\mathbf{AX}=\mathbf{B}}\|\sigma(\mathbf{X})\|_{1}\Leftrightarrow\min_{ \mathbf{AX}=\mathbf{B}}\|\mathbf{X}\|_{*} \tag{5}\]
Alternative approximations.There is a significant drawback in solving for nuclear norm minimization. While minimizing the \(L_{1}\) norm provides an approximation to the rank minimization problem and leads to a low-rank matrix \(\mathbf{X}\), it also simultaneously diminishes the high-variance information that includes important details like the structure of the object. Thus, we also explored the non-convex partial sum minimization surrogate [36], minimizing the sum of the smaller singular values:
\[\min_{\mathbf{AX}=\mathbf{B}}\sum_{i=r+1}^{n}\sigma_{i}(\mathbf{X}) \tag{6}\]
, where \(\sigma_{i}(\mathbf{X})\) refers to the \(i^{th}\) singular value, arranged in decreasing order, and the parameter \(r\) determines the number of largest singular values to be excluded during the minimization process. To mitigate the impact of large singular values, an alternative non-convex surrogate involving the log-determinant function was also employed [7]. Since the Hessian matrix \(\mathbf{H}_{f}(\mathbf{p})\) of the implicit function may not always be positive semi-definite, we express the rank minimization in the form:
\[\log(\det(\mathbf{X}^{T}\mathbf{X}+\mathbf{I}))=\sum_{i=1}^{n}\log(1+\sigma_{i }^{2}) \tag{7}\]
Some other alternative relaxation methods include weighted nuclear norm minimization, the capped \(L_{1}\) norm, the Schatten-p norm, and truncated rank minimization, which we could not cover in our experiments. We recommend [2] for a brief review of these approaches.
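Assuming batched \(3\times 3\) Hessians, the three surrogates above can be sketched as follows; the helper name and the batching are illustrative.

```python
import torch

def rank_surrogates(hessians, r=1):
    """Three rank relaxations applied to Hessians of shape (N, 3, 3);
    svdvals returns singular values sorted in descending order."""
    s = torch.linalg.svdvals(hessians)                 # (N, 3)
    nuclear = s.sum(dim=-1)                            # Eq. 5: nuclear / L1 norm
    partial_sum = s[:, r:].sum(dim=-1)                 # Eq. 6: skip the r largest
    log_det = torch.log1p(s.pow(2)).sum(dim=-1)        # Eq. 7: sum log(1 + sigma^2)
    return nuclear, partial_sum, log_det
```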
## 4 Method
Overview.Our approach takes a point cloud \(\mathcal{P}=\{\mathbf{p}_{i},\mathbf{n}_{i}\}_{i=1}^{N}\), as input, where \(\mathbf{p}_{i}\in\mathbb{R}^{3}\) represents the 3D position of a point, \(\mathbf{n}_{i}\in\mathbb{R}^{3}\) is its corresponding normal, and \(N\) is the total number of points. The output of our method is an implicit function \(f(\mathbf{p})\), which assigns a scalar value to input points \(\mathbf{p}\in\mathbb{R}^{3}\). The reconstructed surface is obtained by extracting the zero iso-level (\(s=0\)) of the implicit function using the marching cubes algorithm [40]. Our goal is to obtain an implicit function that approximates the input point cloud and generates a surface that maximizes its developability. This implies that the resulting surface points should ideally possess negligible or zero Gaussian curvature while still retaining the overall shape of the point cloud.
To achieve this objective, we formulate the problem as an optimization task aimed at estimating the parameters \(\mathbf{\theta}\) of a neural network function \(f(\mathbf{p};\mathbf{\theta})\) that represents the implicit function (Fig.2). This optimization involves minimizing a loss function consisting of two components: (a) a data fitting term \(L_{\text{data}}(\mathbf{\theta})\), which encourages the zero iso-surface of the implicit function to closely match the input point cloud, and (b) a regularizer term \(L_{*}(\mathbf{\theta})\) that encourages surface developability in the output. In the following sections, we elaborate on these two terms, describe the network architecture, and outline our optimization procedure.
Data term.Single surface reconstruction methods involve fitting neural networks [4] to a single input point cloud using data losses. The samples \(\mathbf{p}_{j}\) used for fitting are obtained by directly selecting input points from the point cloud that have reference implicit values \(s_{j}=0\) ("on-surface point samples"), and also by perturbing points along the normals \(\mathbf{p}_{j}=\mathbf{p}_{i}+\epsilon\mathbf{n}_{i}\) resulting in \(s_{j}=\epsilon\) ("off-surface point samples"). In recent data-driven approaches, surface reconstruction is performed by estimating the parameters of the implicit function from a large dataset of different point clouds. DeepSDF [14] is one such example, utilizing an auto-decoder architecture for this purpose. In our method, we follow a similar architecture.
Our surface reconstruction network takes a point \(\mathbf{p}\) as input and maps it to an implicit function \(f(\mathbf{p};\mathbf{\theta})\). During training, the model parameters are optimized through backpropagation using point samples \(\{\mathbf{p}_{j},s_{j}\}_{j=1}^{K}\) taken around the input point cloud [14]. The training process initially optimizes the parameters based on the data fitting term. After achieving the fitting, the developability regularizer is introduced for fine-tuning, resulting in a surface close to being developable. The process of fitting an implicit function to an input point cloud involves penalizing the differences between estimated and reference implicit values at various sample points surrounding the input point cloud. Specifically, given \(K\) point samples \(\{\mathbf{p}_{j}\}_{j=1}^{K}\) with associated scalar signed distance values \(\{s_{j}\}_{j=1}^{K}\), the data term can be formulated as the \(L_{1}\) loss or clamped \(L_{1}\) loss [14], which makes the parameter estimation more sensitive to details near the surface, as follows:
\[L_{\text{data}}(\mathbf{\theta})=\sum_{j=1}^{K}|f(\mathbf{p}_{j};\mathbf{\theta})- \text{cl}(s_{j},\delta)|, \tag{8}\]
, where \(\text{cl}(\cdot,\delta)=\text{min}(\delta,\text{max}(-\delta,\cdot))\) and \(\delta\) is a clamping parameter. We set \(\delta=0.01\) in our experiments.
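A minimal sketch of Eq. 8, following the clamping exactly as written above (DeepSDF-style implementations additionally clamp the prediction):

```python
import torch

def data_loss(pred_sdf, target_sdf, delta=0.01):
    """Eq. 8: L1 loss against clamped reference signed distances."""
    return (pred_sdf - torch.clamp(target_sdf, -delta, delta)).abs().sum()
```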
Network architecture.Our architecture comprises \(8\) fully connected layers, each with 512 nodes, applying group normalization. To enable the computation of Hessians for second-order differentiable regularization, we explored several twice-differentiable activation functions (SiLU [33], GELU [33], tanh, sine [39], and ELU).
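A sketch of this architecture could look as follows; the group count of 16 is an assumption, and GELU stands in for any of the twice-differentiable activations listed above.

```python
import torch.nn as nn

class ImplicitMLP(nn.Module):
    """8 fully connected layers of width 512 with group normalization and a
    twice-differentiable activation, mapping a 3D point to a scalar value."""

    def __init__(self, width=512, depth=8, groups=16):
        super().__init__()
        layers, in_dim = [], 3
        for _ in range(depth):
            layers += [nn.Linear(in_dim, width),
                       nn.GroupNorm(groups, width),
                       nn.GELU()]
            in_dim = width
        layers.append(nn.Linear(width, 1))   # scalar implicit value s = f(p)
        self.net = nn.Sequential(*layers)

    def forward(self, p):                    # p: (B, 3)
        return self.net(p)
```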
Developability regularizer term.Our regularizer term is motivated by rank minimization applied to the matrix \(\hat{\mathbf{H}}\)
Figure 2: **Our MLP architecture** maps a point \(\mathbf{p}\in\mathcal{R}^{3}\) to an implicit function \(f(\mathbf{p};\theta)\) with learned parameters \(\theta\). The parameters are optimized through a loss function comprising a data term and a regularization term, detailed in Section 4. Backpropagation (dotted arrows) computes gradients and Hessians with respect to the \(\mathbf{p}\) coordinates for fine-tuning \(\theta\).
storing the gradients and Hessian of the implicit function in Eq. 2. We experimented with all the rank minimization formulations discussed in Section 3 except \(L_{0}\):
\[L_{\text{H}_{NN}}(\boldsymbol{\theta})=\sum_{i=1}^{N}\|\sigma( \mathbf{H}_{f}(\mathbf{p}_{i}))\|_{1} \tag{9}\] \[L_{\text{\hat{H}}_{det}}(\boldsymbol{\theta})=\sum_{i=1}^{N}\det( \hat{\mathbf{H}}_{f}(\mathbf{p}_{i})),\ \forall\,\text{rank}(\hat{\mathbf{H}}_{f}( \mathbf{p}_{i}))=3\] (10) \[L_{\text{H}_{logdet}}(\boldsymbol{\theta})=\sum_{i=1}^{N}\log \det(\mathbf{H}_{f}^{\top}(\mathbf{p}_{i})\cdot\mathbf{H}_{f}(\mathbf{p}_{i})+ \mathbf{I})\] (11) \[L_{\text{H}_{PNN}}(\boldsymbol{\theta})=\sum_{i=1}^{N}\sum_{o=r+1}^{3}\sigma_{o}(\mathbf{H}_{f}(\mathbf{p}_{i})) \tag{12}\]
In our experiments, we set \(r=1\) (i.e., we exclude the largest singular value of the Hessian from the sum). We also minimized the determinants of both \(\hat{\text{H}}\) and \(\text{H}\), and observed that \(\hat{\text{H}}\) exhibited better developability.
**Minimization procedure.** Our optimization procedure aims to minimize an objective function combining the data term and developability regularizer term:
\[L(\boldsymbol{\theta})=L_{\text{data}}(\boldsymbol{\theta})+ \lambda\cdot L_{*}(\boldsymbol{\theta}) \tag{13}\]
where \(L_{*}\) denotes one of the regularizer terms detailed above (Section 4), and \(\lambda\) is set through hyperparameter tuning. Minimizing the objective involves (1) achieving a good shape approximation of the point cloud, (2) regularizing it for developability, and (3) computing the derivatives with respect to the input points and network parameters. We solve the first problem by minimizing the data term \(L_{data}(\boldsymbol{\theta})\) for the input point cloud, yielding an approximate iso-surface. We then minimize the full loss function \(L(\boldsymbol{\theta})\) over the model parameters. Computing the implicit's Hessians needed in the developability term is feasible due to the twice-differentiable feed-forward activation function used in our network.
We use subgradient optimization for the \(L_{1}\) norm minimization. For the other minimization terms, we leveraged PyTorch's autograd (_torch.autograd.grad_) to calculate the gradients and their derivatives required for backpropagation. By minimizing the complete loss function with backpropagation, we are able to obtain piecewise developable surfaces with reduced Gaussian curvature and automatic crease formation. The iterations were run until convergence.
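Putting the pieces together, a minimal sketch of this two-stage schedule (reusing `data_loss` from above) might look as follows; the step counts, the regularizer callable `developability_reg`, and the weight `lam` are illustrative assumptions, while the learning rates follow the implementation details below.

```python
import torch

def fit_developable(model, pts, sdf, developability_reg, lam=0.01,
                    fit_steps=20_000, tune_steps=20_000):
    """Stage 1: shape fitting with the data term only.
    Stage 2: fine-tuning with the full loss of Eq. 13 at a lower learning rate."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for step in range(fit_steps + tune_steps):
        if step == fit_steps:                    # switch to fine-tuning
            for group in opt.param_groups:
                group["lr"] = 1e-5
        loss = data_loss(model(pts).squeeze(-1), sdf)
        if step >= fit_steps:
            loss = loss + lam * developability_reg(model, pts)
        opt.zero_grad()
        loss.backward()
        opt.step()
```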
**Implementation details.** Prior to conducting the experiments, we standardized the inputs by fitting each point cloud to a unit bounding box. For shape optimization, we set the learning rate to \(10^{-4}\) and adjust it to \(10^{-5}\) during the fine-tuning stage for developability. We employed the Adam optimizer [10] for optimization purposes. Our implementation is in PyTorch. The source code and data will be made available upon acceptance of the paper.
## 5 Experiments
We now discuss our experiments for evaluating the effects of the proposed developability regularization, the evaluation metrics, and finally show results and comparisons.
**Competing variants.** We evaluate different variants based on the same network architecture mentioned in Section 4. The evaluated variants include:
(a) Activation variants: _GELU_, _SiLU_, _tanh_, and _ELU_, used in the network for both shape approximation and fine-tuning for developability (see the _supplement_).
(b) Regularization variants: H\({}_{NN}\), H\({}_{logdet}\), H\({}_{PNN}\), \(\hat{\text{H}}_{det}\), which perform fine-tuning of the network parameters based on the total loss function (Eq. 13). H\({}_{NN}\), H\({}_{logdet}\) and H\({}_{PNN}\) minimize the \(L_{1}\) norm of the singular values, the logarithmic determinant of the squared Hessian, and the lowest
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Median K \(\downarrow\) & Mean K \(\downarrow\) & CD \(\downarrow\) \\ \hline GT \(|57K|\) & 0.004 & 0.012 & 0.0 \\ \hline
[3]\(|57K|\) & \(3e^{-5}\) & 0.006 & _115.1_ \\
[3]\(|230K|\) & \(1e^{-5}\) & 0.003 & _102.1_ \\ SDF+\(\hat{\text{H}}_{det}\,|57K|\) & \(1e^{-3}\) & 0.02 & 25.5 \\ SDF+\(\hat{\text{H}}_{det}\,|950K|\) & \(5e^{-5}\) & 0.003 & 25.6 \\ SDF+\(\hat{\text{H}}_{det}\,|3.8M|\) & \(\boldsymbol{1e^{-5}}\) & **0.001** & 27.1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Comparison of discrete Gaussian curvature (K) and Chamfer distance (CD) for the bunny in Fig. 3.** Ground truth is the SDF surface reconstructed using marching cubes. The Chamfer distance is evaluated as the sum of squared differences over the vertices.
Figure 3: **Visualization of discrete Gaussian curvature for the results of Table 1.** (a) SDF ground truth (230K\(|\)57K vertices), (b) Binninger, Verhoeven et al. [3] applied to (a), (c) surface reconstructed with marching cubes [40] using the \(\hat{\text{H}}_{det}\) regularizer described in Section 4. _Note: the marching cubes reconstruction lacks smooth edges, and its quality relies on the voxel resolution (512 used here)._
singular value of the implicit Hessian \(\mathbf{H}_{f}(\mathbf{p})\), respectively, while \(\hat{\mathbf{H}}_{det}\) minimizes the determinant of the matrix \(\hat{\mathbf{H}}_{f}(\mathbf{p})\) (Eq. 3). For fair comparison, these variants were evaluated with the GELU activation function.
Evaluation metrics.We evaluate the regularizer variants using two metrics. Firstly, we employ the Chamfer distance metric to assess the similarity between the reconstructed surface and the ground-truth surface. This involves sampling 250k points on both surfaces (using Poisson disk uniform sampling)
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Bunny} & \multicolumn{3}{c|}{Horse} & \multicolumn{3}{c|}{Dragon} & \multicolumn{3}{c}{Griffin} \\ \cline{2-13} & Med K\({}_{min}\) \(\downarrow\) & Med K \(\downarrow\) & CD \(\downarrow\) & Med K\({}_{min}\) \(\downarrow\) & Med K \(\downarrow\) & CD \(\downarrow\) & Med K\({}_{min}\) \(\downarrow\) & Med K \(\downarrow\) & CD \(\downarrow\) & Med K\({}_{min}\) \(\downarrow\) & Med K \(\downarrow\) & CD \(\downarrow\) \\ \hline SDF & 1.6 & 10.2 & 14.4 & 2.0 & 19.9 & 1.3 & 2.9 & 37.4 & 2.9 & 1.6 & 18.5 & 2.5 \\ SDF[3] & 0.2 & 0.8 & _430.7_ & **0.2** & **1.27** & _170.5_ & **0.3** & 1.8 & _860.2_ & **0.2** & 0.9 & _174.4_ \\ SDF[35](\(|3K|\)) & 0.5 & 1.3 & 54.2 & 2.0 & 18.2 & 1.6 & - & - & - & - & - & - \\ SDF+H\({}_{NN}\) & 0.3 & 0.8 & 71.6 & 1.4 & 9.4 & 5.0 & 1.7 & 12.7 & 75.8 & 0.8 & 5.2 & 19.4 \\ SDF+H\({}_{logdet}\) & 0.7 & 1.7 & 62.9 & 1.2 & 8.8 & 3.9 & 1.4 & 12.1 & 5.4 & 0.6 & 3.7 & 4.0 \\ SDF+\(\hat{\mathbf{H}}_{det}\) & 0.08 & 0.5 & 110.4 & **0.2** & **1.31** & **18.6** & 0.5 & 3.5 & 51.0 & **0.2** & **0.8** & **11.5** \\ SDF+H\({}_{PNN}\) & **0.03** & **0.2** & **134.7** & 0.3 & 2.1 & 10.2 & **0.4** & 4.5 & **63.2** & **0.2** & 1.7 & 15.6 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Comparison of implicit curvature and Chamfer distance**. The implicit curvature (K) is calculated according to Eq. 2, K\({}_{min}\) is calculated using Eq. 16, and their median values are reported (Med). The Chamfer distance (CD) uses the sum of squared distances over 500K points. \(\downarrow\) means lower values are better. For comparison, we employ SDF reconstructions of the developable surface results of other methods (SDF[3], SDF[35]). _Note: only the implicit curvature values are measured from the SDF reconstruction; the Chamfer distance is measured from their discrete representation while trying to match or lower the_ Med(K\({}_{min}\))_._
Figure 4: **Histogram of the implicit Gaussian curvature of the Stanford bunny (Fig. 6) reconstructed using (a) SDF shape approximation without any regularizer, in comparison to the SDF developable shape approximations using the regularizer variants of Section 4, shown in (b)-(e).**
Figure 5: **Ablation on regularizer weights, \(\lambda\), and developability of the surface**. Row (a) shows the comparison for the \(\hat{\mathbf{H}}_{det}\)-regularized surface, and row (b) shows results for the \(\mathbf{H}_{PNN}\)-regularized surface.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Model & Reg & Med \(\downarrow\) & Mean \(\downarrow\) & CD \(\downarrow\) \\ \(\lambda=0\) & \(\lambda\) & K & K & \\ \hline \multicolumn{5}{c}{_(table body lost in extraction)_} \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Quantitative result: Ablation study on regularizer weight \(\lambda\)'s effect on surface developability and Chamfer distance**. Evaluation of implicit Gaussian curvature (K) for different \(\lambda\) weights on Albert Einstein by iczfirzis, licensed under CC BY-SA (Fig. 5). Median (Med) and mean of K, as well as Chamfer distance (CD) to ground truth, are measured using 250k surface points after ICP [29]. The best performance is highlighted in bold.
form sampling) followed by iterative closest point (ICP) [29] for alignment. Then the bidirectional Chamfer distance is calculated between the point samples. Secondly, to measure surface developability, we compute the median of the absolute minimum of the implicit principal curvatures (\(K_{\text{min}}\)) at each point. Measuring the minimum is crucial in cases where one of \(K_{1}\) or \(K_{2}\) is close to 0 while the other possesses a high value (such as at creases). This allows us to accurately assess the overall curvature, avoiding underestimation of developability at points where the surface is nearly flat. To compute the principal curvatures, we follow Goldman [31] to obtain the implicit mean curvature \(M\):
\[M(\mathbf{p})\text{=}\frac{\nabla f(\mathbf{p})\cdot\mathbf{H}_{f}(\mathbf{p })\cdot\nabla f(\mathbf{p})^{T}-\left|\nabla f(\mathbf{p})\right|^{2}\cdot \text{Tr}(\mathbf{H}_{f}(\mathbf{p}))}{2\left|\nabla f(\mathbf{p})\right|^{3}} \tag{14}\]
Figure 6: **Comparison of developable surface reconstruction** results of two of our regularization methods, (c) and (d) from Sec. 4, with the results of (b) Binninger, Verhoeven et al. [3]. The insets show that (b) fails to preserve the structure of the shape while our methods do. Row 1 shows a rotated view of the back of the dragon, Row 3 shows the grapes on the back side of Lucy, Row 4 the wings
Using Eq. 2 and Eq. 14, we get the principal curvature values and their absolute minimum as follows [31]:
\[K_{1},K_{2}{=}M(\mathbf{p})\pm\sqrt{M^{2}(\mathbf{p})-K(\mathbf{p})} \tag{15}\] \[K_{\text{min}}{=}\text{min}(\left|K_{1}\right|,\left|K_{2}\right|) \tag{16}\]
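For concreteness, the following PyTorch sketch (our addition, not the authors' code) evaluates Eqs. 14-16 by automatic differentiation; the Gaussian curvature \(K\) is computed with Goldman's bordered-Hessian closed form, which we assume agrees with the paper's Eq. 2.

```python
import torch

def implicit_curvatures(f, p):
    """Eqs. 14-16 at query points p (N, 3) on or near the zero level set of f."""
    p = p.detach().requires_grad_(True)
    g = torch.autograd.grad(f(p).sum(), p, create_graph=True)[0]        # gradients (N, 3)
    H = torch.stack([torch.autograd.grad(g[:, i].sum(), p, create_graph=True)[0]
                     for i in range(3)], dim=1)                          # Hessians (N, 3, 3)
    gn = g.norm(dim=1)
    # Mean curvature M, Eq. 14 (sign depends on the orientation of the gradient)
    gHg = torch.einsum('ni,nij,nj->n', g, H, g)
    M = (gHg - gn ** 2 * H.diagonal(dim1=1, dim2=2).sum(-1)) / (2 * gn ** 3)
    # Gaussian curvature K via the bordered ("augmented") Hessian
    Hb = torch.cat([torch.cat([H, g.unsqueeze(2)], dim=2),
                    torch.cat([g.unsqueeze(1), torch.zeros_like(gn)[:, None, None]], dim=2)],
                   dim=1)                                                # (N, 4, 4)
    K = -torch.linalg.det(Hb) / gn ** 4
    # Principal curvatures, Eqs. 15-16 (clamp guards round-off making M^2 - K < 0)
    root = torch.sqrt(torch.clamp(M ** 2 - K, min=0.0))
    K1, K2 = M + root, M - root
    return K1, K2, torch.minimum(K1.abs(), K2.abs())                     # K_min

# Sanity check on the unit-sphere SDF: |K1| = |K2| = 1 on the surface, so K_min = 1.
pts = torch.nn.functional.normalize(torch.randn(8, 3), dim=1)
print(implicit_curvatures(lambda q: q.norm(dim=1) - 1.0, pts)[2])
```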
Results. Tables 1, 2 and 3 present our quantitative evaluation of the regularizer variants. We find that all variants incorporating our developability term significantly reduce Gaussian curvature on the reconstructed surface compared to the ground truth. It is worth noting that while approaching developability, the shape approximation of the reconstruction does not deviate much (Fig. 6). This is reflected in the Chamfer distance metric compared to other methods (Tables 1, 2). Additionally, we observe that each variant offers a different level of developability approximation (Fig. 1), owing to the varied approaches and relaxations used to minimize the Hessian rank, but the H\({}_{NN}\) and H\({}_{det}\) variants provide consistent piecewise developable patches with lower \(K_{\text{min}}\) and lower Chamfer distance. The same conclusion can be observed qualitatively (Figs. 1, 3). H\({}_{det}\) gives piecewise planar developable patches, H\({}_{NN}\) gives planar and non-planar developable patches, H\({}_{logdet}\) minimizes overall curvature but produces stronger crease lines, while H\({}_{PNN}\) tries to minimize both principal curvatures but has an adverse effect on shape approximation.
Robustness. We introduce a perturbation of \(1\%\) to the positions of the input point cloud while preserving the normal signs, as sketched below. In Figure 7, (b) shows the reconstruction of the noisy point cloud using SDF reconstruction without any regularizer, while (c) and (d) depict the reconstructions using the H\({}_{PNN}\) and H\({}_{det}\) regularizer variants for the same noisy input. Despite the noise, the reconstructed surfaces remain close to the ground-truth surface. Note that, as the noise percentage increases, the reconstructed SDF surface gets thicker.
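A minimal NumPy sketch of this perturbation (our addition; scaling the \(1\%\) by the bounding-box diagonal is our assumption, not stated in the text):

```python
import numpy as np

def perturb(points, normals, noise_pct=0.01, rng=np.random.default_rng(0)):
    # Scale the noise by the bounding-box diagonal so "1%" is shape-relative.
    diag = np.linalg.norm(points.max(0) - points.min(0))
    noisy = points + noise_pct * diag * rng.standard_normal(points.shape)
    return noisy, normals  # normal signs (orientation) are kept unchanged
```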
Ablation Results. Our findings reveal that increasing the regularizer weight enhances developability, as evidenced by the lower median and mean curvatures shown in Table 3. However, this improvement comes with the trade-off of a greater deviation from the shape approximation.
## 6 Conclusion
We have developed a method to approximate the developability of surfaces, applicable to closed surfaces with varying levels of detail. Our approach leverages the implicit representation of the surface and introduces a novel regularizer term that acts on the implicit's Hessian and gradient, encouraging the emergence of a piecewise developable surface with automatic crease formation. Experimental results demonstrate the degree of developability achieved by our method while preserving the surface's structural characteristics better than alternative techniques.
Limitations and future work. While our method shows promising results, it works only on single closed surfaces and assumes correct normal signs in the provided point cloud. With the current architecture, the natural next step is to generalize this as a data-driven approach while extending it to handle open surfaces and noisy normals. Our method is a global shape optimization method, and thus does not adequately preserve details in regions of a shape with different levels of detail (Fig. 6, Lucy model, where higher weights make the drapes more developable, but face details are not preserved). Moreover, there are potential failures arising from marching cubes reconstruction, which may not ensure connectivity for shapes with thin connecting structures, as illustrated in Figure 8 with higher regularizer weights. Thus, automatically segmenting point clouds and applying an appropriate regularizer to individual segments is a promising avenue for future research.
Figure 8: **Failure case of our method.** (b) shows the reconstruction using the H\({}_{PNN}\) regularizer variant with a larger regularizer weight \(\lambda\). Thin connecting structures (the dog's ears) get disconnected or vanish.
Figure 7: **Comparison of developable surface reconstruction results with noisy input** point cloud (\(1\%\) noise) for the H\({}_{PNN}\) and H\({}_{det}\) regularizer variants.
2305.13537 | Internal groupoids as involutive-2-links | Regardless of its environment, the category of internal groupoids is shown to
be equivalent to the full subcategory of involutive-2-links that are unital and
associative. The new notion of involutive-2-link originates from the study of
triangulated surfaces and their application in additive manufacturing and
3d-printing. Thus, this result establishes a bridge between the structure of an
internal groupoid and an abstract triangulated surface. An example is provided
which can be thought of as a crossed-module of magmas rather than groups. | Nelson Martins-Ferreira | 2023-05-22T23:20:12Z | http://arxiv.org/abs/2305.13537v1 | # Internal groupoids as involutive-2-links
###### Abstract.
Regardless of its environment, the category of internal groupoids is shown to be equivalent to the full subcategory of involutive-2-links that are unital and associative. The new notion of involutive-2-link originates from the study of triangulated surfaces and their application in additive manufacturing and 3d-printing. Thus, this result establishes a bridge between the structure of an internal groupoid and an abstract triangulated surface. An example is provided which can be thought of as a crossed-module of magmas rather than groups.
The purpose of this note is to build a bridge between the study of internal groupoids and the study of triangulated surfaces. The structure of an abstract triangulated surface, as described in [22], has motivated the search for an analogous model for an internal groupoid. The result is presented here under the name _involutive-2-link_ with its two main properties: being unital and associative.
**Theorem 1**.: _Let \(\mathbf{C}\) be any category. The category of internal groupoids is equivalent to the full subcategory of unital and associative involutive-2-links._
An _involutive-2-link_ is a morphism \(m\colon A\to B\) equipped with two interlinked involutions on its domain. More precisely, it consists of a triple \((\theta,\varphi,m\colon A\to B)\) with \(\theta,\varphi\colon A\to A\) such that \(\theta^{2}=\varphi^{2}=1_{A}\) and \(\theta\varphi\theta=\varphi\theta\varphi\). Note that the subgroup of \(\operatorname{Aut}(A)\), generated by \(\theta\) and \(\varphi\), is the dihedral group of order 6.
A morphism between involutive-2-links, say from \((\theta,\varphi,m\colon A\to B)\) to \((\theta^{\prime},\varphi^{\prime},m^{\prime}\colon A^{\prime}\to B^{\prime})\) is a pair of morphisms \((f\colon A\to A^{\prime},g\colon B\to B^{\prime})\) such that \(f\theta=\theta^{\prime}f\), \(f\varphi=\varphi^{\prime}f\) and \(m^{\prime}f=gm\).
**Definition 1**.: _Let \(\mathbf{C}\) be any category. An involutive-2-link structure in \(\mathbf{C}\), say \((\theta,\varphi,m\colon C_{2}\to C_{1})\), is said to be:_
1. unital _when the two pairs of morphisms_ \((m,m\theta)\)_,_ \((m,m\varphi)\) _are jointly monomorphic and there exist morphisms_ \(e_{1},e_{2}\colon C_{1}\to C_{2}\)
_such that_ \[me_{1}=1_{C_{1}}=me_{2}\] (1) \[\theta e_{2}=e_{2},\quad\varphi e_{1}=e_{1}\] (2) \[m\theta\varphi e_{2}=m\varphi\theta e_{1}\] (3) \[m\theta e_{1}m\varphi=m\varphi e_{2}m\theta\] (4) \[m\theta e_{1}m=m\theta e_{1}m\theta\] (5) \[m\varphi e_{2}m=m\varphi e_{2}m\varphi.\] (6)
2. associative _when the pair_ \((m\varphi,m\theta)\) _is bi-exact (see diagram (_7_)_ _below with_ \(m\varphi\) _as_ \(\pi_{1}\) _and_ \(m\theta\) _as_ \(\pi_{2}\)_) and the induced morphisms_ \(m_{1},m_{2}\colon C_{3}\to C_{2}\)_, determined by (see diagram (_9_))_ \[\pi_{1}m_{1} =mp_{1},\quad\pi_{2}m_{1}=\pi_{2}p_{2}\] \[\pi_{1}m_{2} =\pi_{1}p_{1},\quad\pi_{2}m_{2}=mp_{2}\] _are such that_ \(mm_{1}=mm_{2}\)_._
A pair of parallel morphisms (or a digraph) is said to be _bi-exact_ if when considered as a span it can be completed into a commutative square which is both a pullback and a pushout and moreover, if considered as a cospan, it can be completed into another commutative square which is both a pullback and a pushout. In other words, a digraph such as
(7)
is _bi-exact_ precisely when the zig-zag
(8)
can be completed with two commutative squares
(9)
which are both simultaneously a pullback and a pushout. Such squares are also called exact squares, bicartesian squares, Doolittle diagrams or pulation squares [1].
The notion of a bi-exact pair of parallel morphisms is a way to study internal groupoids in arbitrary categories, even though pullbacks may not be available as canonical constructions. However, since the results are invariant under the Yoneda embedding, our proofs are carried out in the ambient category of sets and maps. Nevertheless, details are given as if working in a context where pullbacks have to be considered as a property of commutative squares.
The functor \(F\) from the category of internal groupoids to the category of involutive-2-links is defined via the assignment
(10)
with \(\theta=\langle i\pi_{1},m\rangle\), \(\varphi=\langle m,i\pi_{2}\rangle\) and it is full and faithful. Indeed, let us consider an internal groupoid ([24], see also [3], Section 7.1) as a diagram of the form
(11)
such that
\[de=1_{C_{1}}=ce \tag{12}\] \[dm=d\pi_{2},\quad cm=c\pi_{1},\quad d\pi_{1}=c\pi_{2}\] (13) \[di=c,\quad ci=d,\quad i^{2}=1_{C_{1}},\quad ie=e \tag{14}\]
and satisfying the following further properties:
(a) the commutative square
(15)
is a pullback square;
(b) \(m\langle 1_{C_{1}},ed\rangle=1_{C_{1}}=m\langle ec,1_{C_{1}}\rangle\);
(c) \(m\langle 1_{C_{1}},i\rangle=ec,\quad m\langle i,1_{C_{1}}\rangle=ed\);
(d) the cospan \(C_{2}\xrightarrow{d\pi_{2}}C_{0}\xrightarrow{c}C_{1}\) can be completed into a pullback square
(16)
(e) \(m(1\times m)=m(m\times 1)\), where \((1\times m),(m\times 1)\colon C_{3}\to C_{2}\) are morphisms uniquely determined as \[\pi_{2}(m\times 1) =p_{2}\] \[\pi_{1}(m\times 1) =m\] \[\pi_{2}(1\times m) =m\langle\pi_{2}p_{1},p_{2}\rangle\] \[\pi_{1}(1\times m) =\pi_{2}.\]
The functor \(F\) takes an internal groupoid such as (11), forgets the underlying reflexive graph
(17)
keeps the morphism \(m\colon C_{2}\to C_{1}\) (the multiplicative structure of the internal groupoid), and contracts the remaining information into two endomorphisms \(\theta,\varphi\colon C_{2}\to C_{2}\) of the form
\[\theta=\langle i\pi_{1},m\rangle\quad\varphi=\langle m,i\pi_{2}\rangle. \tag{18}\]
As a consequence, we have
\[m\varphi=\pi_{1},\quad m\theta=\pi_{2} \tag{19}\] \[\pi_{1}\varphi=m,\quad\pi_{1}\theta=i\pi_{1}\] (20) \[\pi_{2}\varphi=i\pi_{2},\quad\pi_{2}\theta=m. \tag{21}\]
The conditions \(\theta^{2}=\varphi^{2}=1_{C_{2}}\) and \(\theta\varphi\theta=\varphi\theta\varphi\) are easily verified. Hence, the functor is well defined and it is clearly faithful.
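For a concrete sanity check, here is a small Python sketch of ours (not part of the paper) verifying these identities for a group \(G\) regarded as a one-object groupoid, where \(C_{2}=G\times G\), \(m(x,y)=xy\), \(i(x)=x^{-1}\), so that \(\theta(x,y)=(x^{-1},xy)\) and \(\varphi(x,y)=(xy,y^{-1})\) as in (18):

```python
from itertools import permutations

G = list(permutations(range(3)))                       # S3 as permutations of {0,1,2}
mul = lambda a, b: tuple(a[b[i]] for i in range(3))    # composition: a after b
inv = lambda a: tuple(sorted(range(3), key=lambda i: a[i]))  # inverse permutation
theta = lambda x, y: (inv(x), mul(x, y))               # theta = <i pi1, m>
phi   = lambda x, y: (mul(x, y), inv(y))               # phi = <m, i pi2>

for x in G:
    for y in G:
        assert theta(*theta(x, y)) == (x, y)           # theta^2 = 1
        assert phi(*phi(x, y)) == (x, y)               # phi^2 = 1
        assert theta(*phi(*theta(x, y))) == phi(*theta(*phi(x, y)))  # braid relation
print("involutive-2-link identities hold on S3 x S3")
```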
In order to see that the functor \(F\) is full, let us consider two internal groupoids, say
(22)
and
(23)
denoted respectively by \(C\) and \(C^{\prime}\). Let us assume the existence of a morphism of involutive-\(2\)-links from \(F(C)\) to \(F(C^{\prime})\), that is, a pair of morphisms \(f_{i}\colon C_{i}\to C^{\prime}_{i}\), with \(i=1,2\) such that \(\theta^{\prime}f_{2}=f_{2}\theta\), \(\varphi^{\prime}f_{2}=f_{2}\varphi\) and \(m^{\prime}f_{2}=f_{1}m\), with \(\theta,\varphi,\theta^{\prime},\varphi^{\prime}\) the respective involutions associated with \(F(C)\) and \(F(C^{\prime})\). We need to show that the pair \((f_{2},f_{1})\) can be
extended to a morphism of internal groupoids
(24)
First observe that \(f_{2}(x,y)=(f_{1}(x),f_{1}(y))\) since \(\pi_{1}^{\prime}f_{2}=m^{\prime}\varphi^{\prime}f_{2}=m^{\prime}f_{2}\varphi=f_{1 }m\varphi=f_{1}\pi_{1}\) and similarly \(\pi_{2}^{\prime}f_{2}=f_{1}\pi_{2}\). This means that the hypotheses \(\theta^{\prime}f_{2}=f_{2}\theta\), \(\varphi^{\prime}f_{2}=f_{2}\varphi\) and \(m^{\prime}f_{2}=f_{1}m\) are translated, respectively, as
\[(f_{1}(x)^{-1},f_{1}(x)f_{1}(y)) =(f_{1}(x^{-1}),f_{1}(xy))\] \[(f_{1}(x)f_{1}(y),f_{1}(y)^{-1}) =(f_{1}(xy),f_{1}(y^{-1}))\] \[f_{1}(x)f_{1}(y) =f_{1}(xy)\]
from which we conclude \(i^{\prime}f_{1}=f_{1}i\). We also have \(f_{1}(ed(x))=f_{1}(x^{-1}x)=f_{1}(x^{-1})f_{1}(x)=f_{1}(x)^{-1}f_{1}(x)=e^{\prime}d^{\prime}f_{1}(x)\) and \(f_{1}(ec(x))=e^{\prime}c^{\prime}f_{1}(x)\), which give
\[\langle 1,e^{\prime}d^{\prime}\rangle f_{1} =f_{2}\langle 1,ed\rangle\] \[\langle 1,e^{\prime}c^{\prime}\rangle f_{1} =f_{2}\langle 1,ec\rangle\]
and permits the definition of \(f_{0}\) either as \(d^{\prime}f_{1}e\) or as \(c^{\prime}f_{1}e\). Hence, the triple \((f_{2},f_{1},f_{0})\) is a morphism of internal groupoids from \(C\) to \(C^{\prime}\), showing that the functor \(F\) is full.
Let us observe that even when \(i\) is not made explicit, the two involutions \(\theta\) and \(\varphi\) are still uniquely determined because the object \(C_{2}\) can be presented not only as the pullback of \(d\) and \(c\) but also as the kernel pair of \(d\) or the kernel pair of \(c\) and hence both pairs \((m,\pi_{1})\) and \((m,\pi_{2})\) are in particular jointly monomorphic. This fact suggests the possibility of considering an even simpler structure to describe internal groupoids by using only \(\theta\) or \(\varphi\) together with the multiplication \(m\). However, this would give rise to a different structure which requires further investigation. Nevertheless, it is possible that the bridge with triangulated structures [22] will be widened by the new structure to be found. Moreover, the common denominator to the unitary and associativity properties is the requirement that the two pairs \((m,m\theta)\) and \((m,m\varphi)\) are jointly monomorphic which reinforces the possibility of having, say, the existence of \(\varphi\) as a property of the pair \((m,m\theta)\).
In order to prove Theorem 1 it is readily seen that if \((\theta,\varphi,m)\) is obtained from an internal groupoid by applying the functor \(F\) then
it is a unital and associative involutive-2-link. On the other hand, if \((\theta,\varphi,m)\) is a unital and associative involutive-2-link, then the fact that the pairs \((m,m\theta)\) and \((m,m\varphi)\) are jointly monomorphic uniquely determines the morphisms \(e_{1}\) and \(e_{2}\) which are required to exist by the unitary property and hence fulfill the properties (b) and (c) of an internal groupoid. Indeed, the morphism \(i\colon C_{1}\to C_{1}\) is obtained by condition (3) either as \(i=m\theta\varphi e_{2}\) or as \(i=m\varphi\theta e_{1}\). The morphism \(e\colon C_{0}\to C_{1}\) is uniquely determined by condition (4) as such that \(ed=m\theta e_{1}\) and \(ec=m\varphi e_{2}\), where \(d\) and \(c\) are obtained as in diagram (9) (with \(m\varphi\) as \(\pi_{1}\) and \(m\theta\) as \(\pi_{2}\), which consequently also gives the property (a) of an internal groupoid because the pair \((m\theta,m\varphi)\) is bi-exact). Conditions (1), (5) and (6) assert the contractibility of the pairs \((m,m\theta)\) and \((m,m\varphi)\) in the sense of Beck (see [18], p. 150). Condition (2) is a central ingredient and gives \(e_{1}e=e_{2}e\), from which the conditions \(dm=d\pi_{2}\) and \(cm=c\pi_{1}\) are deduced, thus permitting the two morphisms \(m_{1}\) and \(m_{2}\) to be defined from the fact that the pair \((m\theta,m\varphi)\) is bi-exact.
The remaining details in the proof are easily obtained. Let us turn our attention to an example that can be seen as a generalization of crossed-modules from groups to magmas [23, 25].
**Example 1**.: _Let \(X=(X,\cdot)\) be a magma with a distinguished element \(1\in X\) and \(B\) be a non-empty set, with \(0\in B\), together with maps \(\overline{(\;)}\colon X\to X\), \(f\colon X\times B\times X\times B\to X\) and \(g\colon X\times B\to B\) such that_
\[f(y\cdot x,b,y^{\prime}\cdot x^{\prime},b^{\prime})=f(y,g(x,b), y^{\prime},g(x^{\prime},b^{\prime}))\cdot f(x,b,x^{\prime},b^{\prime}) \tag{25}\] \[g(y\cdot x,b)=g(y,g(x,b))\] (26) \[g(1,0)=0,\quad 1\cdot 1=1\] (27) \[g(\bar{x}(\bar{y}(yx)),b)=b,\quad g(\bar{x}x,b)=b. \tag{28}\]
_Note that sometimes \(x\cdot y\) is written as \(xy\). Let us also consider the sets_
\[C_{1}=\{(x,b)\in X\times B\mid f(x,b,0,0)=x=f(0,0,x,b)\} \tag{29}\] \[C_{2}=\{(y,x,b)\in X^{2}\times B\mid(y,g(x,b)),(x,b)\in C_{1}\}, \tag{30}\]
_and the formulas_
\[m(y,x,b)=(yx,b) \tag{31}\] \[\theta(y,x,b)=(\bar{y},yx,b)\] (32) \[\varphi(y,x,b)=(yx,\bar{x},g(\bar{y}(yx),b)). \tag{33}\]
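As a quick illustration (our addition, not part of the paper), the following Python sketch instantiates Example 1 with \(X\) a finite group, \(\bar{x}=x^{-1}\), and the trivial action \(g(x,b)=b\) (simplifying assumptions for the check), and verifies that the maps (31)-(33) satisfy \(\theta^{2}=\varphi^{2}=1\) and \(\theta\varphi\theta=\varphi\theta\varphi\) on \(X^{2}\times B\):

```python
# X = Z/5 under addition (written multiplicatively in the paper), bar(x) = -x,
# B = {0, 1}, g(x, b) = b -- all simplifying assumptions for this sanity check.
X, B = range(5), range(2)
mul = lambda x, y: (x + y) % 5
bar = lambda x: (-x) % 5
g = lambda x, b: b

theta = lambda y, x, b: (bar(y), mul(y, x), b)                             # Eq. (32)
phi   = lambda y, x, b: (mul(y, x), bar(x), g(mul(bar(y), mul(y, x)), b))  # Eq. (33)

for y in X:
    for x in X:
        for b in B:
            t = (y, x, b)
            assert theta(*theta(*t)) == t and phi(*phi(*t)) == t  # involutions
            assert theta(*phi(*theta(*t))) == phi(*theta(*phi(*t)))  # braid relation
print("involutive-2-link identities hold")
```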
The following propositions refer to the structure of Example 1 and should be considered as simple observations.
**Proposition 1**.: _The maps \(\theta,\varphi\colon X^{2}\times B\to X^{2}\times B\) are involutions if and only if the conditions_
\[\bar{\bar{x}}=x,\quad\bar{y}(yx)=x,\quad(yx)\bar{x}=y \tag{34}\]
hold for all \(x,y\in X\). In addition, the further condition \(\theta\varphi\theta=\varphi\theta\varphi\) is satisfied if and only if the two extra conditions_
\[x(\overline{yx})=\bar{y},\quad(\overline{yx})y=\bar{x} \tag{35}\]
_are also satisfied for all \(x,y\in X\)._
Let us restrict our attention to the subsets \(C_{1}\) and \(C_{2}\).
**Proposition 2**.: _If \((y,x,b)\in C_{2}\) then \((yx,b)\in C_{1}\)._
Hence, \(m(y,x,b)=(yx,b)\) is a well defined map \(m\colon C_{2}\to C_{1}\).
**Proposition 3**.: _The formulas \(\theta\), \(\varphi\) are well defined maps \(C_{2}\to C_{2}\) if and only if the following condition holds:_
\[\text{if }(y,x,b)\in C_{2}\text{ then }(\bar{x},\bar{y},g(yx,b))\in C_{2}. \tag{36}\]
Let us give sufficient conditions for the structure \((\theta,\varphi,m\colon C_{2}\to C_{1})\) to be (a well defined) involutive-2-link.
**Proposition 4**.: _The structure \((\theta,\varphi,m\colon C_{2}\to C_{1})\) is (a well defined) involutive-2-link as soon as the following two conditions hold:_
\[\text{if }(x,b)\in C_{1}\text{ then }(\bar{x},g(x,b))\in C_{1}\text{ and }\bar{\bar{x}}=x \tag{37}\] \[\text{if }(y,x,b)\in C_{2}\text{ then }\bar{y}(yx)=x,\,(yx)\bar{x}=y,\,x( \overline{yx})=\bar{y},\,(\overline{yx})y=\bar{x} \tag{38}\]
Note that when \(\bar{x}\) is unique with the properties \(\bar{x}(x\bar{x})=\bar{x}\) and \((x\bar{x})x=x\) then \((\bar{x},g(x,b))\in C_{1}\) as soon as \((x,b)\in C_{1}\).
From now on we assume the conditions of Proposition 4 and \(\bar{1}=1\).
**Proposition 5**.: _If \((1,b)\in C_{1}\) for every \(b\in B\), then the pair \((m\theta,m\varphi)\) is bi-exact. Furthermore, the triple \((\theta,\varphi,m\colon C_{2}\to C_{1})\) is an associative involutive-2-link as soon as \((X,\cdot)\) is a semigroup._
**Proposition 6**.: _If \((1,b)\in C_{1}\) for all \(b\in B\) and the pairs \((m,m\theta)\), \((m,m\varphi)\) are jointly monomorphic, then the triple \((\theta,\varphi,m\colon C_{2}\to C_{1})\) is a unital involutive-2-link as soon as \((X,\cdot,1)\) is a unital magma._
Merging the two previous results, while using Theorem 1, we obtain.
**Proposition 7**.: _If \((1,b)\in C_{1}\) for all \(b\in B\), the pairs \((m,m\theta)\), \((m,m\varphi)\) are jointly monomorphic and \((X,\cdot,1)\) is a monoid, then the triple \((\theta,\varphi,m\colon C_{2}\to C_{1})\) is a unital and associative involutive-2-link in the category of sets and maps. Moreover, the underlying reflexive graph of the internal groupoid associated with \((\theta,\varphi,m\colon C_{2}\to C_{1})\) is_
\[C_{1}\;\substack{\xrightarrow{\;d\;}\\ \xleftarrow{\;e\;}\\ \xrightarrow{\;g\;}}\;B, \tag{39}\]
_with \(d(x,b)=b\) and \(e(b)=(1,b)\)._
In order to lift the structure \((\theta,\varphi,m\colon C_{2}\to C_{1})\) from the category of sets and maps to the category of magmas and magma homomorphisms we will now assume that \(B=(B,+)\) is a magma and consider the sets \(C_{1}\) and \(C_{2}\) as magmas with operations, respectively,
\[(x,b)+(x^{\prime},b^{\prime})=(f(x,b,x^{\prime},b^{\prime}),b+b^{ \prime}),\] \[(y,x,b)+(y^{\prime},x^{\prime},b^{\prime})=(f(y,g(x,b),y^{\prime },g(x^{\prime},b^{\prime})),f(x,b,x^{\prime},b^{\prime}),b+b^{\prime}).\]
For simplicity, let us from now on assume that \((X,\cdot,1)\) is a group, with \(\bar{x}=x^{-1}\), and \(f(1,b,1,b^{\prime})=1\) for all \(b,b^{\prime}\in B\).
**Proposition 8**.: _The structure \((\theta,\varphi,m\colon C_{2}\to C_{1})\) is a unital and associative involutive-2-link in the category of magmas if and only if the condition_
\[g(f(x,b,x^{\prime},b^{\prime}),b+b^{\prime})=g(x,b)+g(x^{\prime},b^{\prime}) \tag{40}\]
_is satisfied for every \((x,b)\) and \((x^{\prime},b^{\prime})\) in \(C_{1}\). Furthermore, if \((x,0)\in C_{1}\) for every \(x\in X\), then, for every \((x,b)\in C_{1}\), \(g(x,b)=g(x,0)+b\)._
Finally, in order to compare the previous results with the classical notion of crossed module, let us assume that \((B,+,0)\) is a group. It then follows that the map \(f\) is necessarily of the form
\[f(x,b,x^{\prime},b^{\prime})=x\cdot f(1,b,x^{\prime},0) \tag{41}\]
and that \(\xi(b,x^{\prime})=f(1,b,x^{\prime},0)\) is a group action of \(B\) on \(X\). Moreover, it is not difficult to see that the conditions (25) and (40) reproduce the classical crossed-module constraints.
We have shown that independently of its environment, the category of internal groupoids is equivalent to the full subcategory of involutive-2-links that are unital and associative. Our approach contrasts with the one in which a groupoid is seen as a reflexive graph equipped with an extra structure, usually adopted when groupoids are studied from an algebraic point of view [2, 4, 5, 6, 16, 19, 20, 21]. In our case, the underlying reflexive graph of a groupoid is found as a property of its associated involutive-2-link which is closer to a more geometrical (or differential) point of view [10, 11, 12, 14]. However, this work also goes into the direction of [7, 15] in the sense that it does not require the ambient categories to have pullbacks as canonical constructions and furthermore it can be generalized to higher dimensions [17].
We conclude with the observation that although an internal groupoid is an instance of an internal category, Brandt [8, 9] predates Eilenberg and Mac Lane [13] in delineating an axiomatic portrait of a (connected) groupoid ([26] Remark 19.3.12). Our approach suggests that internal groupoids can be studied as involutive-2-links in which the unitary and associativity properties, being independent of each other, give rise to a wide spectrum of generalizations.
## Acknowledgement
Funded by FCT/MCTES (PIDDAC) through the following Projects: Associate Laboratory ARISE LA/P/0112/2020; UIDP/04044/2020; UIDB/04044/2020; PAMI-ROTEIRO/0328/2013 (N\({}^{\text{o}}\) 022158); MATIS (CENTRO-01-0145-FEDER-000014 - 3362); CENTRO-01-0247-FEDER-(069665, 039969); POCI-01-0247-FEDER-(069603, 039958, 039863, 024533); Generative thermodynamic; by CDRSP and ESTG from the Polytechnic of Leiria.
Special thanks are due to IPEiria's Run-EU program and in particular to the kind and inspiring hospitality offered by FH Vorarlberg - University of Applied Sciences, at Dornbirn, Austria.
|
2308.04909 | Adversarial Deep Reinforcement Learning for Cyber Security in Software
Defined Networks | This paper focuses on the impact of leveraging autonomous offensive
approaches in Deep Reinforcement Learning (DRL) to train more robust agents by
exploring the impact of applying adversarial learning to DRL for autonomous
security in Software Defined Networks (SDN). Two algorithms, Double Deep
Q-Networks (DDQN) and Neural Episodic Control to Deep Q-Network (NEC2DQN or
N2D), are compared. NEC2DQN was proposed in 2018 and is a new member of the
deep q-network (DQN) family of algorithms. The attacker has full observability
of the environment and access to a causative attack that uses state
manipulation in an attempt to poison the learning process. The implementation
of the attack is done under a white-box setting, in which the attacker has
access to the defender's model and experiences. Two games are played; in the
first game, DDQN is a defender and N2D is an attacker, and in second game, the
roles are reversed. The games are played twice; first, without an active
causative attack and secondly, with an active causative attack. For execution,
three sets of game results are recorded in which a single set consists of 10
game runs. The before and after results are then compared in order to see if
there was actually an improvement or degradation. The results show that with
minute parameter changes made to the algorithms, there was growth in the
attacker's role, since it is able to win games. Implementation of the
adversarial learning by the introduction of the causative attack showed the
algorithms are still able to defend the network according to their strengths. | Luke Borchjes, Clement Nyirenda, Louise Leenen | 2023-08-09T12:16:10Z | http://arxiv.org/abs/2308.04909v2 | # Adversarial Deep Reinforcement Learning for Cyber Security in Software Defined Networks
###### Abstract
This paper focuses on the impact of leveraging autonomous offensive approaches in Deep Reinforcement Learning (DRL) to train more robust agents by exploring the impact of applying adversarial learning to DRL for autonomous security in Software Defined Networks (SDN). Two algorithms, Double Deep Q-Networks (DDQN) and Neural Episodic Control to Deep Q-Network (NEC2DQN or N2D), are compared. NEC2DQN was proposed in 2018 and is a new member of the deep q-network (DQN) family of algorithms. The attacker has full observability of the environment and access to a causative attack that uses state manipulation in an attempt to poison the learning process. The implementation of the attack is done under a white-box setting, in which the attacker has access to the defender's model and experiences. Two games are played; in the first game, DDQN is a defender and N2D is an attacker, and in second game, the roles are reversed. The games are played twice; first, without an active causative attack and secondly, with an active causative attack. For execution, three sets of game results are recorded in which a single set consists of 10 game runs. The before and after results are then compared in order to see if there was actually an improvement or degradation. The results show that with minute parameter changes made to the algorithms, there was growth in the attacker's role, since it is able to win games. Implementation of the adversarial learning by the introduction of the causative attack showed the algorithms are still able to defend the network according to their strengths.
adversarial learning, deep reinforcement learning, software defined network, cyber security
## I Introduction
Software Defined Networking (SDN) is a three-layer network architecture that has been in practice since 2013 [1]. Comprising an application layer, a control layer, and an infrastructure layer, SDN delivers a robust framework for managing network tasks [2]. One of the major advantages of SDN is its separation of network control and forwarding functions, which enables the controller to be programmed for various application services and tasks. This separation facilitates the convenient management, configuration, and optimisation of network resources using standardised protocols. It is also shown in [3] that machine learning has many uses in SDN. The COVID-19 pandemic led to a significant increase in telecom users, driving further investment in SDN, an essential technology for realising the potential of 5G networks. These 5G networks are expected to bring substantial improvements to the telecom industry [4]. However, this surge has also led to an increase in cybercrime [5, 6], highlighting the ever-present need for enhanced network security.
This work focuses on employing adversarial learning in the training and implementation of model-free deep reinforcement learning in Software-Defined Networking (SDN). The rise of AI models and algorithms has been significant but met with increased scepticism. Despite this, the automation capabilities of SDN have positioned it as a strong candidate for autonomous defence mechanisms [2, 7], prompting its broad adoption across various industries. Achieving robustness in AI model implementation has proven to be crucial since attackers are perpetually attempting to exploit vulnerabilities in the learning process. Thus, it is essential to cultivate models capable of tolerating corrupted or malicious inputs; the work done in [8] emphasizes the importance of this. In this regard, the attacking agent utilises a data poisoning attack, implemented through state manipulation [7]. Experiences used for training are manipulated by implanting false positives and negatives, building on the previous work cited [9], wherein two model-free deep reinforcement learning algorithms, double deep q-learning and neural episodic control to deep q-network, were juxtaposed. In [9], they were implemented in a software defined network running a capture the flag (CTF) game. The game was set up such that one agent had to defend the network against the other, with the goal to measure and compare performances. The same game setting is used in this experimentation.
The remainder of this paper is summarised as follows: Section II summarises the problem faced in previous work when applying the model-free deep reinforcement learning agents to a software defined networks; Section III introduces the environment employed in the investigation of the work, which is kept the same as in previous work and inspired material; Section IV covers the results of the investigation, in which the win rates and performance is evaluated; Section V concludes the investigation.
## II Problem: Deep Reinforcement Learning for Cybersecurity in Software Defined Networking
### _Background on Software Defined Networking_
Software-defined networking (SDN) is an approach to network management that allows dynamic, efficient network configuration in order to improve network performance [1]. It was
spawned as the result of the desire to separate the data plane from the control plane [1]. SDN is composed of the three layers: (1) application layer; (2) Control layer; (3) infrastructure layer. The application layer is made up of applications which deliver services and communicate their network requirements to the controller using northbound APIs. The Control layer hosts the SDN controller, translates requirements into low-level controls that are then sent to the infrastructure layer using southbound API's. The infrastructure layer consists of network switches and other infrastructural components [1, 2]. The major advantage of SDN is that it separates network control and forwarding functions, allowing the controller to be programmable to perform various application services and tasks [1, 2]. Consequently, network resources can be conveniently managed, configured and optimised using the standardised protocols. Due to its architecture there has been a good variety of available open-source SDN controller platforms/frameworks, a few examples being OpenDayLight, RYU, NOX/POX and Open vSwitch [1, 2].
### _Background on Reinforcement Learning_
Reinforcement Learning (RL) deals with a sequential decision making problem where an agent interacts with the environment to maximise its rewards, implemented as a Markov Decision Process (MDP). An MDP is specified as the tuple \((S,A,P,R,\gamma)\) [10] where, at each time step \(t\), the agent (1) receives an observation \(s_{t}\in S\) of the environment; (2) takes an action \(a_{t}\in A\) based on its policy \(\pi\), which is a mapping from states to actions; and (3) obtains a reward \(r_{t}\in R\) based on state \(s_{t}\), action \(a_{t}\), and the environment's transition (governed by \(P\)) to a new state \(s_{t+1}\). The goal of the agent is to maximise its cumulative rewards, i.e., \(R_{t}\) = \(\sum_{\tau=t}^{\infty}\gamma^{\tau-t}r_{\tau}\), where \(\gamma\in(0,1]\) is a discount factor which affects the present importance of long-term rewards [7]. The focus of experimentation was on a well-known Deep RL algorithm, Double Deep Q-Networks (DDQN) [11], and a newer variant, Neural Episodic Control to Deep Q-Network (NEC2DQN) [12], and their ability to perform.
Double Deep Q-Learning. To solve the overestimation of action-values, the algorithm Double Q-Learning was proposed. Double Q-Learning is the implementation of two Q functions: \(Q_{A}\) and \(Q_{B}\). Each Q function is updated using the other's estimate of the next state's value [11]. Its creation was the result of combating the overestimation problem, well known with DQL as the maximisation bias [11]. A minimal sketch of the resulting DDQN target is given below.
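The following PyTorch sketch is our illustration of the double-estimator target as used in DDQN (the names and tensor framing are assumptions, not code from [11]): the online network selects the greedy next action while the target network evaluates it.

```python
import torch

def ddqn_target(online_q, target_q, r, s_next, done, gamma=0.99):
    """One-step DDQN target: r + gamma * Q_target(s', argmax_a Q_online(s', a))."""
    with torch.no_grad():
        a_star = online_q(s_next).argmax(dim=1, keepdim=True)   # select with online net
        q_next = target_q(s_next).gather(1, a_star).squeeze(1)  # evaluate with target net
        return r + gamma * (1.0 - done) * q_next
```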
Neural Episodic Control to Deep Q-Network. Neural Episodic Control (NEC), proposed in [13], can execute successful strategies as soon as they are experienced, instead of waiting for optimization mechanisms, such as stochastic gradient descent, to complete, as is the case with DQN. Nevertheless, NEC becomes very memory intensive in the later stages; this is where a DQN is introduced, since both converge to a \(Q\) value. A DQN can be trained from NEC and, once a certain point of convergence is reached, the load can be shifted from the NEC to the DQN. The shift from one to the other is gradual but, at a point called the change step \(CS\), NEC is no longer used for decision making, but only for training and evaluation, and decision making is done using the DQN. In this paper we decided to make the \(CS\) occur after the first 20% of turns have passed, as sketched below.
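As a sketch of this handover mechanism (our illustration; the class and method names are assumptions, not the interface of [12]):

```python
class N2DAgent:
    """Acts with NEC before the change step CS, and with the DQN afterwards."""

    def __init__(self, nec, dqn, total_turns, cs_fraction=0.2):
        self.nec, self.dqn = nec, dqn
        self.change_step = int(cs_fraction * total_turns)  # CS at 20% of turns
        self.turn = 0

    def act(self, state):
        self.turn += 1
        if self.turn <= self.change_step:
            return self.nec.greedy_action(state)  # episodic memory decides early on
        return self.dqn.greedy_action(state)      # DQN decides; NEC only trains/evaluates
```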
### _Model-Free Deep Reinforcement Learning for Cybersecurity in Software Defined Networking_
In [9], where DDQN and N2D were implemented, tested and compared for cybersecurity within an SDN framework, the goal was to investigate the use of deep reinforcement learning for autonomous network defence. DDQN, a well-known and matured algorithm, was placed against a relatively newer algorithm, N2D. N2D was chosen because it was designed to overcome the limitations of both NEC and DQN and has been shown to perform better than DDQN in certain cases [12]. A two-tailed t-test analysis of the results was done to determine if one was better than the other, by determining if there was any statistically significant difference; however, the results showed that there was none. Therefore, DDQN was determined to be the more favourable due to its simplicity. The work also served as a baseline of what can be expected, as well as a reference point to reflect on when analysing newer results from changes.
While the work cited previously showed promise, there were notable limitations and concerns [9]. One significant issue was the defender's domination of all game runs. On the surface, this bias towards the preferred outcome seems beneficial, but a deeper look reveals room for improvement, particularly from an attacker's perspective. More balanced engagement between the players would foster better learning for both agents, mitigating the environment's apparent defender bias. To counteract this bias, the attacker was permitted full observability of the environment.
The work in [9] proposed increasing the number of game runs to offer more total steps for each agent and a larger data pool for analysis. Furthermore, it suggested the implementation of adversarial learning, with the attacker conducting a causative attack on the defending agent, alongside the expansion of the network topology. It's important to note that while improving these algorithms may yield diminishing returns as a defender, there could be significant growth as an attacker. The game environment's inherent bias towards the defender means improving these algorithms may also make them more effective as offensive tools within the cybersecurity space, highlighting the potential for growth in the attacking role.
### _Adversarial Machine Learning_
Adversarial machine learning is the study of attacks on machine learning algorithms and is used in machine learning to misguide a model with malicious input [14]. It has also been shown that, by maliciously altering the input of Deep Neural Networks with adversarial attacks, they can easily be fooled into predicting the wrong label [15]. The purpose of adversarial machine learning is not to emphasise the flaws of these algorithms, but to leverage these attacks during training as a means of training more robust agents [19]. Most deployed cyber defence solutions are still rule-based and require human involvement; this opens the opportunity for false alarms [7].
Training robust agents through adversarial learning could help against any possible false alarms, allowing them to still make optimal decisions.
In this investigation, a data poisoning attack was chosen, implemented by perturbing the input of the agents. Since a state \(s_{t}\) at any step \(t\) is an array of length 80 containing binary digits \(d\in\{0,1\}\), the attack was implemented as the injection of false positives \((FPs)\) and false negatives \((FNs)\). The original observed experience is \((s,a,s^{\prime},r)\), but the agent instead observes the tampered experience \((s,a,s^{\prime}+\delta,r^{\prime})\). The implementation of the adversarial learning attack is described in Section III.
## III Environment
```
1:INPUT: Original experience \((s,a,s^{\prime},r)\)
2:Limit on number of FPs and FNs: \(LIMIT\)
3:OUTPUT: Poisoned experience \((s,a,s^{\prime}+\delta,r^{\prime})\)
4:\(FP\) = \(FN\) = [ ]
5:\(minQ_{FP}=minQ_{FN}=[\) ]
6:
7:for node in State do
8:if\(node\) is uncompromised, tentatively mark it as compromised (candidate FP) then
9:if\(|FP|<LIMIT\) or \(Q(s^{\prime}+\delta,a^{\prime})<\) any value in \(minQ_{FP}\)then
10: Insert \(node\) into \(FP\) and \(Q(s^{\prime}+\delta,a^{\prime})\) into \(minQ_{FP}\)
11:if\(|FP|>LIMIT\)then
12: remove the node with the largest stored \(Q\) from \(FP\) and \(minQ_{FP}\)
13:endif
14:endif
15: restore \(node\) as uncompromised
16:endif
17:if\(node\) is compromised, tentatively mark it as uncompromised (candidate FN) then
18:if\(|FN|<LIMIT\) or \(Q(s^{\prime}+\delta,a^{\prime})<\) any value in \(minQ_{FN}\)then
19: Insert \(node\) into \(FN\) and \(Q(s^{\prime}+\delta,a^{\prime})\) into \(minQ_{FN}\)
20:if\(|FN|>LIMIT\)then
21: Remove the node with the largest stored \(Q\) from \(FN\) and \(minQ_{FN}\)
22:endif
23:endif
24: restore \(node\) as compromised
25:endif
26:endfor
27: Change nodes in \(FN\) to uncompromised
28: Change nodes in \(FP\) to compromised
29:return\((s,a,s^{\prime}+\delta,r^{\prime})\)
```
**Algorithm 1** State manipulation attack originally from [2].
The adversarial machine learning attack implemented in this research is a state manipulation attack, which was adopted from [2] and is presented in Algorithm 1. In [2], two adversarial attacks were implemented; the first being the flipping of reward signs, and the second being a data poisoning attack done through state perturbation (manipulation). However, in [7], it was stated that the flipping-reward-sign attack proved to have made little to no impact; therefore, in this work we have chosen to omit it and focus solely on the state manipulation attack.
As mentioned in Section II-D, the experience of the defending agent is poisoned by the injection of false positives and false negatives in the state. Slight changes were made from the original in [2]; the core, however, remains the same. In our case we input the original state and loop over the part of the state that contains the nodes. A compact sketch of this selection procedure is given below.
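The following Python sketch is our addition (the defender's Q-network interface is an assumption under the white-box setting): it captures the greedy selection of Algorithm 1 by tentatively flipping each node bit, keeping the at most LIMIT flips per kind that most reduce the defender's best achievable Q value, and applying them to \(s^{\prime}\). As in Algorithm 1, each flip is evaluated independently rather than jointly.

```python
import numpy as np

def poison_state(q_values, s_next, limit=2):
    """q_values(state) -> per-action Q estimates of the defender (white-box access).

    Returns s' + delta with up to `limit` false positives (0 -> 1) and
    `limit` false negatives (1 -> 0) chosen to minimise max_a Q(s' + delta, a).
    """
    candidates = {0: [], 1: []}  # original bit value -> list of (Q after flip, index)
    for i in range(len(s_next)):
        flipped = s_next.copy()
        flipped[i] = 1 - flipped[i]                   # tentative FP or FN
        candidates[s_next[i]].append((q_values(flipped).max(), i))
    delta = np.zeros_like(s_next)
    for bit in (0, 1):                                # bit 0 -> FPs, bit 1 -> FNs
        for _, i in sorted(candidates[bit])[:limit]:  # keep the lowest-Q flips
            delta[i] = 1 - 2 * bit                    # 0 -> +1, 1 -> -1
    return s_next + delta
```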
Our environment utilised an SDN network, composed of four subnets with a total of 32 hosts and 48 visible links, integrated with a CTF game [9]. Just as in previous research, three starting points for the attacker were chosen, and a critical server flag was established as the attacker's goal [9]. The attacker targets the training step of the defending agent and operates under a white-box setting, where the attacker has direct access to the experiences and model of the defender [2]. If a black-box setting had been chosen, the attacker would need to train a surrogate model and select the appropriate nodes to falsify based on that surrogate model [7].
In the games, the players are the attacking and the defending agents. Games are categorised according to which agent is attacking and defending. For game 1, the attacker is the agent using DDQN and the defender is the agent using N2D. For game 2 these roles are reversed. Subsequently, each game is played initially without the attack, meaning that no adversarial learning takes place. The games are then played again with the inclusion of the attack, introducing adversarial learning. The CTF game was implemented in the same SDN emulation used in [9], in which the SDN was built using MiniNet with RYU as the network controller of choice. A star topology was used for the SDN, with four subnets. Subnet 1 contains 6 hosts, subnet 2 contains 8 hosts, subnet 3 contains 9 hosts, and subnet 4 contains 9 hosts.
## IV Results
The following results are representative of the performance of the agents in their roles. In this work we take the results of multiple different game sets. For each game we have 3 sets, each containing 10 consecutive game runs. Set 1 contains games played with 5,000 turns, set 2 contains games played with 50,000 turns, and set 3 contains games played with 500,000 turns. Game sets of multiple turn counts are recorded due to the change step functionality of the NEC2DQN algorithm; thus the algorithm will function differently according to the total number of steps in the game. Having results over three sets allows us to see the impact of a varying change step value, as mentioned in Section II-B. In addition, we also get to see the scaling of DDQN, since DDQN is expected to have better performance in longer games.
These results are discussed according to their game, after which the results are analysed and their implications for SDN are discussed. It should be noted that multiple outcomes could occur as a result of the inclusion of the attack, but the following are considered: (1) an agent could win more games but with an increase in the average number of turns; (2) an agent could win fewer games but have an improved turn count; (3) the agent could win more with an improved turn count; and (4) there could be no change at all. Only outcomes 1 and 3 are confident indicators of improvement; outcome 2, however, is more subjective to the situation.
1. _Game 1 Results:_ Table I shows the control results for the different sets of game 1 without the causative attack active. The attacking agent uses DDQN and the defending agent uses N2D. For set 1 the results are 7 - 3 in favour of the defender. The attacking agent managed to win games 1, 6 and 10. The defending agent, on the other hand, managed to hold back the attacker for the entirety of the game's duration. The defender won all of its games by means of outlasting the attacker, giving it an average of 5,000 turns, whereas the attacker took on average 4,140 turns to win. For set 2 the results are 6 - 4 in favour of the defender. The attacker manages to win games 1, 3, 5 and 6. The defender, however, in the remainder of the set manages to isolate/remove the attacker from the network. On average it took the defender 7,401 turns to win and the attacker 5,611. For set 3 the results are 7 - 3 in favour of the attacker. Most of the runs in the game set are won by the attacker, with the exception of runs 5, 7, and 10. On average it took the defender 9,534 turns to win and the attacker 5,698.
Table II shows the game results for the different sets of game 1 with the causative attack active. For set 1 the results are 8 - 2 in favour of the defender, with only runs 1 and 2 being won by the attacker and the remaining runs by the defender. On average it took the defender 4,845 turns to win and the attacker 589. For set 2 the results are 6 - 4 in favour of the defender, with runs 1 - 6 being won by the defender and the remainder by the attacker. On average it took the defender 9,800 turns to win and the attacker 7,625.
Table IV shows the results for the different sets of game 2 with the causative attack active. For set 1 the results were 7 - 3 in favour of the defender. On average it took the defender 4,345 turns to win and the attacker 1,558. Set 2 was dominated by the attacker, which used the N2D algorithm, with 9 wins for the attacker to 1 win for the defender. On average it took the defender 8,303 turns to win and the attacker 3,428. Set 3 was likewise dominated by the attacker using the N2D algorithm, with 9 wins for the attacker to 1 win for the defender, as in set 2. On average it took the defender 27,641 turns
to win and, over its 9 winning runs, the attacker an average of 3,110.
The results for each game are analysed as follows:
1. _Game 1:_ Figures 1 and 2 demonstrate the impact of the attack implementation on the algorithms from both defender and attacker perspectives. In Fig. 1, the results show that the defending agent using the N2D algorithm achieved more wins and improved its turn average after the attack implementation in set 1. In set 2, there was no change in win rates, but the defender's average turn count increased by 32.41%. For set 3, the defender took longer to isolate the attacker, with a significant increase in turns from 9,534 to 15,593. In Fig. 2, the attacker's average performance improved, but it won fewer games in set 1, indicating efficiency at the cost of consistency. In set 2, there was no change in win rates, but the attacker's average turn count increased by 35.89%, signifying a loss in performance. However, in set 3, the time taken to capture the flag and win decreased by 32.84%, indicating a notable improvement. The inclusion of the attack against the defender using NEC2DQN caused a longer time to isolate the attacker and win, decreased performance in set 2, and significant improvement in set 3.
2. _Game 2:_ Figures 3 and 4 illustrate the impact of the attack implementation on the algorithms from their respective roles. In Fig. 3, the defender's performance is analysed before and after the attack. For set 1, the defender using the DDQN algorithm won 3 games by isolating the attacker, a notable improvement from previously outlasting the attacker. However, in set 2 and set 3, the defender's wins decreased from 3 to 1 and from 5 to 1, respectively, indicating a clear negative impact from the attack. Examining the average turn count for the lone win in each set becomes irrelevant in this context. In Fig. 4, the attacker's perspective is explored before and after the attack implementation. For set 1, the attacking agent using the NEC2DQN algorithm experienced no change in win rate. However, in sets 2 and 3, there was a significant increase in win rates. Despite this improvement, the attacker's average turn count increased by 42.48% in set 2 and by 26.73% in set 3. Notably, the impact of the data poisoning attack was greater on the agent using the DDQN algorithm, as it only managed to win one game in both sets, while the attacker secured 9 out of 10 games in sets 2 and 3, albeit with a higher average turn count.
The experimental results present two significant implications, regardless of the perspective of the agent. While it may initially seem unfavourable for a defender to struggle to isolate an attacker, the reality offers a silver lining. Prolonged engagements lead to the accumulation of a larger pool of training data, satisfying a core objective of adversarial learning and facilitating the creation of a robust algorithm. The implications of this are discussed in Section V.
## V Conclusion and Future Work
This investigation highlights that the DDQN algorithm is more vulnerable to adversarial learning attacks, while NEC2DQN exhibits better resilience. The experiments also show improved engagement and performance of agents in attacking roles and the possibility of training models with adversarial samples during active network engagement. This opens up the potential for an always-online approach without the need for model downtime.
Robust AI model implementation is crucial as attackers constantly strive to break defence mechanisms. In this era of AI and automation, AI systems become the next prime
Fig. 1: Comparison of average number of turns taken for N2D as the defender to win.
Fig. 2: Comparison of average number of turns taken for DDQN as attacker to win.
target. Their main vulnerability lies in the learning process, emphasising the importance of developing models robust enough to handle malicious input.
For future work, a more ad hoc network with randomised starting positions and additional defence mechanisms against adversarial attacks will be considered. Potential exploration of partial observability for the attacker and a black box setting will also be examined.
|
2305.13251 | Necessary and sufficient conditions for distances on the real line | When dealing with certain mathematical problems, it is sometimes necessary to
show that some function induces a metric on a certain space. When this function
is not a well renowned example of a distance, one has to develop very
particular arguments that appeal to the concrete expression of the function in
order to do so. The main purpose of this paper is to provide several sufficient
results ensuring that a function of two variables induces a distance on the
real line, as well as some necessary conditions, together with several examples
that show the applicability of these results. In particular, we show how a
hypothesis about the sign of the cross partial derivative of the candidate to
distance is helpful for deriving such kind of results. | Daniel Cao Labora, Francisco Javier Fernández, Fernando Adrián F. Tojo, Carlos Villanueva | 2023-04-17T14:21:44Z | http://arxiv.org/abs/2305.13251v1 | # Necessary and sufficient conditions for distances
###### Abstract
When dealing with certain mathematical problems, it is sometimes necessary to show that some function induces a metric on a certain space. When this function is not a well renowned example of a distance, one has to develop very particular arguments that appeal to the concrete expression of the function in order to do so. The main purpose of this paper is to provide several sufficient results ensuring that a function of two variables induces a distance on the real line, as well as some necessary conditions, together with several examples that show the applicability of these results. In particular, we show how a hypothesis about the sign of the cross partial derivative of the candidate to distance is helpful for deriving such kind of results.
**Keywords:** Distances, real line, integration
**MSC 2020:** 26B99, 51N20, 54E35, 00A08
## 1 Motivation and introduction
Throughout the rest of the document we will focus on distances on the set of real numbers \(\mathbb{R}\). Thus, it is suitable to recall the definition of distance in the particular case of a distance on \(\mathbb{R}\).
**Definition 1.1**.: Given \(d:\mathbb{R}^{2}\to\mathbb{R}\), we say that \(d\) is a _metric_ or _distance_ whenever it fulfills the following three properties simultaneously:
* _Positive definiteness:_ \(d(x,y)\geq 0\) for any \(x,y\in\mathbb{R}\), where \(d(x,y)=0\) if and only if \(x=y\).
* _Symmetry:_ \(d(x,y)=d(y,x)\) for any \(x,y\in\mathbb{R}\).
* _Triangle inequality:_ \(d(x,z)\leq d(x,y)+d(y,z)\) for any \(x,y,z\in\mathbb{R}\).
If one is given a certain function \(d:\mathbb{R}^{2}\to\mathbb{R}\) and is asked to prove that it is a distance on the real line, it is quite reasonable to proceed as follows. First, the symmetry of \(d\) should be quite clear, just by checking that \(d\) stays invariant when interchanging the roles of \(x\) and \(y\). Positive definiteness should also be direct, or sometimes a mere consequence of a tricky factorization of \(d\) that shows that the function is a square that only vanishes for \(x=y\). Regarding the triangle inequality, one could try to use the very particular expression of \(d\) in order to prove it, or some arguments involving concavity/convexity. However, how to state reasonably general theorems, with easy-to-check hypotheses, that ensure the triangle inequality is fulfilled does not seem immediate. This quest guides the main topic of this paper, together with the description of some necessary conditions for \(d\) being a metric and a special mention of the case of translation invariant distances.
## 2 A special case: translation invariant distances
If we are interested in metrics on the real line, it is quite reasonable to put our initial goal on translation invariant distances. Informally, we consider distances such that, rather than depending on the two variables \(x\) and \(y\), they only depend on the difference \(x-y\). Thus, \(d:\mathbb{R}^{2}\to\mathbb{R}\) is said to be a _translation invariant distance_ if \(d\) is a distance and \(d(x+z,y+z)=d(x,y)\) for every \(x,y,z\in\mathbb{R}\).
In this very particular case, it is not complicated to characterize such distances. The fundamental notion required is that of a subadditive function. In this sense, we say that \(f:\mathbb{R}\to\mathbb{R}\) is _subadditive_ if \(f(x+y)\leq f(x)+f(y)\) for any pair \(x,y\in\mathbb{R}\).
**Theorem 2.1**.: _The function \(d:\mathbb{R}^{2}\to\mathbb{R}\) is a translation invariant distance if and only if it is of the form \(d(x,y)=f(y-x)\) where \(f:\mathbb{R}\to\mathbb{R}\) is an even subadditive function with \(f(0)=0\) and \(f(x)>0\) for any \(x\neq 0\)._
Proof.: First assume that \(d:\mathbb{R}^{2}\to\mathbb{R}\) is a translation invariant distance. Define \(f(x)=d(0,x)\) for \(x\in\mathbb{R}\). Clearly \(f(0)=0\), \(f(x)>0\) for any \(x\neq 0\), and \(d(x,y)=d(0,y-x)=f(y-x)\) for any \(x,y\in\mathbb{R}\), since \(d\) is translation invariant. Besides, \(f\) is even since, for \(x\in\mathbb{R}\),
\[f(x)=d(0,x)=d(x,0)=d(0,-x)=f(-x).\]
Furthermore, for any \(x,y\in\mathbb{R}\),
\[f(x+y)=d(0,x+y)=d(-x,y)\leq d(-x,0)+d(0,y)=d(0,x)+d(0,y)=f(x)+f(y),\]
so \(f\) is subadditive.
On the other hand, if \(f:\mathbb{R}\to\mathbb{R}\) is an even subadditive function with \(f(0)=0\) and \(f(x)>0\) for \(x\neq 0\), let us define \(d(x,y):=f(y-x)\) and prove that \(d\) is a distance. Indeed, \(d(x,x)=f(0)=0\) for every \(x\in\mathbb{R}\), \(d(x,y)=f(y-x)>0\) for every \(x\neq y\), and \(d(x,y)=f(y-x)=f(x-y)=d(y,x)\) for every \(x,y\in\mathbb{R}\). Finally,
\[d(x,z)=f(z-x)=f(z-y+y-x)\leq f(z-y)+f(y-x)=d(y,z)+d(x,y),\]
for every \(x,y,z\in\mathbb{R}\), so the triangle inequality holds.
**Remark 2.2**.: Observe that, as a consequence of Theorem 2.1, a distance \(d\) is translation invariant if and only if it can be factorized as \(d=g\circ l\), where \(l:\mathbb{R}^{2}\to[0,\infty)\) is the usual distance, \(l(x,y)=|x-y|\), and \(g:[0,\infty)\to[0,\infty)\) is a function with \(g(0)=0\), \(g(x)>0\) for \(x>0\), and whose even extension is subadditive.
The following example shows that, in general, the even extension of a subadditive function is not subadditive. This implies that, in the previous paragraph, it is not enough to ensure that \(g\) is subadditive, but we need to ensure that the even extension of \(g\) is subadditive.
**Example 2.3**.: Consider the even function \(f:\mathbb{R}\rightarrow\mathbb{R}\) such that
\[f(x)=\begin{cases}\left|x\right|,&0\leq\left|x\right|<1,\\ 2-\left|x\right|,&1\leq\left|x\right|<\dfrac{5}{3},\\ \dfrac{1}{3},&\dfrac{5}{3}\leq\left|x\right|.\end{cases}\]
Clearly, \(f\) is not subadditive since \(f(3+(-2))=f(1)=1>2/3=f(3)+f(-2)\).
Nevertheless, the restriction of \(f\) to \([0,\infty)\), which we will denote by \(g\), is subadditive. In order to prove this claim, let us consider \(x\geq y\geq 0\) and analyze a few cases.
1. If \(x+y\geq 2\), then \(x\geq 1\). Thus, \(g(x+y)\leq g(x)\leq g(x)+g(y)\) since \(g\) is non-increasing on \([1,\infty)\) and \(x+y\geq x\).
2. If \(1\leq x+y\leq 2\), the subadditivity is clear when \(x\geq 1\) since \(g\) is non-increasing on \([1,\infty)\). If \(x<1\), then \(x\geq y>0\) and \(g(x)+g(y)=x+y\geq 1\geq g(x+y)\).
3. If \(x+y\leq 1\), then simply \(g(x+y)=x+y=g(x)+g(y)\).
Thus, in this case, the definition \(d(x,y)=g(\left|x-y\right|)\) does not induce a distance, even though \(g\) is subadditive, and the underlying reason is that its even extension \(f\) is not subadditive.
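The failure of subadditivity for \(f\), together with the subadditivity of its restriction \(g\), can also be cross-checked numerically. The following is a minimal sketch (ours, not part of the argument; all function names are our own) that scans a grid of pairs:

```python
import numpy as np

def f(x):
    """Even function of Example 2.3: |x| on [0,1), 2-|x| on [1,5/3), 1/3 afterwards."""
    a = np.abs(x)
    return np.where(a < 1, a, np.where(a < 5/3, 2 - a, 1/3))

# Scan a symmetric grid for violations of f(x + y) <= f(x) + f(y).
grid = np.linspace(-4, 4, 801)
X, Y = np.meshgrid(grid, grid)
gap = f(X + Y) - (f(X) + f(Y))
i, j = np.unravel_index(np.argmax(gap), gap.shape)
print("worst violation:", gap[i, j], "attained e.g. at", (X[i, j], Y[i, j]))
# The maximal gap is 1/3; (x, y) = (3, -2) is one maximizer, matching f(1) = 1 > 2/3.

# The restriction g = f|_[0, inf) shows no violation on the corresponding grid.
pos = np.linspace(0, 4, 401)
Xp, Yp = np.meshgrid(pos, pos)
print("worst gap for g:", np.max(f(Xp + Yp) - (f(Xp) + f(Yp))))  # 0.0
```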
**Remark 2.4**.: We observe that, when \(g\) is non-decreasing and subadditive, its even extension \(f\) is automatically subadditive. The key observation is that, for any \(x,y\in\mathbb{R}\), we have \(g(\left|x+y\right|)\leq g(\left|x\right|)+g(\left|y\right|)\). Indeed, if \(x\) and \(y\) have the same sign, this is simply the subadditivity of \(g\) because \(\left|x+y\right|=\left|x\right|+\left|y\right|\). If \(x\) and \(y\) have different signs, then either \(\left|x+y\right|<\left|x\right|\) or \(\left|x+y\right|<\left|y\right|\) and, since \(g\) is non-decreasing, \(g(\left|x+y\right|)\leq g(\left|x\right|)+g(\left|y\right|)\).
**Remark 2.5**.: If \(g:[0,\infty)\rightarrow[0,\infty)\) is non-decreasing and subadditive and \(f\) is the even extension of \(g\), we have that
\[f(x+y)=g(\left|x+y\right|)\leq g(\left|x\right|)+g(\left|y\right|)=f(x)+f(y),\]
for every \(x,y\in\mathbb{R}\). Thus, under these hypotheses, \(f\) is also subadditive.
Consequently, from Theorem 2.1 and Remark 2.5, we derive the following corollary.
**Corollary 2.6**.: _The function \(d:\mathbb{R}^{2}\rightarrow\mathbb{R}\) is a translation invariant distance fulfilling \(d(0,x)\leq d(0,y)\) for any \(0\leq x\leq y\) if and only if it is of the form \(d(x,y)=g(\left|y-x\right|)\), where \(g:[0,\infty)\rightarrow\mathbb{R}\) is a subadditive non-decreasing function with \(g(0)=0\) and \(g(x)>0\) for any \(x>0\)._
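As an illustration of Corollary 2.6, any non-decreasing subadditive \(g\) with \(g(0)=0\) and \(g>0\) on \((0,\infty)\) yields the metric \(d(x,y)=g(|y-x|)\). A quick randomized sanity check, with a \(g\) of our own choosing, is sketched below:

```python
import numpy as np

rng = np.random.default_rng(0)
g = lambda t: t / (1.0 + t)   # non-decreasing, subadditive, g(0) = 0, g > 0 on (0, inf)
d = lambda x, y: g(np.abs(x - y))

# Brute-force the triangle inequality d(x, z) <= d(x, y) + d(y, z) on random triples.
x, y, z = rng.normal(scale=10, size=(3, 100_000))
assert np.all(d(x, z) <= d(x, y) + d(y, z) + 1e-12)
print("no triangle-inequality violation found")
```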
Finally, we observe that there are examples of distances which can be obtained from the even extension \(f\) of a non-monotonic function \(g\) where \(f\) is subadditive. We show this via the following suitably modified version of the previous example.
Figure 2.1: Graph of the function \(f\) in Example 2.3.
**Example 2.7**.: Consider the even function \(f:\mathbb{R}\to\mathbb{R}\) fulfilling
\[f(x)=\begin{cases}\left|x\right|,&0\leq\left|x\right|<1,\\ 2-\left|x\right|,&1\leq\left|x\right|<\dfrac{4}{3},\\ \dfrac{2}{3},&\dfrac{4}{3}\leq\left|x\right|.\end{cases}\]
We claim that \(f\) is subadditive, and we will prove this by distinguishing several cases. Note that, since \(f\) is even, we can assume \(x+y\geq 0\). Besides, without loss of generality, we will assume \(x\geq y\), so we will also have \(x\geq 0\).
1. If \(x+y\geq 2\), then \(x\geq 1\). Thus, \(f(x+y)\leq f(x)+f(y)\) since \(f\) is non-increasing on \([1,\infty)\) and \(f(x+y)=2/3=\min\{f(s):\,s\in[1,\infty)\}\).
2. If \(1\leq x+y\leq 2\), the subadditivity is clear when \(x\geq 1\) since \(f\) is non-increasing on \([1,\infty)\). If \(x<1\), then \(1>x\geq y>0\) necessarily, so \(f(x)+f(y)=x+y\geq 1\geq f(x+y)\).
3. If \(2/3\leq x+y\leq 1\) and \(x\leq x+y\), then \(x\geq y\geq 0\) necessarily, so \(f(x)+f(y)=x+y=f(x+y)\). The complicated case happens when \(x\geq x+y\), so \(y\leq 0\). If \(x\leq 1\), then \(f(x)+f(y)=x-y\geq x+y=f(x+y)\). If \(1\leq x\leq 4/3\), then \(f(x)+f(y)=2-(x+y)\geq x+y=f(x+y)\), since \(x+y\leq 1\). If \(x\geq 4/3\), then \(y\leq-1/3\), so \(f(x)+f(y)\geq 2/3+1/3=1\geq f(x+y)\).
4. If \(0\leq x+y\leq 2/3\), the result is clear when \(x\geq x+y\) since \(f(x+y)=x+y\) and the minimum for \(f\) on \([x+y,\infty)\) is \(x+y\). If \(x\leq x+y\), then \(x\geq y\geq 0\) necessarily, so \(f(x)+f(y)=x+y=f(x+y)\).
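The case analysis above can again be cross-checked numerically; the following sketch (ours) finds no violation of subadditivity for the modified \(f\):

```python
import numpy as np

def f(x):
    """Even function of Example 2.7: |x| on [0,1), 2-|x| on [1,4/3), 2/3 afterwards."""
    a = np.abs(x)
    return np.where(a < 1, a, np.where(a < 4/3, 2 - a, 2/3))

grid = np.linspace(-4, 4, 801)
X, Y = np.meshgrid(grid, grid)
print("max of f(x+y) - f(x) - f(y):", np.max(f(X + Y) - f(X) - f(Y)))  # 0.0
```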
## 3 Necessary conditions
In the previous section we have dealt with the particular case of translation invariant metrics on \(\mathbb{R}\), providing a characterization for such distances. Now, we focus on the generic case of a distance on the real line, that will be denoted by \(d:\mathbb{R}^{2}\to\mathbb{R}\). In this case, we cannot expect to find a characterization in order to know whether \(d\) is a distance or not, but only necessary or sufficient conditions.
In this section we prove some necessary conditions on \(d\), provided that \(d:\mathbb{R}^{2}\to\mathbb{R}\) is a metric. In the rest of the document, \(\Delta\subset\mathbb{R}^{2}\) will denote the diagonal of the Cartesian plane, that is,
\[\Delta=\{(x,x)\in\mathbb{R}^{2}:x\in\mathbb{R}\}.\]
Let \(X:=\{(x,y)\in\mathbb{R}^{2}:\ x\leq y\}\), \(Y:=\{(x,y)\in\mathbb{R}^{2}:\ y\leq x\}\). Given a function \(d:\mathbb{R}^{2}\to\mathbb{R}\) and \((x,y),v\in\mathbb{R}^{2}\), we will define the _directional derivative from the right_ as
\[\partial_{v}^{+}d(x,y):=\lim_{h\to 0^{+}}\frac{d((x,y)+hv)-d(x,y)}{h},\]
in case the limit exists. We will use the notation \(\partial_{1}^{+}\), \(\partial_{2}^{+}\), \(\partial_{1}^{-}\) and \(\partial_{2}^{-}\) for the cases \(v=(1,0)\), \((0,1)\), \((-1,0)\) and \((0,-1)\) respectively.
Figure 2.2: _Graph of the function \(f\) in Example 2.7._
Since \(X\) and \(Y\) are not open sets, we provide a short comment regarding the notion of differentiability for \(d|_{X}\) and \(d|_{Y}\). In the interior of \(X\) (respectively \(Y\)) the notion of differentiability is well known. With respect to the points of the form \((x,x)\in\Delta\), we understand that \(d|_{X}\) is differentiable at \((x,x)\in\Delta\) if there exists \(w\in\mathbb{R}^{2}\) such that for every \((\widetilde{x},\widetilde{y})\in X\),
\[d(\widetilde{x},\widetilde{y})=d(x,x)+w\cdot(\widetilde{x}-x,\widetilde{y}-x)+o(\|(\widetilde{x}-x,\widetilde{y}-x)\|),\]
where \(\cdot\) denotes the scalar product, and \(o\) is used for the Landau notation. Since we can take \((\widetilde{x},\widetilde{y})\) such that \((\widetilde{x}-x,\widetilde{y}-x)=(0,1)\) or \((\widetilde{x}-x,\widetilde{y}-x)=(-1,0)\), the choice for \(w\), if it exists, is unique. In case of existence of such a \(w\), we say that \(w\) is the _derivative or gradient of \(d|_{X}\) at \((x,x)\)_ and we write \(\nabla d(x,x)=w\). A similar definition goes for \(Y\).
Observe that, due to the symmetry property, if \(d\in\mathcal{C}(\mathbb{R}^{2},[0,\infty))\) is a distance and \(d|_{X}\) is differentiable, then \(d|_{Y}\) is differentiable too. Hence, if \(d\in\mathcal{C}(\mathbb{R}^{2},[0,\infty))\) is a distance and \(d|_{X}\) is differentiable, \(\partial_{v}^{+}d(x,y)\) exists for every \((x,y)\) and any \(v\in\mathbb{R}^{2}\). Besides, due to the symmetry of \(d\), \(\partial_{1}^{+}d(x,y)=\partial_{2}^{+}d(y,x)\) and \(\partial_{1}^{-}d(x,y)=\partial_{2}^{-}d(y,x)\).
### Conditions involving first order derivatives
In this section we will prove some necessary conditions involving first order derivatives to guarantee that a function \(d\) is a distance.
**Theorem 3.1**.: _Let \(d\in\mathcal{C}(\mathbb{R}^{2},[0,\infty))\) be a distance such that \(d|_{X}\) is differentiable. Then we have that \(\left|\partial_{2}^{-}d(x,y)\right|\leq\left|\partial_{2}^{-}d(y,y)\right|\) and \(\left|\partial_{2}^{+}d(x,y)\right|\leq\left|\partial_{2}^{+}d(y,y)\right|\) for any \(x,y\in\mathbb{R}\)._
Proof.: Let \(x,y,z\in\mathbb{R}\). By the triangle inequality, \(d(x,z)\leq d(x,y)+d(y,z)\) and \(d(x,y)\leq d(x,z)+d(y,z)\), so \(|d(x,z)-d(x,y)|\leq d(y,z)\). Now,
\[\left|\frac{d(x,z)-d(x,y)}{|z-y|}\right|\leq\left|\frac{d(y,z)}{|z-y|}\right| =\left|\frac{d(y,z)-d(y,y)}{|z-y|}\right|.\]
If \(z\to y^{-}\), we deduce \(\left|\partial_{2}^{-}d(x,y)\right|\leq\left|\partial_{2}^{-}d(y,y)\right|\) for any \(x,y\in\mathbb{R}\). Analogously, if \(z\to y^{+}\), we deduce the inequality \(\left|\partial_{2}^{+}d(x,y)\right|\leq\left|\partial_{2}^{+}d(y,y)\right|\).
**Remark 3.2**.: The symmetry of \(d\) implies that \(\left|\partial_{1}^{-}d(y,x)\right|\leq\left|\partial_{1}^{-}d(y,y)\right|\) and \(\left|\partial_{1}^{+}d(y,x)\right|\leq\left|\partial_{1}^{+}d(y,y)\right|\) for any \(x,y\in\mathbb{R}\).
**Corollary 3.3**.: _Let \(d\in\mathcal{C}(\mathbb{R}^{2},[0,\infty))\) be a distance such that \(d|_{X}\) is differentiable. Then, \(d\) is not differentiable at any point of \(\Delta\)._
Proof.: Since \(d(x,y)=d(y,x)\), \(\partial_{1}d(x,y)=\partial_{2}d(y,x)\) for \(x\neq y\), and \(\partial_{1}^{+}d(x,x)=\partial_{2}^{+}d(x,x)\). Assume \(d\) is differentiable at \((y,y)\in\Delta\). Then, if \(\nu=(1,1)\) and we compute the directional derivative of \(d\) in the direction of \(\nu\), we get that, since \(d(x,x)=0\) for every \(x\in\mathbb{R}\),
\[0=\partial_{\nu}^{+}d(y,y)=\partial_{1}^{+}d(y,y)+\partial_{2}^{+}d(y,y)=2 \partial_{2}^{+}d(y,y),\]
and we conclude that \(\nabla d(y,y)=0\). Now, for \(x>y\), by Theorem 3.1, we have that
\[\left|\partial_{2}^{+}d(x,y)\right|\leq\left|\partial_{2}^{+}d(y,y)\right|=0,\]
that is, \(\partial_{2}d(x,y)=0\). Hence, for \(x>y\), \(d(x,y)=-\int_{y}^{x}\partial_{2}d(x,z)\,\mathrm{d}z=0\), which is not possible since \(d\) is a distance.
**Remark 3.4**.: Observe that, by the same reasoning as in Corollary 3.3, \(\partial_{1}^{+}d(x,x)=\partial_{2}^{+}d(x,x)>0\), \(\partial_{1}^{-}d(x,x)=\partial_{2}^{-}d(x,x)>0\) for every \(x\in\mathbb{R}\).
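Corollary 3.3 explains, for instance, why the smooth candidate \(d(x,y)=(x-y)^{2}\) cannot be a distance: it is symmetric, positive definite and differentiable on \(\Delta\), with vanishing gradient there, in conflict with Remark 3.4. The failing triangle inequality can be exhibited directly, as in this small sketch (ours):

```python
d = lambda x, y: (x - y) ** 2   # smooth on the diagonal, hence cannot be a metric

x, y, z = 0.0, 1.0, 2.0
print(d(x, z), "<=", d(x, y) + d(y, z), "?", d(x, z) <= d(x, y) + d(y, z))
# 4.0 <= 2.0 ? False  -- the triangle inequality fails.
```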
### Conditions involving second order derivatives
Now, we will analyze some necessary conditions involving second order derivatives to guarantee that a function \(d\) is a distance.
**Theorem 3.5**.: _Let \(d\in\mathscr{C}(\mathbb{R}^{2},[0,\infty))\) be a distance such that \(d|_{X}\) is twice differentiable. Then, \(\partial_{1}^{-}(\partial_{1}^{+}d)(x,x)\leq\partial_{1}^{-}(\partial_{1}^{+}d)(x,y)\), for every \((x,y)\in\mathbb{R}^{2}\)._
Proof.: Given that \(d|_{X}\) is twice differentiable, for any \((x,y)\in\mathbb{R}^{2}\) and \(h>0\) we have that, using the triangle inequality in the form \(d(x\pm h,y)\leq d(x\pm h,x)+d(x,y)\),
\[-\partial_{1}^{-}(\partial_{1}^{+}d)(x,y)= -\frac{1}{h}\left[\partial_{1}^{+}d(x-h,y)-\partial_{1}^{+}d(x,y)+o(h)\right]\] \[= -\frac{1}{h}\left[\frac{1}{h}\left[d(x,y)-d(x-h,y)+o(h)\right]-\frac{1}{h}\left[d(x+h,y)-d(x,y)+o(h)\right]+o(h)\right]\] \[= +\frac{1}{h}\left[\frac{1}{h}\left[d(x+h,y)-2d(x,y)+d(x-h,y)+o(h)\right]+o(h)\right]\] \[\leq +\frac{1}{h}\left[\frac{1}{h}\left[d(y,x)+d(x,x+h)-2d(x,y)+d(y,x)+d(x,x-h)+o(h)\right]+o(h)\right]\] \[= +\frac{1}{h}\left[\frac{1}{h}\left[d(x+h,x)+d(x-h,x)-2d(x,x)+o(h)\right]+o(h)\right]\] \[= -\frac{1}{h}\left[\frac{1}{h}\left[d(x,x)-d(x-h,x)+o(h)\right]-\frac{1}{h}\left[d(x+h,x)-d(x,x)+o(h)\right]+o(h)\right]\] \[= -\frac{1}{h}\left[\partial_{1}^{+}d(x-h,x)-\partial_{1}^{+}d(x,x)+o(h)\right]\] \[= -\partial_{1}^{-}(\partial_{1}^{+}d)(x,x).\]
After multiplying both sides by \(-1\), we get the result.
**Remark 3.6**.: Observe that, if \(x\neq y\), then \(\partial_{1}^{+}(\partial_{1}^{-}d)(x,y)=-\partial_{11}d(x,y)\) and we have the inequality \(\partial_{11}d(x,y)\leq-\partial_{1}^{+}(\partial_{1}^{-}d)(x,x)\). Furthermore, for the case \((x,y)\in X\), since \(\partial_{1}^{+}(\partial_{1}^{-}d)(x,x)=\partial_{2}^{+}(\partial_{1}^{-}d)(x,x)\), \(\partial_{11}d|_{X}(x,y)\leq\partial_{12}d|_{X}(x,x)\).
## 4 Sufficient conditions
In this section, we provide several sufficiency results in order to ensure that a function \(d:\mathbb{R}^{2}\to\mathbb{R}\) defines a metric on \(\mathbb{R}\). Of course, the hypotheses regarding the symmetry and sign of \(d\) are obvious. Nevertheless, with respect to the hypotheses we can demand in order to obtain the triangle inequality, we make the following short discussion in this preamble.
We will assume \(x<y<z\) without loss of generality, since the triangle inequality is evident when at least two of these three numbers are equal (positive definiteness and symmetry are enough to conclude). It is important to keep in mind this assumption concerning the order of \(x\), \(y\) and \(z\), since it will be used frequently throughout the document. In order to prove the triangle inequality, we need to show the following three inequalities:
\[\begin{split} 1.& d(x,y)+d(y,z)\geq d(x,z),\\ 2.& d(x,z)+d(y,z)\geq d(x,y),\\ 3.& d(x,y)+d(x,z)\geq d(y,z).\end{split} \tag{4.1}\]
It is also important to realize that the nature of the first inequality is somehow distinct from the other two. The main reason is that the first inequality compares the "distance" from \(x\) to \(z\) with the sum of two "distances" pivoting via \(y\). The fact that \(y\) is the intermediate value in the order relation \(x<y<z\) plays a special role here. Nevertheless, the second and third inequalities are somehow symmetric, as we shall see along the proofs in this article.
In the first part of the section we will provide two versions of a useful lemma that, essentially, provides a sufficient condition for having the first inequality \(d(x,y)+d(y,z)\geq d(x,z)\). In the second part, we will state and prove an initial version of these sufficiency theorems for \(d\) being a distance. In each of these theorems, we add a different hypothesis that allows us to get \(d(x,z)+d(y,z)\geq d(x,y)\) and \(d(x,y)+d(x,z)\geq d(y,z)\). In the third part, we will weaken the assumption involving the smoothness of \(d\).
### The cross partial derivative and the triangle inequality
We will provide two versions of a lemma that shows, roughly speaking, how a non-negative sign of the cross partial derivative outside of \(\Delta\) implies \(d(x,y)+d(y,z)\geq d(x,z)\) for any \(x<y<z\). The main difference between these two versions is that \(d\) is required to be continuous on \(\mathbb{R}^{2}\) in Lemma 4.1, but not in Lemma 4.2. This extra assumption allows us to provide a simple proof of Lemma 4.1, and a nice geometrical explanation of the issue via Figure 4.1. Besides, this proof of Lemma 4.1 captures the essence of the idea that is needed to prove Lemma 4.2. In this last case, the absence of continuity for \(d\) forces us to make some technical considerations in our argumentation.
**Lemma 4.1**.: _Consider a function \(d\in\mathcal{C}^{2}\big{(}\mathbb{R}^{2}\backslash\Delta,[0,\infty)\big{)} \cap\mathcal{C}\big{(}\mathbb{R}^{2},[0,\infty)\big{)}\) fulfilling \(\partial_{12}d(a,b)\geq 0\) for every \((a,b)\not\in\Delta\) and three given real numbers \(x<y<z\). Then, we have the inequality \(d(x,z)\leq d(x,y)+d(y,z)\)._
Proof.: _Idea:_ The key for the proof of this lemma is to derive a differential inequality from the sign condition \(\partial_{12}d\geq 0\) that implies the desired result after integration and a direct application of the Fundamental Theorem of Calculus. In geometrical terms, see Figure 4.1, we observe how, due to the symmetry of \(d\), the integral of \(\partial_{2}d\) on the vertical black segment coincides with the integral of \(\partial_{1}d\) on the horizontal black segment. Then, since \(\partial_{12}d\geq 0\), we know that \(\partial_{1}d\) increases with respect to increments in the second variable. Consequently, the integral of \(\partial_{1}d\) on the black segment will be bounded from above by the corresponding integral on the gray segment.
From the calculus point of view, the proof of Lemma 4.1 is straightforward:
\[d(x,z)-d(x,y)=d(z,x)-d(y,x)=\int_{y}^{z}\partial_{1}d(s,x)\,\mathrm{d}s\leq \int_{y}^{z}\partial_{1}d(s,y)\,\mathrm{d}s=d(z,y)=d(y,z),\]
Figure 4.1: Representation of the different integration paths that lead to the inequality \(d(x,y)+d(y,z)\geq d(x,z)\).
since \(\partial_{1}d\) is increasing with respect to the second variable due to the condition \(\partial_{12}d(a,b)\geq 0\).
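The sign condition of Lemma 4.1 is easy to probe numerically. The sketch below (ours) estimates \(\partial_{12}d\) by central finite differences for the chordal metric of Example 5.4 and checks both the hypothesis (non-negativity off the diagonal) and the conclusion \(d(x,z)\leq d(x,y)+d(y,z)\):

```python
import numpy as np

rng = np.random.default_rng(1)
d = lambda x, y: 2 * np.abs(x - y) / (np.sqrt(1 + x**2) * np.sqrt(1 + y**2))

def d12(x, y, h=1e-4):
    """Central finite-difference estimate of the cross partial derivative."""
    return (d(x + h, y + h) - d(x + h, y - h)
            - d(x - h, y + h) + d(x - h, y - h)) / (4 * h * h)

x, y = rng.normal(scale=3, size=(2, 50_000))
mask = np.abs(x - y) > 1e-2                 # stay away from the diagonal kink
print("min of d12 off the diagonal:", d12(x[mask], y[mask]).min())  # >= 0 up to noise

t = np.sort(rng.normal(scale=3, size=(3, 50_000)), axis=0)          # triples x < y < z
print("violations:", np.sum(d(t[0], t[2]) > d(t[0], t[1]) + d(t[1], t[2]) + 1e-12))
```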
Now, we state a stronger version of the previous lemma, where we drop the continuity assumption on \(d\). This generalization is relevant, since many renowned examples of distances on \(\mathbb{R}\) are not induced by continuous functions, as we shall see in the last part of the paper. Observe that this loss of continuity on \(\Delta\), and specifically at the point \((y,y)\), prevents us from using the Fundamental Theorem of Calculus to claim that \(\int_{y}^{z}\partial_{1}d(s,y)\,\mathrm{d}s=d(z,y)-d(y,y)\). Hence, the technique consists in making a valid limit argument that does not need the continuity of \(d\). Nevertheless, the main idea of this proof is, essentially, the same one as in Lemma 4.1. In order to synthesize the argument, it will be convenient to establish a notation for certain functions, and to prove some properties regarding their monotonicity.
**Lemma 4.2**.: _Consider a function \(d\in\mathscr{C}^{2}\left(\mathbb{R}^{2}\backslash\Delta,[0,\infty)\right)\) fulfilling the assumption \(\partial_{12}d(a,b)\geq 0\) for every \((a,b)\not\in\Delta\) and three given real numbers \(x<y<z\). Then, the function_
\[G^{z}_{y,H}(\lambda)\coloneqq\int_{y}^{z}\partial_{1}d(s,\lambda)\,\mathrm{d}s\]
_is increasing on the intervals \((-\infty,y]\) and \([z,+\infty)\) and the function_
\[G^{y}_{x,V}(\lambda)\coloneqq\int_{x}^{y}\partial_{2}d(\lambda,s)\,\mathrm{d}s\]
_is increasing on the intervals \((-\infty,x]\) and \([y,+\infty)\). As a consequence, we have the inequality \(d(x,z)\leq d(x,y)+d(y,z)\)._
Proof.: The key remark is that \(\partial_{1}d(s,\lambda)\leq\partial_{1}d(s,\widetilde{\lambda})\) for any \(s\in(y,z)\) and any \(\lambda<\widetilde{\lambda}\leq y\), since \(\partial_{12}d\geq 0\) on \(R=(y,z)\times[\lambda,\widetilde{\lambda}]\subset(y,z)\times(-\infty,y]\) because \(R\cap\Delta=\emptyset\). Therefore, \(G^{z}_{y,H}(\lambda)\) is increasing on \((-\infty,y]\) and, analogously, it is also increasing on \([z,\infty)\). A similar argument applies in order to show the increasing character of \(G^{y}_{x,V}(\lambda)\) on the intervals \((-\infty,x]\) and \([y,+\infty)\).
For the final claim, if we observe that \(d\) is two times differentiable at any point of the closure \(\overline{R}\) except at \((y,y)\), we can apply the Fundamental Theorem of Calculus to deduce
\[d(z,\lambda)-d(y,\lambda)\leq d(z,\widetilde{\lambda})-d(y,\widetilde{\lambda})\leq d(z,\widetilde{\lambda}),\]
for any \(\lambda<\widetilde{\lambda}<y\). If we take \(\lambda=x\) and let \(\widetilde{\lambda}\to y\), the continuity of \(d\) outside the diagonal implies
\[d(z,x)-d(y,x)\leq d(z,y),\]
which is obviously equivalent to the desired inequality.
### Initial statements for sufficient conditions
We now state four sufficiency theorems, each of them implying that \(d\) defines a distance under certain hypotheses, together with their corresponding proofs.
**Theorem 4.3**.: _Consider a function \(d\in\mathscr{C}^{2}\left(\mathbb{R}^{2}\backslash\Delta,[0,\infty)\right)\). Suppose that \(d\) fulfills the following properties:_
1. (H1) \(d(x,y)>0\) _for every_ \((x,y)\not\in\Delta\)_, and_ \(d(x,x)=0\) _for every_ \(x\in\mathbb{R}\)_._
2. (H2) \(d(x,y)=d(y,x)\) _for every_ \((x,y)\in\mathbb{R}^{2}\)_._
3. (H3) \(\partial_{12}d(x,y)\geq 0\) _for every_ \((x,y)\not\in\Delta\)_._
4. (H4A) _For any fixed_ \(a\in\mathbb{R}\)_, the function_ \(d(\cdot,a)\) _is non-increasing on the interval_ \((-\infty,a)\) _and non-decreasing on the interval_ \((a,\infty)\)_._
_Then, \(d\) defines a distance on \(\mathbb{R}\)._
Proof.: It is clear that we only need to check the triangle inequality for \(x<y<z\). Besides, due to hypothesis H2, we observe that hypothesis H4A implies that \(d(a,\cdot)\) is non-increasing on the interval \((-\infty,a)\) and non-decreasing on the interval \((a,\infty)\). On the one hand, \(d(x,y)+d(y,z)\geq d(x,z)\) is a straightforward consequence of Lemma 4.2 due to hypothesis H3. On the other hand, the two last inequalities in (4.1) are immediate from H4A: since distances are non-negative, \(d(x,z)\geq d(x,y)\) and \(d(x,z)\geq d(y,z)\).
**Theorem 4.4**.: _Consider a function \(d\in\mathcal{C}^{2}\big{(}\mathbb{R}^{2}\backslash\Delta,[0,\infty)\big{)}\) fulfilling the following properties:_
1. (H1) \(d(x,y)>0\) _for every_ \((x,y)\not\in\Delta\)_, and_ \(d(x,x)=0\) _for every_ \(x\in\mathbb{R}\)_._
2. (H2) \(d(x,y)=d(y,x)\) _for every_ \((x,y)\in\mathbb{R}^{2}\)_._
3. (H3) \(\partial_{12}d(x,y)\geq 0\) _for every_ \((x,y)\not\in\Delta\)_._
4. (H4B) \(\lim_{\lambda\to+\infty}[d(b,\lambda)-d(a,\lambda)]\leq\lim_{\lambda\to-\infty}[d(b,\lambda)-d(a,\lambda)]\) _for every pair_ \((a,b)\) _with_ \(a<b\)_, where both limits are finite._
_Then, \(d\) defines a distance on \(\mathbb{R}\)._
_Idea._ In geometrical terms -see Figure 4.2, left- if we consider a horizontal segment from \((y,\lambda)\) to \((z,\lambda)\), the integral of \(\partial_{1}d\) along the oriented segment increases when the height \(\lambda\) increases, with the only caution that the segment cannot cut \(\Delta\). So, instead of cutting \(\Delta\), the idea consists in "passing through infinity", as it will be explained in the next paragraph. The geometrical explanation for Figure 4.2, right, is the same one, but interchanging the vertical and horizontal roles. In terms of Figure 4.2, we want to prove that the integral of the horizontal/vertical partial derivative of \(d\) along the gray oriented segment is greater than the analogous one on the black oriented segment.
Proof.: As in the previous result, from the first three hypotheses we derive the non-negativity, the symmetry, and inequality 1 in (4.1) (the latter via Lemma 4.2). Thus, if \(x<y<z\), it suffices to see that
* \(d(x,z)+d(y,z)\geq d(x,y)\),
* \(d(x,y)+d(x,z)\geq d(y,z)\).
Figure 4.2: Comparison of the horizontal (left) and vertical (right) segments.
First, if we recall the definition \(G^{z}_{y,H}(\lambda):=\int_{y}^{z}\partial_{1}d(s,\lambda)\,\mathrm{d}s\) made in Lemma 4.2, we know that \(G^{z}_{y,H}\) increases on \((-\infty,y]\) and \([z,\infty)\). Besides, since \(d\) is smooth enough outside of \(\Delta\), we can apply the Fundamental Theorem of Calculus to the integral defined by \(G^{z}_{y,H}(\lambda)\) for \(\lambda\in(-\infty,y)\cup(z,\infty)\). In particular, we are interested in the cases where \(\lambda\to\infty\) or \(\lambda\to-\infty\), since H4B can be stated as
\[\lim_{\lambda\to+\infty}G^{z}_{y,H}(\lambda)\leq\lim_{\lambda\to-\infty}G^{z} _{y,H}(\lambda), \tag{4.2}\]
implying \(G^{z}_{y,H}(x)\geq G^{z}_{y,H}(z)\). Analogously, if we recall the definition \(G^{y}_{x,V}(\lambda):=\int_{x}^{y}\partial_{2}d(\lambda,s)\,\mathrm{d}s\), due to the symmetry of \(d\), we immediately derive
\[\lim_{\lambda\to+\infty}G^{y}_{x,V}(\lambda)\leq\lim_{\lambda\to-\infty}G^{y} _{x,V}(\lambda), \tag{4.3}\]
implying \(G^{y}_{x,V}(x)\geq G^{y}_{x,V}(z)\). We simply observe that
\[G^{z}_{y,H}(x)\geq G^{z}_{y,H}(z)\Leftrightarrow d(z,x)-d(y,x)\geq-d(y,z)\Leftrightarrow d(x,z)+d(y,z)\geq d(x,y),\] \[G^{y}_{x,V}(x)\geq G^{y}_{x,V}(z)\Leftrightarrow d(x,y)\geq d(z,y)-d(z,x)\Leftrightarrow d(x,y)+d(x,z)\geq d(y,z),\]
and we are finished.
**Theorem 4.5**.: _Consider a function \(d\in\mathcal{C}^{2}\big{(}\mathbb{R}^{2}\backslash\Delta,[0,\infty)\big{)}\) fulfilling the following properties:_
1. (H1) \(d(x,y)>0\) _for every_ \((x,y)\not\in\Delta\)_, and_ \(d(x,x)=0\) _for every_ \(x\in\mathbb{R}\)_._
2. (H2) \(d(x,y)=d(y,x)\) _for every_ \((x,y)\in\mathbb{R}^{2}\)_._
3. (H3) \(\partial_{12}d(x,y)\geq 0\) _for every_ \((x,y)\not\in\Delta\)_._
4. (H4C) \(\nabla d(x,y)\to 0\) _when_ \(\|(x,y)\|\to\infty\)_._
_Then, \(d\) defines a distance on \(\mathbb{R}\)._
Proof.: In order to prove this theorem, it suffices to see how H4B is derived from H4C. Since the gradient tends to zero, for any given \(\varepsilon>0\), it is possible to make \(\|\nabla d\|<\varepsilon\) outside of a big enough square \([-l,l]\times[-l,l]\). Hence, after considering any \(\lambda\) such that \(|\lambda|>l\), we have that \(|G^{b}_{a,H}(\lambda)|\leq\varepsilon\cdot(b-a)\). Thus, H4B would be automatically fulfilled, since it would read \(0\leq 0\).
**Theorem 4.6**.: _Consider a function \(d\in\mathcal{C}^{2}\big{(}\mathbb{R}^{2}\backslash\Delta,[0,\infty)\big{)}\) fulfilling the following properties:_
1. (H1) \(d(x,y)>0\) _for every_ \((x,y)\not\in\Delta\)_, and_ \(d(x,x)=0\) _for every_ \(x\in\mathbb{R}\)_._
2. (H2) \(d(x,y)=d(y,x)\) _for every_ \((x,y)\in\mathbb{R}^{2}\)_._
3. (H3) \(\partial_{12}d(x,y)\geq 0\) _for every_ \((x,y)\not\in\Delta\)_._
4. (H4D) _We have that_ \(\lim_{\lambda\to-\infty}d(c,\lambda)=\lim_{\lambda\to\infty}d(c,\lambda)\in\mathbb{R}\) _for any_ \(c\in\mathbb{R}\)_._
_Then, \(d\) defines a distance on \(\mathbb{R}\)._
Proof.: It is an immediate consequence of Theorem 4.4, since H4D implies H4B in an obvious way.
**Remark 4.7**.: Due to the symmetry property, hypothesis H4D is obviously equivalent to what we could call hypothesis H4D', that would read as
_H4D':_ We have that \(\lim_{\lambda\to-\infty}d(\lambda,c)=\lim_{\lambda\to\infty}d(\lambda,c)\in\mathbb{R}\) for any \(c\in\mathbb{R}\).
We have made explicit the previous remark since H4D and H4D' imply that \(d\) can be extended to a class two map on the periodic domain of the form \(M=\mathbb{S}^{1}\times\mathbb{S}^{1}\setminus\{(\alpha,\alpha)\in\mathbb{S}^{1}\times\mathbb{S}^{1}:\alpha\in\mathbb{S}^{1}\}\), provided that the matching at the infinity points induced by H4D and H4D' is smooth enough. Since \(M\) is diffeomorphic to a cylinder, a possible way to produce distances on \(\mathbb{R}\) would be, roughly speaking, to find class two scalar fields on a cylinder that are positive (H1), symmetric (H2) and with non-negative cross partial derivative after applying the already mentioned diffeomorphism (H3).
### An extension for sufficient conditions
Before applying the previous theorems to some examples, it will be convenient to weaken their hypotheses, especially the one involving the required smoothness for \(d\). In order to do so, we first take into account the following trivial remark.
**Remark 4.8**.: Suppose that \(h:\mathbb{R}\to\mathbb{R}\) is a bijective map. Then, \(d(x,y)\) defines a distance on \(\mathbb{R}\) if and only if \((d\circ(h\times h))(x,y)=d(h(x),h(y))\) defines a distance on \(\mathbb{R}\). In particular, \(d\) fulfills H1 and H2 in Theorems 4.3, 4.4, 4.5, and 4.6 if and only if \((d\circ(h\times h))\) fulfills hypotheses H1 and H2.
The previous remark states, essentially, that being a distance does not depend on the coordinates that we are considering. We highlight that, in principle, this map \(h\) need not even be continuous, measurable or have any nice property. Nevertheless, the interest of the previous remark is that, in some examples and for some points \((x,y)\) outside the diagonal, the distance \(d\) may not be regular enough in order to apply any result of the previous section. This problem can be avoided by considering the distance \(d\circ(h\times h)\) for a suitable smooth choice of \(h\), instead of simply considering the distance \(d\). In practice, the choice for the function \(h\) will be \(h(x)=x^{2n+1}\) for a suitable natural number \(n\in\mathbb{N}\). The reason for this choice is that, in many expressions, we have addends like \(|x|^{p}\) that are not smooth enough in order to apply the previous theorems when \(p>0\) is too low, but we can make \(|h(x)|^{p}\) smooth enough by choosing a sufficiently large value for \(n\). In this sense, we take into account the following well-known remark.
**Remark 4.9**.: For any fixed \(p>0\), the regularity of the function \(g(x)=|h(x)|^{p}=|x|^{(2n+1)p}\) increases with respect to \(n\). Specifically, \(g\in\mathscr{C}^{m}(\mathbb{R})\), where \(m=\lceil(2n+1)p\rceil-1\). In particular, \(g\in\mathscr{C}^{2}(\mathbb{R})\) whenever \((2n+1)p>2\).
The consideration made in the previous remark, and the fact that such a choice for \(h\) is bijective and increasing, are the motivation for the next two lemmas. First, we state and prove a sufficient condition ensuring that \(d\circ(h\times h)\) fulfills H3 on \(\mathbb{R}^{2}\setminus\Delta\), provided that \(d\) satisfies the corresponding sign condition on a slightly smaller set.
**Lemma 4.10**.: _Consider a function \(d:\mathbb{R}^{2}\to[0,\infty)\) such that \(d\in\mathscr{C}^{2}\big{(}\mathbb{R}^{2}\backslash(\Delta\cup\Lambda),[0,\infty)\big{)}\), where the set \(\mathbb{R}^{2}\setminus(\Delta\cup\Lambda)\) is dense in \(\mathbb{R}^{2}\setminus\Delta\). Assume that there exists an increasing bijective differentiable map \(h:\mathbb{R}\to\mathbb{R}\) such that \(d\circ(h\times h)\in\mathscr{C}^{2}\big{(}\mathbb{R}^{2}\backslash\Delta,[0,\infty)\big{)}\). If \(\partial_{12}d\) is non-negative on \(\mathbb{R}^{2}\setminus(\Delta\cup\Lambda)\), then \(\partial_{12}(d\circ(h\times h))\) is non-negative outside of \(\Delta\)._
Proof.: First observe that, since \(h\) is bijective increasing and continuous, it is a homeomorphism. Therefore, the function \(\varphi(x,y)=(h(x),h(y))\), where \((x,y)\in\mathbb{R}^{2}\), is a homeomorphism as well and fulfills \(\varphi(\Delta)=\Delta\). Hence, since \(\mathbb{R}^{2}\setminus(\Delta\cup\Lambda)\) is dense in \(\mathbb{R}^{2}\setminus\Delta\), \(\varphi^{-1}(\mathbb{R}^{2}\setminus(\Delta\cup\Lambda))\) is dense in \(\varphi^{-1}(\mathbb{R}^{2}\setminus\Delta)=\mathbb{R}^{2}\setminus\Delta\).
Due to the chain rule, we have that
\[\partial_{12}(d\circ\varphi)(x,y)=\partial_{12}(d\circ(h\times h))(x,y)=h^{ \prime}(x)\cdot h^{\prime}(y)\cdot(\partial_{12}d)(h(x),h(y)),\]
and this expression is valid whenever \(\varphi(x,y)\in\mathbb{R}^{2}\setminus(\Delta\cup\Lambda)\), that is, \((x,y)\in\varphi^{-1}(\mathbb{R}^{2}\setminus(\Delta\cup\Lambda))\). Besides, since \(h\) is increasing and \(\partial_{12}d\) is non-negative, we conclude that \(\partial_{12}(d\circ(h\times h))\) is non-negative whenever \((x,y)\in\varphi^{-1}(\mathbb{R}^{2}\setminus(\Delta\cup\Lambda))\). Finally, since \(d\circ(h\times h)\in\mathscr{C}^{2}\big{(}\mathbb{R}^{2}\backslash\Delta,[0, \infty)\big{)}\), and \(\varphi^{-1}(\mathbb{R}^{2}\setminus(\Delta\cup\Lambda))\) is dense in \(\varphi^{-1}(\mathbb{R}^{2}\setminus\Delta)=\mathbb{R}^{2}\setminus\Delta\), we conclude that \(\partial_{12}(d\circ(h\times h))\) is non-negative outside of \(\Delta\).
Finally, we state a second lemma ensuring that if \(d\) fulfills one of the versions of H4, so does \(d\circ(h\times h)\). We deliberately exclude hypothesis H4C, since the change of coordinates induced by \(h\) can break the vanishing property for the gradient of \(d\) at infinity.
**Lemma 4.11**.: _Consider a function \(d\in\mathscr{C}(\mathbb{R}^{2},[0,\infty))\), together with an increasing bijection \(h:\mathbb{R}\to\mathbb{R}\)._
* _If_ \(d\) _fulfills H4A, then_ \(d\circ(h\times h)\) _fulfills H4A._
* _If_ \(d\) _fulfills H4B, then_ \(d\circ(h\times h)\) _fulfills H4B._
* _If_ \(d\) _fulfills H4D, then_ \(d\circ(h\times h)\) _fulfills H4D._
Proof.: For the first part, we have that, for any fixed \(a\in\mathbb{R}\), the function \(d(\cdot,a)\) is non-increasing on the interval \((-\infty,a)\) and non-decreasing on the interval \((a,\infty)\). Since \(h\) is bijective and it preserves the order in the real line, \(d\circ(h\times h)\) fulfills H4A.
For the second part, we have that \(\lim_{\lambda\to+\infty}[d(b,\lambda)-d(a,\lambda)]\leq\lim_{\lambda\to- \infty}[d(b,\lambda)-d(a,\lambda)]\) for every pair \((a,b)\) with \(a<b\). Since \(h\) is bijective and it preserves the order in the real line, \(d\circ(h\times h)\) fulfills H4B.
For the last part, we have that \(\lim_{\lambda\to-\infty}d(c,\lambda)=\lim_{\lambda\to\infty}d(c,\lambda)\) for any \(c\in\mathbb{R}\), and that this value is finite. Since \(h\) is bijective and it preserves the order in the real line, \(d\circ(h\times h)\) fulfills H4D.
As a consequence of all the previously exposed material, the main result of this part of the section can be stated and proved as follows.
**Theorem 4.12**.: _Consider a function \(d:\mathbb{R}^{2}\to[0,\infty)\) such that \(d\in\mathcal{C}^{2}\left(\mathbb{R}^{2}\backslash(\Delta\cup\Lambda),[0,\infty)\right)\), where \(\mathbb{R}^{2}\setminus(\Delta\cup\Lambda)\) is dense in \(\mathbb{R}^{2}\setminus\Delta\). Assume that there exists an increasing bijective differentiable map \(h:\mathbb{R}\to\mathbb{R}\) such that \(d\circ(h\times h)\in\mathcal{C}^{2}\left(\mathbb{R}^{2}\backslash\Delta,[0,\infty)\right)\). Finally, suppose also that \(d\) fulfills the following properties:_
1. (H1) \(d(x,y)>0\) _for every_ \((x,y)\notin\Delta\)_, and_ \(d(x,x)=0\) _for every_ \(x\in\mathbb{R}\)_._
2. (H2) \(d(x,y)=d(y,x)\) _for every_ \((x,y)\in\mathbb{R}^{2}\)_._
3. (H3') \(\partial_{12}d(x,y)\geq 0\) _for every_ \((x,y)\notin\Delta\cup\Lambda\)_._
4. (H4) _The function_ \(d\) _fulfills at least one of the hypotheses H4A, H4B or H4D in Theorems 4.3, 4.4 or 4.6._
_Then, \(d\) defines a distance on \(\mathbb{R}\)._
Proof.: Depending on whether \(d\) fulfills hypothesis H4A, H4B or H4D, we shall use Theorem 4.3, 4.4 or 4.6 to conclude. First, if \(d\) fulfills H1 and H2, so does \(d\circ(h\times h)\), due to Remark 4.8. Second, Lemma 4.10 allows us to deduce H3 for \(d\circ(h\times h)\) from H3'. Third, at least one of the hypotheses H4A, H4B or H4D is fulfilled by \(d\) and, due to Lemma 4.11, also by \(d\circ(h\times h)\). We highlight that \(d\circ(h\times h)\) is smooth enough outside the diagonal because of the hypotheses. Therefore, \(d\circ(h\times h)\) is a metric and, consequently, \(d\) is a metric due to Remark 4.8.
## 5 Examples
Here we present some classical examples of distances to which the previous criteria can be applied to provide a proof that they are distances. We provide five candidate distances \(d\) and, in each case, via Theorems 4.3, 4.4, 4.5, 4.6 or the general version in Theorem 4.12, we deduce that \(d\) is indeed a distance. More examples of known distances can be found in [1].
**Example 5.1** (Concave translation invariant metric).: The function \(d(x,y)=g(|y-x|)\) is a distance, whenever \(g\in\mathcal{C}^{2}([0,\infty))\) is concave, \(g(0)=0\) and \(g(x)>0\) if \(x>0\).
Let us check the hypotheses of Theorem 4.3. We have the required regularity outside of the diagonal, and also positive definiteness and symmetry. If \((x,y)\notin\Delta\),
\[\partial_{12}d(x,y)=\begin{cases}-g^{\prime\prime}(y-x),&\text{if }x<y,\\ -g^{\prime\prime}(x-y),&\text{if }x>y,\end{cases}\]
and since \(g\) is concave and twice differentiable on \((0,\infty)\), \(\partial_{12}d(x,y)\geq 0\) for \((x,y)\notin\Delta\). Lastly, such a \(g\) is necessarily non-decreasing (a concave function with \(g(0)=0\) that is positive on \((0,\infty)\) cannot decrease anywhere, for otherwise concavity would eventually force it to become negative), so H4A in Theorem 4.3 is also satisfied, and \(d\) is a metric.
Recall that we had already studied translation invariant metrics in Section 2. Therefore, even without the differentiability assumption, we could have argued that a positive concave function \(g\) on \((0,\infty)\) with \(g(0)=0\) is necessarily subadditive and non-decreasing. Hence, by Corollary 2.6, the function defined as \(d(x,y)=g(|y-x|)\) is a distance.
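For a concrete instance, the reader may take \(g(t)=\log(1+t)\), which is concave with \(g(0)=0\) and positive on \((0,\infty)\); off the diagonal one then has \(\partial_{12}d(x,y)=-g''(|y-x|)=(1+|y-x|)^{-2}\geq 0\). A numerical sketch (ours) confirming both this identity and the triangle inequality:

```python
import numpy as np

rng = np.random.default_rng(2)
g = lambda t: np.log1p(t)                  # concave, g(0) = 0, g > 0 on (0, inf)
d = lambda x, y: g(np.abs(x - y))

# Off the diagonal, d12 should equal -g''(|y - x|) = 1 / (1 + |y - x|)^2.
x, y, h = 0.7, 2.3, 1e-4
fd = (d(x + h, y + h) - d(x + h, y - h) - d(x - h, y + h) + d(x - h, y - h)) / (4 * h * h)
print(fd, "vs", 1 / (1 + abs(y - x)) ** 2)

# Randomized triangle-inequality check.
x, y, z = rng.normal(scale=5, size=(3, 100_000))
print("violations:", np.sum(d(x, z) > d(x, y) + d(y, z) + 1e-12))
```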
**Example 5.2** (\(p\)-relative metric).: Given \(p\in[1,\infty)\), the function
\[d(x,y)=\begin{cases}\frac{|y-x|}{(|x|^{p}+|y|^{p})^{\frac{1}{p}}},&(x,y)\in \mathbb{R}^{2},\;(x,y)\neq 0,\\ 0,&(x,y)=0,\end{cases}\]
is a distance.
In order to apply any of the previous results, the only possible option is to check the hypotheses of Theorem 4.12, since \(d\) is not smooth on the set \(\Lambda\) of the points \((x,y)\) where \(x\cdot y=0\), at least when \(p\) is small enough.
First, if we select \(h(x)=x^{2n+1}\) in such a way that \((2n+1)p>2\), we are able to ensure the condition \(d\circ(h\times h)\in\mathscr{C}^{2}\big{(}\mathbb{R}^{2}\backslash\Delta,[0,\infty)\big{)}\). In particular, we highlight that, in contrast to what happens with \(d\), \(d\circ(h\times h)\) is two times continuously differentiable on \(\mathbb{R}^{2}\setminus\Delta\).
As usual, positive definiteness and symmetry are clear. Besides, hypothesis H4D is immediate to check, since
\[\lim_{\lambda\to-\infty}d(c,\lambda)=\lim_{\lambda\to-\infty}\frac{|\lambda- c|}{(|c|^{p}+|\lambda|^{p})^{\frac{1}{p}}}=1=\lim_{\lambda\to+\infty}\frac{| \lambda-c|}{(|c|^{p}+|\lambda|^{p})^{\frac{1}{p}}}=\lim_{\lambda\to+\infty}d(c,\lambda).\]
Thus, we only need to check H3' or, in other words, that the cross partial derivative of \(d\) is non-negative outside \(\Delta\cup\Lambda\). Due to symmetry we can assume \(x>y\). A straightforward computation for the three cases (\(x>y>0\), \(x>0>y\), \(0>x>y\)) gives
\[\partial_{12}d(x,y)=\frac{\operatorname{sgn}(x)\,|x|^{2p-1}-\operatorname{ sgn}(y)\,|y|^{2p-1}}{(|x|^{p}+|y|^{p})^{2+\frac{1}{p}}}+p\operatorname{ sgn}(xy)\frac{|x|-|y|}{(|x|^{p}+|y|^{p})^{2+\frac{1}{p}}}|xy|^{p-1}.\]
It is clear that \(\partial_{12}d(x,y)\geq 0\) on the open ray of argument \(-\frac{\pi}{4}\). Besides, this is also true for half of the first quadrant (where \(x>y>0\)), and for half of the fourth quadrant (where \(0>-x>y\)), since both addends involved in \(\partial_{12}d\) are clearly positive.
Figure 5.1: \(1\)-relative (left) and \(2\)-relative (right) metrics.
The inequality for the two pending cases is derived as follows. For the case of half of the third quadrant \(0>x>y\), we can write \(|x|=\lambda|y|\) for some \(0<\lambda<1\). After this substitution, cancellation of denominators, and taking into account the value for the sign function, we need to check that
\[|y|^{2p-1}(1-\lambda^{2p-1})\geq p|y|^{2p-1}(1-\lambda)\lambda^{p-1}.\]
Equivalently, we have to show that
\[1+p\lambda^{p}\geq p\lambda^{p-1}+\lambda^{2p-1}.\]
In order to verify this inequality, define, for \(\lambda\in(0,1]\), the function
\[u(\lambda)=1+p\lambda^{p}-p\lambda^{p-1}-\lambda^{2p-1},\]
so that \(u(1)=0\) and the claim reads \(u(\lambda)\geq 0\). For \(p=1\) we have \(u\equiv 0\), so assume \(p>1\). We compute
\[u^{\prime}(\lambda)=\lambda^{p-2}\left[p^{2}\lambda-p(p-1)-(2p-1)\lambda^{p}\right],\]
and the function between brackets, say \(v\), is concave on \([0,1]\), with \(v(0)=-p(p-1)\leq 0\) and with its maximum attained at \(\lambda^{*}=(p/(2p-1))^{1/(p-1)}\leq 1\), where \(v(\lambda^{*})=p(p-1)(\lambda^{*}-1)\leq 0\). Hence \(v\leq 0\) on \([0,1]\), so \(u^{\prime}\leq 0\) on \((0,1)\), and therefore \(u\) is non-increasing with \(u(\lambda)\geq u(1)=0\), as desired.
The pending case for half of the fourth quadrant \(0>y>-x\) is similar to the previous one. We can write \(|y|=\lambda|x|\) for some \(0<\lambda<1\). We need to check that
\[|x|^{2p-1}(1+\lambda^{2p-1})\geq p|x|^{2p-1}(1-\lambda)\lambda^{p-1}.\]
This is equivalent to showing that
\[1+p\lambda^{p}\geq p\lambda^{p-1}-\lambda^{2p-1},\]
but we have already verified this for the case where \(\lambda^{2p-1}\) carries a plus sign.
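Independently of the computations above, the conclusion of Example 5.2 can be stress-tested numerically on random triples, including heavy-tailed draws that probe large values of \(|x|\). A sketch (ours; the convention \(d(0,0)=0\) is handled explicitly):

```python
import numpy as np

rng = np.random.default_rng(3)

def d(x, y, p):
    """p-relative metric candidate, with d(0, 0) = 0 by convention."""
    den = (np.abs(x) ** p + np.abs(y) ** p) ** (1 / p)
    return np.divide(np.abs(y - x), den, out=np.zeros_like(den), where=den > 0)

for p in (1.0, 2.0, 5.0):
    x, y, z = rng.standard_cauchy(size=(3, 200_000))
    bad = np.sum(d(x, z, p) > d(x, y, p) + d(y, z, p) + 1e-12)
    print(f"p = {p}: triangle-inequality violations = {bad}")   # expect 0
```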
**Example 5.3** (Relative metric).: The function
\[d(x,y)=\begin{cases}\frac{|y-x|}{\max\{|x|,|y|\}},&(x,y)\in\mathbb{R}^{2},\ (x,y)\neq 0,\\ 0,&(x,y)=0,\end{cases}\]
is a distance.
The relative metric is the pointwise limit of the \(p\)-relative metrics as \(p\to\infty\). Since symmetry and the triangle inequality are preserved when taking pointwise limits in \(p\), and positive definiteness is clear from the explicit expression, the function \(d\) is a distance.
Figure 5.2: Relative metric.
**Example 5.4** (Chordal metric): The function \(d(x,y)=\frac{2\left|y-x\right|}{\sqrt{1+x^{2}}\sqrt{1+y^{2}}}\), \(x,y\in\mathbb{R}\) is a distance.
Let us check the hypotheses of Theorem 4.6. We have that \(d\in\mathscr{C}^{2}(\mathbb{R}^{2}\setminus\Delta,[0,\infty))\). Besides, positive definiteness and symmetry are clear. With respect to the cross partial derivative outside \(\Delta\) we observe that
\[\partial_{12}d(x,y)=\mathrm{sgn}(x-y)\frac{2(x-y)}{\left(1+x^{2}\right)^{3/2}\left(1+y^{2}\right)^{3/2}}=\frac{2\left|x-y\right|}{\left(1+x^{2}\right)^{3/2}\left(1+y^{2}\right)^{3/2}},\]
which is greater than zero for every \((x,y)\notin\Delta\). Hence, we only need to check H4D. For any fixed \(c\), we have that
\[\lim_{\lambda\to\infty}d(c,\lambda)=\lim_{\lambda\to\infty}\frac{2(\lambda-c) }{\sqrt{1+c^{2}}\sqrt{1+\lambda^{2}}}=\frac{2}{\sqrt{1+c^{2}}}=\lim_{\lambda \to-\infty}\frac{2(c-\lambda)}{\sqrt{1+c^{2}}\sqrt{1+\lambda^{2}}}=\lim_{ \lambda\to-\infty}d(c,\lambda).\]
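A classical alternative way to see that the chordal metric satisfies the triangle inequality is to recognize it as the Euclidean distance between the images of \(x\) and \(y\) under the inverse stereographic projection onto the unit circle. The identity can be verified symbolically, or numerically as in this sketch (ours):

```python
import numpy as np

rng = np.random.default_rng(4)

def to_circle(x):
    """Inverse stereographic projection of the real line onto the unit circle."""
    return np.stack([2 * x, x**2 - 1]) / (1 + x**2)

x, y = rng.normal(scale=5, size=(2, 100_000))
chordal = 2 * np.abs(x - y) / (np.sqrt(1 + x**2) * np.sqrt(1 + y**2))
euclid = np.linalg.norm(to_circle(x) - to_circle(y), axis=0)
print("max |chordal - euclidean|:", np.max(np.abs(chordal - euclid)))  # ~ 1e-15
```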
**Example 5.5** (Generalized chordal metric): Let \(\alpha>0,\beta\geq 0,p\geq 1\). The function
\[d(x,y):=\frac{\left|y-x\right|}{\left(\alpha+\beta\left|x\right|^{p}\right)^{ \frac{1}{p}}\cdot\left(\alpha+\beta\left|y\right|^{p}\right)^{\frac{1}{p}}},\]
is a distance.
In order to show that the previous function is a metric, we shall make some simplifications before checking that the cross partial derivative \(\partial_{12}\) is non-negative. First note that
\[d(x,y)=\frac{1}{\alpha^{\frac{2}{p}}}\frac{\left|y-x\right|}{\left(1+\left(\beta/\alpha\right)\left|x\right|^{p}\right)^{\frac{1}{p}}\cdot\left(1+\left(\beta/\alpha\right)\left|y\right|^{p}\right)^{\frac{1}{p}}}.\]
With the idea of a change of variables, we rewrite the denominator
\[d(x,y)=\frac{1}{\alpha^{\frac{2}{p}}}\frac{\left|y-x\right|}{\left(1+\left(\beta^{\frac{1}{p}}\left|x\right|/\alpha^{\frac{1}{p}}\right)^{p}\right)^{\frac{1}{p}}\cdot\left(1+\left(\beta^{\frac{1}{p}}\left|y\right|/\alpha^{\frac{1}{p}}\right)^{p}\right)^{\frac{1}{p}}},\]
and the numerator
\[d(x,y)=\frac{1}{\alpha^{\frac{1}{p}}\beta^{\frac{1}{p}}}\frac{\left(\beta^{\frac{1}{p}}/\alpha^{\frac{1}{p}}\right)\left|y-x\right|}{\left(1+\left(\beta^{\frac{1}{p}}\left|x\right|/\alpha^{\frac{1}{p}}\right)^{p}\right)^{\frac{1}{p}}\cdot\left(1+\left(\beta^{\frac{1}{p}}\left|y\right|/\alpha^{\frac{1}{p}}\right)^{p}\right)^{\frac{1}{p}}}.\]
Thus,
\[d(x,y)=\frac{1}{\alpha^{\frac{1}{p}}\beta^{\frac{1}{p}}}\frac{\left|g(y)-g(x)\right|}{\left(1+\left|g(x)\right|^{p}\right)^{\frac{1}{p}}\cdot\left(1+\left|g(y)\right|^{p}\right)^{\frac{1}{p}}},\]
Figure 5.3: Chordal metric.
where we have taken \(g(s)=(\beta/\alpha)^{\frac{1}{p}}s\). Hence, since \(g\) is an increasing bijection and positive scalar multiples of distances are distances, by Remark 4.8 it suffices to check that
\[d_{g}(x,y):=\frac{|y-x|}{(1+|x|^{p})^{\frac{1}{p}}\cdot(1+|y|^{p})^{\frac{1}{p}}}\]
is a metric. Since the sufficiency theorems do not care about the smoothness of the metric candidate on the diagonal, the only apparent pending issue is the smoothness of its denominator. Hence, we shall use the map \(h(x)=x^{2n+1}\) for \(n\) such that \((2n+1)p>2\) together with Theorem 4.12 in order to overcome this problem.
First, we have that \(d_{g}\circ(h\times h)\) has the regularity demanded in Theorem 4.12: it is a class two function on \(\mathbb{R}^{2}\setminus\Delta\). Moreover, hypotheses H1 and H2 can be easily checked.
Besides, on the one hand, outside the diagonal (\(\Delta\)) and the points where \(xy=0\) (\(\Lambda\)), the cross partial derivative is well-defined and, assuming without loss of generality that \(x>y\), it has the value
\[\partial_{12}d_{g}(x,y)=\frac{\operatorname{sgn}(x)\,|x|^{p-1}-\operatorname{sgn}(y)\,|y|^{p-1}+|xy|^{p-1}\left[\operatorname{sgn}(x)\,|y|-\operatorname{sgn}(y)\,|x|+(x-y)\operatorname{sgn}(xy)\right]}{(1+|x|^{p})^{\frac{p+1}{p}}\cdot(1+|y|^{p})^{\frac{p+1}{p}}}.\]
The term between brackets vanishes unless \(x\) and \(y\) are both negative, in which case it equals \(2(x-y)>0\); moreover, in every case the remaining part \(\operatorname{sgn}(x)\,|x|^{p-1}-\operatorname{sgn}(y)\,|y|^{p-1}\) is non-negative (recall that \(x>y\) and \(p\geq 1\)). Therefore, H3' holds. On the other hand, H4 of Theorem 4.12 also holds in the form H4D, since
\[\lim_{\lambda\rightarrow-\infty}d_{g}(c,\lambda)=1=\lim_{\lambda\rightarrow+ \infty}d_{g}(c,\lambda).\]
|
2309.02089 | On the use of U-statistics for linear dyadic interaction models | Even though dyadic regressions are widely used in empirical applications, the
(asymptotic) properties of estimation methods only began to be studied recently
in the literature. This paper aims to provide in a step-by-step manner how
U-statistics tools can be applied to obtain the asymptotic properties of
pairwise differences estimators for a two-way fixed effects model of dyadic
interactions. More specifically, we first propose an estimator for the model
that relies on pairwise differencing such that the fixed effects are
differenced out. As a result, the summands of the influence function will not
be independent anymore, showing dependence on the individual level and
translating to the fact that the usual law of large numbers and central limit
theorems do not straightforwardly apply. To overcome such obstacles, we show
how to generalize tools of U-statistics for single-index variables to the
double-indices context of dyadic datasets. A key result is that there can be
different ways of defining the Hajek projection for a directed dyadic
structure, which will lead to distinct, but equivalent, consistent estimators
for the asymptotic variances. The results presented in this paper are easily
extended to non-linear models. | G. M. Szini | 2023-09-05T09:51:45Z | http://arxiv.org/abs/2309.02089v1 | # On the use of U-statistics for linear dyadic interaction models
###### Abstract
Even though dyadic regressions are widely used in empirical applications, the (asymptotic) properties of estimation methods only began to be studied recently in the literature. This paper aims to provide in a step-by-step manner how U-statistics tools (Serfling, 2009) can be applied to obtain the asymptotic properties of pairwise differences estimators for a two-way fixed effects model of dyadic interactions. More specifically, we first propose an estimator for the model that relies on pairwise differencing such that the fixed effects are differenced out. As a result, the summands of the influence function will not be independent anymore, showing dependence on the individual level and translating to the fact that the usual law of large numbers and central limit theorems do not straightforwardly apply. To overcome such obstacles, we show how to generalize tools of U-statistics for single-index variables to the double-indices context of dyadic datasets. A key result is that there can be different ways of defining the Hajek projection for a directed dyadic structure, which will lead to distinct, but equivalent, consistent estimators for the asymptotic variances. The results presented in this paper are easily extended to non-linear models, as in Graham (2017) and Jochmans (2018).
## 1 Introduction
Dyadic regression analysis is a common practice in several applications for network models. It is used, for instance, in the estimation of gravity models for international trade flows since its establishment by Tinbergen (1962). As defined by Graham (2020), a dyadic dataset corresponds to a situation where the outcome of interest reflects a pairwise interaction among the sample units. Therefore, it is natural that datasets on trade flows are characterized by such a dyadic structure, as the value of imports and exports are determined by both the importer and the exporter countries. Other examples of applications of dyadic settings are, for instance, the estimation of models of migration, equity, international financial flows, and information flows (Jackson and Lopez-Pintado, 2013).
Even though dyadic regressions are widely used in empirical applications, the (asymptotic) properties of estimation methods only began to be studied recently in the literature. One key feature shared by all the studies mentioned above is the presence of two-way unit-specific effects, one for each individual in the dyadic interaction, and the fact that the outcome variable (and, in most cases, the explanatory variables) is double indexed. For linear models where the two-way unobserved heterogeneity and the idiosyncratic error term enter additively in the specification, it is possible to estimate the model consistently and (asymptotically) unbiasedly using the two-way fixed effects estimator, as long as the model is correctly specified (see Juodis (2021)).
However, many economic models are non-linear. More specifically, in the context of network datasets, network formation models have the structure of discrete choice models (where the outcome of interest is binary), and outcomes of interest that are bounded below at zero (such as trade flows) can be approximated by a gravity equation in its multiplicative form. Naturally, one approach would be to estimate such models with a probit/logit and a Poisson pseudo-maximum likelihood estimator (Silva and Tenreyro, 2006), respectively. The challenge is that, while it is desirable to treat the unit-specific effects as parameters to be estimated (i.e., fixed effects, such that the conditional distribution of the unobserved heterogeneity given the covariates is left unrestricted), as Fernandez-Val and Weidner (2016) show, even if both dimensions of the
(pseudo-)panel dataset grow with the sample size, these estimators suffer from the incidental parameter problem (Neyman and Scott, 1948) in the presence of two-way fixed effects. This problem occurs since, in non-linear models, the estimates of the coefficients of the covariates depend on the estimates of the fixed-effects, and the latter converges at a slower rate than the first, resulting in an asymptotic bias in the estimates (and, therefore, invalid inference).
To address the incidental parameter problem in estimates for non-linear models with two-way fixed effects, such as logit, probit and Poisson pseudo-maximum likelihood estimators, Fernandez-Val and Weidner (2016) proposed analytical and jackknife bias correction methods. However, for network formation models (discrete choice models), others propose a conditional maximum-likelihood approach under the logistic specification, such as in Charbonneau (2017), Jochmans (2018) and Graham (2019). The conditioning sets in this approach translate to a model where pairwise differences of the outcomes (and covariates) are taken such that the two-way fixed effects are differenced out from the objective function, eliminating the incidental parameter problem.
The advantage of the conditional maximum-likelihood approach as opposed to the bias-correction methods is that it accommodates sparse networks in the case of a network formation model (Jochmans, 2018). However, when taking such differences in the model, in general, the summands of the influence functions will not be independent anymore, showing some dependence at the unit level and translating to the fact that the usual laws of large numbers and central limit theorems do not straightforwardly apply.
To overcome such obstacles, U-statistics tools are generally applied so that one can establish the asymptotic properties of those estimators. Although the U-statistics properties are well-known for single-index variables (see Serfling (2009, Chapter 5), and Van der Vaart (2000, Chapters 11 and 12)), those of double-indexed variables are generally not treated (to my knowledge) in textbooks, with few exceptions related to applications for dyadic contexts, such as Graham (2019, Chapter 4).
The main purpose of this paper is to illustrate, step-by-step in a comprehensive way, how to obtain the asymptotic properties of pairwise differences estimators (such that the fixed effects are
cancelled out) for models of dyadic interactions using tools from the literature on U-statistics. More specifically, we show how to accommodate such tools to double-indexed variables, as both the outcome and the covariates are indexed by the two individuals in the interaction. Even though, as mentioned earlier, the classical two-way fixed effects estimator delivers consistent and (asymptotically) unbiased estimates in the linear model (such that the pairwise differencing approach is not needed), for simplicity we consider a linear two-way fixed effects model, but the arguments can be generalized (with additional regularity conditions) to non-linear models or to models with a multiplicative individual heterogeneity and error structure (as shown in Jochmans (2017)).
A U-statistic for single-indexed variables is formally defined as an unbiased estimator that takes the form of an average of a function (kernel) of i.i.d. random variables. The main idea to determine the asymptotic properties of this estimator is to define a projection of the U-statistic, the so-called Hajek projection, that is asymptotically equivalent to the U-statistic itself. This projection consists of a sum of conditional expected values of the U-statistic, where each summand conditions on a single observation. Thus, by conditional independence arguments, the Hajek projection becomes a simple average of i.i.d. random variables, to which laws of large numbers and central limit theorems can be applied. This concept will become clearer in the following sections.
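To fix ideas, recall the standard single-index form of this projection (see, e.g., Van der Vaart (2000, Chapter 11)): for a statistic \(T=T(X_{1},\ldots,X_{N})\) with finite variance and independent \(X_{1},\ldots,X_{N}\), the Hajek projection is

\[\widehat{T}=\sum_{i=1}^{N}\mathbb{E}\left[T\mid X_{i}\right]-(N-1)\,\mathbb{E}[T],\]

a sum of independent terms. The dyadic analogues developed below replace the single conditioning variables \(X_{i}\) by individual-level attributes and heterogeneity.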
A key result provided in this paper is that, for a directed dyadic structure, there can be different ways of defining the Hajek projection, which will lead to distinct, but equivalent, consistent estimators for the asymptotic variance of the proposed estimator. More specifically, we provide two possible projections depending on the random variables on which one conditions the expected value of the U-statistic. Central to both possibilities is the fact that the summands of the influence function have a conditional independence structure once conditioned on dyad-level attributes and the individual heterogeneity of both individuals forming a dyad (a fundamental difference with respect to single-index contexts). This is intuitive in dyadic settings, where the dependence across dyads arises only through the individual fixed effects and the possibly correlated observations for the same individual in different pairwise interactions and, thus, outcomes
and covariates.
The organization of this paper is as follows: in Section 2 we define the linear model of directed dyadic interactions, in Section 3 we propose a pairwise differences estimator, in Section 4 we explain how some tools of U-statistics can be employed in this context and extended to dyadic settings, in Section 5 we discuss the asymptotic properties of the estimator and possible consistent estimates of its asymptotic variance, and in Section 6 we present a Monte Carlo simulation exercise to investigate the finite sample properties of the estimator.
**Notation**
Random variables are denoted by capital letters, specific realizations thereof by lower case, and their support by blackboard bold capital letters. That is, \(Y\), \(y\), and \(\mathbb{Y}\) respectively denote a generic draw of, a specific value of, and the support of \(Y\).
Calligraphic letters denote sets. For instance, denote by \(\mathcal{N}=\{1,2,...N\}\) the set of indices for \(N\) individuals (or nodes). Denote by \(\mathcal{C}(\mathcal{N},4)\) the multiset containing all sets of combinations of four individuals from the sampled \(N\) observations. Moreover, denote \(|\mathcal{C}(\mathcal{N},4)|=\binom{N}{4}\) the number of obtained combinations, and denote by \(\mathcal{C}\) an unordered set formed by a given combination, say \(\mathcal{C}=\{i,j,k,l\}\).
Set \(\mathcal{P}(\mathcal{C},4)\) to be the multiset containing all sets of permutations of four elements of a given combination \(\mathcal{C}\). Also, let \(|\mathcal{P}(\mathcal{C},4)|=4!\) be the number of possible permutations, and \(\pi\) be the ordered set formed by a given permutation, where \(\pi_{1}\), \(\pi_{2}\), \(\pi_{3}\) and \(\pi_{4}\) denote its first, second, third and fourth elements. For instance, given a permutation \(\pi=\{k,l,j,i\}\), we have that \(\pi_{1}=k\), \(\pi_{2}=l\), \(\pi_{3}=j\) and \(\pi_{4}=i\).
## 2 A linear model of dyadic interactions
We consider a linear model of dyadic interactions between \(N\) agents, in which we assume that all variables of all pairwise interactions are observed (therefore, there is no sample selection). Let \((y_{ij},x_{ij})\) denote the realizations of the random vector of outcomes and covariates \((Y_{ij},X_{ij})\) for the dyad \((i,j)\), i.e., related to the interaction between agents \(i\) and \(j\). Importantly, \(Y_{ij}\) is
an outcome variable generated by the interaction of the individuals and it is continuous in this setting. We allow for directed interactions, such that \((Y_{ij},X_{ij})\) need not be equal to \((Y_{ji},X_{ji})\), and we do not include self links. Therefore, for a set \(\mathcal{N}=\{1,2,...N\}\) of \(N\) agents, we have \(N(N-1)\) observed dyads. Following the notation of Graham (2020), we denote that the first subscript on \(Y_{ij}\) or \(X_{ij}\) to be the _ego_, or sending agent, and the second to be the _alter_, or receiving agent.
Consider the following linear model of dyadic direct interactions taking into account two-way fixed effects:
\[Y_{ij}=\beta_{1}X_{ij}+\theta_{i}+\xi_{j}+U_{ij}. \tag{1}\]
For simplicity, we consider for now only one regressor \(X_{ij}\) (which can easily be relaxed to a vector). We assume that an agent-level attribute \(A_{i}\) (which also can be relaxed to be a vector), and an attribute \(B_{j}\) are observed, such that \(X_{ij}=f(A_{i},B_{j})\) is a constructed dyad-level attribute. On the other hand, the sequences of individual-level heterogeneity \(\{\theta_{i}\}_{i=1}^{N}\) and \(\{\xi_{i}\}_{i=1}^{N}\) (for the _ego_ and for the _alter_, respectively) are unobserved and we treat them as fixed-effects. In particular, there are no restrictions on correlations between \(\theta_{i}\), \(\xi_{j}\) and \(X_{ij}\). In other words, the joint distribution between the observed and unobserved agent-level characteristics, \(\{A_{i},B_{i},\theta_{i},\xi_{i}\}\) is left unrestricted, such that the model is semiparametric. Finally, \(U_{ij}\) is an idiosyncratic component that is also not necessarily equal to \(U_{ji}\).
Taking, for instance, the classical gravity model for international trade flows given by Anderson and Van Wincoop (2003), widely applied in the empirical literature, the variable \(Y_{ij}\) would refer to the log of the value of exports from country \(i\) to country \(j\), \(X_{ij}\) to characteristics of the dyad \((i,j)\), for instance, the distance between the two countries, and \(\theta_{i}\) and \(\xi_{j}\) to the so-called unobserved _multilateral resistance_ terms. The latter capture, for example, unmodeled export orientation of an economy, undervalued currencies and consumption taste.
We impose the following assumptions on this model:
**Assumption 2.1**.: _The error term \(U_{ij}\) is i.i.d., independent of the sequence \(\{A_{i},B_{i},\theta_{i},\xi_{i}\}_{i=1}^{N}\) for
any \(i\) and \(j\), and satisfies:_
\[\mathbb{E}[U_{ij}]=0\]
\[\mathbb{E}[U_{ij}U_{lk}]=\begin{cases}\sigma_{u}^{2}&\text{if }i=l,j=k\\ 0&\text{otherwise.}\end{cases}\]
**Assumption 2.2**.: _The dyad-level observed variable \(X_{ij}\) is given by:_
\[X_{ij}=f(A_{i},B_{j})\]
_where \(f\) is a measurable function, \(A_{i}\) and \(B_{j}\) are observed individual-level characteristics of the ego and the alter, respectively. Moreover, \(A_{i}\), \(B_{i}\) are i.i.d. and the sequences \(\{A_{i},B_{i}\}_{i=1}^{N}\) are mutually independent._
**Assumption 2.3**.: _(Analogous to Graham (2017)) Random sampling: Let \(i=1,...N\) index a random sample of agents from a population satisfying Assumptions 2.1 and 2.2. We observe \((Y_{ij},X_{ij})\) for \(i=1,...N\), \(j\neq i\) (i.e., all sampled dyads)._
Given the presence of the two-way fixed effects \(\theta_{i}\) and \(\xi_{j}\), conditional independence between the outcomes of different dyads given the sequences of \(A_{i}\) and \(B_{j}\) (or, given the covariates \(X_{ij}\)) is unlikely to hold. Even conditioning on the sequence of covariates, outcomes that share the same _ego_ or _alter_ index are unlikely to be independent. For instance, the outcomes \(Y_{12}\) and \(Y_{34}\) are independent of each other, but the outcomes \(Y_{12}\) and \(Y_{13}\) are likely to be dependent, even after conditioning on \(X_{12}\) and \(X_{13}\). As pointed out by Graham (2020), in the international trade example this translates to the fact that exports from Japan to Korea will likely covary with exports from Japan to the United States, even after controlling for covariates, due to the Japan exporter effect. Graham (2020) denotes these patterns as dyadic dependence.
However, after conditioning also on the fixed effects, that is, conditional on \(\{X_{12},X_{13},\theta_{1},\xi_{2},\xi_{3}\}\), or, equivalently from Assumption 2.2, conditional on \(\{A_{1},B_{2},B_{3},\theta_{1},\xi_{2},\xi_{3}\}\), the outcomes \(Y_{12}\) and \(Y_{13}\) are independent. This result follows from Assumption 2.1. This conditional independence structure will be essential for the asymptotic properties of the estimator proposed in the following Section, mainly because it is well-suited for applying the tools of U-statistics. Shalizi (2016) denotes models of dyadic interactions with such an independence structure as conditionally independent dyad (CID) models.
## 3 A pairwise differences estimator
Even though the model given by Equation (1), under Assumptions 2.1 and 2.3, could be consistently and (asymptotically) unbiasedly estimated with a two-way fixed-effects estimator, we propose an estimator that differences out the fixed effects through pairwise differences. This estimator builds on differencing arguments for a similar, though non-linear, model introduced by Charbonneau (2017). She considers a model of network formation, where the outcome variable \(Y_{ij}\) is binary, indicating whether an individual \(i\) forms a _directed_ link with individual \(j\).
As mentioned before, Fernandez-Val and Weidner (2016) show that maximum likelihood estimators for nonlinear models with two-way fixed effects, such as probit/logit, suffer from the incidental parameter problem even if both dimensions of the (pseudo-)panel dataset tend to infinity. This is due to the fact that the dimensions of the vectors of nuisance parameters (how the fixed effects are treated both in Charbonneau (2017) and in this paper) grow with the number of observations. At this point, it is important to notice that datasets of dyadic interactions can be seen as pseudo panel data, where both dimensions of the panel tend to infinity as the number of individuals grows. Fernandez-Val and Weidner (2016) propose analytical bias corrections to reduce the incidental parameter (asymptotic) bias, which Dzemski (2019) implemented in a network formation context. However, as explained by Jochmans (2018), the problem with the bias correction approach is that for sparse networks the individual-specific parameters (the fixed effects) may not be consistently estimable, or may be estimable only at a very slow rate.
The approach proposed by Charbonneau (2017) becomes very attractive for sparse networks since, through a conditional maximum likelihood approach for logistic models, it delivers an estimator that differences out the fixed effects. The estimator is essentially based on a set of conditions that translates into a transformation of the dependent variable and the covariates, in which pairwise differences are taken such that the fixed effects in the model cancel out. Even though a classic logit estimation can be used to obtain the estimates of the coefficients of the covariates, inference does not follow the usual textbook procedures, since agent-level dependencies arise when taking such pairwise differences. The asymptotic properties of this estimator are studied by Jochmans (2018) and are obtained by employing tools of U-statistics. He shows that this estimator is consistent and asymptotically unbiased, and that the estimated variances deliver correct sizes for the t-test.
The estimator that we introduce in this Section is based on a similar pairwise differences methodology for transforming the dependent variable and covariates to difference out fixed effects, as presented by Charbonneau (2017), but for a linear model. Our purpose in introducing this estimator is to provide a better understanding, through a simpler and linear model, of how to apply the tools of U-statistics to derive the asymptotic properties of estimators based on such pairwise differences for dyadic data.
First, we define the following notation for the random variable obtained by taking the specified pairwise differences among different dyads' outputs:
\[\tilde{Y}_{ijkl}\equiv(Y_{ij}-Y_{ik})-(Y_{lj}-Y_{lk}), \tag{2}\]
and analogously for \(\tilde{X}_{ijkl}\) and \(\tilde{U}_{ijkl}\).
If we substitute the expressions for each of the outcomes \(Y_{ij}\), \(Y_{ik}\), \(Y_{lj}\) and \(Y_{lk}\) given by the model in Equation (1) to the expression for \(\tilde{Y}_{ijkl}\) in Equation (2), we obtain:
\[\tilde{Y}_{ijkl}=\beta_{1}\tilde{X}_{ijkl}+\tilde{U}_{ijkl}, \tag{3}\]
where the fixed effects are differenced out. The equation above is simply a linear regression with the transformed variables obtained by taking the pairwise differences between the dyads \((i,j)\), \((i,k)\) and \((l,j)\), \((l,k)\).
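As a quick sanity check, the following minimal sketch (our own code with arbitrary simulated values, not taken from the paper) verifies Equation (3) numerically: the ego effects \(\theta_{i},\theta_{l}\) and the alter effects \(\xi_{j},\xi_{k}\) cancel exactly in the tetrad difference.

```python
import numpy as np

# A minimal numerical check that the tetrad difference removes both fixed effects.
rng = np.random.default_rng(0)
N, beta1 = 5, 2.0
theta, xi = rng.standard_normal(N), rng.standard_normal(N)
X = rng.standard_normal((N, N))
U = rng.standard_normal((N, N))
Y = beta1 * X + theta[:, None] + xi[None, :] + U  # Equation (1), entrywise

i, j, k, l = 0, 1, 2, 3
y_t = (Y[i, j] - Y[i, k]) - (Y[l, j] - Y[l, k])   # \tilde{Y}_{ijkl}
x_t = (X[i, j] - X[i, k]) - (X[l, j] - X[l, k])   # \tilde{X}_{ijkl}
u_t = (U[i, j] - U[i, k]) - (U[l, j] - U[l, k])   # \tilde{U}_{ijkl}
assert np.isclose(y_t, beta1 * x_t + u_t)         # theta and xi have dropped out
```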
This form of differencing out the fixed effects depends heavily on the fact that the individual-specific heterogeneity parameters (i.e., the fixed effects themselves) enter the model additively.
For more general specifications, this transformation fails to difference out the fixed effects. However, other studies, such as Chen et al. (2021) and Jochmans (2017), study cases with interactive fixed effects. The former proposes an analytical bias correction estimator, and the latter also provides an argument for differencing out the individual-specific parameters.
Inspired by the same methodology of Charbonneau (2017) that is further studied by Jochmans (2018), we can then estimate \(\beta_{1}\) with an ordinary least squares estimator by taking into account the transformed variables \(\tilde{Y}_{ijkl}\) and \(\tilde{X}_{ijkl}\). Notice that the model given by Equation (3), where the fixed effects are differenced out, holds for all combinations of quadruples of indices from the set \(\mathcal{N}=\{1,\ldots,N\}\) and its permutations. Therefore, we can write the pairwise differences OLS estimator as:
\[\hat{\beta}_{1,PD} =\left[\sum_{i=1}^{N}\sum_{j\neq i}\sum_{k\neq i,j}\sum_{l\neq i, j,k}\tilde{X}_{ijkl}\tilde{X}^{\prime}_{ijkl}\right]^{-1}\left[\sum_{i=1}^{N} \sum_{j\neq i}\sum_{k\neq i,j}\sum_{l\neq i,j,k}\tilde{X}_{ijkl}\tilde{Y}_{ijkl}\right] \tag{4}\] \[=\left[\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}( \mathcal{N},4)}\frac{1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)}\tilde{X}_{ \pi_{1}\pi_{2}\pi_{3}\pi_{4}}\tilde{X}^{\prime}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}} \right]^{-1}\left[\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}( \mathcal{N},4)}\frac{1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)}\tilde{X}_{ \pi_{1}\pi_{2}\pi_{3}\pi_{4}}\tilde{Y}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}}\right],\]
where, in the second line, we use the fact that summing over all possible permutations of quadruples is equivalent to summing over all possible combinations of quadruples and then over all permutations of each combination. Therefore, say we look at a specific combination \(\mathcal{C}\); then the multiset denoted by \(\mathcal{P}(\mathcal{C},4)\) corresponds to all permutations of those indices. Then, given a permutation \(\pi=\{i,j,k,l\}\), we have that \(\pi_{1}=i\), \(\pi_{2}=j\), \(\pi_{3}=k\) and \(\pi_{4}=l\), such that \(\pi_{1}\) refers to the index occupying the first position in the permutation set, and analogously for \(\pi_{2}\), \(\pi_{3}\) and \(\pi_{4}\).
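For a scalar regressor, the estimator in Equation (4) can be computed directly. The sketch below (function names are ours, not from the paper) loops over all ordered quadruples of distinct indices, which is equivalent to looping over all combinations and their permutations; the normalizing constants \(\binom{N}{4}^{-1}\) and \(1/4!\) cancel in the ratio.

```python
import itertools
import numpy as np

def tetrad_diff(M, i, j, k, l):
    """Tetrad difference (M_ij - M_ik) - (M_lj - M_lk) for a dyadic array M."""
    return (M[i, j] - M[i, k]) - (M[l, j] - M[l, k])

def beta_pd(Y, X):
    """Pairwise differences OLS of Equation (4), scalar-regressor case."""
    N = Y.shape[0]
    num = den = 0.0
    # all ordered 4-tuples of distinct indices = all combinations x permutations
    for p in itertools.permutations(range(N), 4):
        xt = tetrad_diff(X, *p)
        num += xt * tetrad_diff(Y, *p)
        den += xt ** 2
    return num / den
```

The loop is \(O(N^{4})\), which is only practical for small networks such as those used in the simulations of Section 6.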
To obtain the properties of the estimator \(\hat{\beta}_{1,PD}\), it is useful, as in the regular textbook case for OLS estimators, to rewrite the previous expression in terms of its influence function:
\[\hat{\beta}_{1,PD}=\beta_{1}+\left[\frac{1}{\binom{N}{4}}\sum_{\mathcal{C} \in\mathcal{C}(\mathcal{N},4)}\frac{1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)}\tilde{X}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}}\tilde{X}^{\prime}_{\pi_{1}\pi_{2} \pi_{3}\pi_{4}}\right]^{-1}\left[\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in \mathcal{C}(\mathcal{N},4)}\frac{1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4) }\tilde{X}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}}\tilde{U}_{\pi_{1}\pi_{2}\pi_{3}\pi _{4}}\right]. \tag{5}\]
In order to derive the asymptotic properties of this estimator, it is necessary to first derive the asymptotic properties of the last term in the equation above, namely:
\[\left[\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}(\mathcal{N},4)}\frac {1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)}\tilde{X}_{\pi_{1}\pi_{2}\pi_{3} \pi_{4}}\tilde{U}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}}\right].\]
Notice that the transformed error terms \(\tilde{U}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}}\equiv(U_{\pi_{1}\pi_{2}}-U_{\pi_{1} \pi_{3}})-(U_{\pi_{4}\pi_{2}}-U_{\pi_{4}\pi_{3}})\) are not independent over the dataset obtained when applying the transformation over all possible combinations and its permutations of quadruples, since the same dyads will appear in different terms, leading to a correlation amongst the terms. Therefore, the traditional application of LLNs and CLTs does not hold straightforwardly.
From now on we will denote the last term in Equation (5) by:
\[U_{N} =\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}(\mathcal{N },4)}\frac{1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)}\tilde{X}_{\pi_{1}\pi _{2}\pi_{3}\pi_{4}}\tilde{U}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}} \tag{6}\] \[=\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}(\mathcal{N },4)}\frac{1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)}((X_{\pi_{1}\pi_{2}}-X _{\pi_{1}\pi_{3}})-(X_{\pi_{4}\pi_{2}}-X_{\pi_{4}\pi_{3}}))((U_{\pi_{1}\pi_{2}} -U_{\pi_{1}\pi_{3}})-(U_{\pi_{4}\pi_{2}}-U_{\pi_{4}\pi_{3}})).\]
Even though this term resembles a U-statistic, it is not one, strictly speaking. However, we can adapt tools used in the literature on U-statistics to obtain the properties of the term \(U_{N}\). Namely, we employ a Hoeffding decomposition (Hoeffding et al., 1948) to obtain the variance of the term \(U_{N}\), and we also propose two possible Hajek projections of this term, both of which we prove to be asymptotically equivalent to \(U_{N}\). The reason for obtaining such projections is that their summands are conditionally independent, so that it is possible to apply laws of large numbers and central limit theorems to obtain the asymptotic properties of \(U_{N}\) and, thus, of the proposed estimator.
In this paper we consider asymptotics under one single network growing, i.e., we consider that \(N\) (the number of individuals in a network) tends to infinity when obtaining the asymptotic properties of the proposed estimator.
## 4 Using U-statistics Tools In Dyadic Settings
### 4.1 The U-statistics
According to Serfling (2009), the U-statistic is a generalization of the sample mean, i.e., a generalization of the notion of forming an average. The formal definition of the U-statistic is the following:
**Definition 1**.: Let \(W_{1},W_{2},...W_{n}\) be independent observations on a distribution \(F\) (which can be vector-valued). Consider a parametric function \(\theta=\theta(F)\) for which there is an unbiased estimator:
\[\theta(F)=\mathbb{E}[h(W_{1},...W_{m})]=\int...\int h(w_{1},...,w_{m})dF(w_{1} )...dF(w_{m})\]
for some function \(h=h(w_{1},...,w_{m})\) called a kernel. It is assumed without loss of generality that \(h\) is symmetric. Then, for any kernel \(h\), the corresponding U-statistic for estimation of \(\theta\) on the basis of a sample \(W_{1},...W_{n}\) of size \(n\geq m\) is obtained by averaging the kernel symmetrically over the observations:
\[U_{n}=U(W_{1},...W_{n})=\frac{1}{\binom{n}{m}}\sum_{c}h(W_{i_{1}},...,W_{i_{m}})\]
where \(\sum_{c}\) denotes summation over the \(\binom{n}{m}\) combinations of \(m\) distinct elements \(\{i_{1},...,i_{m}\}\) from \(\{1,...,n\}\). An important property is that \(U_{n}\) is an unbiased estimate of \(\theta\).
We can see that the term \(U_{N}\), as defined in Equation (6), contains elements of a U-statistic, resembling one at first glance. However, it is not formally one given the definition above.
The shared properties to a U-statistic are related to having a similar dependence structure, such that it consists of a sum over all combinations of quadruples of individuals, evaluated at some given function, analogous to a fourth-order U-process. We can define the symmetric kernel
for a given combination \(\mathcal{C}=\{i,j,k,l\}\) in our case to be:
\[s_{ijkl}:=\frac{1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)}((X_{ \pi_{1}\pi_{2}}-X_{\pi_{1}\pi_{3}})-(X_{\pi_{4}\pi_{2}}-X_{\pi_{4}\pi_{3}}))((U_ {\pi_{1}\pi_{2}}-U_{\pi_{1}\pi_{3}})-(U_{\pi_{4}\pi_{2}}-U_{\pi_{4}\pi_{3}})), \tag{7}\]
which is essentially the score of our estimator (which is the reason why we denote it by \(s\), and not \(h\)). Note once again that the indices \(\pi_{1},\pi_{2},\pi_{3},\pi_{4}\) denote the elements of the permutations of a given combination of individuals \(i,j,k,l\). Then, we can also see that another property shared with the U-statistic is that the kernel is permutation invariant and that its arguments, namely the random variables \(X_{ij}\) and \(U_{ij}\), are identically distributed by Assumptions 2.1 and 2.2.
Another important property that the term \(U_{N}\) has in common to a U-statistic is that, if we define a parametric function \(\theta\) to be:
\[U =\theta(F) \tag{8}\] \[=\mathbb{E}_{F}\left[\frac{1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)}\left((X_{\pi_{1}\pi_{2}}-X_{\pi_{1}\pi_{3}})-(X_{\pi_{4}\pi_{2}}-X_{\pi_{4}\pi_{3}})\right)\left((U_{\pi_{1}\pi_{2}}-U_{\pi_{1}\pi_{3}})-(U_{\pi_{4}\pi_{2}}-U_{\pi_{4}\pi_{3}})\right)\right]\] \[=0,\]
then, we also have in our context that \(U_{N}\) is an estimator of \(\theta\), and it is also unbiased, since,
\[\mathbb{E}_{F}[U_{N}] =\mathbb{E}_{F}\left[\frac{1}{\binom{N}{4}}\sum_{\mathcal{C} \in\mathcal{C}(\mathcal{N},4)}\frac{1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)}\tilde{X}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}}\tilde{U}_{\pi_{1}\pi_{2}\pi_{3}\pi _{4}}\right] \tag{9}\] \[=\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}(\mathcal{ N},4)}\mathbb{E}_{F}\left[\frac{1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)} \tilde{X}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}}\tilde{U}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4 }}\right]\] \[=\theta=0,\]
where the second equality follows from linearity of expectations.
In spite of these similarities, the statistic \(U_{N}\) is not a U-statistic as conventionally defined, since its kernel includes random variables at both the individual level (since \(X_{ij}=f(A_{i},B_{j})\)) and the dyad level (\(U_{ij}\)). Therefore, single-index U-statistics as in the definition above are not well-suited, and the tools need to be slightly modified to accommodate the dyadic structure.
Even more crucial is the fact that the observations \(\{X_{ij}\}_{i=1,j\neq i}^{N}\) are not independent, due to the common individual characteristics \(A_{i}\) or \(B_{j}\). However, the fact that \(U_{ij}\) is i.i.d. and independent of \(X_{ij}\) allows us to employ tools of U-statistics, such as the Hoeffding decomposition to obtain the variance of \(U_{N}\), and to define a Hajek projection to obtain the asymptotic properties of this term (since we will demonstrate that \(U_{N}\) and the projections are asymptotically equivalent). Importantly, to apply LLNs and CLTs to the Hajek projection, we will exploit the conditional independence structure of the projection. The conditional independence arguments extend straightforwardly to CID models, making them well suited for the use of U-statistic tools.
### 4.2 Calculating the variance of \(U_{N}\) using a Hoeffding decomposition
To derive the variance, we first use some arguments provided by Serfling (2009), that are also employed by, for instance, Graham (2017). First, we define:
**Definition 2**.: Consider two sets of combinations, say \(\{i,j,k,l\}\) and \(\{m,n,o,p\}\), of four distinct individuals from the set \(\mathcal{N}=\{1,\ldots,N\}\). Then, let \(q\in\{0,1,2,3,4\}\) be the number of common individuals in the two combinations. Then, it follows by symmetry of the kernel function \(s\), and by Assumptions 2.1 and 2.2, that:
\[\Delta_{q}:=\text{Cov}[\tilde{s}_{ijkl},\tilde{s}_{mnop}]=\mathbb{E}[\tilde{s }_{ijkl}\tilde{s}_{mnop}],\]
where \(\tilde{s}_{ijkl}=s_{ijkl}-\theta\).
Notice that independently of which pairs of combinations of quadruples we look at from the sampled individuals, the covariance between the two kernels evaluated at such combinations will only depend on the number of common individuals that the combinations share, namely, \(q\). This follows from the fact that from Assumptions 2.1 and 2.2, \(U_{ij}\), \(A_{i}\) and \(B_{j}\) are i.i.d. and that the kernel (score) \(s\) is symmetric on its arguments. By working out further the expression for the
covariance \(\Delta_{q}\), one can see that the nonzero terms in the expression are mainly driven by the covariance between the idiosyncratic errors. This is due to: (i) \(\{U_{ij}\}_{i=1,j\neq i}^{N}\) being independent of \(\{X_{ij}\}_{i=1,j\neq i}^{N}\), and (ii) the idiosyncratic errors being independent of each other, while, for instance, \(X_{ij}\) is correlated with \(X_{ik}\) due to the common individual factor \(A_{i}\). Therefore, if there is no common dyad in the expressions of the kernels for both combinations, the covariance between them will be zero, since \(U_{ij}\) is i.i.d. (importantly, for instance, \(U_{ij}\) and \(U_{ik}\) are independent). This argument will become clearer in Appendix A.
Due to the dyadic structure and the fact that \(U_{ij}\) is i.i.d., \(\text{Cov}(s_{ijkl},s_{mnop})=0\) whenever the quadruples share zero or only one individual in common. Therefore, \(\Delta_{0}=\Delta_{1}=0\), indicating that \(U_{N}\) exhibits degeneracy of order one. As long as the combinations have two or more indices in common, since the kernel sums over all permutations of the combinations, the same idiosyncratic error (with the same indices \(i\) and \(j\)) appears in both terms \(s_{ijkl}\) and \(s_{mnop}\), leading to a non-zero covariance.
Assuming further that:
**Assumption 4.1**.: _The symmetric kernel \(s_{ijkl}\) satisfies:_
\[\mathbb{E}[s_{ijkl}^{2}]<\infty.\]
Since the covariances \(\Delta_{q}\) are constant across pairs of combinations sharing \(q\) individuals in common, we can obtain the variance of \(U_{N}\) through the Hoeffding decomposition (Hoeffding et al., 1948), as the following Lemma states:
**Lemma 1**.: _The variance of \(U_{N}\) is given by:_
\[\text{Var}(U_{N})=\binom{N}{4}^{-1}\sum_{q=0}^{4}\binom{4}{q}\binom{N-4}{4-q} \Delta_{q}.\]
_And it satisfies, given Assumption 4.1:_
\[\text{Var}(U_{N})<\infty.\]
Proof.: Provided in Appendix B.1.
Given the result provided by Lemma 1, we can rescale the statistic \(U_{N}\) by the factor \(\sqrt{N(N-1)}\). Taking into account that \(\Delta_{0}=\Delta_{1}=0\), and denoting:
\[\bar{s}_{ij}=\mathbb{E}[s_{ijkl}|A_{i},B_{j},U_{ij}]\quad\text{and}\quad\bar{s }_{ji}=\mathbb{E}[s_{ijkl}|A_{j},B_{i},U_{ji}],\]
\[\delta_{2}=\mathbb{E}[\bar{s}_{ij}^{2}]=\mathbb{E}[\bar{s}_{ji}^{2}],\]
we arrive at the following result:
**Theorem 1**.: _Given the result in Lemma 1, and under Assumptions 2.1-2.3 and 4.1:_
\[\text{Var}(\sqrt{N(N-1)}U_{N})=\mathcal{O}(1)+\mathcal{O}\left(\frac{1}{N} \right)+\mathcal{O}\left(\frac{1}{N^{2}}\right).\]
_The term related to \(\Delta_{2}\) asymptotically dominates the expression, such that the variance of the rescaled statistic \(U_{N}\) converges to:_
\[\text{Var}(\sqrt{N(N-1)}U_{N})\xrightarrow{N\to\infty}72\Delta_{2}=144\delta_ {2}.\]
Proof.: Provided in Appendix B.2.
Here, the term of order \(\mathcal{O}(1)\) in the above expression relates to the term \(\Delta_{2}\), the term of order \(\mathcal{O}\left(\frac{1}{N}\right)\) to \(\Delta_{3}\), and the term of order \(\mathcal{O}\left(\frac{1}{N^{2}}\right)\) to \(\Delta_{4}\). Furthermore, given our simplified model, it is possible to further pin down the expression for \(\Delta_{2}\). This result can be found in Appendix C.
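To see these rates concretely, one can evaluate the rescaled \(q=2\) term of Lemma 1 directly:

\[N(N-1)\binom{N}{4}^{-1}\binom{4}{2}\binom{N-4}{2}\Delta_{2}=N(N-1)\cdot\frac{24}{N(N-1)(N-2)(N-3)}\cdot 6\cdot\frac{(N-4)(N-5)}{2}\,\Delta_{2}=\frac{72(N-4)(N-5)}{(N-2)(N-3)}\,\Delta_{2}\xrightarrow{N\to\infty}72\,\Delta_{2},\]

while the analogous \(q=3\) and \(q=4\) terms carry additional factors of order \(1/N\) and \(1/N^{2}\), respectively.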
### 4.3 Deriving the Hajek projection of \(U_{N}\)
As explained by Serfling (2009), the appealing feature of a U-statistic as given by Definition 1 is its simple structure as a sum of identically distributed random variables. But even in the simpler context of a single-index U-statistic, if the kernel \(h\) has dimension \(m>1\), then the summands in the statistic \(U_{n}\) are not all independent, as the same sampled observations enter different combinations. Therefore, it is not possible to directly employ LLNs and CLTs for sums of independent random variables, as is customarily done. However, Serfling (2009) and other textbooks on U-statistics show that it is possible to obtain a projection to which the U-statistic can be approximated. The advantage is that such a projection is a sum of i.i.d. random variables, to which classical limit theory can be applied.
In the following we present the formal definition of this projection, the Hajek projection, and explain how this concept can be applied in our context. We also highlight that there are considerable differences between our approach and the classical textbook projection. Again, the main difference is that, while the standard definitions deal with single-index variables, in our dyadic setting the random variables forming the U-statistic carry double indices. Moreover, the pairwise differences structure in the kernel is formed by random variables reflecting the dyadic interactions generated by four individuals.
Therefore, to obtain the Hajek projection, instead of conditioning on a single-indexed random variable alone as is done in textbooks, we condition on both the individual- and dyad-level random variables given by the dyad indices. We will show that, by doing so, we still obtain a projection whose summands are conditionally independent, even if the sequence \(\{X_{ij}\}_{i=1,j\neq i}^{N}\) is not formed by independent variables, since the idiosyncratic errors \(U_{ij}\) are i.i.d. and independent of the former sequence. This relies on the previously mentioned conditional independence arguments of CID models.
Besides, in the general textbook case of single-index variables, it is stated that the projection serves no purpose when \(\Delta_{1}=0\); however, we will see that in our case, due to the dyadic structure, the projection proves useful even when \(\Delta_{1}=0\) holds.
The most important result in this section is that we can propose two different forms of the Hajek projection, depending on whether we condition on all random variables generated by the combination \(\{i,j\}\) of a dyad, or on the random variables generated by the permutation \((i,j)\), where the ordering of the indices matters.
#### 4.3.1 The textbook definition of a Hajek projection
According to Serfling (2009), and following the same notation and framework as in Definition 1, we have the following definition for a Hajek projection:
**Definition 3**.: Assume \(E_{F}|h|<\infty.\) The projection of the U-statistic \(U_{n}\) is defined as
\[\hat{U}_{n}:=\sum_{i=1}^{n}E_{F}\left\{U_{n}\mid W_{i}\right\}-(n-1)\theta.\]
Notice that, in the context of Serfling (2009), the projection \(\hat{U}_{n}\) is exactly a sum of i.i.d. random variables: taking the expectation of \(U_{n}\) conditional on each \(W_{i}\) leaves us with i.i.d. summands, since the \(W_{i}\) are i.i.d. themselves.
#### 4.3.2 First Hajek projection, \(\hat{U}_{N,1}\)
We denote the first proposed Hajek projection by \(\hat{U}_{N,1}\). In our context, we already derived that \(\theta=\mathbb{E}[s_{ijkl}]=0\) (see Equation (8)); therefore, we only need to derive the first term of the analogous projection proposed in Definition 3. In addition, since we are working with dyads, the sum runs over the expected value of the statistic \(U_{N}\) conditional on each dyad's characteristics; namely, for a given dyad \((i^{\prime},j^{\prime})\) we condition on \(\{A_{i^{\prime}},B_{j^{\prime}},U_{i^{\prime}j^{\prime}}\}\). We therefore sum over all \(N(N-1)\) possible dyads. Notice that the order of the indices in the dyad matters, since we have a directed network.
**Definition 4**.: Given the statistic in Equation (6), we define the first Hajek projection as:
\[\hat{U}_{N,1} =\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}\neq i^{\prime}}\mathbb{E}[U_{N}|A_{i^{\prime}},B_{j^{\prime}},U_{i^{\prime}j^{\prime}}] \tag{10}\] \[=\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}\neq i^{\prime}}\mathbb{E}\left[\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}(\mathcal{N},4)}\frac{1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)}\tilde{X}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}}\tilde{U}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}}\Big{|}A_{i^{\prime}},B_{j^{\prime}},U_{i^{\prime}j^{\prime}}\right]\] \[=\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}\neq i^{\prime}}\mathbb{E}\left[\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}(\mathcal{N},4)}s_{ijkl}\Big{|}A_{i^{\prime}},B_{j^{\prime}},U_{i^{\prime}j^{\prime}}\right]\] \[=\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}\neq i^{\prime}}\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}(\mathcal{N},4)}\mathbb{E}[s_{ijkl}|A_{i^{\prime}},B_{j^{\prime}},U_{i^{\prime}j^{\prime}}].\]
The main idea behind this projection is that the double sum \(\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}\neq i^{\prime}}\) fixes the two indices of a dyad and refers to the individual-level characteristics \(\{A_{i^{\prime}},B_{j^{\prime}}\}\) and the dyad-level characteristic \(U_{i^{\prime}j^{\prime}}\), on which we condition the statistic \(U_{N}\). In this case, the order of the indices \((i^{\prime},j^{\prime})\) matters in determining which random variables we condition on.
For each summand of the double sum \(\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}\neq i^{\prime}}\), we take the expectation of the statistic \(U_{N}\) conditional on the variables described above. The statistic is essentially an average of the scores \(s_{ijkl}\) evaluated at all possible combinations of quadruples \(\{i,j,k,l\}\) from the set \(\mathcal{N}\). Assumption 2.1, and more precisely the fact that \(U_{ij}\) is independent of the sequence \(\{X_{ij}\}_{i=1,j\neq i}^{N}\), implies that the only non-zero summands are those whose combination \(\mathcal{C}\) contains the elements \(i^{\prime}\) and \(j^{\prime}\), together with any two other elements. Since the kernel contains all permutations of the combination, the term \(U_{i^{\prime}j^{\prime}}\) inevitably appears in the expression for the kernel \(s_{ijkl}\) in this case (where \(\{i^{\prime},j^{\prime}\}\subset\{i,j,k,l\}\), with, for instance, \(i=i^{\prime}\) and \(j=j^{\prime}\)), leading to a non-zero conditional expectation.
We can further boil down the expression of the projection \(\hat{U}_{N,1}\) by first noting that, as shown in Appendix C, for a given combination \(\{i^{\prime},j^{\prime},k,l\}\) for any value of \(k\) and \(l\), the conditional expected value of the kernel evaluated at such combination is of the form:
\[\mathbb{E}[s_{i^{\prime}j^{\prime}kl}|A_{i^{\prime}},B_{j^{\prime}},U_{i^{ \prime}j^{\prime}}]=8[(X_{i^{\prime}j^{\prime}}-\mathbb{E}[X_{i^{\prime}j^{ \prime}}|A_{i^{\prime}}]-\mathbb{E}[X_{i^{\prime}j^{\prime}}|B_{j^{\prime}}]+ \mathbb{E}[X_{i^{\prime}j^{\prime}}])U_{i^{\prime}j^{\prime}}].\]
Moreover, there will be \(\binom{N-2}{2}\) possible combinations of four elements of the set \(\mathcal{N}\) containing the individuals \(i^{\prime}\) and \(j^{\prime}\). Then, we can rewrite the projection as:
\[\hat{U}_{N,1} =\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}\neq i^{\prime}}\frac{1}{ \binom{N}{4}}\frac{1}{4!}\binom{N-2}{2}8[(X_{i^{\prime}j^{\prime}}-\mathbb{E} [X_{i^{\prime}j^{\prime}}|A_{i^{\prime}}]-\mathbb{E}[X_{i^{\prime}j^{\prime}}| B_{j^{\prime}}]+\mathbb{E}[X_{i^{\prime}j^{\prime}}])U_{i^{\prime}j^{\prime}}] \tag{11}\] \[=\frac{4}{N(N-1)}\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}\neq i^{ \prime}}[(X_{i^{\prime}j^{\prime}}-\mathbb{E}[X_{i^{\prime}j^{\prime}}|A_{i^{ \prime}}]-\mathbb{E}[X_{i^{\prime}j^{\prime}}|B_{j^{\prime}}]+\mathbb{E}[X_{i^ {\prime}j^{\prime}}])U_{i^{\prime}j^{\prime}}].\]
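The simplification in the last line follows from a one-line computation:

\[\frac{1}{\binom{N}{4}}\,\frac{1}{4!}\,\binom{N-2}{2}\cdot 8=\frac{24}{N(N-1)(N-2)(N-3)}\cdot\frac{1}{24}\cdot\frac{(N-2)(N-3)}{2}\cdot 8=\frac{4}{N(N-1)}.\]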
In order to show in the following sections that the statistic \(U_{N}\) and the projection \(\hat{U}_{N,1}\) are asymptotically equivalent, we first need to derive the variance of the projection.
**Lemma 2**.: _Under Assumptions 2.1 and 2.2, the variance of the first Hajek projection given by Definition 4 is:_
\[\text{Var}(\hat{U}_{N,1})=\frac{144}{N(N-1)}\delta_{2}.\]
_Therefore, by rescaling the projection by the factor \(\sqrt{N(N-1)}\), we have:_
\[\text{Var}(\sqrt{N(N-1)}\hat{U}_{N,1})=144\delta_{2}.\]
Proof.: Proof provided in Appendix B.3.
#### 4.3.3 Second Hajek projection, \(\hat{U}_{N,2}\)
Before defining the second possibility for the Hajek projection, notice first that conditioning on the individual- and dyad-level random variables related to a directed dyad \((i^{\prime},j^{\prime})\) is different from conditioning on all individual- and dyad-level random variables related to a combination \(\{i^{\prime},j^{\prime}\}\). More specifically, the first comprises the elements \(A_{i^{\prime}}\), \(B_{j^{\prime}}\) and \(U_{i^{\prime}j^{\prime}}\), while the second comprises \(A_{i^{\prime}}\), \(A_{j^{\prime}}\), \(B_{i^{\prime}}\), \(B_{j^{\prime}}\), \(U_{i^{\prime}j^{\prime}}\) and \(U_{j^{\prime}i^{\prime}}\).
Therefore, in this second proposed projection, instead of summing over all possible directed dyads, we sum over all possible combinations of indices \(i^{\prime}\) and \(j^{\prime}\), which amounts to \(\frac{N(N-1)}{2}\) combinations. We then condition on all characteristics of both indices:
**Definition 5**.: Given the statistic in Equation (6), we define the second Hajek projection as:
\[\hat{U}_{N,2} :=\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}>i^{\prime}}\mathbb{E} \left[U_{N}|A_{i^{\prime}},B_{j^{\prime}},U_{i^{\prime}j^{\prime}},A_{j^{\prime }},B_{i^{\prime}},U_{j^{\prime}i^{\prime}}\right] \tag{12}\] \[=\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}>i^{\prime}}\mathbb{E} \left[\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}(\mathcal{N},4)} \frac{1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)}\tilde{X}_{\pi_{1}\pi_{2}\pi _{3}\pi_{4}}\tilde{U}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}}|A_{i^{\prime}},B_{j^{ \prime}},U_{i^{\prime}j^{\prime}},A_{j^{\prime}},B_{i^{\prime}},U_{j^{\prime} i^{\prime}}\right]\] \[=\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}>i^{\prime}}\mathbb{E} \left[\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}(\mathcal{N},4)}s _{ijkl}|A_{i^{\prime}},B_{j^{\prime}},U_{i^{\prime}j^{\prime}},A_{j^{\prime}},B _{i^{\prime}},U_{j^{\prime}i^{\prime}}\right]\] \[=\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}>i^{\prime}}\frac{1}{ \binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}(\mathcal{N},4)}\mathbb{E}[s_{ijkl }|A_{i^{\prime}},B_{j^{\prime}},U_{i^{\prime}j^{\prime}},A_{j^{\prime}},B_{i^ {\prime}},U_{j^{\prime}i^{\prime}}].\]
Again, the double sum \(\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}>i^{\prime}}\) fixes two indices \(i^{\prime}\) and \(j^{\prime}\) of the possible tetrads, and it runs over the conditioning terms. Then, we take the expectation of the statistic \(U_{N}\) conditional on the terms described above. The structure of the second Hajek projection is essentially the same as that of the first, apart from the terms we condition on. Therefore, again, of all the combinations of quadruples over which we average the conditional expectation of the score function (kernel), only \(\binom{N-2}{2}\) combinations lead to non-zero expected values, namely those containing the elements \(i^{\prime}\) and \(j^{\prime}\).
The difference with respect to the previous projection, induced by the extra conditioning terms, boils down to which terms in the score function \(s_{ijkl}\) (where \(\{i^{\prime},j^{\prime}\}\subset\{i,j,k,l\}\), with, for instance, \(i=i^{\prime}\) and \(j=j^{\prime}\)) are non-zero: now the permutations containing either \(U_{i^{\prime}j^{\prime}}\) or \(U_{j^{\prime}i^{\prime}}\) yield non-zero conditional expectations.
Once again, we can further boil down the expression of the projection \(\hat{U}_{N,2}\) by first noting that, as shown in Appendix C, for a given combination \(\{i^{\prime},j^{\prime},k,l\}\) for any value of \(k\) and \(l\), the conditional expected value of the kernel evaluated at such combination is of the form:
\[\mathbb{E}[s_{i^{\prime}j^{\prime}kl}|A_{i^{\prime}},B_{j^{ \prime}},U_{i^{\prime}j^{\prime}},A_{j^{\prime}},B_{i^{\prime}},U_{j^{\prime}i ^{\prime}}]\] \[=8[(X_{i^{\prime}j^{\prime}}-\mathbb{E}[X_{i^{\prime}j^{\prime} }|A_{i^{\prime}}]-\mathbb{E}[X_{i^{\prime}j^{\prime}}|B_{j^{\prime}}]+\mathbb{ E}[X_{i^{\prime}j^{\prime}}])U_{i^{\prime}j^{\prime}}]+8[(X_{j^{\prime}i^{ \prime}}-\mathbb{E}[X_{j^{\prime}i^{\prime}}|A_{j^{\prime}}]-\mathbb{E}[X_{j^ {\prime}i^{\prime}}|B_{i^{\prime}}]+\mathbb{E}[X_{j^{\prime}i^{\prime}}])U_{j^ {\prime}i^{\prime}}].\]
Such that we can further simplify the expression for the projection:
\[\hat{U}_{N,2} =\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}>i^{\prime}}\frac{1}{{N\choose 4}}\frac{1}{4!}{N-2\choose 2}\Big{[}8[(X_{i^{\prime}j^{\prime}}-\mathbb{E}[X_{i^{\prime}j^{\prime}}|A_{i^{\prime}}]-\mathbb{E}[X_{i^{\prime}j^{\prime}}|B_{j^{\prime}}]+\mathbb{E}[X_{i^{\prime}j^{\prime}}])U_{i^{\prime}j^{\prime}}]\] \[+8[(X_{j^{\prime}i^{\prime}}-\mathbb{E}[X_{j^{\prime}i^{\prime}}|A_{j^{\prime}}]-\mathbb{E}[X_{j^{\prime}i^{\prime}}|B_{i^{\prime}}]+\mathbb{E}[X_{j^{\prime}i^{\prime}}])U_{j^{\prime}i^{\prime}}]\Big{]} \tag{13}\] \[=\frac{4}{N(N-1)}\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}>i^{\prime}}\Big{[}(X_{i^{\prime}j^{\prime}}-\mathbb{E}[X_{i^{\prime}j^{\prime}}|A_{i^{\prime}}]-\mathbb{E}[X_{i^{\prime}j^{\prime}}|B_{j^{\prime}}]+\mathbb{E}[X_{i^{\prime}j^{\prime}}])U_{i^{\prime}j^{\prime}}\] \[+(X_{j^{\prime}i^{\prime}}-\mathbb{E}[X_{j^{\prime}i^{\prime}}|A_{j^{\prime}}]-\mathbb{E}[X_{j^{\prime}i^{\prime}}|B_{i^{\prime}}]+\mathbb{E}[X_{j^{\prime}i^{\prime}}])U_{j^{\prime}i^{\prime}}\Big{]}.\]
Again, we proceed by deriving the variance of this second projection, which should be equivalent to the variance of the first proposed projection.
**Lemma 3**.: _Under Assumptions 2.1 and 2.2, the variance of the second Hajek projection given by Definition 5 is:_
\[\text{Var}(\hat{U}_{N,2})=\frac{72}{N(N-1)}\Delta_{2}.\]
_Therefore, by rescaling the projection by the factor \(\sqrt{N(N-1)}\), we have:_
\[\text{Var}(\sqrt{N(N-1)}\hat{U}_{N,2})=72\Delta_{2}.\]
Proof.: Provided in Appendix B.4.
### 4.4 Showing the asymptotic equivalence of \(U_{N}\) and \(\hat{U}_{N,1}\) or \(\hat{U}_{N,2}\)
The main idea of defining a Hajek projection is to obtain a statistic that is asymptotically close enough to the U-statistic and to which central limit theorems and laws of large numbers can be applied.
Serfling (2009) provides readily applicable results for the asymptotic equivalence of the U-statistic given by Definition 1 and the Hajek projection given by Definition 3 and, consequently, for the asymptotic properties of the U-statistic, since in this case the projection is an average of i.i.d. random variables. However, in our case, as the statistic \(U_{N}\) is not formally a U-statistic, such results cannot be immediately used.
To derive the asymptotic equivalence between \(U_{N}\) and the two proposed projections, \(\hat{U}_{N,1}\) and \(\hat{U}_{N,2}\), we follow closely the arguments in Graham (2017).
**Remark 1**.: According to Graham (2017), the asymptotic equivalence1 of \(\sqrt{N(N-1)}U_{N}\) and of \(\sqrt{N(N-1)}\hat{U}_{N}\) follows if:
Footnote 1: This result is also provided in Serfling (2009).
\[N(N-1)\mathbb{E}[(\hat{U}_{N}-U_{N})^{2}]\quad\text{is}\quad o(1)\]
Following up on this Remark, we have the following result:
**Theorem 2**.: _Given the definitions of the statistic \(U_{N}\) given by Equation (6) and the proposed Hajek projections \(\hat{U}_{N,1}\), in Definition 4, and \(\hat{U}_{N,2}\), in Definition 5, \(U_{N}\) is asymptotically equivalent to \(\hat{U}_{N,1}\) and \(\hat{U}_{N,2}\) under Assumptions 2.1-2.3 and 4.1._
Proof.: Provided in Appendix B.5.
Hence, even though the statistic \(U_{N}\) is not properly defined as a U-statistic, we still have the result that, under the assumptions needed for the results above, its limit distribution coincides with that of the proposed Hajek projections. This property is key to establishing the asymptotic properties of the pairwise differences estimator in the next section.
## 5 Asymptotic properties of the Pairwise Differences estimator and Estimation
### 5.1 Asymptotic properties of the Pairwise Differences estimator
Consider the rewritten estimator defined before:
\[\hat{\beta}_{1,PD}=\beta_{1}+\left[\frac{1}{\binom{N}{4}}\sum_{ \mathcal{C}\in\mathcal{C}(\mathcal{N},4)}\frac{1}{4!}\sum_{\pi\in\mathcal{P}( \mathcal{C},4)}\tilde{X}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}}\tilde{X}^{\prime}_{\pi _{1}\pi_{2}\pi_{3}\pi_{4}}\right]^{-1}\left[\frac{1}{\binom{N}{4}}\sum_{ \mathcal{C}\in\mathcal{C}(\mathcal{N},4)}\frac{1}{4!}\sum_{\pi\in\mathcal{P}( \mathcal{C},4)}\tilde{X}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}}\tilde{U}_{\pi_{1}\pi_{2 }\pi_{3}\pi_{4}}\right]. \tag{14}\]
We note that to obtain the asymptotic properties, it is key to obtain the convergence of the Hessian:
\[\left[\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}(\mathcal{N},4)}\frac{ 1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)}\tilde{X}_{\pi_{1}\pi_{2}\pi_{3}\pi _{4}}\tilde{X}^{\prime}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}}\right]. \tag{15}\]
While in most studies on dyadic regressions and the U-statistics tools associated with them the convergence of this Hessian is assumed (for instance, in Graham (2019)), we instead prove this convergence result. Observe that, even though this term also resembles a U-statistic, or at least a term to which U-statistics tools can be applied, this is not the case. This follows from the absence in this statistic of a term such as \(U_{ij}\) that is i.i.d. at the dyad level, which would guarantee the conditional independence of the summands. Therefore, the tools applied to the statistic \(U_{N}\) cannot be carried over here. Instead, our approach relies on deriving the variance of this term and proving convergence in probability through Chebyshev's inequality.
**Proposition 1**.: _Under the assumption that:_
\[\mathbb{E}[|X_{ij}X_{i^{\prime}j^{\prime}}|]<\infty\quad\forall\quad i,i^{ \prime},j,j^{\prime}\]
_It follows that:_
\[\left[\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}(\mathcal{N},4)} \frac{1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)}\tilde{X}_{\pi_{1}\pi_{2} \pi_{3}\pi_{4}}\tilde{X}^{\prime}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}}\right] \xrightarrow{p}\Gamma:=\mathbb{E}[\tilde{X}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}} \tilde{X}^{\prime}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}}],\]
_where \(\Gamma\) is finite and invertible._
Proof.: Provided in Appendix B.6.
Given the result of the Proposition above, and rescaling the expression of the rewritten estimator by \(\sqrt{N(N-1)}\), we can write:
\[\sqrt{N(N-1)}(\hat{\beta}_{1,PD}-\beta_{1})=\Gamma^{-1}\sqrt{N(N-1)}U_{N}+o_{ p}(1), \tag{16}\]
which follows by the continuous mapping theorem. Therefore, the asymptotic sampling properties of \(\sqrt{N(N-1)}(\hat{\beta}_{1,PD}-\beta_{1})\) will be driven by the behaviour of \(\sqrt{N(N-1)}U_{N}\).
From Theorem 2, we have that the statistic \(U_{N}\) is asymptotically equivalent to the projections \(\hat{U}_{N,1}\) and \(\hat{U}_{N,2}\). Therefore, their asymptotic properties carry over to \(U_{N}\). Notice that, from Equation (B.12) and its analogue for the second proposed projection, the summands of the projections are uncorrelated, but not necessarily independently distributed. The dependence structure remains since the same individual characteristics \(A_{i}\) and \(B_{j}\) for a given \(i\) and \(j\) are still present in different summands; for instance, we can have a term such as \(\mathbb{E}[X_{ij^{\prime}}|B_{j^{\prime}}]\) in one summand and \(\mathbb{E}[X_{ik^{\prime}}|B_{k^{\prime}}]\) in another.
However, as pointed out by Graham (2017) and Jochmans (2018) in their contexts, by law of iterated expectations, we can rewrite:
\[\hat{U}_{N,1} =\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}\neq i^{\prime}}\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}(\mathcal{N},4)}\mathbb{E}[s_{ijkl}|A_{i^{\prime}},B_{j^{\prime}},U_{i^{\prime}j^{\prime}}]\] \[=\sum_{i^{\prime}=1}^{N}\sum_{j^{\prime}\neq i^{\prime}}\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}(\mathcal{N},4)}\mathbb{E}\left[\mathbb{E}[s_{ijkl}|A_{i^{\prime}},B_{j^{\prime}},U_{i^{\prime}j^{\prime}}]\,\Big{|}\,\mathbf{A},\mathbf{B}\right]. \tag{17}\]
Thus, the summands of the projections are conditionally independent when conditioning on all individual characteristics \(\{A_{i}\}_{i=1}^{N}\) and \(\{B_{j}\}_{j=1}^{N}\), which is a characteristic of CID models that carries over to this context. Given this conditional independence of the random variables, we can assert:
**Lemma 4**.: _From a conditional version of the strong law of large numbers and a conditional version of Lyapunov's central limit theorem, given by Rao (2009), it follows that:_
_(i)_ \(\hat{U}_{N,1}\xrightarrow{p}0\) _and_ \(\hat{U}_{N,2}\xrightarrow{p}0\)__
_(ii)_ \(\sqrt{N(N-1)}\hat{U}_{N,1}\xrightarrow{d}N(0,144\delta_{2})\) _and_ \(\sqrt{N(N-1)}\hat{U}_{N,2}\xrightarrow{d}N(0,72\Delta_{2}).\)__
_This follows since the expectations of the Hajek projections are zero and their variances are given by Lemma 2 and Lemma 3._
_Moreover, since \(U_{N}\) and the projections are asymptotically equivalent, that is, \(||\hat{U}_{N,1}-U_{N}||\xrightarrow{p}0\), and \(||\hat{U}_{N,2}-U_{N}||\xrightarrow{p}0\), we also have that:_
_(i) \(U_{N}\xrightarrow{p}0\)_
_(ii) \(\sqrt{N(N-1)}U_{N}\xrightarrow{d}N(0,\sigma_{U}^{2})\), where \(\sigma_{U}^{2}=144\delta_{2}=72\Delta_{2}\)._
_Proof._ Available in a future version of this paper.
Following Proposition 1 and Lemma 4, we can first establish the consistency of the estimator, which is provided in the following theorem.
**Theorem 3**.: _Given the results of Proposition 1 and Lemma 4, and its associated assumptions, we have that \(\hat{\beta}_{1,PD}\) is a **consistent estimator** of \(\beta_{1}\):_
\[\hat{\beta}_{1,PD}\xrightarrow{p}\beta_{1}.\]
_Proof._ Provided in Appendix B.7.
From the same Proposition and Lemma, the asymptotic normality and the associated asymptotic variance of the estimator can be established. Also note that the estimator is asymptotically unbiased according to the following theorem.
**Theorem 4**.: _Using the representation in Equation (16):_
\[\sqrt{N(N-1)}(\hat{\beta}_{1,PD}-\beta_{1})=\Gamma^{-1}\sqrt{N(N-1)}U_{N}+o_{ p}(1).\]
_Then, under the results of Lemma 4, it follows by Slutsky's theorem that:_
\[\sqrt{N(N-1)}(\hat{\beta}_{1,PD}-\beta_{1})\xrightarrow{d}N(0,\Gamma^{-1}144\delta_{2}\Gamma^{-1})=N(0,\Gamma^{-1}72\Delta_{2}\Gamma^{-1}).\]
_Therefore, the estimator \(\hat{\beta}_{1,PD}\) is **normally distributed** and **asymptotically unbiased**._
### 5.2 An estimator for the asymptotic variance of \(\hat{\beta}_{1,PD}\)
From Theorem 4, it trivially follows that:
\[\hat{\beta}_{1,PD}\stackrel{{ a}}{{\sim}}N\left(\beta_{1},\frac{1}{N (N-1)}\Gamma^{-1}144\delta_{2}\Gamma^{-1}\right)\]
or, equivalently,

\[\hat{\beta}_{1,PD}\stackrel{{ a}}{{\sim}}N\left(\beta_{1},\frac{1}{N(N-1)}\Gamma^{-1}72\Delta_{2}\Gamma^{-1}\right).\]
Therefore, the asymptotic variance of the estimator \(\hat{\beta}_{1,PD}\) can be estimated as:
\[\widehat{\text{AVar}}(\hat{\beta}_{1,PD})=\frac{1}{N(N-1)}\hat{\Gamma}^{-1}144\hat{\delta}_{2}\hat{\Gamma}^{-1}\]
or, equivalently, as:

\[\widehat{\text{AVar}}(\hat{\beta}_{1,PD})=\frac{1}{N(N-1)}\hat{\Gamma}^{-1}72\hat{\Delta}_{2}\hat{\Gamma}^{-1},\]
where:
\[\hat{\Gamma}=\frac{1}{\binom{N}{4}}\sum_{\mathcal{C}\in\mathcal{C}(\mathcal{N},4)}\frac{1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)}\tilde{X}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}}\tilde{X}^{\prime}_{\pi_{1}\pi_{2}\pi_{3}\pi_{4}},\]
such that the asymptotic variance can be estimated using either a consistent estimator of \(\delta_{2}\) or a consistent estimator of \(\Delta_{2}\). In the following subsections we propose consistent estimators for both.
#### 5.2.1 A consistent estimator of \(\delta_{2}\)
As mentioned before, the definition of \(\delta_{2}\) is:
\[\delta_{2}=\mathbb{E}[\bar{s}_{ij}\bar{s}^{\prime}_{ij}],\]
where:
\[\bar{s}_{ij}=\mathbb{E}[s_{ijkl}|A_{i},B_{j},U_{ij}]\]
Importantly, the elements on which we condition, namely \(A_{i},B_{j},U_{ij}\), have indices \(i\) and \(j\) that are necessarily in the combinations \(\{i,j,k,l\}\) for any other elements \(k\) and \(l\). This reflects the fact that the term \(\delta_{2}\) originates from the expression for the variance of the statistic \(U_{N}\), considering the components of that variance that have two elements in common. To obtain a consistent estimator \(\hat{\delta}_{2}\), we also need a consistent estimator \(\hat{\bar{s}}_{ij}\).
Graham (2017) suggests that, for an undirected network, the consistent estimators are:
\[\hat{\Delta}_{2,G}=\frac{1}{n}\sum_{i<j}\hat{\bar{s}}_{ij}\hat{\bar{s}}_{ij}^{\prime},\]
\[\hat{\bar{s}}_{ij,G}=\frac{1}{n-2(N-1)+1}\sum_{k<l,\{i,j\}\cap\{k,l\}=\emptyset}s_{ijkl},\]
where \(n=\frac{N(N-1)}{2}\) is the number of undirected dyads; the expression for \(\hat{\Delta}_{2,G}\) therefore averages over all undirected dyads. The sum \(\sum_{k<l,\{i,j\}\cap\{k,l\}=\emptyset}\) simply means that, given two fixed indices \(i\) and \(j\) for the first dyad, we sum over all remaining distinct indices \(k\) and \(l\) with \(k<l\); in the context of Graham (2017) the network is undirected, so only the combinations \(\{k,l\}\) matter, not their permutations. Moreover, notice that \(n-2(N-1)+1\) coincides with the \(\binom{N-2}{2}\) tetrads that contain a fixed \(i\) and \(j\). Therefore, the expression for \(\hat{\bar{s}}_{ij,G}\) averages over the kernels of all combinations that contain \(i\) and \(j\).
As in our case we are looking at a directed network, some adjustments are necessary. In particular, notice that for a directed network we do not necessarily have \(\bar{s}_{ij}=\bar{s}_{ji}\), since:
\[\mathbb{E}[s_{ijkl}|A_{i},B_{j},U_{ij}]\neq\mathbb{E}[s_{ijkl}|A_{j},B_{i},U_{ ji}]\]
That means that in the expression for the consistent estimator of \(\delta_{2}\) we should average over all possible directed dyads:
\[\hat{\delta}_{2}=\frac{1}{N(N-1)}\sum_{i=1}^{N}\sum_{j\neq i}\hat{\bar{s}}_{ij}\hat{\bar{s}}_{ij}^{\prime}. \tag{18}\]
One possibility is to work out further the expression for \(\bar{s}_{ji}\), so that, when estimated, it does not simply boil down to the average over the kernels.
To be more precise, when taking the expectation of the kernel conditional on the characteristics of a single dyad \((i,j)\), only some of its permutations (namely, those containing the idiosyncratic error term \(U_{ij}\)) have a conditional expectation different from zero:
\[\bar{s}_{ij} =\mathbb{E}[s_{ijkl}|A_{i},B_{j},U_{ij}] \tag{19}\] \[=\mathbb{E}[\frac{1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)}(( X_{\pi_{1}\pi_{2}}-X_{\pi_{1}\pi_{3}})-(X_{\pi_{4}\pi_{2}}-X_{\pi_{4}\pi_{3}}))((U_{ \pi_{1}\pi_{2}}-U_{\pi_{1}\pi_{3}})-(U_{\pi_{4}\pi_{2}}-U_{\pi_{4}\pi_{3}}))|A _{i},B_{j},U_{ij}]\] \[=\frac{1}{4!}\Big{(}\mathbb{E}[((X_{ij}-X_{ik})-(X_{lj}-X_{lk})) ((U_{ij}-U_{ik})-(U_{lj}-U_{lk}))|A_{i},B_{j},U_{ij}]\] \[+\mathbb{E}[((X_{ik}-X_{ij})-(X_{lk}-X_{lj}))((U_{ik}-U_{ij})-(U_ {lk}-U_{lj}))|A_{i},B_{j},U_{ij}]\] \[+\mathbb{E}[((X_{kj}-X_{kl})-(X_{ij}-X_{il}))((U_{kj}-U_{kl})-(U_ {ij}-U_{il}))|A_{i},B_{j},U_{ij}]\] \[+\mathbb{E}[((X_{lk}-X_{lj})-(X_{ik}-X_{ij}))((U_{lk}-U_{lj})-(U_ {ik}-U_{ij}))|A_{i},B_{j},U_{ij}]\] \[+\mathbb{E}[((X_{kl}-X_{kj})-(X_{il}-X_{ij}))((U_{kl}-U_{kj})-(U_ {il}-U_{ij}))|A_{i},B_{j},U_{ij}]\] \[+\mathbb{E}[((X_{lj}-X_{lk})-(X_{ij}-X_{ik}))((U_{lj}-U_{lk})-(U_ {ij}-U_{ik}))|A_{i},B_{j},U_{ij}]\] \[+\mathbb{E}[((X_{ij}-X_{il})-(X_{kj}-X_{kl}))((U_{ij}-U_{il})-(U_ {kj}-U_{kl}))|A_{i},B_{j},U_{ij}]\] \[+\mathbb{E}[((X_{il}-X_{ij})-(X_{kl}-X_{kj}))((U_{il}-U_{ij})-(U_ {kl}-U_{kj}))|A_{i},B_{j},U_{ij}]\Big{)},\]
which is different than:
\[\bar{s}_{ji} =\mathbb{E}[s_{ijkl}|A_{j},B_{i},U_{ji}] \tag{20}\] \[=\mathbb{E}[\frac{1}{4!}\sum_{\pi\in\mathcal{P}(\mathcal{C},4)}((X_{\pi_{1}\pi_{2}}-X_{\pi_{1}\pi_{3}})-(X_{\pi_{4}\pi_{2}}-X_{\pi_{4}\pi_{3}}))((U_{\pi_{1}\pi_{2}}-U_{\pi_{1}\pi_{3}})-(U_{\pi_{4}\pi_{2}}-U_{\pi_{4}\pi_{3}}))|A_{j},B_{i},U_{ji}]\] \[=\frac{1}{4!}\Big{(}\mathbb{E}[((X_{ji}-X_{jk})-(X_{li}-X_{lk}))((U_{ji}-U_{jk})-(U_{li}-U_{lk}))|A_{j},B_{i},U_{ji}]\] \[+\mathbb{E}[((X_{jk}-X_{ji})-(X_{lk}-X_{li}))((U_{jk}-U_{ji})-(U_{lk}-U_{li}))|A_{j},B_{i},U_{ji}]\] \[+\mathbb{E}[((X_{ki}-X_{kl})-(X_{ji}-X_{jl}))((U_{ki}-U_{kl})-(U_{ji}-U_{jl}))|A_{j},B_{i},U_{ji}]\] \[+\mathbb{E}[((X_{lk}-X_{li})-(X_{jk}-X_{ji}))((U_{lk}-U_{li})-(U_{jk}-U_{ji}))|A_{j},B_{i},U_{ji}]\] \[+\mathbb{E}[((X_{kl}-X_{ki})-(X_{jl}-X_{ji}))((U_{kl}-U_{ki})-(U_{jl}-U_{ji}))|A_{j},B_{i},U_{ji}]\] \[+\mathbb{E}[((X_{li}-X_{lk})-(X_{ji}-X_{jk}))((U_{li}-U_{lk})-(U_{ji}-U_{jk}))|A_{j},B_{i},U_{ji}]\] \[+\mathbb{E}[((X_{ji}-X_{jl})-(X_{ki}-X_{kl}))((U_{ji}-U_{jl})-(U_{ki}-U_{kl}))|A_{j},B_{i},U_{ji}]\] \[+\mathbb{E}[((X_{jl}-X_{ji})-(X_{kl}-X_{ki}))((U_{jl}-U_{ji})-(U_{kl}-U_{ki}))|A_{j},B_{i},U_{ji}]\Big{)}.\]
Therefore, we take the sample analogues of those expressions, applied to the combinations \(\{i,j,k,l\}\) that contain the fixed elements \(i,j\) and any other elements \(k,l\):
\[\hat{\bar{s}}_{ij} =\frac{1}{n-2(N-1)+1}\sum_{k<l,\{i,j\}\cap\{k,l\}=\emptyset}\frac{1}{4!}\Big{(}((X_{ij}-X_{ik})-(X_{lj}-X_{lk}))\hat{\tilde{U}}_{ijkl}\] \[+((X_{ik}-X_{ij})-(X_{lk}-X_{lj}))\hat{\tilde{U}}_{ikjl}+((X_{kj}-X_{kl})-(X_{ij}-X_{il}))\hat{\tilde{U}}_{kjli}\] \[+((X_{lk}-X_{lj})-(X_{ik}-X_{ij}))\hat{\tilde{U}}_{lkji}+((X_{kl}-X_{kj})-(X_{il}-X_{ij}))\hat{\tilde{U}}_{klji}\] \[+((X_{lj}-X_{lk})-(X_{ij}-X_{ik}))\hat{\tilde{U}}_{ljki}+((X_{ij}-X_{il})-(X_{kj}-X_{kl}))\hat{\tilde{U}}_{ijlk}\] \[+((X_{il}-X_{ij})-(X_{kl}-X_{kj}))\hat{\tilde{U}}_{iljk}\Big{)},\]
\[\hat{\bar{s}}_{ji} =\frac{1}{n-2(N-1)+1}\sum_{k<l,\{i,j\}\cap\{k,l\}=\emptyset}\frac{1}{4!}\Big{(}((X_{ji}-X_{jk})-(X_{li}-X_{lk}))\hat{\tilde{U}}_{jikl}\] \[+((X_{jk}-X_{ji})-(X_{lk}-X_{li}))\hat{\tilde{U}}_{jkil}+((X_{ki}-X_{kl})-(X_{ji}-X_{jl}))\hat{\tilde{U}}_{kilj}\] \[+((X_{lk}-X_{li})-(X_{jk}-X_{ji}))\hat{\tilde{U}}_{lkij}+((X_{kl}-X_{ki})-(X_{jl}-X_{ji}))\hat{\tilde{U}}_{klij}\] \[+((X_{li}-X_{lk})-(X_{ji}-X_{jk}))\hat{\tilde{U}}_{likj}+((X_{ji}-X_{jl})-(X_{ki}-X_{kl}))\hat{\tilde{U}}_{jilk}\] \[+((X_{jl}-X_{ji})-(X_{kl}-X_{ki}))\hat{\tilde{U}}_{jlik}\Big{)}.\]
In the expressions above we plugged in the estimates of the idiosyncratic error terms, obtained from the estimated coefficient \(\hat{\beta}_{1,PD}\), such that:
\[\hat{\tilde{U}}_{ijkl}=\tilde{Y}_{ijkl}-\hat{\beta}_{1,PD}\tilde{X}_{ijkl},\]
for any indices \(i,j,k,l\).
Then, for these proposed consistent estimators we would have that \(\hat{\bar{s}}_{ij}\neq\hat{\bar{s}}_{ji}\).
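A brute-force implementation of \(\hat{\bar{s}}_{ij}\) and \(\hat{\delta}_{2}\) is sketched below (helper names are ours; `beta_hat` stands for \(\hat{\beta}_{1,PD}\)). Instead of listing the eight terms explicitly, it selects, within each tetrad, the permutations whose differenced error contains the directed dyad \((i,j)\), which yields exactly those eight terms.

```python
import itertools
import numpy as np

def tetrad_diff(M, i, j, k, l):
    """Tetrad difference (M_ij - M_ik) - (M_lj - M_lk)."""
    return (M[i, j] - M[i, k]) - (M[l, j] - M[l, k])

def s_bar_hat(Y, X, beta_hat, i, j):
    """Sample analogue of E[s_ijkl | A_i, B_j, U_ij] for the directed dyad (i, j)."""
    N = Y.shape[0]
    others = [m for m in range(N) if m not in (i, j)]
    acc = 0.0
    for k, l in itertools.combinations(others, 2):
        for p in itertools.permutations((i, j, k, l)):
            # the differenced error of permutation p involves the dyads
            # (p0,p1), (p0,p2), (p3,p1), (p3,p2); keep p iff (i,j) is among them
            if (i, j) in ((p[0], p[1]), (p[0], p[2]), (p[3], p[1]), (p[3], p[2])):
                u_hat = tetrad_diff(Y, *p) - beta_hat * tetrad_diff(X, *p)
                acc += tetrad_diff(X, *p) * u_hat / 24.0  # the 1/4! factor
    return acc / (((N - 2) * (N - 3)) // 2)  # average over the C(N-2,2) tetrads

def delta2_hat(Y, X, beta_hat):
    """Plug-in estimate of delta_2, averaged over all N(N-1) directed dyads."""
    N = Y.shape[0]
    return float(np.mean([s_bar_hat(Y, X, beta_hat, i, j) ** 2
                          for i in range(N) for j in range(N) if i != j]))
```

The estimator \(\hat{\Delta}_{2}\) of the next subsection follows the same pattern, keeping instead the sixteen permutations whose differenced error contains either \(U_{ij}\) or \(U_{ji}\) and averaging the squared \(\hat{\bar{s}}_{ij,2}\) over the \(N(N-1)/2\) unordered pairs.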
#### 5.2.2 A consistent estimator of \(\Delta_{2}\)
In this case, we have that the previous definition of \(\Delta_{2}\) is:
\[\Delta_{2}=\text{Cov}(s_{ijkl},s_{ijmp}) =\mathbb{E}[s_{ijkl}s^{\prime}_{ijmp}]-\mathbb{E}[s_{ijkl}]\mathbb{ E}[s_{ijmp}]^{\prime} \tag{21}\] \[=\mathbb{E}[\mathbb{E}[s_{ijkl}s^{\prime}_{ijmp}|A_{i},A_{j},B_{ i},B_{j},U_{ij},U_{ji}]]\] \[=\mathbb{E}[\bar{s}_{ij,2}\bar{s}^{\prime}_{ij,2}]\] \[=\mathbb{E}[\bar{s}^{2}_{ij,2}].\]
As the Hajek projection in this case was obtained by summing all combinations (and not permutations) of indices \(i\) and \(j\), we have that the consistent estimator of \(\Delta_{2}\) should average over all these possible combinations:
\[\hat{\Delta}_{2}=\frac{2}{N(N-1)}\sum_{i=1}^{N}\sum_{j>i}\hat{\bar{s}}^{2}_{ij,2}. \tag{22}\]
Moreover, recalling that \(\bar{s}_{ij,2}\) is the conditional expectation of the kernel given all characteristics of \(i\) and \(j\), its estimator \(\hat{\bar{s}}_{ij,2}\) is given by:
\[\hat{\bar{s}}_{ij,2} =\frac{1}{n-2(N-1)+1}\sum_{k<l,\{i,j\}\cap\{k,l\}=\emptyset}\frac{1}{4!}\Big{(}((X_{ij}-X_{ik})-(X_{lj}-X_{lk}))\hat{\tilde{U}}_{ijkl}\] \[+((X_{ik}-X_{ij})-(X_{lk}-X_{lj}))\hat{\tilde{U}}_{ikjl}+((X_{kj}-X_{kl})-(X_{ij}-X_{il}))\hat{\tilde{U}}_{kjli}\] \[+((X_{lk}-X_{lj})-(X_{ik}-X_{ij}))\hat{\tilde{U}}_{lkji}+((X_{kl}-X_{kj})-(X_{il}-X_{ij}))\hat{\tilde{U}}_{klji}\] \[+((X_{lj}-X_{lk})-(X_{ij}-X_{ik}))\hat{\tilde{U}}_{ljki}+((X_{ij}-X_{il})-(X_{kj}-X_{kl}))\hat{\tilde{U}}_{ijlk}\] \[+((X_{il}-X_{ij})-(X_{kl}-X_{kj}))\hat{\tilde{U}}_{iljk}+((X_{ji}-X_{jk})-(X_{li}-X_{lk}))\hat{\tilde{U}}_{jikl}\] \[+((X_{jk}-X_{ji})-(X_{lk}-X_{li}))\hat{\tilde{U}}_{jkil}+((X_{ki}-X_{kl})-(X_{ji}-X_{jl}))\hat{\tilde{U}}_{kilj}\] \[+((X_{lk}-X_{li})-(X_{jk}-X_{ji}))\hat{\tilde{U}}_{lkij}+((X_{kl}-X_{ki})-(X_{jl}-X_{ji}))\hat{\tilde{U}}_{klij}\] \[+((X_{li}-X_{lk})-(X_{ji}-X_{jk}))\hat{\tilde{U}}_{likj}+((X_{ji}-X_{jl})-(X_{ki}-X_{kl}))\hat{\tilde{U}}_{jilk}\] \[+((X_{jl}-X_{ji})-(X_{kl}-X_{ki}))\hat{\tilde{U}}_{jlik}\Big{)},\]
where, again, we plugged in the estimates of the pairwise-differenced idiosyncratic error terms, \(\hat{\tilde{U}}_{ijkl}=\tilde{Y}_{ijkl}-\hat{\beta}_{1,PD}\tilde{X}_{ijkl}\), obtained from the estimated coefficient \(\hat{\beta}_{1,PD}\).
With both consistent estimates of the covariances, it is then possible to conduct valid inference. Moreover, in the next Section we investigate the finite sample performance of both analytical estimates.
## 6 Simulations
In this section, we explore the finite sample properties of the estimator \(\hat{\beta}_{1,PD}\) through a Monte Carlo simulation exercise. We also evaluate the finite sample properties of the estimator of the asymptotic variance of \(\hat{\beta}_{1,PD}\), and of the associated t-tests, using both the consistent estimator \(\hat{\delta}_{2}\), based on the first Hajek projection, and the consistent estimator \(\hat{\Delta}_{2}\), based on the second. In a nutshell, we find that: (i) the estimated slope parameters are unbiased in general, even when the fixed effects are correlated with the covariates; (ii) the estimated asymptotic variances using either estimator are very close to each other, as expected; and (iii) the sizes of the t-tests are correct, indicating a valid inference procedure.
### Data Generating Processes
For simplicity, we consider for now the case of a single regressor \(X_{ij}\) in the different proposed designs. In general, we follow closely the DGP specifications proposed by Jochmans (2018) and Charbonneau (2017), who also consider a directed network model. Note, however, that in their cases they consider a binary outcome variable, while we consider a continuous dependent variable.
The DGP in general follows:
\[Y_{ij}=\beta_{1}X_{ij}+\theta_{i}+\xi_{j}+U_{ij}\]
In all different designs, we take \(\beta_{1}=0\). The idiosyncratic error terms \(U_{ij}\) are independently drawn from a standard normal distribution, \(U_{ij}\sim N(0,1)\). In our case of a directed network, we specifically have that \(U_{ij}\neq U_{ji}\), therefore for a simulation considering \(N\) nodes we draw from the standard normal \(N(N-1)\) idiosyncratic errors. The fixed effects \(\theta_{i}\) and \(\xi_{j}\) are also drawn from standard normal distributions.
The difference among the designs relies on how the regressor \(X_{ij}\) is drawn.
#### 6.1.1 Design 1
Here we follow essentially the same DGP as proposed by Jochmans (2018). We generate the single regressor as:
\[X_{ij}=-|A_{i}-B_{j}|\]
where \(A_{i}=V_{i}-\frac{1}{2}\), for \(V_{i}\sim\text{Beta}(2,2)\), and likewise for \(B_{j}\). The covariate is thus generated in such a way that it is dependent across both senders and receivers in the dyadic relation. The difference to the DGP proposed by Jochmans (2018) lies in the fact that we consider the individual effect of the _alter_, \(A_{i}\), to be different and drawn independently from that of the _ego_, \(B_{j}\), while Jochmans (2018) considers \(B_{j}=A_{j}\).
#### 6.1.2 Design 2
We introduce a correlation between the regressor \(X_{ij}\) and the fixed effects \(\theta_{i}\) and \(\xi_{j}\), such that:
\[X_{ij}=-|A_{i}-B_{j}|+\theta_{i}+\xi_{j}\]
where \(A_{i}=V_{i}-\frac{1}{2}\), for \(V_{i}\sim\text{Beta}(2,2)\). Also, \(B_{j}\) is drawn independently from \(A_{i}\), such that \(B_{j}=V_{j}-\frac{1}{2}\), for \(V_{j}\sim\text{Beta}(2,2)\). Note that the manner in which we introduce a correlation between the regressor and the fixed effects is similar to that of Charbonneau (2017).
#### 6.1.3 Design 3
We now consider a binary regressor that is uncorrelated with the fixed effects. We generate the regressor according to:
\[X_{ij}=\mathbb{1}\{A_{i}-B_{j}>0\}\]
where \(A_{i}\) and \(B_{j}\) are drawn according to Designs 1 and 2.
#### 6.1.4 Design 4
We again consider a binary regressor, however, now it is correlated with the fixed effects, such that:
\[X_{ij}=\mathbb{1}\{A_{i}-B_{j}+\theta_{i}+\xi_{j}>0\}\]
where, again, \(A_{i}\) and \(B_{j}\) are drawn according to Designs 1 and 2.
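The four designs can be summarized in a short simulation routine. The following Python sketch is an illustration under our reading of the designs above (dyads are the off-diagonal entries of the \(N\times N\) matrices; diagonal entries are generated but unused):

```python
import numpy as np

def simulate_design(N, design, beta1=0.0, rng=None):
    # One simulated network: N x N matrices whose off-diagonal entries are
    # the dyads (i != j); U_ij != U_ji by construction.
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.standard_normal(N)            # sender fixed effects
    xi = rng.standard_normal(N)               # receiver fixed effects
    A = rng.beta(2, 2, size=N) - 0.5          # alter characteristics
    B = rng.beta(2, 2, size=N) - 0.5          # ego characteristics, independent of A
    U = rng.standard_normal((N, N))           # idiosyncratic errors
    diff = A[:, None] - B[None, :]            # A_i - B_j
    fe = theta[:, None] + xi[None, :]         # theta_i + xi_j
    if design == 1:
        X = -np.abs(diff)
    elif design == 2:
        X = -np.abs(diff) + fe                # regressor correlated with fixed effects
    elif design == 3:
        X = (diff > 0).astype(float)
    else:                                     # design 4
        X = (diff + fe > 0).astype(float)
    Y = beta1 * X + fe + U
    return Y, X
```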
### Results of Monte Carlo simulations
We consider several settings for the Monte Carlo simulations. For each design, we run simulations for \(S\in\{1000,5000,10000\}\), where \(S\) is the number of simulations, and for \(N\in\{10,20,30,50\}\), where \(N\) is the number of nodes.
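A corresponding Monte Carlo driver then records the bias and variance of the point estimator across the \(S\) replications. In this sketch, `fit_pairwise_differences` is a hypothetical placeholder for a routine computing the pairwise-differences estimator \(\hat{\beta}_{1,PD}\), and `simulate_design` is the routine above:

```python
import numpy as np

def monte_carlo(N, S, design, seed=0):
    # Bias and variance of beta1_hat across S simulated networks
    # (the true value is beta1 = 0 in all designs).
    rng = np.random.default_rng(seed)
    estimates = np.empty(S)
    for s in range(S):
        Y, X = simulate_design(N, design, beta1=0.0, rng=rng)
        estimates[s] = fit_pairwise_differences(Y, X)  # hypothetical estimator routine
    return estimates.mean(), estimates.var()
```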
#### 6.2.1 Results for the estimator \(\hat{\beta}_{1,PD}\) and its estimated asymptotic variance
In the tables below we show the results for the estimator \(\hat{\beta}_{1,PD}\) in terms of bias, as well as its variance across the simulations. We also present the average of the estimated asymptotic variance, computed both using \(\hat{\delta}_{2}\), according to Equation 18, and using \(\hat{\Delta}_{2}\), according to Equation 22.
\begin{table}
\begin{tabular}{r r r r r r} \hline Simulations & N & bias(\(\hat{\beta}_{1}\)) & var(\(\hat{\beta}_{1}\)) & mean(\(\hat{\text{var}}(\hat{\beta}_{1}))_{\hat{\delta}_{2}}\) & mean(\(\hat{\text{var}}(\hat{\beta}_{1}))_{\hat{\Delta}_{2}}\) \\ \hline
1000 & 10 & 0.021 & 0.181 & 0.222 & 0.194 \\
1000 & 20 & -0.004 & 0.036 & 0.042 & 0.041 \\
1000 & 30 & 0.000 & 0.015 & 0.017 & 0.016 \\
1000 & 50 & 0.002 & 0.006 & 0.006 & 0.005 \\
5000 & 10 & 0.001 & 0.179 & 0.224 & 0.196 \\
5000 & 20 & -0.004 & 0.036 & 0.041 & 0.040 \\
5000 & 30 & -0.001 & 0.016 & 0.017 & 0.016 \\
5000 & 50 & -0.000 & 0.005 & 0.005 & 0.005 \\
10000 & 10 & 0.002 & 0.174 & 0.225 & 0.198 \\
10000 & 20 & -0.001 & 0.035 & 0.042 & 0.040 \\
10000 & 30 & 0.001 & 0.015 & 0.017 & 0.016 \\
10000 & 50 & -0.000 & 0.005 & 0.006 & 0.005 \\ \hline \end{tabular}
\end{table}
Table 2: Results of the Monte Carlo Simulation of the Pairwise Differences estimators obtained for the second data generating process
\begin{table}
\begin{tabular}{r r r r r r} \hline Simulations & N & bias(\(\hat{\beta}_{1}\)) & var(\(\hat{\beta}_{1}\)) & mean(\(\hat{\text{var}}(\hat{\beta}_{1}))_{\hat{\delta}_{2}}\) & mean(\(\hat{\text{var}}(\hat{\beta}_{1}))_{\hat{\Delta}_{2}}\) \\ \hline
1000 & 10 & -0.004 & 0.180 & 0.221 & 0.192 \\
1000 & 20 & 0.004 & 0.033 & 0.042 & 0.040 \\
1000 & 30 & 0.004 & 0.015 & 0.017 & 0.016 \\
1000 & 50 & 0.002 & 0.005 & 0.005 & 0.005 \\
5000 & 10 & 0.004 & 0.179 & 0.224 & 0.197 \\
5000 & 20 & -0.001 & 0.034 & 0.042 & 0.040 \\
5000 & 30 & 0.001 & 0.015 & 0.017 & 0.016 \\
5000 & 50 & -0.001 & 0.005 & 0.005 & 0.005 \\
10000 & 10 & -0.004 & 0.175 & 0.224 & 0.197 \\
10000 & 20 & -0.003 & 0.036 & 0.042 & 0.040 \\
10000 & 30 & -0.001 & 0.015 & 0.017 & 0.016 \\
10000 & 50 & 0.000 & 0.005 & 0.005 & 0.005 \\ \hline \end{tabular}
\end{table}
Table 3: Results of the Monte Carlo Simulation of the Pairwise Differences estimators obtained for the third data generating process
From the tables above, we point out two results: (i) the estimator \(\hat{\beta}_{1}\) seems to be unbiased, and (ii) the mean of the estimated variances is essentially spot on when compared to the variance of \(\hat{\beta}_{1}\) across the simulations and across the different designs. More specifically, while for all designs (except for Design 3) there is still some bias in the simulations for \(N=10\) and \(S=1000\), the bias essentially vanishes as we consider a larger number of nodes \(N\) or a larger number of simulations \(S\).
Another feature that was already expected is that the averages of the estimated asymptotic variances are very close to each other, whether the variance was estimated using \(\hat{\delta}_{2}\) or \(\hat{\Delta}_{2}\). Moreover, we notice that in general those averages are almost spot on with the variances of the estimated \(\hat{\beta}_{1}\) across simulations. The only exception is the set of simulations with \(N=10\) for Designs 1 and 2; however, as soon as \(N\) is increased, the results are again essentially the same. This indicates that the variance estimator captures well the small-sample variability of the point estimator, and that inference using such estimators is valid.
We next explore whether normality is a good approximation to the finite sample distribution of the proposed estimator \(\hat{\beta}_{1}\). We present below the histograms and the QQ-plots of the estimated values for Designs 2 and 4, which we consider to be the most relevant, since they allow for correlation between the covariates and the fixed effects. The plots for the other designs can be found in Appendix D.
\begin{table}
\begin{tabular}{r r r r r r} \hline \hline Simulations & N & bias(\(\hat{\beta}_{1}\)) & var(\(\hat{\beta}_{1}\)) & mean(\(\hat{\text{var}}(\hat{\beta}_{1}))_{\hat{\delta}_{2}}\) & mean(\(\hat{\text{var}}(\hat{\beta}_{1}))_{\hat{\Delta}_{2}}\) \\ \hline
1000 & 10 & -0.001 & 0.173 & 0.229 & 0.203 \\
1000 & 20 & 0.003 & 0.035 & 0.041 & 0.040 \\
1000 & 30 & -0.003 & 0.015 & 0.017 & 0.016 \\
1000 & 50 & -0.001 & 0.005 & 0.005 & 0.005 \\
5000 & 10 & -0.001 & 0.170 & 0.223 & 0.197 \\
5000 & 20 & 0.002 & 0.035 & 0.042 & 0.040 \\
5000 & 30 & -0.001 & 0.015 & 0.017 & 0.016 \\
5000 & 50 & 0.000 & 0.005 & 0.005 & 0.005 \\
10000 & 10 & -0.004 & 0.177 & 0.223 & 0.196 \\
10000 & 20 & -0.002 & 0.036 & 0.042 & 0.040 \\
10000 & 30 & 0.000 & 0.015 & 0.017 & 0.016 \\
10000 & 50 & -0.000 & 0.005 & 0.005 & 0.005 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of the Monte Carlo Simulation of the Pairwise Differences estimators obtained for the fourth data generating process
From the histograms above there is evidence that the estimator of \(\beta_{1}\) is normally distributed as \(N\) increases, for every number of simulations \(S\). More specifically, for smaller values of \(N\), we can see that the range of the histogram is wider than that of a normal distribution. This is corroborated by the QQ-plots, which show that, for any number of simulations \(S\), the distribution seems to have fatter tails than a normal distribution for lower values of \(N\), but approaches a normal distribution as \(N\) increases.
We next examine the size of the _t-tests_ whose test statistics use the asymptotic variance estimators proposed before. We test the null hypothesis that the coefficient \(\beta_{1}\) is equal to its true value, \(\beta_{1}=0\). The tables below show the fraction of samples for which the null hypothesis is rejected at the 5% statistical significance level.
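The reported sizes are simply empirical rejection frequencies. A minimal sketch, assuming asymptotic normality so that the normal critical value is used for the two-sided test:

```python
import numpy as np
from scipy.stats import norm

def rejection_rate(beta_hats, var_hats, beta0=0.0, alpha=0.05):
    # Fraction of samples rejecting H0: beta1 = beta0 in a two-sided t-test
    # at level alpha, using the normal critical value (~1.96 for alpha = 0.05).
    t_stats = (np.asarray(beta_hats) - beta0) / np.sqrt(np.asarray(var_hats))
    return np.mean(np.abs(t_stats) > norm.ppf(1 - alpha / 2))
```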
\begin{table}
\begin{tabular}{r r r r} \hline Simulations & N & \(\hat{\mathrm{var}}(\hat{\beta}_{1})_{\hat{\delta}_{2}}\) & \(\hat{\mathrm{var}}(\hat{\beta}_{1})_{\hat{\Delta}_{2}}\) \\ \hline
1000 & 10 & 0.036 & 0.049 \\
1000 & 20 & 0.036 & 0.038 \\
1000 & 30 & 0.048 & 0.049 \\
1000 & 50 & 0.055 & 0.057 \\
5000 & 10 & 0.041 & 0.058 \\
5000 & 20 & 0.035 & 0.038 \\
5000 & 30 & 0.047 & 0.048 \\
5000 & 50 & 0.040 & 0.040 \\
10000 & 10 & 0.038 & 0.055 \\
10000 & 20 & 0.034 & 0.038 \\
10000 & 30 & 0.039 & 0.041 \\
10000 & 50 & 0.042 & 0.043 \\ \hline \end{tabular}
\end{table}
Table 6: Results of the Monte Carlo Simulation of the size of the t-test of the Pairwise Differences estimators obtained for the second data generating process
\begin{table}
\begin{tabular}{r r r r} \hline Simulations & N & \(\hat{\mathrm{var}}(\hat{\beta}_{1})_{\hat{\delta}_{2}}\) & \(\hat{\mathrm{var}}(\hat{\beta}_{1})_{\hat{\Delta}_{2}}\) \\ \hline
1000 & 10 & 0.033 & 0.037 \\
1000 & 20 & 0.043 & 0.050 \\
1000 & 30 & 0.030 & 0.032 \\
1000 & 50 & 0.040 & 0.040 \\
5000 & 10 & 0.040 & 0.054 \\
5000 & 20 & 0.032 & 0.038 \\
5000 & 30 & 0.039 & 0.042 \\
5000 & 50 & 0.046 & 0.047 \\
10000 & 10 & 0.037 & 0.050 \\
10000 & 20 & 0.034 & 0.038 \\
10000 & 30 & 0.039 & 0.041 \\
10000 & 50 & 0.042 & 0.043 \\ \hline \end{tabular}
\end{table}
Table 5: Results of the Monte Carlo Simulation of the size of the t-test of the Pairwise Differences estimators obtained for the first data generating process
When we look at the size of the t-test for the different variance estimators, we see that, as expected from the previous findings, the sizes are close to 0.05; however, the estimates using \(\hat{\Delta}_{2}\) are somewhat closer than those using \(\hat{\delta}_{2}\).
## 7 Conclusion and Further Research
In this paper we showed how one can adapt U-statistics tools to derive the asymptotic properties of linear dyadic models for network data. More specifically, we proposed a linear model with two-way fixed effects that enter additively in the specification. While the usual two-way fixed effects estimator is consistent and asymptotically unbiased for this particular model, we propose an estimator that relies on pairwise differences, which completely eliminate the fixed effects from the objective (and influence) function(s). This choice of estimator was made with the purpose of demonstrating, step by step and in a simple model, how one can adapt tools from U-statistics to this particular dyadic setting (with a pairwise differences estimator) to obtain an analytical form and an estimator for the asymptotic variance.
These specific tools are needed because the pairwise differencing approach introduces a dependence structure in the summands of the influence function of the estimator. A similar set of tools is also used in non-linear models that employ a similar estimation method, in particular in Charbonneau (2017) and in Graham (2017). For non-linear models, differencing out the fixed effects is desirable to eliminate the incidental parameter problem, which would otherwise lead to asymptotically biased estimates of the coefficients of the covariates.
With a Monte Carlo exercise, we showed that the obtained estimates for the slope coefficients are unbiased in finite samples, and the estimated asymptotic variance delivers the correct size for the t-test. However, the model assumed in this paper can still be relaxed to allow for a richer dependence structure in the network in future research. For instance, we could also allow for dependencies across the outcomes \(Y_{ij}\) and \(Y_{ji}\) by relaxing Assumption 2.1 such that the idiosyncratic terms \(U_{ij}\) and \(U_{ji}\) are allowed to covary. In practice, this would have implications for how the tools of U-statistics are employed in this dyadic framework. Namely, the Hoeffding decomposition for the variance of the U-statistic and the Hajek projection would have to be modified to allow for these dependencies.
Another avenue of interest would be to allow, in Assumption 2.2, the individual-level observed characteristics \(A_{i}\) and \(B_{i}\) to covary. That would allow, for instance, exports from Japan to Korea to covary with those from Korea to Thailand. In our derivations, this would have implications for the probability limit of the Hessian of the proposed estimator.
Finally, the main computational challenge is the estimation of \(\Delta_{2}\). It could also be of practical use, in future research, to explore possible bootstrap procedures to obtain inference, such as in Graham (2019) and in Menzel (2018).
|
2307.02525 | Emergent Global Symmetry from IR N-ality | We present a new family of IR dualities in three space-time dimensions with
eight supercharges. In contrast to 3d mirror symmetry, these dualities map
Coulomb branches to Coulomb branches and Higgs branches to Higgs branches in
the deep IR. For a large class of quiver gauge theories with an emergent
Coulomb branch global symmetry, one can construct a sequence of such dualities
by step-wise implementing a set of quiver mutations. The duality sequence leads
to a set of quiver gauge theories which flow to the same IR superconformal
field theory -- a phenomenon we refer to as IR N-ality. We show that this set
of N-al quivers always contains a theory for which the rank of the IR Coulomb
branch symmetry is manifest in the UV. For a special subclass of theories, the
emergent symmetry algebra itself can be read off from the quiver description of
the aforementioned theory. | Anindya Dey | 2023-07-05T18:00:00Z | http://arxiv.org/abs/2307.02525v2 | # Emergent Global Symmetry from IR N-ality
###### Abstract
We present a new family of IR dualities in three space-time dimensions with eight supercharges. In contrast to 3d mirror symmetry, these dualities map Coulomb branches to Coulomb branches and Higgs branches to Higgs branches in the deep IR. For a large class of quiver gauge theories with an emergent Coulomb branch global symmetry, one can construct a sequence of such dualities by step-wise implementing a set of quiver mutations. The duality sequence leads to a set of quiver gauge theories which flow to the same IR superconformal field theory - a phenomenon we refer to as IR N-ality. We show that this set of N-al quivers always contains a theory for which the rank of the IR Coulomb branch symmetry is manifest in the UV. For a special subclass of theories, the emergent symmetry algebra itself can be read off from the quiver description of the aforementioned theory.
_Introduction._ Some of the most interesting non-perturbative phenomena in QFTs in three and four space-time dimensions arise in the IR limit, where the theories may become strongly-interacting at special points of the vacuum moduli space. Broadly speaking, the properties of a QFT that arise in the neighborhood of such special points but are not manifest in the UV description, are collectively referred to as _emergent_ properties. A particularly important example involves the global symmetry of the QFT at these special points.
3d \(\mathcal{N}=4\) theories provide a rich laboratory for studying non-perturbative phenomena in QFTs. The theories are super-renormalizable in the UV and generically flow to strongly-coupled SCFTs in the IR. The vacuum moduli space has two distinguished branches : the Higgs branch (HB), which is protected from quantum corrections by a non-renormalization theorem, and the Coulomb branch (CB), which receives 1-loop as well as non-perturbative corrections. We will focus on theories which are _good_ in the Gaiotto-Witten sense [1] - the two branches in this case intersect at a single point where the IR SCFT lives. 3d \(\mathcal{N}=4\) theories also present interesting examples of IR duality - a pair of distinct theories in the UV flowing to the same IR SCFT. A particularly important example of such a duality is 3D Mirror Symmetry [2; 3] which acts by mapping the CB of one theory to the HB of the other and vice-versa, in the deep IR.
The HB 0-form symmetry, including its global form, is classically manifest. For the CB, however, the IR symmetry algebra \(\mathfrak{g}_{\mathbb{C}}^{\rm IR}\) may be larger compared to the UV-manifest symmetry \(\mathfrak{g}_{\mathbb{C}}^{\rm UV}\). If the rank of the IR symmetry is greater than the UV-manifest rank, we will refer to the IR symmetry as _emergent_, otherwise we will simply refer to it as _enhanced_.
A very well-known example of a CB symmetry enhancement involves a linear quiver gauge theory with unitary gauge nodes, as shown in Fig. 1. The theory is good in the Gaiotto-Witten sense [1] if the integers \(e_{\alpha}=N_{\alpha-1}+N_{\alpha+1}+M_{\alpha}-2N_{\alpha}\) (balance parameter for the \(\alpha\)-th node) obey the condition \(e_{\alpha}\geq 0,\forall\alpha\).
For every unitary gauge node, there exists a \(\mathfrak{u}(1)\) topological symmetry, and the CB global symmetry manifest in the UV is simply \(\mathfrak{g}_{\mathbb{C}}^{\rm UV}=\oplus_{\alpha=1}^{L}\mathfrak{u}(1)_{\alpha}\). The UV-manifest rank is \(\operatorname{rk}(\mathfrak{g}_{\mathbb{C}}^{\rm UV})=L\), where \(L\) is the total number of gauge nodes. In the IR, every array of \(k\) consecutive balanced (i.e. \(e_{\alpha}=0\)) gauge nodes contributes an \(\mathfrak{su}(k+1)\) factor to the symmetry algebra, while every overbalanced node (i.e. \(e_{\alpha}>0\)) contributes a factor of \(\mathfrak{u}(1)\)[1]. The IR global symmetry algebra therefore has the generic form:
\[\mathfrak{g}_{\mathbb{C}}^{\rm IR}=\oplus_{\alpha}\,\mathfrak{su}(k_{\alpha} +1)_{\alpha}+\oplus_{\beta}\,\mathfrak{u}(1)_{\beta}, \tag{1}\]
where \(\alpha\) labels every array of \(k_{\alpha}\) consecutive balanced gauge nodes, while \(\beta\) labels the overbalanced nodes. Note that, while \(\mathfrak{g}_{\mathbb{C}}^{\rm IR}\neq\mathfrak{g}_{\mathbb{C}}^{\rm UV}\), we have \(\operatorname{rk}(\mathfrak{g}_{\mathbb{C}}^{\rm IR})=\operatorname{rk}( \mathfrak{g}_{\mathbb{C}}^{\rm UV})=L\). Therefore, the rank of the IR global symmetry is manifest in the UV. For every \(\mathfrak{u}(1)\) factor in \(\mathfrak{g}_{\mathbb{C}}^{\rm UV}\), one can turn on a triplet of Fayet-Iliopoulos (FI) parameters in the UV Lagrangian. In the IR, these parameters account for \(\mathcal{N}=4\)-preserving mass deformations of the SCFT, deforming/resolving the HB.
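Since the rule in (1) is purely combinatorial, it can be phrased as a short algorithm. The following Python sketch is only an illustration of this rule: `N` and `M` are the lists of gauge ranks and fundamental flavor numbers of the linear quiver of Fig. 1, with the convention \(N_{0}=N_{L+1}=0\).

```python
def ir_symmetry(N, M):
    # Balance parameters e_a = N_{a-1} + N_{a+1} + M_a - 2*N_a of the linear
    # quiver of Fig. 1, with the convention N_0 = N_{L+1} = 0.
    L = len(N)
    e = [(N[a - 1] if a > 0 else 0) + (N[a + 1] if a < L - 1 else 0)
         + M[a] - 2 * N[a] for a in range(L)]
    assert all(ea >= 0 for ea in e), "quiver is not good"
    factors, run = [], 0
    for ea in e:                    # rule stated below Eq. (1)
        if ea == 0:
            run += 1                # extend an array of consecutive balanced nodes
        else:
            if run:
                factors.append(f"su({run + 1})")
            run = 0
            factors.append("u(1)")  # overbalanced node
    if run:
        factors.append(f"su({run + 1})")
    return " + ".join(factors)

# e.g. ir_symmetry([1, 2, 3], [0, 0, 4]) returns "su(4)"; the rank always equals L.
```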
More generally, however, one may have \(\operatorname{rk}(\mathfrak{g}_{\mathbb{C}}^{\rm IR})>\operatorname{rk}( \mathfrak{g}_{\mathbb{C}}^{\rm UV})\), which implies that some of the mass deformations are simply not visible in the UV Lagrangian. These are often referred to as theories with "hidden FI parameters" [1; 4; 5]. A particularly interesting class is given by quiver gauge theories with unitary and special unitary gauge nodes and hypermultiplets in the fundamental/bifundamental representations (see Fig. 2), with at least one of the special unitary nodes being _balanced_ i.e.
Figure 1: A linear quiver with unitary gauge nodes. A black circular node with label \(N\) represents a \(U(N)\) gauge node, a black square node with label \(F\) represents \(F\) hypermultiplets in the fundamental representation, and a thin black line connecting two gauge nodes is a bifundamental hypermultiplet.
the total number of fundamental/bi-fundamental hypers associated with a given \(SU(N_{\alpha})\) node is \(2N_{\alpha}-1\). The latter condition ensures that the quiver has an emergent IR CB symmetry, as we will see momentarily.
In this paper, we will be interested in a slightly more general theory - a unitary/special unitary quiver as above with certain additional hypermultiplets that transform in powers of the determinant and/or the anti-determinant representations [6; 7] of the unitary gauge nodes. We will collectively refer to these matter multiplets as _Abelian Hypermultiplets_. A generic quiver gauge theory of this class is given in Fig. 3. The simplest quiver gauge theory of this class is a \(U(N)\) theory with \(N_{f}\) fundamental hypermultiplets and \(P\) hypermultiplets in the determinant representation, which we will denote as \(\mathcal{T}^{N}_{N_{f},P}\). For \(P\geq 1\), these theories are good if \(N_{f}\geq 2N-1\), and bad otherwise.
_Outline of the paper._ For certain ranges of \(N_{f}\) and \(P\), the theory \(\mathcal{T}^{N}_{N_{f},P}\) can be shown to have an IR dual, where the duality maps the CB (HB) of one theory to the CB (HB) of the other in the deep IR. Using these dualities one can construct a set of four distinct quiver mutations which act locally at appropriate gauge nodes of a quiver having the generic form of Figure 3. Any two quivers, which are related by a mutation, flow to the same SCFT in the IR, and are therefore IR dual by construction.
One can then show that starting from a given theory \(\mathcal{T}\) having the generic form of Figure 2 (note that it is a special case of the quiver in Figure 3), one can construct a sequence of IR dualities by implementing these quiver mutations. The duality sequence leads to \(N\geq 2\) distinct quiver gauge theories which flow to the same IR SCFT and are therefore IR dual to each other. We refer to this phenomenon as _IR N-ality_. A generic _N-al_ theory will be of the form given in Figure 3.
Recall that the theory \(\mathcal{T}\) has an emergent IR CB symmetry. We show that the set of N-al theories includes at least one theory - \(\mathcal{T}_{\text{maximal}}\) - for which the rank of the IR CB symmetry becomes UV-manifest. For \(\mathcal{T}\) being a linear quiver, the complete symmetry algebra itself can be read off from the quiver \(\mathcal{T}_{\text{maximal}}\). One of the main results of this paper is to give a clear recipe for constructing the quiver \(\mathcal{T}_{\text{maximal}}\) given \(\mathcal{T}\) and present an illustrative example.
_The IR Dualities of \(\mathcal{T}^{N}_{N_{f},P}\)._ We will denote the IR dualities of \(\mathcal{T}^{N}_{N_{f},P}\) as \(\mathcal{D}^{N}_{N_{f},P}\) indicating that there is always a \(\mathcal{T}^{N}_{N_{f},P}\) theory on one side. It was shown in [8] that there exist three infinite families of such IR dualities, which are summarized in Table 1.
In this notation, the duality \(\mathcal{D}^{N}_{2N-1,0}\) is the well-known IR duality for an ugly theory [1] - it has a \(\mathcal{T}^{N}_{2N-1,0}\) theory on one side and a \(\mathcal{T}^{N-1}_{2N-1,0}\) theory plus a decoupled twisted hypermultiplet (a \(\mathcal{T}^{1}_{1,0}\) theory) on the other. The dualities in Table 1 are related to each other as well as the duality \(\mathcal{D}^{N}_{2N-1,0}\) by various Abelian gauging operations and RG flows triggered by large mass parameters, forming a "duality web" [8]. The dualities can also be checked independently by matching supersymmetric observables like the \(S^{3}\) partition function [9] and the supersymmetric index on \(S^{2}\times S^{1}\) in the Coulomb/Higgs limits [10; 11] - we refer the reader to Section 3 of [8] for details. In the appendix, we summarize the \(S^{3}\) partition function identities for these dualities.
Figure 3: A generic unitary/special unitary quiver with Abelian hypermultiplets. A blue square box with label \(F\) represents \(F\) Abelian hypermultiplets in the determinant representation. A thin blue line connecting multiple unitary gauge nodes is an Abelian hypermultiplet with charges \(\{Q^{i}\}\). A thick blue line with a label \(P\) denotes a collection of \(P\) Abelian hypermultiplets.
Figure 2: A generic quiver with unitary/special unitary gauge nodes with at least one of the \(SU\) nodes being balanced. A yellow circular node with label \(N\) represents an \(SU(N)\) gauge node.
Let us now discuss how the CB symmetry matches across these dualities. For the duality \(\mathcal{D}^{N}_{2N+1,1}\), one has a balanced \(SU(N+1)\) gauge theory on one side. This theory has no UV-manifest CB global symmetry, but it does have an emergent \(\mathfrak{u}(1)\) symmetry. This can be verified, for example, by computing the CB Hilbert Series of the theory. On the other side of the duality, this emergent symmetry appears as the UV-manifest \(\mathfrak{u}(1)\) topological symmetry of the \(U(N)\) gauge group in \(\mathcal{T}^{N}_{2N+1,1}\). The duality \(\mathcal{D}^{N}_{2N,P}\) is the self-duality of the theory \(\mathcal{T}^{N}_{2N,P}\) which does not have an emergent symmetry.
For \(\mathcal{D}^{N}_{2N-1,P}\) with \(P\geq 1\), the theory \(\mathcal{T}^{N}_{2N-1,P}\) has a \(\mathfrak{u}(1)\) topological symmetry, and an emergent symmetry algebra \(\mathfrak{u}(1)\oplus\mathfrak{u}(1)\) for \(P>1\) and \(\mathfrak{su}(2)\oplus\mathfrak{u}(1)\) for \(P=1\). On the dual side, two \(\mathfrak{u}(1)\) factors are manifest in the UV as topological symmetries of the \(U(1)\) and the \(U(N-1)\) gauge groups respectively, thereby matching the rank of the emergent symmetry of \(\mathcal{T}^{N}_{2N-1,P}\). For \(P=1\), one can in fact read off the complete IR symmetry from the dual quiver using the result (1) for linear quivers. Let us think of the dual quiver as being constituted of two linear quivers connected by an Abelian hypermultiplet. The \(U(1)\) gauge node is balanced and contributes an \(\mathfrak{su}(2)\) factor according to (1), while the \(U(N-1)\) gauge node is over-balanced and contributes a \(\mathfrak{u}(1)\) factor. Therefore, one can visually read off the IR symmetry from the dual quiver as \(\mathfrak{su}(2)\oplus\mathfrak{u}(1)\), which is precisely the emergent symmetry of \(\mathcal{T}^{N}_{2N-1,1}\).
From the above dualities, we learn that a balanced \(SU(N)\) gauge node and a \(U(N)\) gauge node with balance parameter \(e=-1\) plus Abelian hyper(s) have emergent CB symmetries, while overbalanced \(SU(N)\) nodes and \(U(N)\) nodes with \(e\geq 0\) do not. This will be an important observation for our construction of \(\mathcal{T}_{\text{maximal}}\).
_Quiver Mutations and Duality Sequence._ Given the dualities in Table 1, one can construct four distinct quiver mutations which act on the different gauge nodes of a quiver gauge theory \(\mathcal{T}\) of the generic form given in Figure 3. It turns out that for constructing the theory \(\mathcal{T}_{\text{maximal}}\), it is sufficient to study the sequence of IR dualities generated by only two of the four quiver mutations. We discuss the details of these two mutations below, while the remaining two are summarized in the appendix. For more details on these mutations and additional examples, we refer the reader to [12].
The first mutation, which we will refer to as mutation \(I\) and the associated quiver operation as \(\mathcal{O}_{I}\), involves replacing a balanced \(SU\) node by a unitary node of the same rank and a single Abelian hypermultiplet, as shown in (3). This mutation is obtained by using the duality \(\mathcal{D}^{N}_{2N+1,1}\) in the reverse direction. The Abelian hyper is charged under \(U(N_{\alpha}-1)\) as well as under the unitary gauge nodes connected to it by bifundamental hypers, with the charge vector being of the generic form:
\[\mathbf{Q}=(0,\dots,N_{\alpha_{1}},N_{\alpha_{2}},-(N_{\alpha}-1),N_{\alpha_{3}}, N_{\alpha_{4}},\dots,0), \tag{2}\]
where \(\{N_{\alpha_{i}}\}\) denote the ranks of the connected gauge nodes.
[Quiver diagram (3): mutation \(I\) trades the balanced \(SU(N_{\alpha})\) node for a \(U(N_{\alpha}-1)\) node plus a single Abelian hypermultiplet with charge vector \(\mathbf{Q}\).] (3)
The three remaining mutations act on \(U(N_{\alpha})\) gauge nodes with Abelian hypermultiplets, and correspond to the following values of the balance parameter: \(e_{\alpha}=1,0,-1\). Mutation \(I^{\prime}\) and mutation \(II\) (with associated quiver operations \(\mathcal{O}_{I^{\prime}}\) and \(\mathcal{O}_{II}\), respectively) act on gauge nodes with balance parameters \(e_{\alpha}=1\) and \(e_{\alpha}=0\), respectively, and are not relevant for the construction of \(\mathcal{T}_{\text{maximal}}\) (we will explain why momentarily). We discuss these mutations in the appendix.
Mutation \(III\) (quiver operation \(\mathcal{O}_{III}\)) corresponds to the case \(e_{\alpha}=-1\), and is obtained by using the duality \(\mathcal{D}^{N}_{2N-1,P}\). The mutation splits the \(U(N_{\alpha})\) gauge node into a \(U(N_{\alpha}-1)\) node and a \(U(1)\) node with the latter node having a single fundamental hyper, as shown in (4) for the \(P=1\) case. The \(P\) Abelian hypers in \(\mathcal{T}\) of charges \(\{\mathbf{Q}^{l}\}_{l=1,\dots,P}\) are mapped to another
set of \(P\) Abelian hypers in \(\mathcal{T}^{\vee}\). The latter Abelian hypers all have charge \(1\) under the new \(U(1)\) node and have charges \(\{\mathbf{Q}^{\prime l}\}_{l=1,\ldots,P}\) under the remaining gauge nodes. For a generic \(\mathbf{Q}^{l}=(Q^{l}_{1},\ldots,Q^{l}_{\alpha_{1}},Q^{l}_{\alpha_{2}},N_{\alpha},Q^{l}_{\alpha_{3}},Q^{l}_{\alpha_{4}},\ldots,Q^{l}_{L})\), the charge vector \(\mathbf{Q}^{\prime l}\) is given as
\[\mathbf{Q}^{\prime l}= (Q^{l}_{1},\ldots,Q^{l}_{\alpha_{1}}+N_{\alpha_{1}},Q^{l}_{\alpha _{2}}+N_{\alpha_{2}},-(N_{\alpha}-1),\] \[Q^{l}_{\alpha_{3}}+N_{\alpha_{3}},Q^{l}_{\alpha_{4}}+N_{\alpha_ {4}},\ldots,Q^{l}_{L}), \tag{5}\]
where \(\{N_{\alpha_{i}}\}\) denote the ranks of the nodes connected to \(U(N_{\alpha})\) by bifundamental hypers. Note that only the charges associated with the nodes connected to \(U(N_{\alpha})\) with bifundamental hypers get transformed under the mutation. The mutations can be realized in terms of supersymmetric observables - we will discuss the \(S^{3}\) partition function realization in the appendix.
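The charge transformation (5) is mechanical enough to state as a short update rule. In the sketch below (an illustration with hypothetical names and 0-based node labels), `neighbors` are the nodes connected to \(\alpha\) by bifundamentals, `ranks[b]` is the rank \(N_{b}\), and the unit charge under the new \(U(1)\) node is understood to be tracked separately.

```python
def mutate_charge_III(Q, alpha, neighbors, ranks):
    # Charge transformation of Eq. (5) under mutation III at node alpha:
    # each neighbor's charge shifts by that neighbor's rank, the charge at
    # alpha becomes -(N_alpha - 1), and all other entries are unchanged.
    Qp = list(Q)
    for b in neighbors:
        Qp[b] += ranks[b]
    Qp[alpha] = -(ranks[alpha] - 1)
    return Qp
```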
Let us now consider a theory \(\mathcal{T}\) in the class of theories of Fig. 2. As we saw above, a balanced \(SU\) node is associated with a \(\mathfrak{u}(1)\) emergent symmetry. In the presence of balanced unitary nodes connected to this balanced \(SU\) node, the CB symmetry may be further enhanced. As before, the emergent symmetry can be verified using the CB limit of the index. Given the quiver mutations discussed above, the duality sequence leading to the theory \(\mathcal{T}_{\text{maximal}}\) can be obtained in the following fashion.
One begins by first implementing mutation \(I\) at every balanced \(SU\) node in \(\mathcal{T}\). Other \(SU\) nodes which were overbalanced in \(\mathcal{T}\) might be rendered balanced as a result, in which case we implement mutation \(I\) sequentially until we have a theory that contains no balanced \(SU\) nodes. In the next step, one implements mutation \(III\) at every gauge node that admits it. In doing so, one will generically alter the balance of both unitary and special unitary nodes in the quiver, thereby creating new nodes where mutation \(III\) or mutation \(I\) can be implemented. The duality sequence finally terminates at a quiver for which none of the gauge nodes admit either mutation \(I\) or mutation \(III\). This quiver therefore consists of overbalanced special unitary nodes and unitary nodes with balance parameters \(e\geq 0\), with or without Abelian hypers. Since neither type of gauge node leads to an emergent CB symmetry, one expects that the UV-manifest rank should match the rank of the IR symmetry of the quiver. The theory is therefore a candidate for \(\mathcal{T}_{\text{maximal}}\).
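The termination logic of this procedure can be summarized schematically. The following Python sketch assumes a hypothetical `quiver` object exposing the balance data and the two mutations; it is a schematic of the procedure just described, not an implementation of the mutations themselves.

```python
def duality_sequence(quiver):
    # Iterate the two rank-increasing mutations until neither applies; the
    # `quiver` object and its methods are hypothetical placeholders.
    while True:
        su = next((n for n in quiver.su_nodes() if quiver.is_balanced(n)), None)
        if su is not None:
            quiver = quiver.mutate_I(su)      # Eq. (3)
            continue
        u = next((n for n in quiver.u_nodes() if quiver.balance(n) == -1), None)
        if u is not None:
            quiver = quiver.mutate_III(u)     # Eq. (4)
            continue
        return quiver  # no balanced SU node, no e = -1 unitary node: T_maximal
```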
The quiver operations \(\mathcal{O}_{I}\) and \(\mathcal{O}_{III}\) increase the number of \(\mathfrak{u}(1)\) topological symmetries by \(1\), \(\mathcal{O}_{I^{\prime}}\) decreases it by \(1\), and \(\mathcal{O}_{II}\) keeps it invariant. This is the reason why one can ignore \(\mathcal{O}_{I^{\prime}}\) and \(\mathcal{O}_{II}\) if one is interested in finding a single candidate for \(\mathcal{T}_{\text{maximal}}\). However, the complete duality sequence must include these mutations as well. In particular, there may be multiple candidates for \(\mathcal{T}_{\text{maximal}}\) which are related by \(\mathcal{O}_{II}\). In addition, the operation \(\mathcal{O}_{I^{\prime}}\) arises in the closure relations of \(\mathcal{O}_{I}\) and \(\mathcal{O}_{III}\), as we discuss in the appendix.
_An Illustrative Example._ In this section, we will construct the duality sequence for a linear quiver with unitary/special unitary gauge nodes and determine \(\mathcal{T}_{\text{maximal}}\) explicitly. We will show that it is possible to read off the emergent CB symmetry algebra \(\mathfrak{g}^{\text{IR}}_{\text{C}}\) from the quiver representation of \(\mathcal{T}_{\text{maximal}}\). Consider a three-node quiver \(\mathcal{T}\) with a single \(SU\) node of the following form:
We will focus on the case where the central \(SU(N)\) gauge node as well as the two unitary nodes are balanced i.e. \(N_{1}+N_{2}=2N-1\), \(M_{1}+N=2N_{1}\) and \(M_{2}+N=2N_{2}\). The theory has an emergent symmetry \(\mathfrak{g}^{\text{IR}}_{\text{C}}(\mathcal{T})=\mathfrak{su}(2)\oplus \mathfrak{su}(2)\oplus\mathfrak{su}(4)\oplus\mathfrak{u}(1)\). In particular, the rank of the emergent symmetry \(\text{rk}(\mathfrak{g}^{\text{IR}}_{\text{C}}(\mathcal{T}))=6\) is manifestly different from the rank of the UV symmetry \(\text{rk}(\mathfrak{g}^{\text{UV}}_{\text{C}}(\mathcal{T}))=2\).
The first step for constructing the duality sequence is to implement mutation \(I\) on the balanced \(SU(N)\) node following (3):
The above mutation increases the UV-manifest rank by \(1\), since \(\text{rk}(\mathfrak{g}^{\text{UV}}_{\text{C}}(\mathcal{T}_{1}^{\vee}))=3\), as can be seen from the quiver \(\mathcal{T}_{1}^{\vee}\). The balance parameters of the first and the third gauge nodes (from the left) are \(e_{1}=e_{3}=-1\), and therefore one can implement the mutation \(\mathcal{O}_{III}\) at each of these nodes. In the second step, we implement mutation \(III\) on the leftmost node following (4), which leads to the quiver \(\mathcal{T}_{2}^{\vee}\):
This is followed by the mutation on the rightmost gauge node which leads to the quiver (\(\mathcal{T}^{\vee}_{3}\)):
Note that at each step, starting from \(\mathcal{T}_{1}^{\vee}\) to \(\mathcal{T}_{3}^{\vee}\), the UV-manifest rank of the symmetry increases by \(1\), due to the addition of a single \(U(1)\) gauge node. In the quiver \(\mathcal{T}_{3}^{\vee}\), the central gauge node has balance \(e_{2}=-1\), and one can implement yet another \(\mathcal{O}_{III}\) mutation:
The first and the third gauge nodes (from the left) in \(\mathcal{T}_{4}^{\vee}\) are balanced, while the central node is overbalanced, i.e., \(e_{2}=1\). This implies that one cannot implement another mutation \(III\). Since there are no \(SU\) nodes left, one cannot implement mutation \(I\) either. Therefore, following the logic described in the previous section, we have
\[\mathcal{T}_{\text{maximal}}=:\mathcal{T}_{4}^{\vee}. \tag{6}\]
It is convenient to rewrite the quiver after a simple field redefinition in the following form:
For the quiver \(\mathcal{T}_{4}^{\vee}\), the UV-manifest rank can be read off as \(\text{rk}(\mathfrak{g}_{\text{C}}^{\text{UV}}(\mathcal{T}_{4}^{\vee}))=6\), which precisely matches the rank of the IR symmetry of \(\mathcal{T}\). Let us now show how one can read off the symmetry algebra \(\mathfrak{g}_{\text{C}}^{\text{IR}}\) itself using our intuition from linear quivers with unitary gauge groups.
Firstly, note that the quiver \(\mathcal{T}_{4}^{\vee}\) is built out of two linear subquivers with unitary gauge groups connected by a single Abelian hyper that is charged under a single node in each subquiver. The first subquiver - a chain of three balanced \(U(1)\) nodes - contributes a factor \(\mathfrak{su}(4)\) to the IR symmetry, following (1). In the second subquiver, the balanced nodes \(U(N_{1}-1)\) and \(U(N_{2}-1)\) are expected to contribute an \(\mathfrak{su}(2)\) factor each, while the overbalanced central node (connected to the Abelian hyper) gives a \(\mathfrak{u}(1)\) factor. Therefore, one reads off the IR symmetry of \(\mathcal{T}_{4}^{\vee}\) as \(\mathfrak{g}_{\text{C}}^{\text{IR}}(\mathcal{T}_{4}^{\vee})=\mathfrak{su}(4) \oplus\mathfrak{su}(2)\oplus\mathfrak{su}(2)\oplus\mathfrak{u}(1)\), which precisely matches the IR symmetry algebra of \(\mathcal{T}\).
_Conclusion and Outlook._ A unitary-special unitary quiver gauge theory \(\mathcal{T}\) of generic shape with at least a single balanced \(SU\) node admits a sequence of IR dualities. This duality sequence can be generated by stepwise implementing four distinct quiver mutations locally at different gauge nodes, starting with a balanced \(SU\) node. These quiver mutations are in turn built out of IR dualities of \(U(N)\) gauge theories with \(N_{f}\) hypers in the fundamental representation and \(P\) hypers in the determinant representation, for certain ranges of \(N_{f}\) and \(P\).
The theory \(\mathcal{T}\) has an emergent CB symmetry characterized by the presence of hidden FI parameters, which implies that the rank of the IR symmetry is greater than the UV-manifest rank. The sequence of dualities provides a neat way to study the emergent CB symmetry of \(\mathcal{T}\). We have shown that the duality sequence produces at least one theory \(\mathcal{T}_{\text{maximal}}\) for which the correct rank of the IR symmetry becomes manifest from the quiver description. For a subclass of theories, one may even be able to read off the correct symmetry algebra. Using a simple example, we demonstrated that this is indeed the case when \(\mathcal{T}\) is a linear quiver.
Our formalism gives the first systematic way to study the emergent CB symmetry (and therefore hidden FI parameters) in 3d \(\mathcal{N}=4\) theories which do not have a realization in String Theory (like the Hanany-Witten [13] description or a description in terms of magnetic quivers [14]). It also leads to an extremely efficient algorithm for generating the 3d mirrors of unitary-special unitary quivers with generic shape, which will be presented in a paper to appear soon. Analogous to 3d mirror symmetry, various aspects of these IR dualities - for example, the duality maps for BPS local operators and line defects - should furnish interesting physics and deserve detailed investigation. Finally, one expects to find novel non-supersymmetric dualities as one subjects these N-al theories to soft supersymmetry-breaking, in a fashion similar to [15]. Some of these topics will be addressed in future work.
**Acknowledgments** The author would like to thank Amihay Hanany and Zohar Komargodski for discussion on related issues, and Vivek Saxena for comments on the draft. The author would like to thank the organizers of the program "Hyperkahler quotients, singularities, and quivers" at the Simons Center for Geometry and Physics where results connected to this work were presented. The author acknowledges the hospitality of the Simons Summer Workshop 2023 during the completion of this work. The author is supported in part at the Johns Hopkins University by the NSF grant PHY-2112699.
|
2308.15804 | Collaborative Learning Framework to Detect Attacks in Transactions and
Smart Contracts | With the escalating prevalence of malicious activities exploiting
vulnerabilities in blockchain systems, there is an urgent requirement for
robust attack detection mechanisms. To address this challenge, this paper
presents a novel collaborative learning framework designed to detect attacks in
blockchain transactions and smart contracts by analyzing transaction features.
Our framework exhibits the capability to classify various types of blockchain
attacks, including intricate attacks at the machine code level (e.g., injecting
malicious codes to withdraw coins from users unlawfully), which typically
necessitate significant time and security expertise to detect. To achieve that,
the proposed framework incorporates a unique tool that transforms transaction
features into visual representations, facilitating efficient analysis and
classification of low-level machine codes. Furthermore, we propose an advanced
collaborative learning model to enable real-time detection of diverse attack
types at distributed mining nodes. Our model can efficiently detect attacks in
smart contracts and transactions for blockchain systems without the need to
gather all data from mining nodes into a centralized server. In order to
evaluate the performance of our proposed framework, we deploy a pilot system
based on a private Ethereum network and conduct multiple attack scenarios to
generate a novel dataset. To the best of our knowledge, our dataset is the most
comprehensive and diverse collection of transactions and smart contracts
synthesized in a laboratory for cyberattack detection in blockchain systems.
Our framework achieves a detection accuracy of approximately 94% through
extensive simulations and 91% in real-time experiments with a throughput of
over 2,150 transactions per second. | Tran Viet Khoa, Do Hai Son, Chi-Hieu Nguyen, Dinh Thai Hoang, Diep N. Nguyen, Tran Thi Thuy Quynh, Trong-Minh Hoang, Nguyen Viet Ha, Eryk Dutkiewicz, Abu Alsheikh, Nguyen Linh Trung | 2023-08-30T07:17:20Z | http://arxiv.org/abs/2308.15804v3 | Securing Blockchain Systems: A Novel Collaborative Learning Framework to Detect Attacks in Transactions and Smart Contracts
###### Abstract
With the escalating prevalence of malicious activities exploiting vulnerabilities in blockchain systems, there is an urgent requirement for robust attack detection mechanisms. To address this challenge, this paper presents a novel collaborative learning framework designed to detect attacks in blockchain transactions and smart contracts by analyzing transaction features. Our framework exhibits the capability to classify various types of blockchain attacks, including intricate attacks at the machine code level (e.g., injecting malicious codes to withdraw coins from users unlawfully), which typically necessitate significant time and security expertise to detect. To achieve that, the proposed framework incorporates a unique tool that transforms transaction features into visual representations, facilitating efficient analysis and classification of low-level machine codes. Furthermore, we propose a customized collaborative learning model to enable real-time detection of diverse attack types at distributed mining nodes. In order to create a comprehensive dataset, we deploy a pilot system based on a private Ethereum network and conduct multiple attack scenarios. To the best of our knowledge, our dataset is the most comprehensive and diverse collection of transactions and smart contracts synthesized in a laboratory for cyberattack detection in blockchain systems. Our framework achieves a detection accuracy of approximately 94% through extensive simulations and real-time experiments with a throughput of over 1,100 transactions per second. These compelling results validate the efficacy of our framework and showcase its adaptability in addressing real-world cyberattack scenarios.
Cybersecurity, cyberattack detection, deep learning, blockchain, smart contract.
## I Introduction
Blockchain technology has been developing rapidly, with many applications emerging in recent years. This technology was initially developed for a well-known digital currency application named Bitcoin. After that, many potential applications using this technology have been developed beyond cryptocurrency. The tremendous development of this technology stems from the fact that it provides a new approach to data sharing and storage without the need for any third party (e.g., a bank or government). Blockchain is a decentralized environment in which transactions and smart contracts can be recorded and executed in a secure and transparent manner. It is challenging to manipulate transactions once they are put into the blocks. Thus, blockchain technology protects data integrity, and its applications are being widely developed in various fields of industry such as smart manufacturing, supply chain management, the food industry, smart grid, healthcare, and the Internet of Things [1].
Smart contracts (SCs) are simply programs in blockchain systems (e.g., Ethereum and Solana). Smart contracts define and enforce a set of rules for users via code. They also facilitate user interactions by allowing them to send transactions to execute a defined function. By default, smart contracts and the interactions with them are irreversible [2]. However, in practical scenarios, attackers can inject malicious code into smart contracts and transactions to attack a blockchain system for specific purposes. For instance, SCs exhibit various vulnerabilities [3], which attackers can exploit for malicious purposes, including unauthorized coin withdrawals from other users' pockets and taking control of the system [4]. Specifically, in 2016, an SC named Decentralized Autonomous Organization (DAO) was the victim of a re-entrancy attack. At that time, this SC held $150 million in the Ethereum network, and this attack led to a hard fork of Ethereum that created Ethereum Classic (ETC) [3]. In addition, the 4Chan group created an SC named Proof of Weak Hands Coin (PoWHC) on the Ethereum system. However, this SC suffered an underflow attack that caused a loss of 866 ETH (i.e., Ethereum coins) [5]. Although most of the attacks on blockchain systems have happened in the finance sector, many blockchain-based applications are being developed in different sectors such as healthcare, supply chain, and the food industry [1].
There are a number of challenges in detecting and preventing attacks in transactions and SCs. The first challenge is the lack of a dataset synthesized in a laboratory for various kinds of attacks on transactions and SCs in a blockchain system. In recent research (e.g., [6] and [7]), the authors use datasets from the public blockchain network and label data using the history of attack records. When using this method to label attack data, it is assumed that the benign data does not include unrecorded attacks. Therefore, generating data which has "clean" samples of normal behavior and attacks in transactions and SCs is urgently needed. However, a blockchain system on the Mainnet has large and diverse types of data. Thus, a dataset synthesized in a laboratory needs to be diverse and similar to reality. The second challenge is to understand and analyze the content of Bytecode, the compiled form of an SC's source code. It is worth noting that the main functions of transactions and smart contracts are encoded into the Bytecode, which is represented by a series of hexadecimal numbers, to be implemented in a blockchain system [4]. Analyzing the content of Bytecode is crucial for a real-time attack detection system in a blockchain [6]. There are two approaches to analyzing the Bytecode, i.e., using the source code of SCs for comparison and analyzing the Bytecode directly. Unfortunately, only 1% of the source codes of SCs are open [6], and analyzing Bytecode without the corresponding source code of smart contracts and transactions can be unreliable and time-consuming [6]. The third challenge is that most of the current attack detection models are centralized. Thus, they need to gather all data (i.e., transactions together with their labels, e.g., attack or normal) into a centralized model to perform training and testing. However, blockchain systems are decentralized environments, so it is challenging to collect data from all mining nodes (MNs) to perform training at a centralized server. In addition, if we transfer data from all MNs to the centralized server for processing (e.g., training and testing), data privacy can be compromised.
Given the above, in this paper, we first set up experiments in our laboratory to deploy various kinds of attacks on transactions and SCs in a blockchain system (i.e., a private Ethereum system). To address the first challenge, we collect all the transactions at the MNs to build a dataset, called the **Attacks on Blockchain Transactions Dataset (ABTD)**. This is the first cyberattack dataset on transactions and SCs in a blockchain network synthesized in a laboratory. To enrich the dataset, we create a large number of individual accounts that randomly send transactions to the blockchain network for execution. This dataset can be used for both research and industry purposes to address cyberattacks in transactions and smart contracts. In addition, to deal with the second challenge of Bytecode analysis, we propose a novel ML-based framework that analyzes transactions and SCs without the need to understand the SC source codes. Our proposed framework automatically extracts transaction features in real-time and efficiently analyzes them to detect hidden attacks. To do this, we first build a highly effective tool, called the **Blockchain Code Extraction and Conversion Tool (BCEC)**, to convert important information of transactions and SCs into grey images. This tool calls the transaction using a transaction hash (i.e., a feature of the transaction) and then extracts key fields like Bytecode and value from the transactions. After that, it can convert the contents into images for further processing. Second, we develop an ML-based approach based on a CNN to learn and detect attacks hidden in transactions and SCs. To the best of our knowledge, **this is the first ML-based framework that analyzes the Bytecode directly and detects various types of attacks in transactions and SCs**. Such an ML-based framework, which uses important information from transactions for analysis, is more flexible and can detect new types of attacks more easily than other vector-based methods. To address the third challenge of centralized attack detection, we develop a novel collaborative cyberattack detection framework that can detect cyberattacks inside transactions and SCs in real-time with high accuracy. In our proposed framework, the CNN of each mining node can exchange learning knowledge (i.e., the trained models) with other nodes to create a global model. In this way, the learning model of each node can improve its detection accuracy without sending local data over the network. Our major contributions can be summarized as follows:
* We implement a blockchain system and perform experiments to collect the ABTD dataset. To the best of our knowledge, this is the first dataset with cyberattacks on transactions and SCs of a blockchain system that was synthesized in a laboratory.
* We develop BCEC, a tool that can collect transactions, extract their features, and convert them into images to build a dataset. This tool can operate in real-time to support the analysis of the attack detection framework.
* We develop a real-time attack detection framework that can be deployed at the mining nodes to detect attacks in transactions and SCs for a blockchain network. In our framework, the mining nodes can detect attacks in transactions and SCs in real-time at about 2,150 transactions per second.
* We propose a collaborative learning framework that can efficiently detect attacks in a blockchain network. In our framework, each mining node can exchange learning knowledge with others and then aggregate a new global model without any centralized server. In this way, our framework can achieve a high accuracy of about 94% without exposing the mining nodes' local datasets over the network.
* We perform both simulations and real-time testing to evaluate our proposed framework. Our proposed framework can achieve an accuracy of up to 94% in simulations and 91% in real-time experiments. In addition, our framework has the capacity to analyze various types of transaction features, expanding its detection capabilities to a diversity of attacks.
## II Related Work
There are several works trying to deal with attacks on transactions and SCs in blockchain networks. In [8], the authors propose to convert the source codes of smart contracts into vectors. They then use a bidirectional long short-term memory network to identify abnormal patterns in the vectors to detect re-entrancy attacks. The simulation results show that their proposed model can achieve an 88.26% F1-score and 88.47% accuracy in detecting re-entrancy attacks. In [9], the authors propose to use feature extraction to analyze the Bytecode of SCs. This approach is motivated by the fact that the characteristics of attacks are often expressed as sets of hexadecimal numbers embedded inside bytecodes. In this paper, the authors use
various types of machine learning models to detect six types of attacks with an F1-score of up to 97%. Even though the methods in [8, 9] can detect some types of attacks, they need the source code of SCs in high-level programming languages (e.g., Solidity). It is worth noting that when an SC is created, it creates corresponding transactions for execution and then sends them to MNs for the mining process. From the MN point of view, we can only observe transactions with the encoded content (e.g., Bytecode) in their features. In real-time attack detection, we need to analyze this content to find the hidden attacks in transactions and SCs.
Unlike the above deep learning approaches, in [10], the authors also study the Bytecode. They propose to use the attack vector method to directly analyze the Bytecode. This approach can effectively detect some specific attacks using a few pre-defined sets of Opcodes. Hence, this method is difficult to extend to various types of new attacks. In addition, even though the attack detection ability can reach up to 100% for some types of attacks (e.g., re-entrancy, delegatecall, overflow, etc.), the authors only test this method on a small scale of data (about 100 samples). In [11], the authors introduce a smart contract security testing approach with the aim of identifying the suspicious behaviors associated with vulnerabilities of smart contracts in blockchain networks. In [6], the authors propose to use graph embedding to analyze Bytecode. To do this, the authors convert the Bytecode of SCs into vectors and then compare the similarities between the vectors of SCs to detect the hidden attacks in SCs. The experimental results show that this method can achieve a precision of up to 91.95% in detecting attacks. In addition, in [7], the authors propose DefectChecker, a framework using symbolic execution to analyze Bytecode without the need for source codes. This framework can detect eight types of attacks in SCs and achieves an F1-score of 88%. Unlike all the above works and others in the literature, in this paper, we introduce an innovative ML-based framework to analyze Bytecode directly from transactions without the need for source code. To do this, we propose to convert the encoded information of transactions into images. Our proposed framework can analyze these images to detect various types of attacks in both transactions and SCs. In this way, our proposed framework is flexible and makes detecting new types of attacks easier. Moreover, all of the methods above focus on centralized learning. To implement those methods, all the data needs to be gathered on a centralized server for learning and analysis. However, blockchain is a decentralized environment and MNs are distributed worldwide. Thus, gathering all blockchain data to perform training and testing is difficult.
## III Blockchain System: Fundamentals and Proposed Collaborative Learning Framework
### _Blockchain_
Blockchain technology is a decentralized method of storing and managing data. In a blockchain system, each MN can be used to store and process data. When an MN receives transactions, it typically groups them into a block as part of the mining process. However, it is worth noting that the consensus mechanism is responsible for managing the rules of the mining process in a blockchain network. There are various types of consensus mechanisms used in blockchain networks [12]. For example, Ethereum 2.0 uses Proof of Stake (PoS) [13] as its consensus mechanism for the mining process. In PoS, a validator, who is responsible for proposing a new block, is randomly selected based on the amount of staked ETH in users' deposits; a minimal sketch of this stake-weighted selection is given at the end of this subsection. When the mining process is completed, the valid block is added to the main chain of blocks. After that, the block is irreversible, which ensures the integrity of transactions in the blockchain. Another characteristic of blockchain is transparency, which enables all MNs to access the history of transactions within a blockchain network. This
Fig. 1: The system model of our proposed framework. Upon receiving transactions, our framework performs preprocessing to extract important information. After that, our collaborative learning framework performs attack detection to identify normal network behaviour or a type of attack.
transparency ensures that all transaction records are visible to all MNs and promotes trust in the blockchain network. Overall, blockchain possesses numerous valuable characteristics, including decentralization, transparency, immutability, and data tamper resistance, making it applicable across various sectors to enhance human life.
### _Designed Blockchain System and Our Proposed Collaborative Learning Framework_
In our laboratory, we set up experiments to collect datasets for training and testing our framework. We first deploy a blockchain system based on a private Ethereum network in our laboratory (more details are shown later in Fig. 4). This network uses the latest version of the Ethereum network (i.e., Ethereum 2.0), which uses Proof-of-Stake (PoS) as the consensus mechanism for validating new blocks. Our system includes various MNs, which collect data from their local networks, and bootnodes, the management nodes that connect the MNs together. The MNs can receive transactions from various types of blockchain applications, such as smart cities, smart agriculture, IoT, and cryptocurrency. As described above, the transactions are first sent to the MNs; they are then put into a block, and the MNs perform the mining process to add them to the main chain. We perform various attacks using malicious transactions and SCs on this system. These attacks (i.e., DoS with block gas limit, overflows and underflows, flooding of transactions, re-entrancy, delegatecall, and function default visibility) have occurred in practice and caused serious damage to blockchain systems [14]. Through these experiments, we build a state-of-the-art dataset with both normal and attacked transactions and SCs to evaluate the performance of attack detection methods.
In this paper, we consider a blockchain system with \(T\) MNs, as described in Fig. 1. When an MN receives transactions from the blockchain network, it uses **BCEC** (a tool that we developed in our laboratory) to preprocess them by extracting information from important features and then converting it into grey images. After that, we propose a collaborative learning framework for analyzing the images to detect attacks hidden in transactions and SCs. In our framework, each MN uses its local dataset to train a deep neural network. After the training process, each MN shares its trained model with other nodes and also receives their trained models in return. Afterward, every MN aggregates all the received trained models together with its current trained model to generate a new global model for further training (we explain more details in the next section). In this way, each MN can exchange its learning knowledge with the neural networks of other MNs. This approach not only improves the overall learning knowledge of the neural networks of all MNs but also protects the privacy of local data over network transmission. By preventing the transmission of each MN's local data over the network, our approach also reduces network traffic and avoids network congestion. Thus, the neural networks of the MNs can improve the accuracy of detecting attacks in transactions and SCs in blockchain systems.
## IV Proposed Attack Detection Framework
In our proposed attack detection framework, the MNs learn and share their learning knowledge with each other to improve the accuracy of their attack detection. At each MN, we use a deep neural network as a detector to learn from the data of the MN's local system. After that, the MN exchanges its learning knowledge (i.e., its trained model) with other MNs. When an MN receives trained models from others, it integrates them with its current model and continues training on its local dataset. This process is repeated iteratively until a predefined number of iterations is reached. In summary, our proposed framework includes three processes. The first is preprocessing, in which our framework captures and extracts the important information of incoming transactions and converts it into grey images. The second is the development of a deep convolutional neural network to classify the grey images and detect attacks. The last is collaborative learning, in which each MN exchanges its trained model with others to improve the accuracy of attack detection.
### _Preprocessing Process_
Fig. 2 describes our proposed preprocessing process for transactions in a blockchain system. The main purposes of the preprocessing process are to extract the important features from incoming transactions and to convert them into images for further processing. It is worth noting that SCs are sets of agreements deployed through transactions: for implementation, a server has to send the transactions of the SCs to an MN for the mining process. From the MN point of view, we can only observe transaction hashes (i.e., the unique addresses of incoming transactions), which are represented as series of hexadecimal numbers. The preprocessing process handles these transaction hashes in three steps, as follows:
* **Step 1:** Capture transaction hashes from the MN and then recover the transactions from these hashes to obtain the full information of all features in the transactions, such as content, value, block hash, block number, chainID, etc.
* **Step 2:** Extract the content of two crucial features of transactions, named Bytecode and value. The Bytecode feature includes the main functions of transactions, and the value feature indicates the amount of ETH (Ethereum) involved in a transaction. Although we can effectively use the Bytecode feature to detect various types of attacks in transactions and SCs, it does not provide any information for some specific types of attacks, such as Flooding of Transactions [15], where the transaction content is null. Thus, it may be inefficient to rely only on the Bytecode feature for analysis. Therefore, we propose to enhance the attack detection framework by incorporating information from the value feature (we will justify its benefits in Section V). After that, we apply appropriate preprocessing methods to the corresponding features as follows:
  * **Bytecode feature**: Extract the content and then transform it into opcode using the EVM Bytecode Decompiler [16]. The opcode is a series of executable commands in assembly. Thus, we propose to convert all features of this assembly code into a grey image named Grey Image 1.
  * **Value feature**: We first scale its content to an appropriate range and then convert it to another grey image named Grey Image 2.
* **Step 3:** In this step, we combine Grey Image 1 and Grey Image 2 to create the Final Grey Image. This Final Grey Image includes all the essential information of a transaction and an SC in the blockchain system and can be used to train the deep convolutional neural network to uncover the attacks hidden inside.
In this framework, all these steps are encapsulated in the **BCEC** tool. This tool can perform the preprocessing process in real time to support collaborative attack detection for transactions and SCs in a blockchain system.
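To make the pipeline concrete, the sketch below shows one possible realization of Steps 2 and 3 in Python. It is only an illustration of the idea, not the actual BCEC implementation: the fixed image width, the value scaling, and the function names are our assumptions, and the opcode decompilation step is abstracted away by mapping raw bytecode bytes to pixels directly.

```python
import numpy as np

def bytecode_to_grey(bytecode_hex: str, width: int = 64) -> np.ndarray:
    """Map a transaction's bytecode to a 2-D grey image (Grey Image 1).

    Each pair of hex characters becomes one uint8 pixel; the flat pixel
    stream is zero-padded to a multiple of `width` and reshaped.
    (The opcode decompilation step is abstracted away here.)
    """
    raw = bytes.fromhex(bytecode_hex.removeprefix("0x")) or b"\x00"
    pixels = np.frombuffer(raw, dtype=np.uint8)
    pad = (-len(pixels)) % width
    pixels = np.pad(pixels, (0, pad))
    return pixels.reshape(-1, width)

def value_to_grey(value_wei: int, width: int = 64) -> np.ndarray:
    """Scale the transaction value into one uint8 row (Grey Image 2).

    Scaling wei to ETH and capping at 255 is an illustrative assumption.
    """
    scaled = min(value_wei / 1e18, 255.0)
    return np.full((1, width), int(scaled), dtype=np.uint8)

def final_grey_image(bytecode_hex: str, value_wei: int, width: int = 64) -> np.ndarray:
    """Step 3: stack both grey images into the Final Grey Image."""
    return np.vstack([bytecode_to_grey(bytecode_hex, width),
                      value_to_grey(value_wei, width)])

img = final_grey_image("0x6080604052", 2 * 10**18)
print(img.shape)  # (2, 64) for this tiny example bytecode
```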
### _Learning Process_
In our proposed framework, each MN implements a detector that can detect attacks with high accuracy based on the grey images produced by the preprocessing process. The core component of the detector is a Deep Convolutional Neural Network (CNN). The reason for using a CNN is that it can classify large amounts of labeled data, especially images, with high accuracy [17]. Additionally, a CNN model does not have to learn from its local data in isolation: it can exchange its trained model with other MNs to improve its learning knowledge as well as the accuracy of attack detection. In detail, the CNN architecture at an MN includes three types of layers, i.e., the convolution layer, the max pooling layer, and the fully connected layer [17]. Fig. 3 describes the layers of a CNN at an MN. These layers can be described as follows:
* **Convolution layer:** The neurons in this layer learn feature representations of the input images and are organized into feature maps. These feature maps are connected with those of the previous layer by weight parameters called filter banks [18]. In this layer, the input is convolved with the weight parameters in every iteration to create the feature maps.
* **Max pooling layer:** The main purpose of this layer is to reduce the resolution of the feature maps from the previous layer. To do this, it selects the largest values in areas of the feature map [17] and sends them to the next layer.
* **Fully connected layer:** This layer performs the classification function of the neural network. The feature maps from the previous layers are first flattened and then fed into a fully connected layer for classification. A softmax function is included at the end of this layer to produce the output as normal behavior or a type of attack.
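As an illustration of these three layer types, the following PyTorch sketch assembles a small classifier of the kind shown in Fig. 3. The channel counts, kernel sizes, and the 64×64 input resolution are illustrative assumptions rather than the exact configuration used in our experiments; only the number of output classes (normal plus six attack types) follows the paper.

```python
import torch
import torch.nn as nn

class AttackCNN(nn.Module):
    """Convolution -> max pooling -> fully connected, as in Fig. 3.

    Input: 1-channel grey images; output: L class scores (normal + attacks).
    """
    def __init__(self, num_classes: int = 7, img_size: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # feature maps (eq. 1)
            nn.ReLU(),
            nn.MaxPool2d(2),                             # resolution halved (eq. 2)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        flat = 32 * (img_size // 4) ** 2
        self.classifier = nn.Linear(flat, num_classes)   # fully connected layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)  # raw logits; softmax is applied in the loss (eq. 3)

model = AttackCNN()
logits = model(torch.randn(8, 1, 64, 64))
print(logits.shape)  # torch.Size([8, 7])
```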
We denote \(\mathbf{D}\) as the local dataset of an MN used to train a CNN. \(\mathbf{D}\) includes images \(\mathbf{I}\) and labels \(\mathbf{Y}\), so we can write \(\mathbf{D}=(\mathbf{I},\mathbf{Y})\). We consider \(n\in\{1,\ldots,N\}\) as the layer index of the neural network. The output of a convolution layer \(n\) can be calculated as follows [19]:
\[\mathbf{I}_{n+1}=\gamma_{n}\Big{(}\mathbf{I}_{n}*\mathbf{F}\Big{)}, \tag{1}\]
where \((*)\) is the convolutional operation, \(\gamma_{n}\) is the activation function and \(\mathbf{F}\) is the filter bank. After that, the output of the convolution layer is put into a max pooling layer. The output of a max pooling layer can be calculated as follows:
\[\mathbf{I}_{n+2}=\alpha\Big{(}\mathbf{I}_{n+1}\Big{)}, \tag{2}\]
where \(\alpha\) is the max pooling function that selects the maximum value in a pooling area. We denote \(\mathbf{I}_{e}\) as the last image after processing with multiple convolution and max pooling layers. This image is fed into a softmax function to classify and produce the output of the fully connected layer. We consider \(l\in\{1,\ldots,L\}\) as the classification group index; the probability that an output image \(\hat{Y}\) belongs to group \(l\) can be calculated as follows:
\[p(\hat{Y}=l|\mathbf{I}_{e},\mathbf{W}_{e},\mathbf{b}_{e})=\mathrm{softmax}(\mathbf{W}_{e}\mathbf{I}_{e}+\mathbf{b}_{e})_{l}=\frac{e^{\mathbf{W}_{e,l}\mathbf{I}_{e}+\mathbf{b}_{e,l}}}{\sum_{l^{\prime}}e^{\mathbf{W}_{e,l^{\prime}}\mathbf{I}_{e}+\mathbf{b}_{e,l^{\prime}}}}, \tag{3}\]
where \(\mathbf{W}_{e},\mathbf{b}_{e}\) are the weights and biases of the fully connected layer, respectively. Based on equation (3), we can calculate a vector of predictions \(\mathbf{\hat{Y}}\), which includes the output images \(\hat{Y}\) belonging to group \(l\) with probability \(p\), as follows:
\[\mathbf{\hat{Y}}=\underset{l}{\mathrm{argmax}}\,[p(\hat{Y}=l|\mathbf{I}_{e},\mathbf{W}_{e},\mathbf{b}_{e})]. \tag{4}\]
Fig. 2: The preprocessing process of our proposed framework. Our developed BCEC tool first collects the transactions at mining nodes. It then extracts the content of the transactions to find "Bytecode" and "Value". After that, the tool converts them into images for further processing.
In this stage, we compare the output predictions with the labels using a sparse categorical cross-entropy function to calculate the loss for backpropagation. The loss function can be calculated as follows:
\[\mathbf{J}(\mathbf{W})=-\sum_{l=1}^{L}Y_{l}\log\hat{Y}_{l}, \tag{5}\]
where \(\mathbf{W}\) denotes the parameters of the neural network model. Based on equation (5), we can calculate the gradient of this function as follows:
\[\nabla\mathbf{\theta}=\frac{\partial\mathbf{J}(\mathbf{W})}{\partial\mathbf{W}} =-\frac{\partial\Big{(}\sum_{l=1}^{L}Y_{l}\log\hat{Y}_{l}\Big{)}}{ \partial\mathbf{W}}, \tag{6}\]
After obtaining the gradient from equation (6), we use it in the Adam optimizer to update the parameters of the neural network. We consider \(m\) and \(v\) as the moment vectors at the next iteration \(i+1\) of the Adam optimizer; \(m_{i+1}\) and \(v_{i+1}\) can be calculated from the gradient and the Adam functions [20] as \(m_{i+1}=A_{1}(\nabla\mathbf{\theta})\) and \(v_{i+1}=A_{2}(\nabla\mathbf{\theta})\). The new global model at the next iteration \(i+1\) can then be calculated as follows:
\[\begin{split}\mathbf{\Gamma}_{i+1}&=\mathbf{\Gamma}_{i}- \beta_{i+1}\frac{m_{i+1}}{\sqrt{v_{i+1}}}\\ &=\mathbf{\Gamma}_{i}-\beta_{i+1}\frac{A_{1}(\nabla\mathbf{\theta})}{ \sqrt{A_{2}(\nabla\mathbf{\theta})}},\end{split} \tag{7}\]
where \(\mathbf{\Gamma}_{i+1}\) is the new optimal trained model of an MN and \(\beta_{i+1}/\sqrt{v_{i+1}}\) is the learning rate.
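A minimal sketch of how equations (5)–(7) map to one local training step is given below. Note that PyTorch's cross-entropy loss already includes the softmax of equation (3), and its Adam optimizer maintains the moment vectors \(m\) and \(v\) of equation (7) internally; the function name and signature are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train_step(model: nn.Module, optimizer: torch.optim.Adam,
               images: torch.Tensor, labels: torch.Tensor) -> float:
    """One local training step mapping to equations (5)-(7)."""
    criterion = nn.CrossEntropyLoss()        # sparse categorical cross-entropy, eq. (5)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()                          # backpropagated gradient, eq. (6)
    optimizer.step()                         # Adam moment-based update, eq. (7)
    return loss.item()
```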
### _Collaborative Learning Process_
In this paper, we propose a Collaborative Deep Convolutional Neural Network framework (Co-CNN) to detect different types of attacks in a blockchain network. In this framework, each MN has a CNN model to train and test on its dataset, and this CNN model can receive trained models from other MNs to improve the accuracy of attack detection. To do this, the CNN model of an MN first obtains its trained model (gradient) based on equation (6). It then sends the trained model to other MNs and receives trained models from them. We consider that at iteration \(i\), an MN receives \(T-1\) trained models from others. It can update its trained model using the following formula [21]:
\[\mathbf{\theta}_{i+1}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{\theta}_{t,i}. \tag{8}\]
After updating the trained model, each MN calculates a new global model using equation (7). This process repeats until the algorithm converges or reaches the predefined maximum number of iterations. After the training process, we obtain the optimal trained model at each MN to analyze and detect the attacks inside a series of grey images. This process is summarized in Algorithm 1.
```
1:while\(i\leq\) maximum number of iterations do
2:for\(\forall t\in T\)do
3: The CNN of the MN-\(t\) learns \(D_{t}\) to produce \(\hat{Y}\).
4: The MN-\(t\) creates gradient \(\theta_{t}\) and sends it to others
5: The MN-\(t\) receives \(T-1\) gradients from others.
6: MN calculates a new optimal trained model \(\mathbf{\Gamma}_{i+1}\).
7:endfor
8:\(i=i+1\).
9:endwhile
10: MN uses its optimal model \(\mathbf{\Gamma}_{optimal}\) to detect attacks based on input grey images.
```
**Algorithm 1** The learning process of Co-CNN model
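The model-exchange step of Algorithm 1 can be sketched as an element-wise average of the peers' parameters, one common realization of equation (8). The sketch below averages PyTorch state dicts; it is an assumption about how the aggregation could be implemented, not a verbatim reproduction of our implementation.

```python
import copy
import torch
import torch.nn as nn

def aggregate_models(local_model: nn.Module,
                     peer_models: list[nn.Module]) -> nn.Module:
    """Equation (8): element-wise average of the MN's model with T-1 peer models."""
    models = [local_model] + peer_models
    avg_state = copy.deepcopy(local_model.state_dict())
    for key in avg_state:
        # Stack the corresponding parameter tensors of all T models and average.
        stacked = torch.stack([m.state_dict()[key].float() for m in models])
        avg_state[key] = stacked.mean(dim=0)
    aggregated = copy.deepcopy(local_model)
    aggregated.load_state_dict(avg_state)
    return aggregated
```

The aggregated model then serves as the new starting point \(\mathbf{\Gamma}_{i+1}\) for the next round of local training, so no raw local data ever leaves an MN.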
## V Experiment and Performance Analysis
### _Experiment Setup_
In our experiments, we set up an Ethereum 2.0 system in our laboratory as shown in Fig. 4. This version of Ethereum uses a new consensus mechanism namely Proof-of-Stake (PoS)
Fig. 3: The architecture of a CNN model. The convolution layer learns the feature representation of the input. The Max pooling layer reduces the resolution of the feature map in the previous layer. The Fully connected layer performs classification functions to produce output.
instead of Proof-of-Work (PoW). There are five Ethereum nodes, two bootnodes, a trustful device, and an attack device in our experiments. All these devices are connected to a Cisco switch, which serves as the central hub for our local network. The configuration of these devices is as follows:
* **Ethereum nodes**: The five Ethereum nodes are created using _Geth v1.10.22_, an official open-source implementation of the Ethereum network [22], and _Prysm v3.2.0_, an official implementation of the PoS consensus mechanism in Ethereum 2.0 [23]. They share the same genesis configurations, e.g., chainID and a block gas limit of 30,000,000 gas. Nodes 1, 2, and 3 are workstation computers with an Intel Core i9-10900 processor @5.2 GHz and 64 GB of RAM; nodes 4 and 5 are personal computers with an Intel Core i7-4810MQ processor @3.8 GHz and 16 GB of RAM.
* **Bootnodes**: The _Geth_ bootnode and the _Prysm_ bootnode are also created by _Geth v1.10.22_ and _Prysm v3.2.0_, respectively. They are responsible for connecting all the Ethereum nodes together.
### _Dataset Collection_
According to a detailed analysis of transaction behavior on the public Ethereum network [24], addresses associated with fewer than \(10\) transactions account for \(88\)% of all addresses, and about \(50\)% of receiving addresses appear only once in the transaction history. This is because most people want to create transactions anonymously. Therefore, to create diversity and realism in our dataset, we create a large number of unique accounts (i.e., 10,000 accounts in our experiments) to send transactions to the Ethereum nodes. The trustful server, as shown in Fig. 4, randomly selects from these accounts to create transactions for the blockchain system.
#### V-B1 Normal State
For the normal state, we use the _OpenZeppelin Contracts_ [25] library as the source of secured SCs. The two types of transactions below are used to randomly generate samples for the normal state.
* Exchange ETH: On the public Ethereum network, most transactions only transfer ETH to another address without any bytecode. This kind of transaction accounts for \(75\)% of the total samples of the normal state in our experiment.
* Transactions related to SCs: There are two types of these transactions: transactions for deploying SCs and transactions that interact with functions in deployed SCs. We consider three essential categories of SCs in the Ethereum system, i.e., Tokens/Coins/NFT, Ethereum 2.0 deposit, and SCs for other purposes.
Although the number of original SCs is minuscule compared to the total number of transactions in the dataset, the contents of the transactions and deployed SCs are not duplicated, because we randomly select not only the senders and recipients but also the amount of ETH and the inputs of functions in any generated transaction.
#### V-B2 Attack States
SCs have a number of vulnerabilities, listed in the SWC registry [3], caused by programmers, consensus mechanisms, and compilers. Attackers can exploit these weaknesses of SCs to perform attacks and steal money in blockchain systems [14]. In this work, we regenerate several real-world attacks from the traces they left on Ethereum's ledger. We give a brief description of the six types of application-layer attacks below.
* _DoS with Block Gas Limit (DoS)_: Functions inside SCs can be temporarily disabled when their gas requirements exceed the block gas limit. A _DoS_ case occurred in 2015 when SC GovernMental's 1,100 ETH jackpot payout was stuck [3]. We deploy the GovernMental SC in our work and continuously join the jackpot to disable the payout function.
* _Overflows and Underflows (OaU)_: In the Solidity language, if a variable goes out of its range, it enters an overflow or underflow state. In this case, the variable wraps around to another value (e.g., \(0\) for overflow and \(2^{256}-1\) for underflow). Attackers can use this vulnerability to bypass SCs' conditions when withdrawing funds; for example, they can bypass the checks of their accounts' balances. Several real _OaU_ attacks have been detected, e.g., \(2^{256}\) BEC tokens, the CSTR token, \(\$800\)k USD of PoWH tokens [5], and so on [3]. We re-perform the above _OaU_ attacks on their original SCs in the dataset.
* _Flooding of Transactions (FoT)_: Attackers spam a large number of meaningless transactions to delay the consensus of blockchain networks. Such an attack left \(115\)k Bitcoin transactions unconfirmed in 2017 [15]. In our setup, _FoT_ attacks are generated by continuously sending a negligible amount of ETH from a random sender to another arbitrary recipient.
* _Re-entrancy (Re)_: When SCs do not update their states before sending funds, attackers can recursively call the withdraw function to drain the SCs' balances. The two types of _Re_ are single-function and cross-function. A single-function attack occurred in 2016 and led to a loss of 3.6 million ETH. Both types of _Re_ are performed in our dataset [3].
* _Delegatecall (DeC)_: _delegatecall()_ is the mechanism to
Fig. 4: Real experiment setup.
inherit functions, storage, and variables from other deployed SCs. If the inherited SCs are attacked, they indirectly affect the main SC. For implementation, we re-create the \(2^{nd}\) Parity MultiSig Wallet attack [3], in which attackers took control of and self-destructed the inherited SC.
* _Function Default Visibility (FDV)_: If programmers do not define the visibility of functions in SCs, it defaults to public; thus, anyone can interact with those functions. For implementation, we perform the \(1^{st}\) Parity MultiSig Wallet attack [3], in which attackers took control of the SC through an FDV flaw.
Table I shows the number of samples in each class of our proposed dataset. The proportions of samples across the classes are not balanced; e.g., the number of _Re_ samples is twice that of _FDV_, because _Re_ requires a series of attack transactions instead of only one attack transaction as in _FDV_.
### _Evaluation Methods_
The confusion matrix [26, 27] is widely used to evaluate the performance of machine learning models. We denote TP, TN, FP, and FN as "True Positive", "True Negative", "False Positive", and "False Negative", respectively. In this paper, we use the ubiquitous metrics of the confusion matrix (i.e., accuracy, precision, and recall) to evaluate the performance of the models. The accuracy of a model can be calculated as follows:
\[\text{Accuracy}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+ \text{FN}}. \tag{9}\]
In addition, we use the macro-average precision and macro-average recall to evaluate the performance of the models. With \(L\) as the number of classification groups (i.e., the total number of normal and attack states), the macro-average precision is calculated as follows:
\[\text{Precision}=\frac{1}{L}\sum_{l=1}^{L}\frac{\text{TP}_{l}}{\text{TP}_{l}+\text{FP}_{l}}. \tag{10}\]
The macro-average recall of the total system can be calculated as follows:
\[\text{Recall}=\frac{1}{L}\sum_{l=1}^{L}\frac{\text{TP}_{l}}{\text{TP}_{l}+\text{FN}_{l}}. \tag{11}\]
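For reference, the metrics of equations (9)–(11) correspond to scikit-learn's accuracy and macro-averaged precision and recall, which compute the per-class metric and take its unweighted mean over the \(L\) classes. The label values in this hedged sketch are purely illustrative.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Illustrative labels: 0 = Normal, 1..6 = the six attack types.
y_true = [0, 0, 1, 2, 2, 3]
y_pred = [0, 1, 1, 2, 2, 0]

print(accuracy_score(y_true, y_pred))                                     # eq. (9)
print(precision_score(y_true, y_pred, average="macro", zero_division=0))  # eq. (10)
print(recall_score(y_true, y_pred, average="macro", zero_division=0))     # eq. (11)
```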
### _Simulation and Experimental Results_
In this section, we present our simulation and real-time experimental results. In particular, we use the confusion matrix to evaluate our proposed model's performance (in terms of accuracy, precision, and recall) compared to the centralized model.
#### V-D1 Preprocessing Analysis
In this section, we compare our proposed model in two schemes. In the first scheme, we use our proposed preprocessing process as in Fig. 2. In the second scheme, we eliminate the value feature and use only the Bytecode preprocessing to analyze the transactions and SCs. Through the results of these schemes, we demonstrate the efficiency of our proposed preprocessing process in combining various features of transactions. We use a CNN to classify the different types of cyberattacks and normal behavior in transactions and SCs. Fig. 5 shows the accuracy results of the two schemes. In this figure, the model w/-V achieves accuracy, precision, and recall of 93.849%, 90.413%, and 89.742%, respectively. These results outperform the model w/o-V, which has accuracy, precision, and recall of 72.163%, 58.911%, and 58.638%, respectively. In particular, Fig. 6 provides detailed information for all types of attacks and normal behavior. In Fig. 6, we can see that the model w/o-V cannot detect DoS and FoT attacks because it classifies all samples of these attacks as normal behavior. In contrast, the model w/-V can detect these types of attacks with high accuracy, at about 97% for DoS detection and 100% for FoT detection. This is because the value feature is essential to support the learning models in detecting many important types of attacks.
#### V-D2 Accuracy Analysis
In this section, we perform experiments to compare the performance of the centralized model with our proposed model. The centralized model (Centralized-CNN) that we design learns from the knowledge of all MNs in its training and testing processes. Besides, we use different schemes of the collaborative learning model with 3 mining nodes (Co-CNN-3), 5 mining nodes (Co-CNN-5), and 10 mining nodes (Co-CNN-10). In each scheme, the collected dataset is divided equally among all mining nodes. To implement the experiments, we first perform cyberattacks on transactions and SCs in our deployed private Ethereum
\begin{table}
\begin{tabular}{|l|c|c|}
\hline
**Class** & **Number of samples** & **Portion (\%)** \\ \hline
Normal & 152,423 & 50.34 \\ \hline
DoS & 22,994 & 7.59 \\ \hline
OaU & 29,254 & 9.66 \\ \hline
FoT & 41,732 & 13.78 \\ \hline
Re & 22,682 & 7.49 \\ \hline
DeC & 22,455 & 7.41 \\ \hline
FDV & 11,209 & 3.73 \\ \hline
_Total_ & 302,749 & 100 \\ \hline
\end{tabular}
\end{table} TABLE I: Number of samples in the proposed ABTD dataset.
Fig. 5: The results of the preprocessing processes in different schemes.
platform to collect datasets from all MNs. In our proposed collaborative learning model, each MN uses its local dataset for both the training and testing processes. However, in the training process, the MNs can exchange their trained models with others to improve their learning knowledge as well as the accuracy of attack detection. In contrast, in the Centralized-CNN, all the local datasets of the MNs are gathered into one big dataset for its training and testing process.
The performance results for the two preprocessing scenarios (i.e., without the value feature (w/o-V) and with the value feature (w/-V)) for all schemes are provided in Table II and Table III. Table II presents the simulation performance of all schemes with the w/o-V preprocessing. In Table II, the accuracy, precision, and recall are nearly the same, at around 72-73%, 58-59%, and 58-59%, respectively. In contrast, in Table III, we can observe that the performance of all schemes with the w/-V preprocessing outperforms that of the w/o-V preprocessing, at about 93-94%, 90-91%, and 89-90% in accuracy, precision, and recall, respectively. In detail, we can first see in Table III that the performance results of our proposed models are nearly the same as those of the Centralized-CNN. However, at some MNs, such as MN-5 of Co-CNN-5, the accuracy, precision, and recall are higher than those of the Centralized-CNN by around 0.6%, 0.6%, and 0.7%, respectively. Specifically, Fig. 7 provides detailed information for each type of attack for the Centralized-CNN and MN-5 of Co-CNN-5. These figures show that the misdetection of MN-5 of Co-CNN-5 is considerably reduced compared to the Centralized-CNN. In detail, the misdetection of MN-5 from Normal to DoS is 0.88%, which is smaller than that of the Centralized-CNN at 1.14%. Similarly, the misdetection of MN-5 from OaU to Normal is 0.926% of the total OaU samples, which is smaller than that of the Centralized-CNN at 3.89%.
#### V-D3 Convergence Analysis
In this section, we compare the convergence of the different models, i.e., the Centralized-CNN and the collaborative models with 3, 5, and 10 mining nodes. Fig. 8 shows the accuracy and loss of these models over 1,000 iterations. In general, all of the models converge after about 800 iterations in terms of accuracy and loss. While the accuracies of the Centralized-CNN, Co-CNN-3, and Co-CNN-5 models quickly reach convergence after 400 iterations at about 93%, the accuracy of Co-CNN-10 needs about 800 iterations to converge and reach 93%. The same trend holds for the loss. This is because the number of samples at each MN in Co-CNN-10 is much smaller than in the other models, while the number of workers is higher. Thus, Co-CNN-10 needs more time to exchange learning knowledge among its models; it finally reaches convergence after about 800 iterations, with accuracy nearly the same as the other models.
#### V-D4 Real-time Attack Detection
In this section, we consider a practical scenario by evaluating the performance of the system under real-time cyberattacks. To do this, we first take the trained models from all schemes (note that the models are trained in the schemes of the accuracy analysis, i.e., Centralized-CNN, Co-CNN-3, and Co-CNN-5). Five blockchain nodes participate in these experiments, joining the private Ethereum network described in the above section. After the learning models are trained, they are deployed on the MNs. In the experiments, both preprocessing cases, with and without the value feature, are considered. In real-time scenarios, both normal and attack samples continuously arrive at a blockchain node. Thus, BCEC collects all the transaction traffic within 3 seconds into a package and then converts it into images. All processes, including preprocessing (i.e., converting samples into images) and processing (i.e., model prediction), must be completed within 3 seconds, before the next package arrives.
Table IV presents the performance of the Co-CNN-3, Co-CNN-5, and Centralized-CNN models for the two preprocessing cases. In general, we can observe in Table IV(a) that the accuracy, precision, and recall of these models with w/-V preprocessing are about 88-91%, 76-80%, and 77-79%, respectively. These results outperform those of the w/o-V preprocessing, with accuracy, precision, and recall at
Fig. 6: The detection results of the models w/ and w/o-V feature. (a) Centralized-CNN w/o-V. (b) Centralized-CNN w/-V.
about 65-66%, 44-51%, and 48-51%, respectively. In addition, when we compare the same w/-V preprocessing case between the simulation results in Table III and the real-time experimental results in Table IV(a), we can observe that the accuracy, precision, and recall of the real-time experiments are slightly lower than those of the simulations, by about 3%, 10%, and 11%, respectively. This is because, in simulation, we implement multiple types of attacks on the blockchain system and then collect data to have enough samples to train the model. However, in real-time scenarios, some attack types, such as Re, DeC, and FDV, rarely appear during the experiment, which makes it more difficult for the learning models to detect them in real time.
Specifically, we can observe in Table IV(a) that MN-4 of Co-CNN-5 achieves higher accuracy, precision, and recall than MN-4 of the Centralized-CNN, by about 1.3%, 4%, and 2%, respectively. Therefore, even in real-time detection scenarios, our proposed model still demonstrates better performance in detecting attacks than the centralized model.
#### V-D5 Real-time Monitoring and Detection
Fig. 9 shows the real-time cyberattack monitoring output of our proposed Co-CNN-5 model at Ethereum node 1. In these figures, the normal state and each type of attack are displayed as separate lines. Fig. 9(a) displays the normal state of the system, with a high value of the predicted normal state over time. We can observe that in the normal state, the predicted states of all types of attacks are nearly 0. When a type of attack happens, the predicted state of that attack increases, e.g., the FoT attack state in Fig. 9(d). As described in the previous section, in real-time scenarios the Re, DeC, and FDV attack states have small numbers of attack samples. Therefore, their predicted states in Fig. 9(b), Fig. 9(f), and Fig. 9(g) do not have high
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{} & \multirow{2}{*}{**Centralized-CNN**} & \multicolumn{4}{c|}{**Co-CNN-3**} & \multicolumn{4}{c|}{**Co-CNN-5**} \\ \cline{3-12} & & **MN-1** & **MN-2** & **MN-3** & **MN-1** & **MN-2** & **MN-3** & **MN-4** & **MN-5** \\ \hline
**Accuracy** & 72.163 & 71.686 & 71.761 & 72.080 & 72.735 & 72.519 & 72.211 & 72.760 & 72.627 \\ \hline
**Precision** & 58.911 & 58.3325 & 58.298 & 58.646 & 59.676 & 59.300 & 58.818 & 59.699 & 59.032 \\ \hline
**Recall** & 58.638 & 57.539 & 57.951 & 58.608 & 58.955 & 58.807 & 58.415 & 59.444 & 58.969 \\ \hline \hline \multicolumn{12}{|c|}{**Co-CNN-10**} \\ \cline{2-12} & **MN-1** & **MN-2** & **MN-3** & **MN-4** & **MN-5** & **MN-6** & **MN-7** & **MN-8** & **MN-9** & **MN-10** \\ \hline
**Accuracy** & 72.768 & 73.333 & 73.184 & 73.117 & 73.150 & 72.984 & 73.017 & 73.267 & 73.516 & 73.117 \\ \hline
**Precision** & 58.169 & 59.462 & 59.107 & 58.957 & 58.779 & 58.621 & 58.288 & 59.503 & 59.013 & 59.125 \\ \hline
**Recall** & 58.131 & 58.531 & 58.462 & 58.727 & 58.775 & 58.285 & 58.528 & 59.066 & 59.192 & 58.650 \\ \hline \end{tabular}
\end{table} TABLE II: Simulation results w/o-V with Centralized-CNN, Co-CNN-3, Co-CNN-5, and Co-CNN-10 models.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{} & \multirow{2}{*}{**Centralized-CNN**} & \multicolumn{4}{c|}{**Co-CNN-3**} & \multicolumn{4}{c|}{**Co-CNN-5**} \\ \cline{3-12} & & **MN-1** & **MN-2** & **MN-3** & **MN-1** & **MN-2** & **MN-3** & **MN-4** & **MN-5** \\ \hline
**Accuracy** & 93.849 & 93.88 & 94.384 & 94.115 & 94.347 & 94.057 & 94.148 & 94.206 & 94.439 \\ \hline
**Precision** & 90.413 & 90.216 & 91.162 & 90.860 & 90.794 & 90.540 & 90.637 & 90.903 & 91.029 \\ \hline
**Recall** & 89.742 & 89.665 & 90.688 & 89.970 & 90.329 & 89.932 & 90.025 & 90.514 & 90.536 \\ \hline \hline \multicolumn{12}{|c|}{**Co-CNN-10**} \\ \cline{2-12} & & **MN-1** & **MN-2** & **MN-3** & **MN-4** & **MN-5** & **MN-6** & **MN-7** & **MN-8** & **MN-9** & **MN-10** \\ \hline
**Accuracy** & 93.633 & 94.248 & 93.849 & 93.566 & 93.899 & 93.832 & 93.516 & 93.732 & 93.699 & 93.849 \\ \hline
**Precision** & 89.326 & 90.611 & 90.095 & 89.969 & 90.106 & 90.048 & 89.252 & 90.684 & 89.778 & 90.464 \\ \hline
**Recall** & 89.206 & 89.716 & 89.313 & 89.114 & 89.745 & 89.289 & 89.213 & 89.464 & 89.298 & 89.477 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Simulation results w/-V with Centralized-CNN, Co-CNN-3, Co-CNN-5, and Co-CNN-10 models.
Fig. 7: The detection results of Centralized-CNN and Co-CNN-5 models. (a) Centralized-CNN w/-V. (b) Co-CNN-5 w/-V.
values. However, our proposed model can still detect all of the attacks in real time with a high accuracy of 91%.
#### V-D6 Processing Time
Fig. 10 describes the processing time of two MNs with the same Co-CNN-5 model. We can observe in Fig. 10 that as the number of transactions increases, the processing time of both MNs increases linearly. However, there is a difference in capacity between the two MNs: while MN-5 can process about 1,100 transactions per second, MN-1 can process around 2,150 transactions per second. This is because of the different computer configurations of the two MNs, described in Section V-A. However, on the mainnet of the Ethereum system, the maximum recorded throughput is 93.01 transactions per second [28]. Therefore, the capacity of our proposed system is well suited to detecting attacks on the Ethereum mainnet.
## VI Conclusion
In this work, we developed a collaborative learning model that can efficiently detect malicious attacks in transactions and smart contracts in a blockchain network. To do this, we implemented a private Ethereum network in our laboratory and performed attacks on the transactions and SCs of that network for analysis. Next, we analyzed the transaction data and extracted the important features (i.e., Bytecode and value) to build the dataset. After that, we converted the dataset into grey images to train and evaluate the performance of our proposed model. In our proposed model, a learning node can detect the attacks in transactions and SCs of a blockchain network and can receive and aggregate learning knowledge (i.e., trained models) from other learning nodes to improve the accuracy of detection. In this way, our proposed model does not expose the local data of the learning nodes over the network, thereby protecting the privacy of their local data. Both the simulation results and the real-time experimental results showed the efficiency of our proposed model in detecting attacks. In the future, we will continue to develop other methods for detecting attacks in various kinds of networks.
|
2301.09926 | A two stages Deep Learning Architecture for Model Reduction of
Parametric Time-Dependent Problems | Parametric time-dependent systems are of a crucial importance in modeling
real phenomena, often characterized by non-linear behaviors too. Those
solutions are typically difficult to generalize in a sufficiently wide
parameter space while counting on limited computational resources available. As
such, we present a general two-stages deep learning framework able to perform
that generalization with low computational effort in time. It consists in a
separated training of two pipe-lined predictive models. At first, a certain
number of independent neural networks are trained with data-sets taken from
different subsets of the parameter space. Successively, a second predictive
model is specialized to properly combine the first-stage guesses and compute
the right predictions. Promising results are obtained applying the framework to
incompressible Navier-Stokes equations in a cavity (Rayleigh-Bernard cavity),
obtaining a 97% reduction in the computational time comparing with its
numerical resolution for a new value of the Grashof number. | Isabella Carla Gonnella, Martin W. Hess, Giovanni Stabile, Gianluigi Rozza | 2023-01-24T11:24:18Z | http://arxiv.org/abs/2301.09926v2 | # A two stages Deep Learning Architecture for Model Reduction of Parametric Time-Dependent Problems
###### Abstract
Parametric time-dependent systems are of crucial importance in modeling real phenomena, which are often characterized by non-linear behaviours as well. Their solutions are typically difficult to generalize over a sufficiently wide parameter space with the limited computational resources available. As such, we present a general two-stage deep learning framework able to perform that generalization with low computational effort in time. It consists of the separate training of two pipelined predictive models. At first, a certain number of independent neural networks are trained with data-sets taken from different subsets of the parameter space. Subsequently, a second predictive model is specialized to properly combine the first-stage guesses and compute the correct predictions. Promising results are obtained by applying the framework to the incompressible Navier-Stokes equations in a cavity (Rayleigh-Benard cavity), obtaining a 97% reduction in computational time compared with its numerical resolution for a new value of the Grashof number.
_Keywords--_ reduced order modeling, deep learning, long-short term memory networks, convolutional layers, time forecasting, time-dependent parametric PDEs
## 1 Introduction
Time-dependent systems, especially in the parametrized setting, describe a huge number of problems and are therefore a pervasive topic of broad scientific interest and industrial value. Indeed, _parametric dynamical systems_ modeling and control play a fundamental role in many research fields, as in the case of fluid dynamics, chemical reactions, biological problems, and more.
In the majority of scenarios, the most suitable way to study such dynamics is through numerical simulation. Especially for problems modelled by differential and partial differential equations, numerical approximation represents the standard way to compute the system's response.
However, the dimensionality of the system's numerical discretization often becomes a significant problem, as performing multiple simulations in large-scale settings typically demands computational resources that are difficult to provide.
This gives rise to the need for alternatives to classical numerical methods (Finite Element Method, Finite Volume Method, Finite Difference Method) in order to approximate the parametric response of a given system at a reduced computational cost. Reduced order models (ROMs) have proven to be powerful tools in this regard, and nowadays a large variety of applications can be found in different fields such as heat transfer, fluid dynamics, shape optimization, and uncertainty quantification. The main idea of ROMs is to approximate a high-dimensional model, usually referred to as the full order model (FOM), with a low-dimensional one that still preserves the solution's key features. There exist mainly two different techniques to obtain a ROM: intrusive and non-intrusive approaches. The common feature of both approaches is the computational splitting into two distinct phases: an offline (or training) phase, where the parametric response of the system is explored for selected values of the input parameters, and an online (or
testing) phase that allows retrieving the system's response for any new value of the input parameters [41]. In both cases, the results acquired during the initial exploration of the solution manifold are used to perform a compression of the discrete solution manifold. This can be performed using either linear (proper orthogonal decomposition, reduced basis methods) or nonlinear approaches (autoencoders, convolutional autoencoders). The two differ in the methodology used to approximate the evolution of the latent coordinates (reduced basis coefficients) in the latent space (reduced basis space).
Intrusive methods, which have their roots in the classical field of scientific computing, use a Galerkin (Petrov-Galerkin) projection of the system of equations describing the dynamics onto a linear subspace (nonlinear manifold) in order to generate a low-dimensional model that needs to be solved for any new value of the input parameters. These techniques, exploiting the underlying physical principles, generally exhibit better generalization properties and perform well with less training data ([24, 14, 39]). On the other hand, they show severe limitations when addressing nonlinear time-dependent parametric PDEs, due in general to the difficulty of capturing complex physical patterns and generalizing them to a large set of _online parameters_ [38, 41].
Non-intrusive approaches are instead solely based on input-output data and do not require explicit knowledge of the underlying equations. The evolution of the latent coordinates is retrieved by means of different regression or interpolation techniques. Being _data-driven_, they have the significant advantage of being _non-intrusive_, _i.e._ allowing the high-fidelity model to be run in "black-box" mode, needing only a set of input parameters to generate the corresponding system outputs. In this article we focus only on the second type of methods (i.e., non-intrusive methods) and particularly on approaches suitable for parameter- and time-dependent problems. Many research works that employ non-intrusive methods are in fact dedicated to stationary parameter-dependent problems or to transient problems [16], but far less material is available for problems that are both transient and parameter-dependent.
Regarding parameter-dependent problems, some developments exist that deal with properly enhanced reduced basis methods [22]. In particular, the use of data-driven techniques has proven to be a key tool in the formulation of reduced basis methods that are both stable and highly efficient, even for general nonlinear problems. This is achieved by introducing non-intrusive reduced order models in which a data-driven map is learned between the parameter space and the coefficients of the reduced basis used to reconstruct the solutions, as in the case of [48, 21, 18]. However, these approaches do not deal with time-dependent problems.
An example of an approach in that direction is provided by [20], where a combination of Proper Orthogonal Decomposition (POD), Dynamic Mode Decomposition (DMD), and manifold interpolation is developed to approximate a given time trajectory. Another case in which POD is utilized and time dependence is also considered is found in [34], where time is treated as an extra parameter.
As for machine learning techniques, many of them have proven particularly useful in the approximation of nonlinear dynamics. This is the case for models such as SVM [44] and ARIMA [35], as well as probabilistic ones involving hidden Markov models [54] or fuzzy logic [7]. Finally, Artificial Neural Networks (ANNs) have recently been massively considered to provide fast and reliable approximations of PDE solutions, thanks to the universal approximation theorem [27], which has led to different proposals on the topic [31][1][8]. Specifically, ANNs provided with internal recurrence mechanisms have gradually become the standard for time series prediction when large amounts of data are available for training [15][43][46].
However, ANNs express interesting potential not only for sequential learning with memory-aware networks, but also through tools for nonlinear dimensionality reduction such as Convolutional Auto-Encoders (CAEs), which are employed in many recent works [32, 11, 33]. These works deal with time-dependent systems, thus including some kind of time-prediction methodology after the first nonlinear reduction with CAEs. For instance, in the first reference the time stepping is done intrusively using multistep methods on the reduced model derived from a Galerkin projection procedure, while in the last ones LSTMs and FFNNs are used for time stepping of the reduced state. Multi-level CAEs are moreover used in [50], employed to reduce the spatial and temporal dimensions of the problem. In addition, POD and CAE are sometimes used one after the other in the same dimensionality-reduction process, as in the case of [6].
It is to be noted that much of the success of Artificial Neural Network (ANN) based ROMs has been further boosted by the availability of open source software frameworks such as PyTorch [36] and TensorFlow [2]. Indeed, they have made implementation and training possible without expert knowledge, also exploiting the availability of computation-accelerating hardware such as GPUs, which has made the training of very large models feasible.
In this work, a novel two-stage memory-aware ANN model order reduction approach is developed. To the best of our knowledge, it implements a different strategy with respect to what has already been proposed in the field of trainable architectures able to generalize parametric time-dependent dynamics with scarce sets of available solutions.
A windowed approach involving LSTMs (see Appendix 7) is chosen for the time-stepping, meaning
that, given a time series forecasting problem, we aim to find:
\[\mathcal{J}:(f(x_{t-p};\theta),\ldots,f(x_{t};\theta))\longrightarrow(f(x_{t+1}; \theta),\ldots,f(x_{t+m};\theta))\]
where the time series \(f(\cdot)\) depends on the parameters \(\theta\), with \((f(x_{t-p};\theta),\ldots,f(x_{t};\theta))\) being the input time-window. Windowed regressive networks [15] have already been exploited in multiple applications such as _neural ODEs_ [5], where deterministic numerical solvers are led to consider also statistically learned residuals to perform the PDE integration, but also in [49], where a time series approach using LSTMs proved effective in forecasting the sea surface temperature in marine systems. It is in general to be noted that an architecture aimed at finding an effective correlation between past sequences and future ones exhibits close similarities with the behavior of numerical solvers: both of them build predictions for future times based on a certain number of past ones.
More in depth, differently from what has been done until now, our neural network architecture implements a _partitioning-averaging_ approach to the parametric problem. It requires different models to be trained for different areas of the parameter space. Their predictions are subsequently combined in a properly weighted way depending on the new parameter for which the prediction is requested. This strategy has, in principle, the advantage of being able to learn an internal non-linear representation of the qualitative changes of a system with respect to the action of a certain set of parameters. Indeed, it splits into two parts the reproduction of multiple, potentially different local dynamics and their generalization to any new parameter belonging to the considered space.
LSTM-derived neural networks are used in a two-stage framework (described in Section 2) for their ability to learn both short- and long-time dependencies in the data, which makes them particularly important among the different recurrent cells (see Appendix 7). Moreover, with this architecture an arbitrarily long prediction in time can be obtained thanks to an auto-sustained iterative mechanism that updates the input of the framework with the previous predictions at each cycle.
The generalization capabilities of this framework have first been tested on ODE systems, whose results are available in Section 3, and then on a widely used benchmark considering the incompressible Navier-Stokes equations in a rectangular cavity (see Section 4): the Rayleigh-Benard cavity problem. In order to deal with such a high-dimensional discretized system, the example reported in [12] with the POD-DL-ROM has been followed, and a POD has been performed beforehand to reduce the dimensions, speeding up the training phase.
This last test case considers the Grashof number \(Gr\) as the model parameter, a non-dimensional quantity that describes the ratio of buoyancy forces to viscous forces. It is to be noted that, although this problem considers only one physical parameter, it exhibits a wide range of patterns. Indeed, while at low Grashof numbers the system has unique steady-state solutions, as \(Gr\) increases the system undergoes several Hopf bifurcations and multiple solutions arise for the same parameter value. Such solutions past the Hopf bifurcations are time-dependent, being time-periodic at medium Grashof numbers and exhibiting turbulent behaviour at very high \(Gr\) values. A particular difficulty in applying a ROM approach to the Rayleigh-Benard cavity over a large range of Grashof numbers is related to the fact that the frequencies of the time-periodic solutions can vary significantly in such a range, making an exact approximation of the solution for a general _online parameter_ hard.
Our tests apply the new model reduction approach to a range of \(50\cdot 10^{3}\) medium Grashof numbers, taking as parameter space the interval \(Gr\in[100\cdot 10^{3},150\cdot 10^{3}]\).
## 2 Methodology
### Two-stages architecture
The proposed _data-driven_ approach is realized through a two-stage architecture, which can be interpreted as an implementation of a _partitioning-averaging_ method, trained to potentially reveal the system's non-linear dependencies on the considered parameters. The partitioning-averaging method generates accurate estimations valid over local partitions in the first stage, while the second stage globally averages the local estimates in an appropriate sense. This approach implements a regression method based on _k-means_ clustering [3], a standard method to cluster data vectors.
Here, the _k-means_ clustering is performed in the sampled parameter space \(\Theta_{\mathrm{training}}=\{\theta_{s}^{i}\}_{i=1}^{n}\), which is assumed to be a sufficiently fine sample of the \(p\)-dimensional parameter space \(\Theta\subset\mathbb{R}^{p}\). The k-means clustering results in \(k\) different data sets (or clusters), which form a partition of \(\Theta_{\mathrm{training}}\). The _centroid_ of each cluster is denoted \(\theta_{c}^{i}\), \(i=1,\ldots,k\). Each parameter vector \(\theta_{s}^{i}\in\Theta_{\mathrm{training}}\) defines a trajectory
\[\{\mathbf{x}_{1}^{i},\mathbf{x}_{2}^{i},\ldots,\mathbf{x}_{T}^{i}\}\quad \text{ where }\quad\mathbf{x}_{j}^{i}\in\mathbb{R}^{z}\quad\quad\forall j\in\{1, \ldots,T\} \tag{1}\]
through the solution of the respective ODE or PDE, where \(z\) represents the number of evolving variables and \(T\) the number of time steps. The solution trajectories corresponding to the training parameter values of the same cluster are concatenated, forming the final data-sets \(\{D_{i}\}_{i=1}^{k}\).
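As a minimal illustration of this partitioning step, the following sketch clusters a sampled one-dimensional parameter space with scikit-learn's k-means and groups the trajectory indices by cluster; the sample size, parameter range, and value of \(k\) are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sample the parameter space (here p = 1, n = 40; values are illustrative).
rng = np.random.default_rng(0)
theta_training = rng.uniform(1.0, 10.0, size=(40, 1))

# Partition the sampled parameters into k clusters.
k = 4
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(theta_training)
centroids = kmeans.cluster_centers_  # the centroids theta_c^i

# D_i: indices of the training parameters (hence of their trajectories)
# belonging to cluster i; the trajectories would then be concatenated.
datasets = [np.where(kmeans.labels_ == i)[0] for i in range(k)]
```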
Subsequently, the \(k\) Neural Networks (NNs) of the first stage are trained respectively with those \(k\) generated data-sets \(\{D_{i}\}_{i=1}^{k}\), resulting in a set of \(k\)_localized_ models (see Figure 1). More precisely, we can approximate the trained models with a set of \(k\) functions:
\[\{\mathcal{F}_{i}(\mathbf{x}_{t-w+1},\mathbf{x}_{t-w+2},\ldots,\mathbf{x}_{t };\theta)\}_{i=1}^{k}\quad\quad\text{s.t.}\quad\quad\mathcal{F}_{i}:(\mathbb{R }^{w}\times\ \mathbb{R}^{z};\mathbb{R}^{p})\rightarrow\mathbb{R}^{m}\times\mathbb{R}^{z},\]
where \(w\) is the size of the time-window of the past system evolution, while \(m\) represents the number of next time-step predictions of the system dynamics that the model has been trained to perform.
Coming to the second stage of the architecture, here a Neural Network receives as input all the outputs \(\{f_{i}\}_{i=1}^{k}\) of the \(k\) first-stage models and aims to implement an "averaging function" over these first "local" predictions, based on the difference between the respective _centroids_ \(\{\theta_{c}^{i}\}_{i=1}^{k}\) and the current parameter value \(\theta\):
\[\mathcal{G}(f_{1},f_{2},\ldots,f_{k},\theta_{c}^{1},\theta_{c}^{2},\ldots, \theta_{c}^{k};\theta).\]
Therefore, the trained architecture can provide an approximation of the time evolution corresponding to a general _online parameter_, obtained by simply giving as input the first exact time-window \(W\) and the new parameter value \(\theta_{\mathbf{new}}\). Indeed, the evolution is achieved through an iterative recursion, in which the outputs of the architecture are subsequently reused as inputs for the next cycles (Figure 2).
Figure 1: Example of the training phases of the two stages assuming a 2-dimensional parameter space \(\Theta\) with \(n=k=4\).
It is to be noted that, in this way, the advancement in time of the system's variables for a general _online parameter_ can potentially be obtained for any desired number of time steps, independently of the extension of the training solutions.
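A sketch of this auto-sustained recursion is given below, assuming the trained two-stage pipeline is wrapped in a single callable (`two_stage_model` is a placeholder name, not part of the actual implementation):

```python
import numpy as np

def rollout(two_stage_model, window: np.ndarray, theta_new: np.ndarray,
            n_steps: int, m: int = 1) -> np.ndarray:
    """Auto-sustained recursion of Figure 2.

    `window` holds the last w exact states, shape (w, z); at each cycle the
    model predicts the next m states, which are appended to the trajectory
    and fed back as input for the following cycle.
    """
    states = list(window)
    for _ in range(0, n_steps, m):
        next_states = two_stage_model(np.asarray(states[-len(window):]), theta_new)
        states.extend(np.atleast_2d(next_states))
    return np.asarray(states[len(window):len(window) + n_steps])
```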
Summing up, this framework could be seen as a variant of the Random Forest method [4], as it builds \(k\) different models with \(k\) different training data-sets, whose guesses are "averaged" to obtain the final prediction.
On the other hand, the choice of the data-sets is not "random", but derived from locality-based considerations implemented through the _k-means_ algorithm. Hence, a more closely related, already proposed methodology, of which our two-stage framework could be considered a generalization, is the _weighted k-Nearest Neighbour_ technique [9]. Indeed, _k-NN_ considers the samples \(\{\theta_{s}^{i}\}_{i=1}^{n}\) in \(\Theta\), and each time a new parameter value \(\theta_{\mathbf{new}}\) is introduced, its corresponding prediction is computed as the weighted sum of the values associated with its \(k\) nearest neighbours in the parameter space.
The differences between what we propose and the _k-NN_ method lie in two principal points. Firstly, in our case the values given to the "weighted averaging function" are not the values associated with the \(k\) centroids \(\{\theta_{c}^{i}\}_{i=1}^{k}\), but are computed for the new \(\theta_{\mathbf{new}}\) by the \(k\) different models. Secondly, to average those values a non-linear function is found by the second-stage NN, which is much more complex than a simple weighted average.
Furthermore, it is to be noted that the presented approach is markedly different from other partition-based methods such as the one proposed in [19]. Indeed, our procedure employs the k-means in the parameter space, not in the space of the discrete PDE solutions.
### C-Lstm
As stated above, both stages of the architecture are realized through the exploitation of a particular type of _integrated Long Short Term Memory network_: the C-LSTM architecture [52].
It consists of a succession of a convolutional layer and an LSTM layer, two mainstream architectures for such modeling tasks. The first is used to extract a sequence of higher-level representations of the input, which are successively fed into a recurrent neural network (LSTM) to obtain the final outputs. Indeed, LSTM layers allow the network to learn the correct predictions from the evolution of the extracted features, maintaining a memory of their long-time and short-time dependencies (Appendix 7).
The combination of a convolutional neural network (CNN) and an LSTM results in a powerful tool for our purposes: the CNN learns local context from temporal or spatial data but cannot capture sequential correlations, while the LSTM is specialized for sequential modelling but cannot extract features in parallel.
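A minimal Keras sketch of such a block is shown below; the layer sizes and hyperparameters are illustrative assumptions, not the configuration actually used in our experiments:

```python
import tensorflow as tf

def build_clstm(w, n_vars, m, filters=32, kernel=3, units=64):
    """Conv1D extracts higher-level features from the input window; an LSTM
    models their temporal evolution; a Dense head emits the next m steps."""
    inp = tf.keras.Input(shape=(w, n_vars))
    x = tf.keras.layers.Conv1D(filters, kernel, padding="same", activation="relu")(inp)
    x = tf.keras.layers.LSTM(units)(x)
    out = tf.keras.layers.Dense(m * n_vars)(x)
    out = tf.keras.layers.Reshape((m, n_vars))(out)
    return tf.keras.Model(inp, out)

model = build_clstm(w=200, n_vars=2, m=1)
model.compile(optimizer="adam", loss="mse")
```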
Examples of C-LSTM usage can already be found in tasks such as text classification [53], image captioning [51], and speech recognition [42].
In our case, the first and the second stage must be distinguished, owing to the different inputs passed to the two C-LSTM networks (Figure 3).
While in the first stage the C-LSTM is trained to extract temporal dependencies from the input time-window \(\{\mathbf{x}_{t-w+1},\mathbf{x}_{t-w+2},\ldots,\mathbf{x}_{t}\}\), in the second stage it has to learn the dependencies of the \(k\) first-stage predictions according to the relation between the respective training parameters \(\{\theta_{c}^{i}\}_{i=1}^{k}\) and the current parameter of interest \(\theta_{\mathbf{new}}\).
Hence, the combined effect of the two stages results in the pipeline shown in Fig. 3, which complements Fig. 2 by providing a detailed view of a single iteration. First, the time dependencies of the considered variables' trajectories are analysed: a time-window \(W\) of the past \(w\) temporal steps is given as input, and the network is asked to predict the next \(m\) steps of the evolution according to the previous values in \(W\) and to the parameter values \(\{\theta_{new_{i}}\}_{i=1}^{p}\). Second,
Figure 2: Given a time instant \(T\), the evolution proceeds by predicting the next \(m\) values of the variables of interest per cycle.
the \(k\) outputs of the \(k\) first-stage networks are collected and given as input to the second stage. Here, their dependencies are taken into account through the extraction of features by the CNN layer (based on the local distance between the training parameters and the current one), which are then processed by an LSTM layer.
## 3 Application to ODEs
The proposed architecture was first tested on simple ODE systems in order to assess its generalization capabilities and to investigate the role of some of its parameters, _i.e._, the number of time-steps predicted per iteration and the time-window length.
In particular, we report results for two examples: the _Duffing Oscillator_, parameterized in its non-linear component and taken with zero driving force (2), and the _Predator-Prey_ system in the case of limited resources, with the parameterization applied to the predators' growth component (3):
\[\left\{\begin{aligned} \frac{dr}{dt}&=v\\ \frac{dv}{dt}&=r-a\cdot r^{3}\end{aligned}\right. \tag{2}\]
Figure 3: General inputs and outputs of the first-stage and second-stage C-LSTM networks, considering general parameter values \(\{\theta_{i}\}_{i=1}^{p}\).
The above predictions were computed with \(w=200\) and \(m=1\). The training phase was performed in the parameter ranges \(a\in[1,10]\) for the Duffing Oscillator and \(a\in[1,5]\) for the Predator-Prey system, with \(k=10\) (training set \(\{\theta_{c}^{i}\}_{i=1}^{10}=\{1,2,3,4,5,6,7,8,9,10\}\)) and \(k=5\) (training set \(\{\theta_{c}^{i}\}_{i=1}^{5}=\{1,2,3,4,5\}\)) respectively.
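For reference, training trajectories for Eq. (2) can be generated with a standard integrator; the initial condition, time grid, and tolerances below are illustrative choices, not the ones used for the reported results:

```python
import numpy as np
from scipy.integrate import solve_ivp

def duffing(t, y, a):
    r, v = y
    return [v, r - a * r**3]  # Eq. (2): dr/dt = v, dv/dt = r - a*r^3

# One trajectory per centroid value of a (the training set {1, ..., 10}).
for a in range(1, 11):
    sol = solve_ivp(duffing, (0.0, 100.0), [0.5, 0.0], args=(a,),
                    t_eval=np.linspace(0.0, 100.0, 10_000), rtol=1e-8)
    np.save(f"duffing_a{a}.npy", sol.y.T)  # shape (n_steps, 2)
```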
As Figure 4 shows, the architecture succeeds in reproducing the systems' dynamics also for parameters not included in the training set, i.e., it is able to generalize the systems' parametric behaviour.
Furthermore, in order to investigate the role of some architectural parameters, we tested the influence that the number of time-steps predicted per iteration, \(m\), has on the accuracy of the predictions.
As can be seen in Figure 6, enlarging \(m\) can bring some advantage in terms of the time needed to predict a given number \(T\) of time-steps (fewer iteration cycles are required). On the other hand, the drawback of large values of \(m\) appears to be a reduction of the generalization capabilities of the architecture.
Nevertheless, the size of these models is too small to observe any speed-up, so these examples serve only as an introductory analysis. Referring to the python package _tfdiffeq1_, we take as our baseline the time spent by its function _odeint()_ for the integration of the system. For the above ODEs this amounts to \(3s\) on average (with the default solver, an adaptive Runge-Kutta algorithm). Observing the plots in Figure 6, the two best alternatives with our framework are obtained with \(m=5\) or \(m=10\) (as a trade-off between accuracy and time). The corresponding prediction times are \(70s\) and \(31s\) respectively, both of which under-perform our baseline by one order of magnitude.
Footnote 1: [https://github.com/titu1994/tfdiffeq/tree/master/tfdiffeq](https://github.com/titu1994/tfdiffeq/tree/master/tfdiffeq)
Moreover, tests were also performed on the correlation between the time-window size and the accuracy of the predictions. As can be seen in Figure 5, an initial decreasing trend in the relative
Figure 4: On the left column, the exact and the predicted evolution of the Duffing Oscillator system for different values of the parameter \(a\) (4a,4c,4e). On the right one, the exact and the predicted evolution of the non-linear Predator-Prey system (4b,4d,4f).
error is evident as the window size increases. This can be justified by the fact that a larger time-window provides more exact information at the beginning of the prediction iteration, thus slowing the error propagation in the process. We also note that increasing \(w\) beyond a certain point brings no additional improvement on the error (in this case from \(w=200\) on). The time measurements are not reported here, as no difference is encountered when varying the time-window width.
## 4 Rayleigh-Bénard cavity flow
In order to extend our tests to larger systems, we present here the Rayleigh-Bénard cavity flow: a benchmark introduced in [40] and widely used since then, for example in [13], [37] and [20]. It considers the incompressible Navier-Stokes equations in a rectangular cavity. The model describes an important process in semiconductor crystal growth [28], as it models the flow in the molten semiconductor material.
### Model description
The incompressible Navier-Stokes equations describe viscous, Newtonian flow in the computational domain \(\Omega\subset\mathbb{R}^{d}\). The unknowns are the velocity vector field \(\mathbf{u}\) and the scalar pressure field \(p\). The incompressible Navier-Stokes equations are given as
\[\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{u}-\nu \Delta\mathbf{u}+\nabla p =\mathbf{f} \text{in }\Omega\times(0,T], \tag{4}\] \[\nabla\cdot\mathbf{u} =0 \text{in }\Omega\times(0,T], \tag{5}\]
where \(\nu\) denotes the kinematic viscosity, \(\mathbf{f}\) the body forcing, and \(T\) the final time. The spatial dimension is either \(d=2\) or \(d=3\), while boundary and initial conditions are given by
\[\mathbf{u} =\mathbf{u}_{0} \text{in }\Omega\times\{0\}, \tag{6}\] \[\mathbf{u} =\mathbf{u}_{D} \text{on }\partial\Omega_{D}\times(0,T],\] (7) \[-p\mathbf{n}+\nu\frac{\partial\mathbf{u}}{\partial\mathbf{n}} =\mathbf{g} \text{on }\partial\Omega_{N}\times(0,T], \tag{8}\]
Figure 5: Mean absolute error committed in the first 1000 time-steps versus the value imposed on \(w\). These tests have been performed on both ODE systems previously described, for 24 testing parameters sampled in the respective parameter spaces.
Figure 6: Time needed for the predictions and mean absolute error committed in the first 1000 time-steps versus the value imposed on \(m\). These tests have been performed on both ODE systems previously described, for 24 testing parameters sampled in the respective parameter spaces.
where \(\partial\Omega_{D}\cap\partial\Omega_{N}=\emptyset\) and \(\overline{\partial\Omega_{D}}\cup\overline{\partial\Omega_{N}}=\overline{\Omega}\). Here, \(\mathbf{u}_{0}\), \(\mathbf{u}_{D}\), and \(\mathbf{g}\) are given and \(\mathbf{n}\) denotes the outward pointing unit normal vector on the boundary \(\partial\Omega_{N}\). The boundary \(\partial\Omega_{D}\) is called the Dirichlet boundary and \(\partial\Omega_{N}\) the Neumann boundary.
Let \(L^{2}(\Omega)\) denote the space of square integrable functions in \(\Omega\) and \(H^{1}(\Omega)\) the space of functions belonging to \(L^{2}(\Omega)\) with weak first derivatives in \(L^{2}(\Omega)\). Define the sets
\[\mathbf{V} := \left\{\mathbf{v}\in[H^{1}(\Omega)]^{d}:\mathbf{v}=\mathbf{u}_{D}\text{ on } \partial\Omega_{D}\right\}, \tag{9}\] \[\mathbf{V}_{0} := \left\{\mathbf{v}\in[H^{1}(\Omega)]^{d}:\mathbf{v}=\mathbf{0}\text{ on } \partial\Omega_{D}\right\}. \tag{10}\]
The variational form of (4)-(8) is given by: find \((\mathbf{u},p)\in\mathbf{V}\times L^{2}(\Omega)\), with \(\mathbf{u}\) satisfying the initial condition (6), such that
\[\int_{\Omega}\frac{\partial\mathbf{u}}{\partial t}\cdot\mathbf{v}\,d\mathbf{x}+\int_{\Omega}((\mathbf{u}\cdot\nabla)\mathbf{u})\cdot\mathbf{v}\,d\mathbf{x}+\nu\int_{\Omega}\nabla\mathbf{u}\cdot\nabla\mathbf{v}\,d\mathbf{x}-\int_{\Omega}p\,\nabla\cdot\mathbf{v}\,d\mathbf{x}\] \[=\int_{\Omega}\mathbf{f}\cdot\mathbf{v}\,d\mathbf{x}+\int_{\partial\Omega_{N}}\mathbf{g}\cdot\mathbf{v}\,ds,\qquad\forall\,\mathbf{v}\in\mathbf{V}_{0}, \tag{11}\] \[\int_{\Omega}q\,\nabla\cdot\mathbf{u}\,d\mathbf{x}=0,\qquad\forall\,q\in L^{2}(\Omega). \tag{12}\]
Consider as computational domain \(\Omega\) the rectangle with aspect ratio \(4\), i.e., a rectangle of height \(1\) and length \(4\). The whole boundary is a no-slip boundary, so that \(\partial\Omega_{D}=\partial\Omega\) and \(\mathbf{u}_{D}=0\). The body forcing \(\mathbf{f}\) is given by
\[\mathbf{f}=(0,\mathrm{Gr}\nu^{2}x)^{T}, \tag{13}\]
where \(x\) is the horizontal coordinate and \(\mathrm{Gr}\) is the Grashof number. The Grashof number is a dimensionless number that describes the ratio of the buoyancy to viscous forces.
### Discretization
The numerical discretization method employed is the spectral element method [29], which uses high-order polynomial ansatz functions over a coarse mesh, see Fig. 7. The time-stepping scheme is an IMEX scheme of order \(2\) (IMplicit-EXplicit, see [17], [30]), which is a standard option of the PDE solver _Nektar++2_ that we use.
Footnote 2: [https://www.nektar.info/](https://www.nektar.info/)
Our numerical studies will focus on the parameter domain \(\Theta=[100\cdot 10^{3},150\cdot 10^{3}]\), and a full-order solution is computed at \(Gr=150\cdot 10^{3}\) over a long time interval to ensure that the limit cycle is reached. Then, each solution of interest in the interval \([100\cdot 10^{3},150\cdot 10^{3}]\) is initialized with the solution at \(Gr=150\cdot 10^{3}\).
The time step is set to \(1\cdot 10^{-6}\), and \(5\cdot 10^{5}\) time steps are computed for the tests.
Figure 7: The computational mesh of the cavity is composed of \(24\) rectangles.
### Model order reduction
Our Model Order Reduction technique aims to reduce the cost of the full order solution computation by breaking it into two parts: a computationally expensive _offline phase_, and a computationally efficient _online phase_.
The offline phase is the most time consuming, as it comprises both the collection of the full-order solutions and the training of the two-stage architecture on them. The online phase, on the other hand, is intended to be particularly fast, as it consists only of the computation of the first exact window \(W_{new}\) and of the iterative time-step prediction by the framework (see Section 2.1).
Recalling the analysis of the ODE case in Section 3, the number of time-steps \(w\) belonging to the time-window influences the accuracy of the predictions, as a smaller window generally implies a stronger error propagation. However, for high-dimensional models the computation of a long initial exact window can be as time consuming as the iterative prediction phase, so a compromise has to be made.
Moreover, to further reduce the order of the model, preliminary operations are performed on the full-order solutions.
First, the velocity field solutions at every time step in the time interval of interest \(T\), which are real vectors of high dimension \(N\) (\(N\) being the size of the spatial discretization), are projected onto a lower-dimensional space through Proper Orthogonal Decomposition (POD). POD consists in finding a number of modes that reduce the dimension of the _snapshots matrix_ \(\mathbf{S}=[\mathbf{s}_{1},\ldots,\mathbf{s}_{T}]\) (with \([\mathbf{s}_{1},\ldots,\mathbf{s}_{T}]\) the \(T\) \(N\)-dimensional full-order solutions). Such a basis is computed through the Singular Value Decomposition of \(\mathbf{S}\):
\[\mathbf{S}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\mathbf{T}},\]
where the columns of the unitary matrix \(\mathbf{U}\) are the POD modes, and the diagonal matrix \(\mathbf{\Sigma}\) contains the corresponding singular values in decreasing order. Keeping only the first \(N_{POD}\) columns of \(\mathbf{U}\), chosen according to the error we are willing to commit, we obtain the reduced-order representation:
\[\mathbf{S}_{\text{POD}}=\mathbf{U}_{\mathbf{N}_{\text{POD}}}^{\mathbf{T}} \mathbf{S}.\]
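A compact NumPy sketch of this truncated POD is given below; the truncation criterion follows the singular-value ratio described in Section 5, and the function name and defaults are ours:

```python
import numpy as np

def pod(S, accuracy=0.9999):
    """Truncated POD of the snapshots matrix S (N x T). The columns of U are
    the POD modes; keep the smallest N_POD whose cumulative singular-value
    fraction reaches the requested accuracy."""
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    frac = np.cumsum(s) / np.sum(s)
    n_pod = int(np.searchsorted(frac, accuracy) + 1)
    U_r = U[:, :n_pod]        # reduced basis U_{N_POD}
    return U_r, U_r.T @ S     # reduced representation S_POD
```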
Second, because of the very small time-step required by the cavity simulations, only one out of every 100 time-steps is kept when collecting the training data-set for the two-stage architecture, so that the online phase is also accelerated by the larger effective prediction step.
## 5 Results
As discussed in Section 4.3, the first step of our model order reduction approach for significantly large systems is Proper Orthogonal Decomposition. The _snapshots matrix_ \(\mathbf{S}\) is thus composed of full-order solutions, collected for 6 equispaced values of the Grashof number:
\[\{\theta_{t}^{i}\}_{t=1}^{n}=\{100\cdot 10^{3},110\cdot 10^{3},120\cdot 10^{3}, 130\cdot 10^{3},140\cdot 10^{3},150\cdot 10^{3}\}.\]
The SVD is then performed to obtain \(N_{POD}\) basis vectors, chosen to achieve a prescribed level of accuracy in the approximation.
The accuracy can be estimated as the ratio between the sum of the singular values corresponding to the retained POD basis and the sum of the whole \(\Sigma\) diagonal. In our case, 140 and 147 POD basis vectors are needed to achieve a 99.99% level of accuracy for the horizontal and vertical components respectively.
As a preliminary analysis, we are interested in how many POD mode coefficients it is actually convenient to consider during the offline training. Even though retaining more modes yields a more accurate decomposition, the architecture may have more difficulty in predicting a larger number of outputs than a smaller one.
For this reason, different training phases were first performed, each with a different number of POD modes, and the results are shown in Figure 8. A distinction is made between the _projection error_, _i.e._ the error due to POD, and the _NNs error_, which is the prediction error of the architecture. The latter clearly depends both on the projection error and on the generalization capabilities of the framework.
In particular, the chosen framework parameters are \(m=6\) and \(w=100\), corresponding to a time-window of 10000 real time-steps and to a horizon of 600 future time-steps predicted per iteration.
The results displayed in Figure 8 are obtained with 10 testing parameters sampled in the Grashof number space. Starting from an initial exact window, the evolution in time of the corresponding POD coefficients is computed for the next 500000 real time-steps.
From Figure 8 it is clear that, while the projection error monotonically decreases, the NNs error reaches a minimum and then increases with the number of POD coefficients considered. For this reason, the following results are computed considering only the first 60 POD coefficients.
For the 10 randomly sampled testing values of the Grashof number, we report the relative error evolution with the blue lines in the upper plot of Figure 10. As can be seen, over the 5000 time-steps shown, the accuracy tends to degrade quickly, with mean error peaks of 10%.
Investigating the problem further, we note from Figure 9 that two different behaviours can be observed in the time evolution of the snapshots: an initial non-periodic _swing-in_ phase, and a subsequent periodic one. Given that, we try to further reduce the dimensionality by computing new POD bases from the periodic-part snapshots only.
In this way, we find that the number of POD basis vectors needed to reach the same target as before (99.99% accuracy) is significantly lower: 37 and 41 for the horizontal and vertical axis respectively. To investigate the error committed in predicting these 37 mode coefficients, we train the two-stage architecture on the newly collected training set.
Performing the same tests with 10 different values of the Grashof number, we find that the framework is now able to approximate the velocity field with a much lower relative error, as shown by the red lines in the upper plot of Figure 10. On the other hand, with this second approach the approximation of the initial non-periodic part, well predicted in the first case, is generally worse.
Hence, when the swing-in time-steps are excluded from the _snapshots matrix_, the number of POD basis vectors \(N_{POD}\) needed to achieve a given accuracy decreases considerably, as fewer singular values turn out to carry significant energy. As a consequence, error propagation is attenuated in the second case, owing to the lower number of coefficients to be predicted. On the other hand, with fewer significant modes the initial swing-in phase is not sufficiently well approximated.
The solution we propose, displayed in the lower plot of Figure 10, is to form a pipeline with the two previously trained architectures. More precisely, we let the first framework predict the first \(N_{I}\) time-steps corresponding to the non-periodic behaviour; then a basis change is performed to project the last \(w\) approximated time-steps \(\mathbf{S_{POD_{1}}}\) into the lower-dimensional space (from 140 to 37 dimensions). At this point, the new input window \(\mathbf{S_{POD_{2}}}\) is obtained for the second framework, which can now deal with the approximation of the periodic part. In particular, the basis-change matrix is obtained as:
\[\mathbf{M}=\mathbf{U_{N_{POD_{2}}}^{T}}\cdot\mathbf{U_{N_{POD_{1}}}},\]
and the new input for the second framework is:
\[\mathbf{S_{POD_{2}}}=\mathbf{M}\cdot\mathbf{S_{POD_{1}}}.\]
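In code, the basis change reduces to two matrix products; a minimal sketch, assuming `U1` and `U2` are the truncated bases returned by the POD step:

```python
import numpy as np

def change_basis(U1, U2, S_pod1):
    """Project coefficients from the first POD basis U1 (e.g. 140 modes) onto
    the second, smaller basis U2 (e.g. 37 modes):
    M = U2^T U1,  S_POD2 = M @ S_POD1."""
    M = U2.T @ U1
    return M @ S_pod1
```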
Figure 8: Projection mean error and NNs mean error with respect to different numbers of POD coefficients taken into consideration.
Figure 9: Evolution in time of the first 3 POD basis coefficients corresponding to the exact solution of the cavity problem with Grashof number equal to 100000. Since the problem is two-dimensional, the left panel shows the coefficients related to the horizontal axis, and the right one those for the vertical axis.
As the comparison between the upper and lower plots in Figure 10 shows, the evolution of the relative error for the periodic part does not differ significantly in the two cases; the pipeline is therefore effective and does not worsen the error propagation. The testing results are then reported in Figure 11 in terms of mean errors over a horizon of 500000 time-steps, and of the slopes of the corresponding linear regressions.
In general, the main advantage brought by the two-stage architecture concerns the time needed to obtain the velocity field solution for a new Grashof number. The computational time for predicting 500000 real time-steps is reduced to 5 minutes on average, versus the 3 hours spent by the _Nektar solver_ to obtain the high-dimensional evolution for a new Grashof number.
Finally, we report a visual example of some mode-coefficient predictions compared with the exact evolution (see Figure 12) for \(Gr=132.755\cdot 10^{3}\), and the corresponding velocity field evolution in time (with the committed error) in Figure 14. A frequency analysis is also visualized in Figure 13, where the Fourier Transform of the time evolution of some mode coefficients is reported. As is noticeable in both Figure 12 and Figure 13 from the overlap of the red dashed lines (exact coefficients and Fourier Transforms) and the coloured solid ones (approximated ones), the architecture generally succeeds in predicting the dynamics of all the coefficients.
## 6 Conclusions and further developments
In this work, we presented a novel approach to parametric time-dependent problems, based on a preliminary k-means clustering of the available solutions with respect to their associated parameter values. We then trained \(k\) independent C-LSTM models in order to obtain \(k\) local representations of the solution space. Each of these models is in principle able to generalize the problem's solution in a neighbourhood of the parameter values on which it was trained; the second stage of the architecture was therefore designed to find a non-linear function that properly combines the predictions coming from the first-stage models.
This C-LSTM architecture was first tested on low-dimensional ODE systems, namely the Duffing Oscillator parameterized in its non-linear component and the Predator-Prey system.
Subsequently, we presented promising results for the Rayleigh-Bénard cavity flow, where the incompressible Navier-Stokes equations in a rectangular cavity were considered. Here, a preliminary
Figure 12: Example of some mode coefficients corresponding to \(Gr=132755\), for the horizontal and vertical axis in the first and second row respectively. The coloured solid lines show the predicted time evolution, while the red dashed lines show the exact one. The coefficients reported here are related to the 37 and 41 POD basis vectors discussed above.
Figure 13: Example of the Fourier Transform of some mode coefficients for the horizontal and vertical axis in the first and second row respectively (\(Gr=132755\)). The coloured solid lines show the Fourier Transform of the predicted time evolution, while the red dashed lines show the exact one.
Proper Orthogonal Decomposition was applied to the discretized system, in order to reduce the problem's dimensionality.
The results obtained were extremely positive in terms of both the limited error propagation and the time reduction. Referring to the numerical solver taken as a baseline, the time needed for the online phase was reduced by 97%.
\begin{tabular}{|c||c c|} \hline **Nektar solver** & \multicolumn{2}{c|}{**Two-stage architecture**} \\ \hline _Solving time_ & _Training time_ & _Online phase_ \\ \hline \(\approx 3h\) & \(\approx 4h\) & \(\approx 5m\) \\ \hline \end{tabular}
It can finally be concluded that this method may prove particularly useful for parametric large-scale systems whose dynamics exhibit a non-linear behaviour that is difficult to generalize over the whole parameter space.
Further applications are therefore to be investigated, for instance in the field of bifurcating systems, where such a _partitioning-averaging_ approach could prove crucial for a better and faster evaluation of the qualitative behaviour of the solution depending on the interplay between a given set of parameters.
It should also be noted that the possibility of connecting multiple architectures with each other offers various advantages when dealing with different behaviours in time, also in relation to projection-based ROMs. As in Section 5, where different PODs were performed for the initial _swing-in_ phase and for the _periodic_ one, different projection-based ROMs could be used to create multiple reduced-order spaces, later connected through the evolution predicted by subsequent pre-trained two-stage architectures (one in every reduced-order space).
## Statements and Declarations
* _Conflict of interest/Competing interests_ The authors have no conflicts of interest or competing interests.
Figure 14: On the left, the approximated velocity field computed with our reduced-order model for \(Gr=132755\). On the right, the error committed with respect to the full-order model solution. Both columns are shown at different time-steps, respectively at \(t=20000,t=50000,t=100000,t=200000\).
## Appendix A
### Recurrent Neural Networks and Long-Short Term Memory cells
Recurrent Neural Networks (RNNs) are currently the most commonly used neural network architecture for sequence prediction problems [23]. Every RNN is a combination of a number of RNN cells, which can be chosen among different realizations of varying complexity. All of them, however, implement the same basic idea, displayed in Figure 15 and originally introduced by Elman [10] in 1990.
Elman proposed a system of internal _gates_ that builds a bridge between the input and the output. This relation is mediated by a _hidden state_ (_context cell_) \(h_{t}\), updated at each time-step through equation (14) as a trainable combination of the current input \(x_{t}\) and the previous hidden state \(h_{t-1}\). The cell's output \(o_{t}\) at each temporal step is then obtained through equation (15), operating on the current \(h_{t}\).
\[h_{t}= \sigma(W_{h}\cdot[h_{t-1},x_{t}]+b_{h}) \tag{14}\] \[o_{t}= \tanh(W_{o}\cdot h_{t}+b_{o}) \tag{15}\]
Here, the weight matrices \(W_{h}\) and \(W_{o}\), and the bias vectors \(b_{h}\) and \(b_{o}\) represent the trainable parameters of the network.
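As a concrete illustration, one Elman update can be written in a few lines of NumPy; the helper below is our own sketch of Eqs. (14)-(15), not code from an existing library:

```python
import numpy as np

def elman_step(x_t, h_prev, Wh, bh, Wo, bo):
    """One Elman-cell update."""
    hx = np.concatenate([h_prev, x_t])
    h_t = 1.0 / (1.0 + np.exp(-(Wh @ hx + bh)))  # Eq. (14): sigma(W_h [h_{t-1}, x_t] + b_h)
    o_t = np.tanh(Wo @ h_t + bo)                 # Eq. (15): tanh(W_o h_t + b_o)
    return h_t, o_t
```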
These update laws introduce feedback loops in the RNN cell, connecting its current state to the next one. Such connections are essential for taking past information into account when updating the current cell state, giving the Recurrent Neural Network the ability to preserve a _memory of the system_.
However, the Elman recurrent cell suffers from the _vanishing gradient_ and _exploding gradient_ problems over very long sequences. This implies that simple RNN cells are not capable of carrying long-term dependencies into the future: the back-propagated gradients tend to _vanish_ (so that the weights are not updated adequately) [25] or to _explode_ (resulting in unstable weight matrices).
Over the years many variations have been proposed to overcome these problems. One of the currently most popular solutions is the Long Short-Term Memory cell (LSTM), first introduced in 1997 [26].
They present a more complex internal structure:
\[q_{t}= \tanh(W_{q}\cdot[h_{t-1},x_{t}]+b_{q}) \tag{16}\] \[i_{t}= \sigma(W_{i}\cdot[h_{t-1},x_{t}]+b_{i})\] (17) \[f_{t}= \sigma(W_{f}\cdot[h_{t-1},x_{t}]+b_{f})\] (18) \[o_{t}= \sigma(W_{o}\cdot[h_{t-1},x_{t}]+b_{o})\] (19) \[c_{t}= f_{t}\odot c_{t-1}+i_{t}\odot q_{t}\] (20) \[h_{t}= o_{t}\odot \tanh(c_{t}) \tag{21}\]
where \(W_{q}\), \(b_{q}\), \(W_{i}\), \(b_{i}\), \(W_{f}\), \(b_{f}\), \(W_{o}\), \(b_{o}\) are the trainable weight matrices and bias vectors, and \(\odot\) is the Hadamard product.
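The same equations translate directly into a NumPy forward pass; the following sketch (with `W` and `b` as dictionaries of the four weight matrices and bias vectors, a layout we choose for readability) is illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM update, Eqs. (16)-(21)."""
    hx = np.concatenate([h_prev, x_t])
    q_t = np.tanh(W["q"] @ hx + b["q"])  # candidate memory, Eq. (16)
    i_t = sigmoid(W["i"] @ hx + b["i"])  # input gate,       Eq. (17)
    f_t = sigmoid(W["f"] @ hx + b["f"])  # forget gate,      Eq. (18)
    o_t = sigmoid(W["o"] @ hx + b["o"])  # output gate,      Eq. (19)
    c_t = f_t * c_prev + i_t * q_t       # long-term memory, Eq. (20)
    h_t = o_t * np.tanh(c_t)             # hidden state,     Eq. (21)
    return h_t, c_t
```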
The _vanishing_ and _exploding gradient_ problems are addressed through the introduction of Constant Error Carousel units (CECs) [47]. These enforce in the LSTM cell a system of internal gates and loops that makes it able to learn time lags of more than 1000 discrete time steps, in contrast to earlier ERNNs, which already failed with time lags of 10 time steps [45]. Hence the very name of LSTM cells, underlining that they capture both the short- and the long-term dependencies in the training inputs.
Intuitively, such cells retain information about their past history through two quantities. The first is \(c_{t}\), which can be seen as the _long-term memory_ of the cell, and whose update is designed both to forget part of the past and to incorporate new information coming from the current input. In second
Figure 15: The first historical example of a recurrent cell (ERNN).
place, there is \(h_{t}\), the _hidden state_ (and the output itself) of the cell, representing the _short-term memory_ component; it is updated by combining a non-linear transformation of the _long-term memory_ \(c_{t}\) with the _output gate_ \(o_{t}\).
The other quantities computed inside the LSTM cell can be explained as an interplay of gated structures, trained as non-linear combinations of the cell's _hidden state_ and _input_. We can identify the _input gate_ \(i_{t}\), which controls how much of the candidate memory \(q_{t}\) enters the cell state, the _forget gate_ \(f_{t}\), which indicates how much past information can safely be forgotten, and the _output gate_ \(o_{t}\), which offers a first proposal for the cell's final output.
Figure 16: Internal logic structure of LSTM cells.
Acknowledgements
This work was partially funded by European Union Funding for Research and Innovation -- Horizon 2020 Program -- in the framework of European Research Council Executive Agency: H2020 ERC CoG 2015 AROMA-CFD project 681447 "Advanced Reduced Order Methods with Applications in Computational Fluid Dynamics" P.I. Professor Gianluigi Rozza. We also acknowledge the PRIN 2017 "Numerical Analysis for Full and Reduced Order Methods for the efficient and accurate solution of complex systems governed by Partial Differential Equations" (NA-FROM-PDEs).
|
2304.12266 | Magnetic plateaus and jumps in a spin-1/2 ladder with alternate
Ising-Heisenberg rungs: a field dependent study | We study a frustrated two-leg spin-1/2 ladder with alternate Ising and
isotropic Heisenberg rung exchange interactions, whereas, interactions along
legs and diagonals are Ising type. The ground-state (GS) of this model has four
exotic phases: (i) the stripe rung ferromagnet (SRFM), (ii) the anisotropic
anti-ferromagnet (AAFM), (iii) the Dimer, and (iv) the stripe leg ferromagnet
(SLFM) in absence of any external magnetic field. In this work, we study the
effect of externally applied longitudinal and transverse fields on GS phases
and note that there are two plateaus with per-site magnetization $1/4$ and
$1/2$. There is another plateau at zero magnetization due to a finite spin gap
in the presence of a longitudinal field. The exact diagonalization (ED) and the
transfer matrix (TM) methods are used to solve the model Hamiltonian and the
mechanism of plateau formation is analyzed using spin density, quantum
fidelity, and quantum concurrence. In the (i) SRFM phase, Ising exchanges are
dominant for all spins but the Heisenberg rungs are weak, and therefore, the
magnetization shows a continuous transition as a function of the transverse
field. In the other three phases [(ii)-(iv)], the Ising dimer rungs are weak
and those are broken first to reach a plateau with per-site magnetization
$1/4$, having a large gap which is closed by further application of the
transverse field. | Sk Saniur Rahaman, Manoranjan Kumar, Shaon Sahoo | 2023-04-24T17:06:51Z | http://arxiv.org/abs/2304.12266v3 | Magnetic plateaus and jumps in a spin-1/2 ladder with alternate Ising-Heisenberg rungs: a field dependent study
###### Abstract
We study a frustrated two-leg spin-1/2 ladder with alternate Ising and isotropic Heisenberg rung exchange interactions, whereas the interactions along the legs and diagonals are of Ising type. The ground state (GS) of this model has four exotic phases in the absence of any external magnetic field: (i) the stripe-rung ferromagnet (SRFM), (ii) the anisotropic antiferromagnet (AAFM), (iii) the Dimer, and (iv) the stripe-leg ferromagnet (SLFM). In this work, the effect of externally applied longitudinal and transverse fields on these quantum phases is studied. In both cases, we show that there exist two plateau phases, at 1/4 and 1/2 of the saturation magnetization. Owing to strong rung dimer formation, the system opens a finite spin gap in all the phases, resulting in a zero-magnetization plateau in the presence of a longitudinal field. The mechanism of plateau formation is analyzed using the spin density, quantum fidelity, and quantum concurrence. In the (i) SRFM phase, Ising exchanges are dominant for all spins but the Heisenberg rungs are weak, and therefore the magnetization shows a continuous transition as a function of the transverse field. In the other three phases [(ii)-(iv)], the Ising dimer rungs are weak and are broken first to reach the plateau at 1/2 of the saturation magnetization, which has a large gap that is closed by further application of the field. We use the exact diagonalization (ED) and the transfer matrix (TM) methods to solve the Hamiltonian.
## I Introduction
Frustrated low-dimensional quantum magnets exhibit a zoo of quantum phases that attract the interest of both theoreticians and experimentalists, and theoretical studies are necessary to interpret the experimental results given the ever-growing number of synthesized low-dimensional magnetic materials [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. In spin chains and ladder systems, the competing exchange interactions lead to many interesting quantum phases such as the ferromagnetic ground state [16], the Neel phase [17; 18; 19], the Luttinger liquid [20; 21], spiral [22], spin liquid [23; 24], and dimer [4] phases. The ground state (GS) of the antiferromagnetic isotropic Heisenberg spin-1/2 zigzag chain has a gapless spectrum in the weak or strong coupling limits, whereas it has a gapped spectrum for moderate values of the ratio of the exchange interactions [25; 26; 27; 28; 29; 30].
In the non-frustrated regime, i.e., in the weak and strong isotropic rung exchange limits, the GS is a spin liquid with quasi-long-range order (QLRO) and a gapless spectrum [28; 29]. In the presence of anisotropic exchanges, the spectrum is gapped if the axial exchange term dominates and gapless if the XY term dominates; the anisotropic Heisenberg spin-1/2 chain is one of the best examples [31; 32]. The GS of the spin-1/2 normal ladder with isotropic Heisenberg exchange is always gapped irrespective of the strength of the rung exchange, but it may have a gapless spectrum in anisotropic ladder systems [32; 33; 34; 35; 36; 37; 38]. For example, with an isotropic exchange on the rung and an axial anisotropy \(\Delta\) along the leg, the GS can be tuned from the singlet to the Neel phase by increasing \(\Delta\) [35]. With anisotropy on both the leg and rung exchange interactions, the GS can be in the XY, Neel, or rung singlet (RS) phase upon tuning the rung exchange and the axial anisotropy [38]. Another type of anisotropic spin-1/2 ladder is the Kitaev-Heisenberg model on a two-leg ladder, whose GS hosts many exotic quantum phases [36].
Many spin-1/2 ladder systems with isotropic and anisotropic exchange interactions have been studied extensively under a magnetic field and are reported to exhibit magnetic plateaus and jumps [39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52]. Japaridze et al. studied a two-leg spin-\(\frac{1}{2}\) ladder with leg interaction \(J_{\parallel}\) and alternating rung interactions \(J_{\perp}^{+}\), \(J_{\perp}^{-}\) in the presence of a longitudinal field \(h\). They showed that the system has zero magnetization up to \(h_{c1}^{-}\), a plateau at half of the saturation magnetization between \(h_{c1}^{+}\) and \(h_{c2}^{-}\), and reaches full saturation at \(h_{c2}^{+}\) [53]. Moradmard et al. studied a spin-1/2 ladder with XXZ interactions and mapped out phases such as x-FM, z-FM, y-Neel, and z-Neel in a magnetic phase diagram in the plane of the anisotropy \(\Delta\) and the magnetic field \(h\) [45]. Similarly, Dey et al. carried out a magnetization study of the isotropic Heisenberg spin-1/2 model on a 5/7-skewed ladder and found multiple plateau phases as a function of the field \(h\) [46]. They showed that the plateau phases are consequences of gaps in the spectrum and can be explained in terms of the Oshikawa, Yamanaka, and Affleck (OYA) criterion [47]. Several rigorous studies of special types of ladders with Ising-Heisenberg exchange interactions report the magnetization process in various GS phases [48; 49; 50; 51; 52; 54]. For a spin-1/2 two-leg ladder with Ising-type leg and diagonal and Heisenberg-type rung exchange interactions, Verkholyak et al. found that a Neel-ordered GS undergoes a transition to full saturation of the magnetization through a plateau at 1/2 of the saturation, the staggered bond (SB) phase, in the presence of an external field [50]. For a spin-1/2 Ising-Heisenberg branched chain, it was shown that the magnetization curve exhibits a plateau at half saturation, which can also be characterized by the quantum concurrence [52].
We consider a spin-1/2 frustrated two-leg ladder with alternating Ising- and Heisenberg-type rung exchanges, where the diagonal and leg exchanges are of Ising type, as shown in Fig. 1.a.i. In this model, \(J_{c}\) and \(J_{q}\) are the alternating Ising and Heisenberg rung exchange interactions respectively, while \(J_{cq}\) and \(J_{d}\) are the Ising-type leg and diagonal exchange strengths respectively. The quantum phase diagram of this model was studied earlier in the parameter space of \(J_{q}\) and \(J_{d}\) (both antiferromagnetic) with \(J_{c}=J_{cq}=1\) [55]. Under periodic boundary conditions (PBC), the system exhibits four distinct GS phases depending on the values of \(J_{q}\) and \(J_{d}\) in the absence of a magnetic field [55]: (i) the stripe-rung ferromagnet (SRFM), (ii) the anisotropic antiferromagnet (AAFM), (iii) the Dimer, and (iv) the stripe-leg ferromagnet (SLFM). The GS phases are schematically represented in Fig. 1.b.[(i)-(iv)]. In this manuscript, we study the effect of both longitudinal and transverse magnetic fields on the four GS phases for a few sets of \(J_{q}\), \(J_{d}\) values, as discussed in Sec. IV.
We find that in all four quantum phases the system exhibits plateaus at zero, half, and full saturation magnetization in the presence of an externally applied longitudinal magnetic field. The calculations are performed using the exact diagonalization (ED) [56] and transfer matrix (TM) [57] methods, and the results from both methods agree excellently with each other. Furthermore, we calculate the zero-temperature-limit quantum fidelity, fidelity susceptibility, and quantum concurrence from the partition function of the ladder using the TM method, and find that these results are consistent with the exact calculation. The study of the magnetization under a transverse field is carried out using ED only, and the magnetization shows plateaus at half and full saturation.
This paper is organized as follows. The model is discussed briefly in Sec. II, followed by a discussion of the methods in Sec. III. Sec. IV has two subsections dedicated to the study in the presence of a magnetic field along the two directions, longitudinal and transverse. In Sec. IV.1.1, the magnetization process is discussed in the presence of the longitudinal field. In Sec. IV.1.2, we discuss the zero-temperature-limit quantum fidelity and the bipartite concurrence for the different phases. In Sec. IV.2, we discuss the magnetization process in the presence of a transverse field. In Sec. V, we summarize the results and conclude the paper.
## II Model Hamiltonian
We construct the Hamiltonian for a spin-1/2 two-leg ladder with \(N\) spins periodically connected along the legs, comprising \(n=\frac{N}{4}\) unit cells. In each unit
Figure 1: (color online) a.(i) Schematic diagram of the spin ladder with alternate Ising-Heisenberg rung interactions. \(J_{c}\) and \(J_{q}\) are the alternating Ising- and Heisenberg-type rung interactions respectively; \(J_{cq}\) and \(J_{d}\) are the Ising-type leg and diagonal exchange interactions respectively. Blue and magenta circles represent the \(\sigma\) and \(S\) spins of Eq. 1 respectively. \(l\), \(k\), and \(i\) are the leg, rung, and site indices respectively. The spin configurations of the four exotic phases with \(J_{c}=J_{cq}=1\) are shown: b.(i) SRFM (\(J_{q}=0.2\), \(J_{d}=2.0\)), b.(ii) AAFM (\(J_{q}=2.0\), \(J_{d}=0.4\)), b.(iii) Dimer (\(J_{q}=2.0\), \(J_{d}=1.0\)), and b.(iv) SLFM (\(J_{q}=2.0\), \(J_{d}=1.6\)). Blue and magenta rung pairs represent the \(\sigma-\sigma\) and \(S-S\) rung pairs. The boxes in subfigure b.(iii) represent perfect singlets. The quantum phases were studied earlier in Ref. [55].
cell, one rung pair is connected through an Ising-type exchange \(J_{c}\), whereas the other is coupled through a Heisenberg-type exchange \(J_{q}\), as shown in Fig. 1. These rungs are coupled to each other through Ising-type exchanges: \(J_{cq}\) along the legs and \(J_{d}\) along the diagonals. The spins with rung couplings \(J_{c}\) and \(J_{q}\) are denoted \(\sigma\) and \(\vec{S}\) respectively. From here on, the Ising-type and Heisenberg-type rung spin pairs are called \(\sigma-\sigma\) and \(S-S\) pairs respectively. Let us now write down the Hamiltonian for one unit cell.
\[{\bf H_{i}}=J_{q}\vec{S}_{2i,1}\cdot\vec{S}_{2i,2}+\frac{J_{c}}{2 }[\sigma_{2i-1,1}\sigma_{2i-1,2}+\sigma_{2i+1,1}\sigma_{2i+1,2}]\] \[+J_{cq}[S^{z}_{2i,1}(\sigma_{2i-1,1}+\sigma_{2i+1,1})+S^{z}_{2i,2 }(\sigma_{2i-1,2}+\sigma_{2i+1,2})]\] \[+J_{d}[S^{z}_{2i,1}(\sigma_{2i-1,2}+\sigma_{2i+1,2})+S^{z}_{2i,2 }(\sigma_{2i-1,1}+\sigma_{2i+1,1})]\] \[-\frac{h}{2}\sum_{l=1}^{2}(2S^{z}_{2i,l}+\sigma_{2i-1,l}+\sigma_{ 2i+1,l})\] \[-\frac{h^{x}}{2}\sum_{l=1}^{2}(2S^{x}_{2i,l}+\sigma^{x}_{2i-1,l}+ \sigma^{x}_{2i+1,l}) \tag{1}\]
Here, \(h\) and \(h^{x}\) are the longitudinal (+z direction) and transverse (+x direction) fields respectively. The full Hamiltonian of the finite ladder under PBC is the sum over the \(n\) unit cells, \({\bf H}=\sum_{i=1}^{n}{\bf H_{i}}\).
## III Methods
We employ the ED method to obtain the energy eigenvalues and eigenvectors of the Hamiltonian in Eq. 1 for system sizes \(N=16,20,24\) in the presence of both longitudinal and transverse fields. In the absence of a transverse field, i.e., for \(h^{x}=0\), the Hamiltonians of two consecutive units commute with each other, and we therefore employ the TM method to calculate the magnetization, quantum fidelity, and quantum concurrence from the free energy and partition function. The partition function of the entire system of size \(N\) can be written as \(Q_{N}(h,\beta)=Tr(e^{-\beta{\bf H}})=[Q_{4}(h,\beta)]^{n}\) (see appendix VII), where \(Q_{4}(h,\beta)\) is the partition function of one unit of 4 spins and \(\beta\) is the inverse temperature. For this model, \(Q_{N}(h,\beta)=[\lambda_{1}+\lambda_{2}+\lambda_{3}+\lambda_{4}]^{n}\), where \(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\) are the eigenvalues of the \(4\times 4\) transfer matrix of one unit (see appendix VII). In the limit \(n\rightarrow\infty\), with \(\lambda_{1}\gg\lambda_{2}\gg\lambda_{3}\gg\lambda_{4}\), one can write \(Q_{N}(h,\beta)\approx\lambda_{1}^{n}\) and \(Q_{4}(h,\beta)\approx\lambda_{1}\). In the zero-temperature limit, i.e., for \(\beta\rightarrow\infty\), and defining the system parameters \(\Delta_{2}=\sqrt{1+4(\frac{1-J_{d}}{J_{q}})^{2}}\) and \(Q=e^{\beta J_{q}/4}\), we obtain the partition function \(Q_{4}(h,\beta)\)
\[=e^{\frac{-\beta(J_{c}-4h)}{4}}\left[Q^{-1}\cosh[\beta(h-1-J_{d})]+Q\cosh\left[\frac{\beta J_{q}}{2}\right]\right]\]

\[+e^{\frac{\beta J_{c}}{4}}\left[2Q^{-1}\cosh[\beta h]+Q\cosh\left[\frac{\beta J_{q}\Delta_{2}}{2}\right]+Q\cosh\left[\frac{\beta J_{q}}{2}\right]\right] \tag{2}\]
We rewrite \(Q_{4}(h,\beta)\) as a polynomial function of \(e^{\beta h}\):
\[Q_{4}(h,\beta)=a_{0}e^{2\beta h}+b_{0}e^{\beta h}+c_{0}e^{-\beta h}+d_{0} \tag{3}\]
where the coefficients of the polynomial are defined as:
\(a_{0}=e^{-\frac{\beta}{4}(J_{q}+J_{d}+2)}\),
\(b_{0}=2e^{-\frac{\beta}{4}(J_{q}-1)}+2e^{\frac{\beta}{4}(J_{q}-1)}\cosh[\frac{\beta J_{q}}{2}]\),
\(c_{0}=2e^{-\frac{\beta}{4}(J_{q}-1)}\),
\(d_{0}=2\cosh[\frac{\beta J_{d}}{2}](e^{\frac{\beta J_{q}}{4}}+e^{\frac{-\beta}{4}})+2e^{\frac{\beta J_{d}}{4}}\cosh[\frac{\beta J_{q}\Delta_{2}}{2}]+e^{\frac{\beta(\tau+8J_{d}+J_{d})}{4}}\)
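For a numerical check at finite \(\beta\), the per-site magnetization follows from Eq. 3 as \(m=\frac{1}{4\beta}\,\partial\ln Q_{4}/\partial h\) (since \(N=4n\) and \(Q_{N}=Q_{4}^{n}\)); a minimal sketch, assuming the coefficients \(a_{0},\ldots,d_{0}\) have already been evaluated from the expressions above:

```python
import numpy as np

def m_of_h(h, a0, b0, c0, d0, beta):
    """Per-site magnetization from the polynomial form of Q4, Eq. (3).
    Use a moderate beta here; for beta -> infinity, work with ln Q4 instead."""
    e = np.exp(beta * h)
    Q4 = a0 * e**2 + b0 * e + c0 / e + d0
    dQ4 = beta * (2 * a0 * e**2 + b0 * e - c0 / e)  # dQ4/dh
    return dQ4 / (4.0 * beta * Q4)
```

At large \(h\) the \(e^{2\beta h}\) term dominates and the expression saturates at \(m=1/2\), as expected.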
## IV Results
We study the magnetization for four sets of exchange parameters: (i) \(J_{q}=0.2,J_{d}=2.0\) for the SRFM, (ii) \(J_{q}=2.0,J_{d}=0.4\) for the AAFM, (iii) \(J_{q}=2.0,J_{d}=1.0\) for the Dimer, and (iv) \(J_{q}=2.0,J_{d}=1.6\) for the SLFM phase. Note that in all these phases \(J_{c}\) and \(J_{cq}\) are set to unity.
### Magnetization process in the presence of a longitudinal magnetic field
#### iv.1.1 Magnetization vs field
To calculate the magnetization using ED, we first define the spin gap \(\Gamma_{k_{1},k_{2}}\) as the difference between the lowest energies \(E_{0}(k_{1})\) and \(E_{0}(k_{2})\) of the two spin sectors \(S^{z}=k_{1}\) and \(S^{z}=k_{2}\) respectively, written as
\[\Gamma_{k_{1},k_{2}}=E_{0}(k_{2})-E_{0}(k_{1}) \tag{4}\]
Here \(S^{z}_{k}\) denotes the z component of the total spin of the ladder in sector \(k\), and the per-site magnetization is \(m=k/N\). The energy gap can be closed, i.e., \(h|k_{2}-k_{1}|=|\Gamma_{k_{1},k_{2}}|\), by applying a longitudinal field \(h\). For spin-1/2 systems, \(m\) takes values between 0 and 1/2. Using the TM method, the per-site magnetization is obtained as \(m=-\frac{\partial F(h,\beta)}{\partial h}\), where \(F(h,\beta)\) is the free energy (see Appendix VII). In Fig. 2[(i)-(iv)], we show the finite-size scaling of the \(m-h\) curve for the three system sizes \(N=16,20,24\) using
ED, together with the thermodynamic limit (\(N\rightarrow\infty\)) in the zero-temperature limit using TM. The \(m-h\) curve shows three plateau phases, \(m=0,1/4\), and \(1/2\), connected by two magnetic jumps in each subfigure: the first jump at \(h_{c1}\) from \(m=0\) to \(1/4\), and the second at \(h_{c2}\) from \(m=1/4\) to full saturation, in all four quantum phases, as shown in Fig. 2[(i)-(iv)]. \(h_{c1}\) takes the values 2.5, 0.7, 0.5, and 0.7, and \(h_{c2}\) the values 3.5, 3.5, 4, and 4.5 for (i) the SRFM, (ii) the AAFM, (iii) the Dimer, and (iv) the SLFM phases respectively. As discussed in detail below, the plateaus arise from the spin gap and the jumps correspond to the simultaneous unbinding of rung dimers of equal energy. No finite-size effect is noticed in the magnetization curve.
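The ED magnetization curve itself follows from the sector energies: in a field \(h\), the ground state lives in the \(S^{z}=k\) sector that minimizes \(E_{0}(k)-hk\). A short sketch, assuming the array of lowest sector energies has been obtained from ED:

```python
import numpy as np

def m_curve(E0, N, h_grid):
    """Per-site magnetization vs field: for each h pick the sector k that
    minimizes E0(k) - h*k; plateaus are the h-ranges where k stays fixed.
    E0[k] is the lowest energy in the S^z = k sector, k = 0 .. N/2."""
    ks = np.arange(len(E0))
    return np.array([ks[np.argmin(E0 - h * ks)] / N for h in h_grid])
```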
At a stationary point, the derivative of the free energy with respect to the magnetization vanishes, i.e., \(\frac{\partial F(h,\beta)}{\partial m}=0\) (which follows from Eq. 17). Using this condition, we find the two critical fields \(h_{c_{1}}=h_{-}\), \(h_{c_{2}}=h_{+}\):
\[h_{\pm}=\frac{1}{\beta}ln\big{[}\frac{b_{0}\pm\sqrt{b_{0}^{2}-3a_{0}d_{0}}}{3 a_{0}}\big{]} \tag{5}\]
Since \(a_{0}=e^{-\frac{\beta}{4}(J_{q}+J_{d}+2)}\) is negligibly small for \(\beta\rightarrow\infty\), one can assume \(b_{0}^{2}\gg 3a_{0}d_{0}\) and take the binomial expansion of the square root in the above equation. This leads to the approximate critical fields \(h_{c1}=\frac{1}{\beta}ln\left[\frac{d_{0}}{2b_{0}}\right]\) and \(h_{c2}=\frac{1}{\beta}ln\left[\frac{2b_{0}}{3a_{0}}\right]\) in the zero-temperature limit. These critical fields match the exact calculations discussed above and shown in the \(m-h\) curves in Fig. 2. The plateau width can be obtained as
\[d=|h_{c2}-h_{c1}|=\frac{1}{\beta}ln\left[\frac{4b_{0}^{2}}{3a_{0}d_{0}}\right] \tag{6}\]
For a closer analysis of the \(m=1/4\) plateau formation in all the phases, the spin density \(\langle S_{i}^{z}\rangle\) is shown as a function of the site index \(i\) in Fig. 3.a.(i). The SRFM phase is strongly Ising dominated, whereas the other three phases are dominated by the isotropic Heisenberg rung coupling, so the mechanism of plateau formation is expected to differ between the SRFM and the other phases. In the GS, the SRFM phase is highly frustrated with \(J_{d}=2.0\) and \(J_{q}=0.2\). The diagonal bond \(J_{d}\) is stronger than the others, and it induces a finite gap between the nonmagnetic (\(S^{z}=0\)) and magnetic (\(S^{z}=N/4\)) states. For the given parameters in this phase, a large field \(h_{c1}=2.5\) is required to close the gap, at which \(m\) jumps from zero magnetization to the \(m=1/4\) plateau, as shown in Fig. 2.(i). For \(h>2.5\), the weakly coupled Heisenberg-type rung pairs \(S-S\) have a spin density of 0.5, as shown in Fig. 3.a.(i), which means that the \(S-S\) pairs are polarized in the \(m=1/4\) plateau phase. This spin configuration is called the \(m=1/4\) plateau type-1 and is shown in Fig. 3.b.(i). The \(m=1/4\) plateau phase also has a finite gap, owing to the stability provided by the strong \(\sigma-\sigma\) rung pairs for the
Figure 3: a.(i) Longitudinal spin density \(\langle S_{i}^{z}\rangle\) as a function of the site index \(i\) for the four GS phases: the SRFM (\(J_{q}=0.2\), \(J_{d}=2.0\)), the AAFM (\(J_{q}=2.0\), \(J_{d}=0.4\)), the Dimer (\(J_{q}=2.0\), \(J_{d}=1.0\)), and the SLFM (\(J_{q}=2.0\), \(J_{d}=1.6\)), shown in black, red, green, and blue respectively. The spin configurations in b.(i) the \(m=1/4\) plateau type-1 and b.(ii) the \(m=1/4\) plateau type-2 are shown. In the \(m=1/4\) plateau type-1, all the Heisenberg rung pairs \(S-S\) (magenta spins) are fully polarized, while in the \(m=1/4\) plateau type-2, all the Ising rung pairs \(\sigma-\sigma\) (blue spins) are fully polarized.
Figure 2: (PBC) Black, red, and green curves show the magnetization per site in the presence of a longitudinal field \(h\) for system sizes \(N=16,20,24\) using ED. The blue curve shows the magnetization calculated using the TM method at \(T/J_{c}\to 0\). The values of \(J_{q}\), \(J_{d}\) are (i) \(0.2,2.0\) for the SRFM, (ii) \(2.0,0.4\) for the AAFM, (iii) \(2.0,1.0\) for the Dimer, and (iv) \(2.0,1.6\) for the SLFM phases respectively.
large \(J_{c}\); a field \(h=3.5\) is required to close this gap and reach saturation. The other three phases, the AAFM, the Dimer, and the SLFM, have a strong Heisenberg rung exchange (\(J_{q}=2.0\)), which forms strong singlets on the \(S-S\) pairs; these are energetically more stable than the \(\sigma-\sigma\) pairs with Ising-type rung exchange. The AAFM phase has an anisotropic antiferromagnetic spin alignment on the ladder with \(J_{q}=2.0\), \(J_{d}=0.4\), where a small field \(h_{c1}=0.7\) is sufficient to break the \(\sigma-\sigma\) pairs and reach the \(m=1/4\) plateau, as shown in Fig. 2.(ii). In the \(m=1/4\) plateau of the AAFM, the \(\sigma\) rung spin pairs take the spin density value 0.5, as shown in Fig. 3.a.(i). This spin configuration is called the \(m=1/4\) plateau type-2 and is shown in Fig. 3.b.(ii). In the Dimer phase, all the Ising-type exchanges are equal to unity, so the spin gap is smaller than in the AAFM phase, and it is closed by an external field \(h_{c1}=0.5\) for the given \(J_{q}(=2.0)\). Owing to the perfect singlet formation of the \(S-S\) pairs through the strong \(J_{q}\) coupling, the \(m=1/4\) plateau is much wider here than in the AAFM, as shown in Fig. 2.(iii). The GS of the SLFM phase has ferromagnetic spin arrangements along the legs, with the two legs aligned opposite to each other. Fig. 2.(iv) shows that the \(m=1/4\) plateau onsets at a field \(h_{c1}=0.7\) and has the largest width. The spin density shows that the Ising dimers are polarized along the field in the plateau phase. As in the AAFM, the Ising rungs are less energetic in this phase, while the oppositely aligned legs enhance the stability of the singlets on the Heisenberg pairs \(S-S\), giving rise to the largest plateau width. With a further increase in field, the plateau is destroyed and a sharp jump takes place at \(h_{c2}=4.5\). In all four phases, either all \(S-S\) or all \(\sigma-\sigma\) rung pairs break simultaneously, which results in the magnetic jumps.
#### iv.1.2 Quantum Fidelity and Bipartite Concurrence
The plateau formation discussed above differs among the four phases, and it can also be understood from a quantum-information perspective. In this subsection, we calculate the zero-temperature-limit quantum fidelity and the bipartite concurrence to analyze the plateaus; both can be obtained from the partition function, as discussed below. The quantum fidelity measures the overlap between two states and can be used to characterize phase transitions upon tuning of a parameter. Quan et al. showed that for any field the fidelity can be obtained from the partition function [58]. Accordingly, we calculate the quantum fidelity at field \(h\) with a small perturbation \(\delta h\) as \({\cal F}(h,\beta)=\frac{Q_{4}(h,\beta)}{\sqrt{Q_{4}(h+\delta h,\beta)Q_{4}(h-\delta h,\beta)}}\), where \(Q_{4}(h,\beta)\) is the partition function of one unit cell, Eq. 2. The field fidelity susceptibility can also be obtained as \(\chi(h,\beta)=\frac{\partial{\cal F}(h,\beta)}{\partial h}\). \({\cal F}(h,\beta)\) is unity when the state is unchanged and discontinuous at the phase transition points. Fig. 4.a.[(i)-(iv)] and b.[(i)-(iv)] show \({\cal F}(h,\beta)\) (left column) and \(\chi(h,\beta)\) (right column) as functions of \(h\) for the four phases: (i) the SRFM, (ii) the AAFM, (iii) the Dimer, and (iv) the SLFM. In each subfigure of \({\cal F}(h,\beta)\) and \(\chi(h,\beta)\), two discontinuities are observed for all four phases. These discontinuities are consistent with the jumps of the \(m-h\) curves in Fig. 2 and mark the magnetic phase transitions.
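Numerically, the fidelity is best evaluated in log space, since \(Q_{4}\) grows exponentially at low temperature; a minimal sketch, with `logQ4` a user-supplied function returning \(\ln Q_{4}(h,\beta)\) from Eq. 2:

```python
import numpy as np

def fidelity(logQ4, h, beta, dh=1e-3):
    """F(h) = Q4(h) / sqrt(Q4(h+dh) Q4(h-dh)); dips below 1 mark the
    transition fields h_c1 and h_c2."""
    return np.exp(logQ4(h, beta)
                  - 0.5 * (logQ4(h + dh, beta) + logQ4(h - dh, beta)))
```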
We also calculate the bond order, to track the configurational change, and the bipartite concurrence, to measure the quantum nature of the \(S-S\) pair connected through the Heisenberg rung exchange \(J_{q}\). If the concurrence is finite, the pair is in an entangled state; if the concurrence is zero, the pair is in a separable (unentangled) state. Wootters et al. and Karlova et al. calculated the concurrence of a spin pair connected by a Heisenberg interaction in terms of the local pair magnetization and the longitudinal and transverse spatial correlations, to detect phase transitions at finite temperature [59; 60]. In our study, we calculate the bipartite
Figure 4: (a) (left column) Quantum fidelity and (b) (right column) fidelity susceptibility calculated using TM for the thermodynamic limit (\(N\rightarrow\infty\)) at \(T/J_{c}\to 0\). Black, red, green, and blue colors represent the four phases, the SRFM (\(J_{q}=0.2\), \(J_{d}=2.0\)), the AAFM (\(J_{q}=2.0\), \(J_{d}=0.4\)), the Dimer (\(J_{q}=2.0\), \(J_{d}=1.0\)), and the SLFM (\(J_{q}=2.0\), \(J_{d}=1.6\)) respectively, arranged sequentially from top to bottom in both columns.
concurrence for the Heisenberg rung pair connected by \(J_{q}\) as
\[\mathcal{C}(J_{q},h)=max\Big\{0,4|c^{T}(J_{q},h)|-2\sqrt{\left(\tfrac{1}{4}+c^{L}(J_{q},h)\right)^{2}-m^{2}(J_{q},h)}\Big\} \tag{7}\]
where \(m(J_{q},h)\), \(c^{L}(J_{q},h)\), and \(c^{T}(J_{q},h)\) are the pair magnetization and the longitudinal and transverse components of the bond order respectively, defined as:
\(m(J_{q},h)=\frac{m(h,\beta)}{2}=-\frac{1}{2}\frac{\partial F(h,T)}{\partial h}\)
\(c^{L}(J_{q},h)=\left<S_{2i,1}^{z}S_{2i,2}^{z}\right>=-\frac{1}{4\beta}\frac{\partial[\log Q_{4}(h,T)]}{\partial J_{q}}\)
\(c^{T}(J_{q},h)=\left<S_{2i,1}^{x}S_{2i,2}^{x}\right>=-\frac{1}{8\beta}\frac{\partial[\log Q_{4}(h,T)]}{\partial J_{q}}\)
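Given these three quantities (obtained, e.g., by finite-difference derivatives of \(\ln Q_{4}\)), Eq. 7 is a one-line evaluation; the sketch below is ours and assumes a physical input with \((\frac{1}{4}+c^{L})^{2}\geq m^{2}\):

```python
import numpy as np

def concurrence(m_pair, cL, cT):
    """Bipartite concurrence of the S-S rung pair, Eq. (7)."""
    return max(0.0, 4.0 * abs(cT)
               - 2.0 * np.sqrt((0.25 + cL) ** 2 - m_pair ** 2))
```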
The longitudinal bond order \(c^{L}(J_{q},h)\) is shown in Fig. 5.(a) for all four phases: (i) the SRFM, (ii) the AAFM, (iii) the Dimer, and (iv) the SLFM. In the SRFM phase, the small positive value of \(c^{L}(J_{q},h)\) indicates the ferromagnetic alignment of the \(S-S\) pair for any value of \(h\). Because \(c^{L}(J_{q},h)\) and \(c^{T}(J_{q},h)\) are both small, Eq. 7 forces \(\mathcal{C}(J_{q},h)\) to vanish; in other words, there is no quantum concurrence, i.e., no quantum entanglement, between the two \(S\) spins. For the other three cases, the AAFM, the Dimer, and the SLFM, \(c^{L}(J_{q},h)\) is negative below a critical field \(h_{c2}\) and becomes positive for \(h>h_{c2}\). Between \(h_{c1}\) and \(h_{c2}\), the system is in the \(m=1/4\) plateau, as seen in Fig. 5.(a) and (b), consistent with the magnetic jumps in Fig. 2.[(i)-(iv)]. Since the Heisenberg rung pair spins \(S-S\) remain anti-parallel for \(h_{c1}<h<h_{c2}\), \(c^{L}(J_{q},h)\) also supports the \(m=1/4\) plateau type-2 configuration shown in Fig. 3.b.(ii). The concurrence \(\mathcal{C}(J_{q},h)\) is positive, i.e., the pair is entangled, below \(h_{c2}\) for the AAFM, the Dimer, and the SLFM phases. For \(h>h_{c2}\), however, both the longitudinal and transverse bond orders are very small and positive, so Eq. 7 takes \(\mathcal{C}(J_{q},h)\) to zero and the \(S-S\) pairs lose their quantum concurrence at saturation magnetization, becoming separable.
### Magnetization process in the presence of a transverse field
In general, such systems are experimentally synthesized in powder or single-crystal form, so to understand the directional dependence of the magnetization on the field, we study the effect of a transverse field \(h^{x}\) in our model [2; 3]. In the presence of \(h^{x}\), the Hamiltonians of different units do not commute with each other, and therefore the TM method is no longer applicable; we instead use the ED method to show the finite-size scaling of the magnetization for three system sizes and then analyze the spin density for \(N=24\).
#### iv.2.1 Transverse component of magnetization
It should be noted that all four GS phases lie in the \(S^{z}=0\) sector; the application of an external transverse field \(h^{x}\) therefore changes the transverse rather than the longitudinal component of the magnetization. We calculate the transverse
Figure 5: (a) Longitudinal bond order \(c^{L}(J_{q},h)\), (b) quantum concurrence \(\mathcal{C}(J_{q},h)\) for the Heisenberg spin pair \(S-S\) connected by rung strength \(J_{q}\), shown as functions of \(h\). Black, red, green, and blue colors represent the four phases: the SRFM (\(J_{q}=0.2\), \(J_{d}=2.0\)), the AAFM (\(J_{q}=2.0\), \(J_{d}=0.4\)), the Dimer (\(J_{q}=2.0\), \(J_{d}=1.0\)), and the SLFM (\(J_{q}=2.0\), \(J_{d}=1.6\)), respectively, at \(T/J_{c}=0.02\).
Figure 6: Transverse magnetization \(m^{x}\) as a function of the transverse field \(h^{x}\) for the four phases: (i) the SRFM (\(J_{q}=0.2\), \(J_{d}=2.0\)), (ii) the AAFM (\(J_{q}=2.0\), \(J_{d}=0.4\)), (iii) the Dimer (\(J_{q}=2.0\), \(J_{d}=1.0\)), and (iv) the SLFM (\(J_{q}=2.0\), \(J_{d}=1.6\)). Black, red, and green colors represent the system sizes \(N=16\), \(20\), and \(24\), respectively.
magnetization \(m^{x}\) in terms of the spin density \(\langle S_{i}^{x}\rangle\) at each site \(i\) as
\[m^{x}=\frac{1}{N}\sum_{i=1}^{N}\langle S_{i}^{x}\rangle \tag{8}\]
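The evaluation of Eq. 8 by ED can be sketched as follows; the Hamiltonian below is a deliberately simple transverse-field toy model standing in for Eq. 1, which would be assembled with the same operator embedding.

```python
import numpy as np
from functools import reduce

# ED sketch of Eq. 8: transverse spin densities <S_i^x> and m^x from the
# ground state of a small spin-1/2 cluster. The toy Hamiltonian below is a
# stand-in; the ladder Hamiltonian of Eq. 1 is assembled the same way.
sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
sz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def site_op(op, i, N):
    """Embed a single-site operator at site i of an N-spin cluster."""
    ops = [I2] * N
    ops[i] = op
    return reduce(np.kron, ops)

def mx_from_ground_state(H, N):
    """Ground state of H, then <S_i^x> and m^x = (1/N) sum_i <S_i^x>."""
    _, vecs = np.linalg.eigh(H)
    gs = vecs[:, 0]
    densities = [gs @ site_op(sx, i, N) @ gs for i in range(N)]
    return np.mean(densities), densities

# Toy usage: 4-site Ising chain in a transverse field h^x (illustration only).
N, hx, J = 4, 0.5, 1.0
H = sum(J * site_op(sz, i, N) @ site_op(sz, i + 1, N) for i in range(N - 1))
H = H - hx * sum(site_op(sx, i, N) for i in range(N))
print(mx_from_ground_state(H, N)[0])
```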
In Fig.6[(i)-(iv)], we show the transverse (along the \(+x\) direction) magnetization for all four phases: (i) the SRFM, (ii) the AAFM, (iii) the Dimer, and (iv) the SLFM, with the corresponding sets of chosen \(J_{q}\), \(J_{d}\) values as in section IV.1.1, for the system sizes \(N=16,20\), and \(24\). In the SRFM phase, \(m^{x}\) varies continuously with the field \(h^{x}\) up to the saturation value. In this phase, Ising bonds dominate the GS for all the spins, and therefore the spins smoothly orient along the field in the x-direction. In the AAFM phase, the curve increases smoothly and then shows an \(m^{x}\approx 1/4\) plateau-like behavior in the range \(0.45<h^{x}<1.75\), after which it jumps to full saturation. A similar behavior is noticed in the Dimer phase, in which the plateau onsets at a field \(h^{x}=0.4\); in this case, the jump from the \(m^{x}=1/4\) plateau to saturation is much faster than in the AAFM phase. In the SLFM phase, for \(h^{x}<0.7\), \(m^{x}\) increases smoothly and then forms a plateau-like structure for \(0.7<h^{x}<1.7\). It shows a sudden jump around \(h^{x}=1.75\) and slowly reaches the saturation magnetization for higher \(h^{x}\). All the subfigures show that the finite-size effect is negligibly small. In the next subsection, we analyze the \(m^{x}=1/4\) plateau mechanism for all the phases based on the spin density.
#### iv.2.2 Transverse component of spin density
For a more detailed understanding, we show the color map of the spin density \(\langle S_{i}^{x}\rangle\) in all four phases for \(N=24\). Note that the \(\sigma-\sigma\) and \(S-S\) pairs alternate with the site index \(i\): the \(\sigma-\sigma\) pairs occupy the site indices [1,2,5,6,...], whereas the \(S-S\) pairs occupy [3,4,7,8,...], as shown in all subfigures of Fig.7. In Fig.7(i), \(\langle S_{i}^{x}\rangle\) varies continuously with increasing \(h^{x}\) for all the sites. As the GS of the SRFM is dominated by Ising interactions and there is no transverse spin correlation, all the spins continuously orient along \(h^{x}\) in this phase. In the AAFM phase, the \(S-S\) pairs have a strong transverse spin correlation, whereas the \(\sigma-\sigma\) rung pairs are weak and align continuously with increasing \(h^{x}\), as shown in the color map of Fig.7.(ii). Saturation is attained upon a further increase of the field, which breaks the strong \(S-S\) pairs at \(h^{x}=1.75\). As shown in Fig.7(iii) for the Dimer phase, the continuous increase of \(m^{x}\) is similar to the AAFM phase, but a sudden jump at \(h^{x}=1.7\) is noticed because of the unbinding of the perfect \(S-S\) singlet pairs. Fig.7.(iv) shows \(\langle S_{i}^{x}\rangle\) for the SLFM phase. In the GS of this phase, both the
Figure 7: Transverse component of the spin density \(\langle S_{i}^{x}\rangle\) at each site \(i\) for the four phases: (i) the SRFM (\(J_{q}=0.2\), \(J_{d}=2.0\)), (ii) the AAFM (\(J_{q}=2.0\), \(J_{d}=0.4\)), (iii) the Dimer (\(J_{q}=2.0\), \(J_{d}=1.0\)), and (iv) the SLFM (\(J_{q}=2.0\), \(J_{d}=1.6\)), shown for system size \(N=24\). Along the horizontal axis, the site index \(i\) of each spin is shown. Along the vertical axis, the transverse field \(h^{x}\) is varied. The color bar shown in all subfigures represents the amplitude of \(\langle S_{i}^{x}\rangle\) and varies from \(0\) to \(0.5\) for spin-\(1/2\) systems.
Ising pairs and the Heisenberg pairs are aligned parallel, but the spins of opposite legs are aligned oppositely. With increasing \(h^{x}\), all the Ising dimer pairs are continuously broken until the magnetization reaches \(1/4\). A further increase in \(h^{x}\) does not easily close the spin gap, resulting in a wide \(m^{x}=1/4\) plateau until a field \(h^{x}=1.75\) is applied, which breaks the Heisenberg rung \(S-S\) pairs.
## V Summary
In this manuscript, we study the effect of an external magnetic field on the GS phases of the Hamiltonian in Eq.1, a frustrated spin-\(\frac{1}{2}\) two-leg ladder with alternating Ising and Heisenberg rungs and Ising interactions along the legs and diagonals. Tuning the exchange parameters in the model Hamiltonian can give rise to four GS phases: (i) the SRFM, (ii) the AAFM, (iii) the Dimer, and (iv) the SLFM, whose spin arrangements are schematically represented in Fig.1.b.[(i)-(iv)]. We analyzed the magnetization behavior in the presence of longitudinal and transverse external magnetic fields. In the presence of the longitudinal magnetic field \(h\), the GS shows three magnetic plateaus: the first is due to the finite spin gap, and the second, at \(m=1/4\), is formed by the polarization of one type of rung spin dimer along the field. The latter can be of two types, as shown in Fig.3.b. In the SRFM phase, the Heisenberg rung spin pairs \(S-S\) are polarized, giving rise to the \(m=1/4\) plateau type-1 shown in Fig.3.b.(i). For the other three phases, the AAFM, the Dimer, and the SLFM, the \(\sigma-\sigma\) spins are polarized at \(m=1/4\), giving rise to the plateau type-2 shown in Fig.3.b.(ii). In the presence of a large external field, the spins in all phases are completely polarized along the field and form the third plateau at the saturation magnetization \(m=1/2\). The \(m=1/4\) plateau width is sensitive to the parameter values, as obtained in Eq.6. We also notice that consecutive plateaus are connected by jumps in the magnetization curve because of the unpairing of either all \(S-S\) or all \(\sigma-\sigma\) rung dimers. To understand the quantum nature of the GS wave function, we calculate the quantum fidelity and the quantum concurrence in the presence of a longitudinal field. In all four phases, the fidelity deviates from unity at the critical fields of the magnetic phase transitions, as shown in Fig.4. The quantum concurrence shown in Fig.5 measures the entanglement between the two spins of the Heisenberg rung. The concurrence is zero for all fields in the SRFM phase, meaning that the rung pair in the SRFM phase is an unentangled (pure product) state. In the other phases, the AAFM, the Dimer, and the SLFM, the concurrence is finite below a critical field, before the formation of the \(m=1/4\) plateau: in these phases the zero plateau is an entangled state, whereas the other two plateaus are unentangled. Moreover, all of the jumps in the magnetization can be indirectly inferred from the jumps in the concurrence, as shown in Fig.5.
In the SRFM phase, the spin alignments are along the z direction and the exchange interaction along the \(+x\) direction in the Heisenberg rung dimers is weak; therefore, the magnetization \(m^{x}\) varies continuously up to saturation upon application of a transverse field. In the other phases, the AAFM, the Dimer, and the SLFM, the magnetization process can be understood in terms of two-sublattice behavior. The sublattice with the \(\sigma-\sigma\) dimers is paramagnetic along x, whereas in the other sublattice the Heisenberg \(S-S\) dimers have a strong transverse exchange component that induces a finite spin gap in the system. As a consequence, at lower values of the transverse field, the magnetization curve varies continuously because of the gradual change of \(m^{x}\) of the \(\sigma-\sigma\) spins up to \(m^{x}=1/4\) with increasing \(h^{x}\), as shown by the spin density in Fig.7.[(ii)-(iv)]. With a further increase in the field, at a critical value, the magnetization curve shows a sudden jump from \(m^{x}\approx 1/4\) to \(1/2\) for the three phases, the AAFM, the Dimer, and the SLFM, and this phase transition appears to be of first order. The plateau width is sensitive to the set of exchange interactions \(J_{q}\) and \(J_{d}\) in the presence of a transverse field as well. In conclusion, this model system reveals insightful mechanisms of plateaus and jumps, and these magnetic properties can be utilized in designing quantum switches, magnetic memories, and similar devices. Such systems might also find applications in quantum information processing and quantum computation.
## VI Acknowledgements
M.K. thanks SERB Grant Sanction No. CRG/2022/000754 for the computational facility. S.S.R. thanks CSIR-UGC for financial support.
## VII Appendix
The partition function in the presence of a longitudinal field \(h\) for \(N\) sites, \(Q_{N}(h,\beta)\), with Hamiltonian \(H\) can be written as
\[Q_{N}(h,\beta)=Tr\left(e^{-\beta{\bf H}}\right) \tag{9}\]
where Tr denotes the trace of the matrix, \(\beta=1/\left(k_{B}T\right)\), and \(k_{B}\) is the Boltzmann constant. Using the explicit configuration basis of the system, Eq. 9 is rewritten in the following form,
\[Q_{N}(h,\beta)=\sum_{\{\sigma,S\}}<\cdots,\sigma_{2i-1,1},\sigma_{2i-1,2},S_{2i,1},S_{2i,2},\cdots\mid e^{-\beta{\bf H}}\mid\cdots,\sigma_{2i-1,1},\sigma_{2i- 1,2},S_{2i,1},S_{2i,2},\cdots>,\]
here the summation is over all possible configurations \(\{\sigma,S\}\) of the system. For a given configuration, \(\mid\cdots,\sigma_{2i-1,1},\sigma_{2i-1,2},S_{2i,1},S_{2i,2},\cdots>\) represents a basis state. In our case, the system is composed of \(n=N/4\) units, and for each unit, the Hamiltonian is written in Eq.1. The partition function of the entire ladder can be written as:
\[Q_{N}(h,\beta)=\sum_{\sigma}<\cdots,\sigma_{2i-1,1},\sigma_{2i-1,2},\cdots\mid \prod_{i=1}^{n}{\bf T}_{i}\mid\cdots,\sigma_{2i-1,1},\sigma_{2i-1,2},\cdots>\]
where \(T_{i}=\sum_{\{S\}_{i}}<S_{2i,1},S_{2i,2}\mid e^{-\beta{\bf H}_{i}(\sigma,S)} \mid S_{2i,1},S_{2i,2}>\) is the well-known transfer matrix operator for each unit. Here the summation is over \(\{S\}_{i}\), which represents all possible configurations of the spins \(S_{2i,1}\) and \(S_{2i,2}\) (from the \(i^{th}\) unit). Note that \(T_{i}\) does not contain the components of the spin \(S\) operators; it contains only the \(\sigma\) variables, namely \(\sigma_{2i-1,1},\sigma_{2i-1,2},\sigma_{2i+1,1}\), and \(\sigma_{2i+1,2}\).
Since the Hamiltonians of the different units commute with each other, by introducing identity operators \(I=\sum_{\{\sigma\}_{i}}|\sigma_{2i-1,1},\sigma_{2i-1,2}><\sigma_{2i-1,1}, \sigma_{2i-1,2}|\) between successive \({\bf T}\) operators, we can finally write the partition function as the trace of the \(n\)-th power of a small \((4\times 4)\) transfer matrix \({\bf P}\). We have,
\[Q_{N}(h,\beta)=Tr({\bf P}^{n}),\]
The elements of the transfer matrix are given by
\[P_{(\sigma_{2i-1,1},\sigma_{2i-1,2}),(\sigma_{2i+1,1},\sigma_{2i+1,2})}=<\sigma _{2i-1,1},\sigma_{2i-1,2}\mid{\bf T}_{i}\mid\sigma_{2i+1,1},\sigma_{2i+1,2}> \tag{10}\]
Before we construct and diagonalize the \({\bf P}\) matrix, we first need to carry out the trace over the configurations \(\{S\}_{i}\) to find the form of \({\bf T}_{i}\). Since \({\bf T}_{i}=\sum_{\{S\}_{i}}<S_{2i,1},S_{2i,2}\mid e^{-\beta{\bf H}_{i}(\sigma,S)}\mid S_{2i,1},S_{2i,2}>\), working in the eigenstate basis of \({\bf H}_{i}\) gives \({\bf T}_{i}\) as the sum of the exponentials of the eigenvalues of \(-\beta{\bf H}_{i}\). Next, we calculate the eigenvalues of the \({\bf H}_{i}\) operator. By considering
\[{\bf H}_{i}=\frac{J_{q}}{2}\left(S_{2i,1}^{+}S_{2i,2}^{-}+S_{2i,1}^{-}S_{2i,2 }^{+}\right)+J_{q}\left(S_{2i,1}^{z}S_{2i,2}^{z}\right)+aS_{2i,1}^{z}+bS_{2i,2 }^{z}+f\]
We can write down the following Hamiltonian matrix in the eigenstate basis of \(S_{2i,1}^{z}S_{2i,2}^{z}\) operator,
\[H_{i}=\left[\begin{array}{cccc}\frac{J_{q}}{4}+\frac{(a+b)}{2}+f&0&0&0\\ 0&\frac{-J_{q}}{4}+\frac{(a-b)}{2}+f&\frac{J_{q}}{2}&0\\ 0&\frac{J_{q}}{2}&\frac{-J_{q}}{4}-\frac{(a-b)}{2}+f&0\\ 0&0&0&\frac{J_{q}}{4}-\frac{(a+b)}{2}+f\end{array}\right].\]
The Hamiltonian matrix yields four eigenvalues from the three \(S^{z}_{SS}\) sectors of the S-S pair:

(i) From the \(S^{z}_{SS}=1\) sector (formed by the S-S pair): \(\theta_{1}=(f+\frac{J_{q}}{4})+\frac{(a+b)}{2}\)

(ii) From the \(S^{z}_{SS}=-1\) sector: \(\theta_{2}=(f+\frac{J_{q}}{4})-\frac{(a+b)}{2}\)

(iii) From the \(S^{z}_{SS}=0\) sector: \(\theta_{3}=(f-\frac{J_{q}}{4})+\frac{\sqrt{J_{q}^{2}+(a-b)^{2}}}{2}\) and \(\theta_{4}=(f-\frac{J_{q}}{4})-\frac{\sqrt{J_{q}^{2}+(a-b)^{2}}}{2}\).
We note that the eigenvalues (\(\theta_{k}\)) are functions of \(\sigma\) variables, namely \(\sigma_{2i-1,1},\sigma_{2i-1,2},\sigma_{2i+1,1}\) and \(\sigma_{2i+1,2}\). Using these eigenvalues, we rewrite \({\bf T}_{i}\) as,
\[{\bf T}_{i} =\sum_{\{S\}_{i}}<S_{2i,1},S_{2i,2}\mid e^{-\beta{\bf H}_{i}( \sigma,S)}\mid S_{2i,1},S_{2i,2}>\] \[=\sum_{k=1}^{4}e^{-\beta\theta_{k}}.\]
\[=2e^{-\beta f}\biggl{[}e^{-\frac{\beta J_{q}}{4}}cosh\biggl{(} \frac{\beta(a+b)}{2}\biggr{)}\] \[+e^{\frac{\beta J_{q}}{4}}cosh\biggl{(}\frac{\beta J_{q}}{2} \sqrt{1+\frac{(a-b)^{2}}{J_{q}^{2}}}\biggr{)}\biggr{]}\]
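Numerically, the same construction can be sketched by assembling \({\bf T}_{i}\) from the four eigenvalues \(\theta_{k}\); the functions `a_fn`, `b_fn`, and `f_fn` below are placeholders whose explicit \(\sigma\)-dependence follows from Eq. 1 and is not reproduced in this excerpt.

```python
import numpy as np

# Assemble T_i from the four eigenvalues theta_k and build the 4x4 transfer
# matrix P over (sigma, sigma) pair states. a_fn, b_fn, f_fn are placeholders.

def theta(a, b, f, Jq):
    root = np.sqrt(Jq**2 + (a - b) ** 2)
    return np.array([f + Jq / 4 + (a + b) / 2,
                     f + Jq / 4 - (a + b) / 2,
                     f - Jq / 4 + root / 2,
                     f - Jq / 4 - root / 2])

def transfer_matrix(a_fn, b_fn, f_fn, Jq, beta):
    pairs = [(s1, s2) for s1 in (0.5, -0.5) for s2 in (0.5, -0.5)]
    P = np.empty((4, 4))
    for i, left in enumerate(pairs):       # (sigma_{2i-1,1}, sigma_{2i-1,2})
        for j, right in enumerate(pairs):  # (sigma_{2i+1,1}, sigma_{2i+1,2})
            th = theta(a_fn(left, right), b_fn(left, right),
                       f_fn(left, right), Jq)
            P[i, j] = np.exp(-beta * th).sum()   # T_i = sum_k exp(-beta theta_k)
    return P

def free_energy_per_unit(P, beta):
    """-(1/beta) log(lambda_max): thermodynamic limit of Tr(P^n)."""
    lam_max = np.max(np.linalg.eigvals(P).real)  # P is symmetric for this model
    return -np.log(lam_max) / beta
```

The magnetization then follows from a numerical derivative of the resulting free energy with respect to \(h\).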
Further, we define \(Q=e^{\frac{\beta J_{q}}{4}}\), \(C=e^{\frac{\beta J_{c}}{4}}\), \(H=e^{\frac{\beta h}{4}}\), \(X=\frac{(J_{eq}+J_{d})}{2}\), and \(Y=\frac{(J_{eq}-J_{d})^{2}}{J_{q}^{2}}\), and also \(\Delta_{1}=\sqrt{1+Y}\), \(\Delta_{2}=\sqrt{1+4Y}\).
The transfer matrix for one unit becomes
\[{\bf P}=\left[\begin{array}{cccc}p&q&q&r\\ q&s&u&v\\ q&u&s&v\\ r&v&v&w\end{array}\right].\]
where
\[p=2e^{-\beta(J_{c}/4+h)}\biggl[Q^{-1}cosh\bigl(\beta(2X+h)\bigr)+Qcosh\Bigl(\frac{\beta J_{q}}{2}\Bigr)\biggr]\]
\[q=2e^{-\beta(h/2)}\biggl[Q^{-1}cosh\bigl(\beta(X+h)\bigr)+Qcosh\Bigl(\frac{\beta J_{q}\Delta_{1}}{2}\Bigr)\biggr]\]
\[r=2e^{-\beta(J_{c}/4)}\biggl[Q^{-1}cosh(\beta h)+Qcosh\Bigl(\frac{\beta J_{q}}{2}\Bigr)\biggr]\]
\[s=2e^{\beta(J_{c}/4)}\biggl[Q^{-1}cosh(\beta h)+Qcosh\Bigl(\frac{\beta J_{q}\Delta_{2}}{2}\Bigr)\biggr]\]
\[u=2e^{\beta(J_{c}/4)}\biggl[Q^{-1}cosh(\beta h)+Qcosh\Bigl(\frac{\beta J_{q}}{2}\Bigr)\biggr]\]
\[v=2e^{\beta(h/2)}\biggl[Q^{-1}cosh\bigl(\beta(-X+h)\bigr)+Qcosh\Bigl(\frac{\beta J_{q}\Delta_{1}}{2}\Bigr)\biggr]\]
\[w=2e^{-\beta(J_{c}/4-h)}\biggl[Q^{-1}cosh\bigl(\beta(-2X+h)\bigr)+Qcosh\Bigl(\frac{\beta J_{q}}{2}\Bigr)\biggr]\]
From \(|P-\lambda I_{4}|=0\), we get the eigenvalues in the form of
\[\boxed{\lambda_{4}=(s-u)}\] \[\boxed{\lambda^{3}-B_{0}\lambda^{2}-C_{0}\lambda+D_{0}=0} \tag{11}\]
Here, \(\lambda_{4}\) is one of the eigenvalues, whereas the other three follow from Eq.11. The coefficients of the equation are defined as:
\[B_{0} =(s+u+w)\] \[C_{0} =[2(q^{2}+v^{2}-\frac{r^{2}}{2})-pw+(p-w)(s+u)]\] \[D_{0} =[4qrv-2pv^{2}-2q^{2}w+pw^{2}-(s+u)(r^{2}+pw)]\]
For the cubic equation 11, the eigenvalues \(\lambda_{i}\) satisfy the relations

\[\sum_{i=1}^{3}\lambda_{i}=B_{0},\qquad\lambda_{1}\lambda_{2}+\lambda_{2}\lambda_{3}+\lambda_{3}\lambda_{1}=-C_{0} \tag{12}\]
Now, let us make a reasonable assumption to simplify the calculation. We assume the eigenvalues are ordered as \(\lambda_{1}\gg\lambda_{2}\gg\lambda_{3}\), so that \(\lambda_{3}\) contributes the least to the partition function, and Eq.12 can be approximated as:
\[\lambda_{1}+\lambda_{2}=B_{0},\lambda_{1}\lambda_{2}=-C_{0} \tag{13}\]
Eq. 13 then yields the other two eigenvalues:
\[\lambda_{1,2}^{2}-B_{0}\lambda_{1,2}-C_{0}=0\] \[\boxed{\Longrightarrow\lambda_{1,2}=\frac{B_{0}\pm\sqrt{B_{0}^{2} +4C_{0}}}{2}} \tag{14}\]
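The quality of this two-root approximation can be checked against the exact roots of Eq. 11; the coefficients below are illustrative placeholders, chosen only so that one root dominates.

```python
import numpy as np

# Check lambda_{1,2} = (B0 +/- sqrt(B0^2 + 4 C0)) / 2 of Eq. 14 against the
# exact roots of lambda^3 - B0 lambda^2 - C0 lambda + D0 = 0 (Eq. 11).
B0, C0, D0 = 5.0, 2.0, 0.1                       # assumed placeholder values
exact = np.roots([1.0, -B0, -C0, D0]).real       # exact cubic roots
exact = exact[np.argsort(-np.abs(exact))]        # sort by dominance
approx = (B0 + np.array([1.0, -1.0]) * np.sqrt(B0**2 + 4.0 * C0)) / 2.0
print(exact[:2], approx)                         # two dominant roots vs Eq. 14
```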
We find that Eq. 14 becomes much simpler upon a further approximation in the \(\beta\rightarrow\infty\) limit:
\[\lambda_{1}=(w+s+u)\] \[=2e^{\frac{-\beta(J_{g}-4h)}{4}}\] \[\times\bigg{[}Q^{-1}cosh[\beta(h-2X)]+Qcosh[\frac{\beta J_{q}}{ 2}]\bigg{]}\] \[+2e^{\frac{\beta J_{c}}{4}}\] \[\times\bigg{[}2Q^{-1}cosh[\beta h]+Qcosh[\frac{\beta J_{q}\Delta _{2}}{2}]+Qcosh[\frac{\beta J_{q}}{2}]\bigg{]} \tag{15}\]
\(Q_{N}(h,\beta)\) then takes the form

\(Q_{N}(h,\beta)=\lambda_{1}^{n}+\lambda_{2}^{n}+\lambda_{3}^{n}+\lambda_{4}^{n}\)
For \(n\rightarrow\infty\), with \(\lambda_{1}\) being the largest eigenvalue, the partition functions for the entire system and for one unit become \(Q_{N}(h,\beta)\approx[\lambda_{1}]^{n}\) and \(Q_{4}(h,\beta)\approx\lambda_{1}\), respectively.
Now, we write \(\lambda_{1}\) as a polynomial in \(e^{\beta h}\):
\[\lambda_{1}=a_{0}e^{2\beta h}+b_{0}e^{\beta h}+c_{0}e^{-\beta h}+d_{0} \tag{16}\]
We define the system parameters as follows:

\(a_{0}=e^{-\frac{\beta}{4}(J_{q}+J_{d}+2)}\),

\(b_{0}=2e^{-\frac{\beta}{4}(J_{q}-1)}+2e^{\frac{\beta}{4}(J_{q}-1)}cosh[\frac{\beta J_{q}}{2}]\),

\(c_{0}=2e^{-\frac{\beta}{4}(J_{q}-1)}\),

\(d_{0}=2cosh[\frac{\beta J_{q}}{2}](e^{\frac{\beta J_{q}}{4}}+e^{\frac{-\beta}{4}})+2e^{\frac{\beta J_{q}}{4}}cosh[\frac{\beta J_{q}\Delta_{2}}{2}]+e^{\frac{\beta(J+sJ_{d}+J_{d})}{4}}\)
By setting the derivative of the free energy with respect to the magnetization to zero, i.e., \(\frac{\partial F}{\partial m}=0\), the two critical fields \(h_{\pm}\) are obtained as
\[h_{\pm}=\frac{1}{\beta}ln\big{[}\frac{b_{0}\pm\sqrt{b_{0}^{2}-3a_{0}d_{0}}}{3a _{0}}\big{]} \tag{17}\]
|
2307.10121 | Symmetrically pulsating bubbles swim in an anisotropic fluid by
nematodynamics | Swimming in low-Reynolds-number fluids requires the breaking of time-reversal
symmetry and centrosymmetry. Microswimmers, often with asymmetric shapes,
exhibit nonreciprocal motions or exploit nonequilibrium processes to propel.
The role of surrounding fluids has also attracted attention because
viscoelastic, non-Newtonian, and anisotropic properties of fluids matter in
propulsion efficiency and navigation. Here we experimentally demonstrate that
anisotropic fluids, nematic liquid crystals (NLC), can make a pulsating
spherical bubble swim despite its centrosymmetric shape and time-symmetric
motion. The NLC breaks the centrosymmetry by a deformed nematic director field
with a topological defect accompanying the bubble. The nematodynamics renders
the nonreciprocity in the pulsation-induced fluid flow. We also report the
speed enhancement by confinement and the propulsion of another symmetry-broken
bubble dressed by a bent disclination. Our experiments and theory elucidate
another possible mechanism of moving bodies in complex fluids by spatiotemporal
symmetry breaking. | Sung-Jo Kim, Žiga Kos, Eujin Um, Joonwoo Jeong | 2023-07-19T16:37:52Z | http://arxiv.org/abs/2307.10121v1 | # Symmetrically pulsating bubbles swim in an anisotropic fluid by nematodynamics
###### Abstract
Swimming in low-Reynolds-number fluids requires the breaking of time-reversal symmetry and centrosymmetry. Microswimmers, often with asymmetric shapes, exhibit nonreciprocal motions or exploit nonequilibrium processes to propel. The role of surrounding fluids has also attracted attention because viscoelastic, non-Newtonian, and anisotropic properties of fluids matter in propulsion efficiency and navigation. Here we experimentally demonstrate that anisotropic fluids, nematic liquid crystals (NLC), can make a pulsating spherical bubble swim despite its centrosymmetric shape and time-symmetric motion. The NLC breaks the centrosymmetry by a deformed nematic director field with a topological defect accompanying the bubble. The nematodynamics renders the nonreciprocity in the pulsation-induced fluid flow. We also report the speed enhancement by confinement and the propulsion of another symmetry-broken bubble dressed by a bent disclination. Our experiments and theory elucidate another possible mechanism of moving bodies in complex fluids by spatiotemporal symmetry breaking.
## Introduction
Low-Reynolds-number (Re \(\ll 1\)) hydrodynamics governs the locomotion of microswimmers [1, 2]. The Navier-Stokes equation becomes time-independent when the viscous force dominates over the inertial force in the low-Re regime. For Newtonian incompressible fluids, the scallop theorem then implies that only swimmers exhibiting nonreciprocal motion, which breaks time-reversal symmetry, can gain net propulsion [1, 3]. Examples in nature include the whip- or corkscrew-like flagellar motions and the metachronal waves of cilia in microorganisms [4, 5, 6, 1]. External field-driven artificial swimmers [7, 8, 9, 10, 11] and various theoretical models, such as the swimming sheet and the three-link swimmer, mimic these biological motions [12, 13, 14, 5, 6, 15]. It is noteworthy that the symmetry-breaking motions of the swimmers set their swimming direction, _i.e._, the head and tail.
Microswimmers with no mechanical motion also break the symmetries in various ways [16, 17, 18, 5, 18]. Self-propelling micro-objects often generate and sustain a gradient, _e.g._, of chemicals or temperature, over their anisotropic bodies, imposing a head and a tail. Moreover, the gradient-generating processes, often involving chemical reactions and external energy input, are out of equilibrium, which inherently breaks the time-reversal symmetry. One well-known example is an active Janus particle with anisotropic chemical or electrical properties in a chemically reactive medium or under external electric fields [16, 17, 18]. Even with no intrinsic anisotropy, spontaneous symmetry breaking can also give rise to net propulsion, as in active emulsions and Quincke rollers [19, 20, 21, 16]. It is no surprise that many studies have focused on the understanding
and design of symmetry breaking by the microswimmers themselves.
Deploying a complex-fluid environment is another strategy to achieve symmetry breaking and even to guide the swimming direction. For instance, an artificial scallop can swim despite its reciprocal motion by exploiting time-asymmetric responses in non-Newtonian fluids [22]. Nematic liquid crystals (NLC), as a structured fluid with elasticity, can accommodate microswimmers. A topologically required point defect accompanies a spherical colloid dispersed in the NLC and breaks the colloid's symmetry, resulting in net propulsion [23, 24, 25, 16, 26]. Furthermore, an aligned NLC can guide the swimming directions of motile objects within it; flagellated bacteria favor swimming along the alignment direction [15, 16, 25].
In this study, utilizing symmetry breaking in the surrounding fluid alone, we experimentally demonstrate a spherical swimmer displaying a time-symmetric size pulsation. Body-motion-assisted microswimmers studied hitherto must have intrinsic anisotropy and show either nonreciprocal or time-asymmetric motion to gain net propulsion. However, our pulsating spherical bubble in NLC acquires centrosymmetry breaking from an accompanying point defect, and nematodynamics in the viscoelastic, anisotropic NLC provides time-reversal symmetry breaking despite the time-symmetric pulsation.
## Results and discussion
We recruit pulsating spherical bubbles dispersed in a homogeneously aligned nematic liquid crystal (NLC) cell as microswimmers (Fig. 1a). Two surface-treated substrates sandwich the NLC to form the homogeneously aligned cell of a uniform thickness (see Method and Supplementary Fig. 1) [27]. The nematic director \(\mathbf{n}\), representing the average direction of the rod-like LC molecules, is aligned uniaxially, defining the far-field director \(\mathbf{n_{0}}\) parallel to the substrates. Spherical bubbles of approximately 100 \(\mu\)m in diameter, changing their radii under pressure modulation, are dispersed in the NLC (Fig. 1b). The buoyant bubbles float but do not touch the top substrate because of the elastic repulsion in NLC [27, 28, 29, 30]; They remain spherical if they are smaller than the distance between the substrates.
The bubble distorts the homogeneously aligned NLC director field to satisfy the boundary conditions. The directors are perpendicular to the bubble surface [31, 29, 32], which causes the bubble to acquire a topological defect conserving the zero net topological charge of the homogeneous director configuration [33, 34]. Figs. 1c and d illustrate director configurations with a disclination ring called the Saturn-ring (SR) and a point defect called the hyperbolic hedgehog (HH), respectively [35]. The bubbles accompanying each defect are labelled SRB and HHB, respectively. The energetics regarding the bubble size and confinement determines the director configuration and the type of accompanying defects [28, 36]. For instance, as displayed in Fig. 1e, we can transform SR into HH by decreasing the bubble size (Supplementary Fig. 2). The location of the HH (left or right side of the bubble in Fig. 1a) is determined randomly. This point defect breaks the centrosymmetry, meaning that the defect side of HHB differs from the defect-free side, in contrast to the centrosymmetric SRB.
The pulsating HHB swims toward the accompanying HH, whereas the displacement of pulsating SRB is negligible (Figs. 1c and d). We prepare and investigate a single bubble in the whole sample cell to exclude interference from other bubbles (see Methods for experimental details and Supplementary Movie 1). As shown in Fig. 1b, \(\kappa=\frac{t}{T}\) represents the cycle number of the pulsating bubble when \(t\) and \(T\) are the elapsed time and period of the sinusoidal pressure modulation, respectively. The centrosymmetric SRB practically does not move (Fig. 1c and Supplementary Movie 2); The centre position \(z_{\text{B}}\) of SRB moves by approximately 1 \(\mu\)m during \(\kappa=480\) with no change in the bubble average radius \(R\). We then decrease the radius of the pulsating SRB gradually by applying the positive DC offset pressure \(P_{\text{offset}}\) to the sinusoidal pressure modulation (see Methods for experimental details). Fig. 1e in the bubble's centre frame reveals the transformation from SRB to HHB as the SR shrinks to HH. Subsequently, the
centrosymmetry-broken HHB shown in Fig. 1d and Supplementary Movie 3 swims toward HH along \(\mathbf{n_{0}}\) by the periodic size modulation. Specifically, the HHB of the average radius \(R_{0}=60\)\(\mu\)m swims at the average speed of 0.2 \(\mu\)m/s under 4-Hz pulsation with the amplitude \(\Delta R\approx 5\)\(\mu\)m. We confirm that this motion is not an overall drift because multiple HHBs in the same cell migrate toward their own HHs (Supplementary Movie 4).
We find the bubble's centre translates while oscillating with a phase delay to the sinusoidal radius oscillation. As shown in Fig. 2a, upon the sinusoidal pressure modulation of the frequency \(f\), HHB's radius \(R(t)\) oscillates about \(R_{0}\) with the same frequency \(f\) and an amplitude \(\Delta R\), following the isothermal volume change of the ideal gas (the red solid line in Fig. 2a), which is expressed as \(R(t)\approx R_{0}(1+\frac{\Delta R}{R_{0}}\sin 2\pi ft)\) with a linear approximation. This \(R(t)\) results in the motion of HHB's centre \(z_{\text{B}}(t)\) with an oscillation amplitude \(\Delta z\) and a linear translation \(z_{0}(t)\), _i.e._, \(z_{\text{B}}(t)\approx z_{0}(t)+\Delta z\sin\left(2\pi f(t-t_{\text{d}})\right)\) with \(z_{0}(t)=Ut+z_{\text{const}}\), a constant velocity \(U\), and a constant position \(z_{\text{const}}\) (Fig. 2a). As shown in the last row of Fig. 1d, we define the positive \(z\)-direction in the bubble's centre frame as the defect-to-bubble-centre direction parallel
Figure 1: Pulsating bubbles dispersed in NLC. **a**, Optical microscopy images of bubbles dressed with different types of NLC director configurations. The bubbles are dispersed in a homogeneously aligned NLC cell with the far-field director \(\mathbf{n_{0}}\parallel\mathbf{\hat{z}}\) and observed with a linearly polarised illumination \(\mathbf{Pol\perp n_{0}}\) in the transmission mode; this optical configuration applies to all images in Fig. 1. The scale bars are 100 \(\mu\)m. HHB and SRB stand for the bubbles with a hyperbolic-hedgehog (HH) defect and a Saturn-ring (SR) defect, respectively. The two HHBs have HHs on different sides. **b**, Time-lapse image sequence of the pulsating HHB with a periodic size modulation under sinusoidal pressure modulation. \(\kappa=\frac{t}{T}\) indicates the cycle number with the elapsed time \(t\) and pulsating period \(T\). **c**, Stroboscopic observation of the pulsating SRB in the laboratory frame according to the cycle number \(\kappa\). In the first row, we use a red solid line to indicate the SR and blue dashed lines to illustrate the NLC director configuration. \(z_{\text{B}}\), indicated by the green arrow in the second row, is the centre of the bubble of diameter \(2R\), and the red arrows point to the SR. **d**, Stroboscopic observation of the pulsating HHB and its translation in the laboratory frame. The red dot in the first row indicates the HH in the director configuration (blue dashed lines). The HHB translates toward the HH from the initial position, marked by the green dashed line and arrow. We define the positive \(z\)-direction as the direction from the HH to the centre of the bubble. **e**, SR-to-HH transformation in the bubble frame. The red dotted arrows at \(t=0\) s illustrate how the SR collapses into the HH. The bubble gradually shrinks because we apply a positive offset pressure in addition to the sinusoidal pressure modulation.
to \(\mathbf{n_{0}}\). Interestingly, \(z_{\mathrm{B}}(t)\) has a time delay \(t_{\mathrm{d}}\) relative to \(R(t)\), and Fig. 2b indicates the phase delay \(\Psi=2\pi ft_{\mathrm{d}}\propto f^{0.46\pm 0.02}\). We can exclude possible roles of inertia [37] and of the NLC's shear-rate-dependent viscosity [22] in the HHB's propulsion. The density (\(\approx 1.0\) g/cm\({}^{3}\)), viscosity (\(\approx 10^{-1}\) Pa\(\cdot\)s), characteristic time and length scales (\(T\approx 1\) s and \(\Delta R\approx 10\)\(\mu\)m), and flow speed (\(\frac{dR}{dt}\)) give Re < \(10^{-4}\) and a shear rate < 1 s\({}^{-1}\), where our NLC shows negligible shear-rate-dependent viscosity [38].
The comparison of the two time scales hints that nematodynamics around the pulsating HHB results in the experimentally observed phase delay and time-reversal symmetry breaking, enabling the swimming motion. The nematic director field at a length scale \(l\) has a diffusive elastic response with a timescale of \(\tau=\gamma_{1}l^{2}/K\), where \(\gamma_{1}\) is the nematic rotational viscosity and \(K\) is the nematic elastic constant in the one-constant approximation [39]. In the slow-pulsation regime, where the director oscillation period \(T\) set by the pulsation is much longer than \(\tau\), the nematic directors have enough time to adapt globally to the oscillating environment, resulting in negligible time-reversal symmetry breaking and no net translation according to the Scallop theorem. However, in the fast-pulsation regime, where \(T\) is comparable to or shorter than \(\tau\), the directors cannot respond quickly enough in a quasi-static way, so the directors near the pulsating HHB may change in a time-asymmetric manner, with a time delay relative to the fast pulsation.
The elasticity-mediated director dynamics resulting from a local director oscillation can be illustrated in the following one-dimensional system, which simplifies the director field around a pulsating bubble. Nematodynamics dictates that the director tilt angle \(\phi(x,\,t)\) at time \(t\) and distance \(x\) from a surface obeys the dynamic equation \(\gamma_{1}\frac{\partial\phi}{\partial t}=K\frac{\partial^{2}\phi}{\partial x ^{2}}\)[39]. With a periodic distortion \(\phi(x=0,t)=\phi_{0}\sin(\omega t)\) imposed at \(x=0\), the time-dependent solution for the tilt angle is \(\phi(x,t)=\phi_{0}\ e^{-x/\zeta}\sin(\omega t-x/\zeta)\), where \(\zeta=\sqrt{\frac{2K}{\omega\gamma_{1}}}\). The elastic diffusion of the oscillating director decays exponentially away from the surface with a characteristic length scale \(\zeta\). Additionally, and importantly here, the phase-delay term scales as \(\sqrt{\omega}\), and a similar scaling is
Figure 2: Measurements of the pulsation-induced propulsion of HHB. **a**, Size and position of a representative pulsating HHB. The radius \(R\) oscillates about \(R_{0}\) with the amplitude \(\Delta R\) by the sinusoidal pressure modulation of the period \(T\). The red solid line corresponds to a fit according to the isothermal volume change of an ideal-gas bubble. The position \(z_{\mathrm{B}}(t)\) of the bubble’s centre also exhibits an oscillation with the amplitude \(\Delta z\) and the same frequency \(f=T^{-1}\), but with a linear translation shown as \(z_{0}\) and a time delay \(t_{\mathrm{d}}\) to \(R(t)\). The blue solid line is the best fit with the oscillation and linear translation. **b**, Scaling relation between a phase delay \(\Psi=2\pi ft_{\mathrm{d}}\) and \(f\). The fit line indicates \(\Psi\propto f^{0.46\pm 0.02}\). **c**, Polarised optical microscopy observations of HHB during a single pulsation cycle. With the incident polarisation **Pol** parallel to the far-field director \(\mathbf{n_{0}}\), we observe the transmitted light intensity around the pulsating HHB according to the cycle number \(\kappa\) with a discrete colour mapping of 8-bit intensity. The scale bar is 50 \(\mu\)m.
experimentally observed in the phase delay \(\Psi\) between \(R(t)\) and \(z_{\rm B}(t)\), as shown in Fig. 2b: \(\Psi\propto f^{0.46\pm 0.02}\). This phase delay breaks the time-reversal symmetry, making the nematic director fields around the expanding and shrinking bubble different. We find experimentally no clear \(\lambda\)-dependence of the phase delay at fixed frequency, as shown in Supplementary Fig. 3. For typical material parameters, such as \(K\approx 10\) pN and \(\gamma_{1}\approx 0.1\) Pa\(\cdot\) s, the characteristic length scale \(\zeta\) at \(f=2\) Hz is approximately \(4\,\mu\)m \(\sim 0.1\,R_{0}\). This indicates that the time-asymmetric director deformation in the vicinity of the bubble, which includes the point-defect region, should be mostly responsible for the net swimming motion.
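This one-dimensional relaxation can be reproduced with a simple explicit finite-difference sketch (illustrative, using the representative 5CB parameters quoted above, not a fit to our data).

```python
import numpy as np

# Explicit finite-difference sketch of gamma1 * dphi/dt = K * d2phi/dx2 with
# the oscillating boundary phi(0, t) = phi0 * sin(omega * t).
K, gamma1 = 10e-12, 0.1                 # elastic constant (N), rot. viscosity (Pa s)
f = 2.0                                 # boundary oscillation frequency (Hz)
omega = 2.0 * np.pi * f
phi0 = 0.1                              # boundary tilt amplitude (rad)
L, nx = 40e-6, 400                      # 40 um domain, grid resolution
dx = L / nx
dt = 0.2 * gamma1 * dx**2 / K           # within the explicit-Euler stability limit
x = np.linspace(0.0, L, nx + 1)
phi = np.zeros(nx + 1)                  # phi = 0 initially and at the far end

t = 0.0
while t < 10.0 / f:                     # ~10 periods to settle near the boundary
    phi[0] = phi0 * np.sin(omega * t)
    phi[1:-1] += dt * (K / gamma1) * (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
    t += dt
phi[0] = phi0 * np.sin(omega * t)

zeta = np.sqrt(2.0 * K / (omega * gamma1))        # analytic decay length (~4 um)
analytic = phi0 * np.exp(-x / zeta) * np.sin(omega * t - x / zeta)
print(zeta, np.max(np.abs(phi - analytic)))       # small once transients decay
```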
We experimentally verify the time-reversal symmetry breaking of the director fields around the pulsating HHB. As depicted in Fig. 2c and Supplementary Movie 5, we observe the NLC around a pulsating HHB using polarised optical microscopy and measure the transmitted intensity profile. Because the transmitted intensity of polarised light through the birefringent NLC reflects the director configuration along the beam path [40], the time-asymmetric intensity profile reveals that the director configurations near the HHB during expansion and shrinkage differ. For instance, as shown in Fig. 2c, the HHB has the same size, _i.e._, \(R(\kappa=0.2)=R(\kappa=0.8)\), but the transmitted intensity profiles near the bubble do not overlap; see the area and location of the green equi-intensity region indicated by the green arrows. Namely, the sinusoidal pulsation is reciprocal and time-symmetric, but the NLC environment is not.
Here we present an analytical model to explain the pulsating HHB's motion, considering the nematodynamics around the bubble accompanying the point defect. As a first approximation, our model considers the HHB in an infinite bulk system without walls or buoyancy. We then characterize the system with the dimensionless Ericksen number \(\mathrm{Er}=\frac{\omega\gamma_{1}R_{0}^{2}}{K}\), which compares the period of the radius oscillation with the nematic director relaxation time at the length scale \(R_{0}\). For a slow pulsation, i.e., \(\mathrm{Er}\ll 1\), the energetics
Figure 3: An oscillating dumbbell model for the pulsating HHB and its comparison with experimental data. **a**, A schematic diagram of a pulsating HHB as an oscillating dumbbell. The pulsating HHB of time-varying radius \(R(t)\), with its centre at \(z_{\rm B}(t)\), accompanies a point defect at \(z_{\rm def}(t)\). The connecting spring of spring constant \(k\) represents an effective quadratic potential between the defect and the bubble. Dashed lines sketch the deformed nematic director field with perpendicular anchoring at the bubble surface. **b** and **c**, Scaling relations between the dimensionless oscillation amplitude \(\frac{\Delta z}{R_{0}}\), the dimensionless swimming speed \(\frac{|\langle U\rangle|}{fR_{0}\sin\Psi}\), and the deformation ratio \(\lambda\), where \(\Psi=2\pi ft_{\rm d}\) is the phase delay. The fit lines in **b** and **c** indicate that the oscillation amplitude is proportional to \(\lambda^{1.3\pm 0.1}\) and the dimensionless speed is proportional to \(\lambda^{1.7\pm 0.1}\). The data in Fig. 2**b** and Figs. 3**b** and **c** are from the same dataset of observed pulsating HHBs confined in a 155 \(\mu\)m-thick cell. Each data point is the average value from a 2-min-long movie recorded at 60 frames per second, and the error bars represent the standard deviation; the large errors in **c** partly result from the error propagation of \(\sin\Psi\) in the denominator.
of the nematic director field and the viscous drag on the point defect govern the bubble displacement dynamics (see Methods for the calculation of viscous loss in a pulsating flow). The director field around a spherical bubble with a point defect can be characterized by the distance between the defect and the bubble surface, and the equilibrium distance is determined by a quadratic potential [41]. When the defect deviates from its equilibrium position, a pair of elastic forces acts to restore the equilibrium configuration and displaces the defect and the bubble. The Ericksen stress tensor \(\sigma_{ij}^{\mathrm{Er}}=-\frac{\partial f}{\partial(\partial_{j}n_{k})}\partial_{i}n_{k}+f\delta_{ij}\), with the free energy density \(f\) of the nematic director field \(\mathbf{n}\), formally mediates the forces. However, instead of working directly with the stress tensor, we adopt a coarse-grained approach in which the energetics and drag of the defect structures determine their dynamics. This is a common approach in formulating analytical descriptions of nematodynamics [42].
As shown in Fig. 3(a), an effective dumbbell-like model for the defect and the bubble connected by a spring describes a quadratic potential \(\mathcal{F}=\frac{k}{2}(d-\varepsilon R)^{2}\), where \(d\) is the bubble surface-to-defect distance with the constant \(\varepsilon=0.17\), and \(k=16.5\pi K/R(t)=k_{0}\frac{R_{0}}{R(t)}\) is the effective spring constant [41]. Employing the sinusoidal pulsation \(R(t)=R_{0}+\Delta R\sin\omega t=R_{0}(1+\lambda\sin\omega t)\) with the pulsation ratio \(\lambda=\Delta R/R_{0}\), we find that the spring constant \(k\) and the equilibrium length \(\varepsilon R\) have a first-order correction in \(\lambda\). The spring transmits a force \(\vec{F}\) that drives the overdamped motion of the point defect and the bubble. The drag coefficient of the point defect, \(c_{\mathrm{def}}=\pi^{2}\gamma_{1}\), is derived in Eq. 13 (see Methods for the calculation of viscous loss in a pulsating flow), where \(\gamma_{1}\) is the rotational viscosity, and the drag coefficient of the gaseous bubble equals \(c_{\mathrm{B}}\approx 4\pi\eta\) for an average isotropic viscosity \(\eta\)[43].
We first consider the slow propulsion regime of \(\mathrm{Er}\ll 1\). The oscillation of the bubble radius generates a propulsion force on the bubble (see Methods for derivations)
\[F_{\mathrm{slow}}=\frac{\omega\Delta RR_{0}c_{\mathrm{B}}(1+\varepsilon)}{1+ \frac{c_{\mathrm{B}}}{c_{\mathrm{def}}}}\cos\omega t\,. \tag{1}\]
The force is proportional to \(\dot{R}(t)\) with the proportionality constant depending on the parameters of the dumbbell-like model. The propulsion force induces a periodic oscillation of the bubble position \(z_{\mathrm{B}}\)
\[z_{\mathrm{B}}(t)=z_{0}+\frac{\Delta R(1+\varepsilon)}{1+\frac{c_{\mathrm{B}}} {c_{\mathrm{def}}}}\sin\omega t\,. \tag{2}\]
The bubble position \(z_{\mathrm{B}}(t)\) in Eq. 2 exhibits no net displacement but a periodic motion in phase with the bubble radius \(R(t)=R_{0}+\Delta R\sin\omega t\). This is consistent with the Scallop theorem [44], since the pulsation with the repeating expansion and shrinkage is reciprocal.
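For orientation, the in-phase oscillation amplitude predicted by Eq. 2 can be evaluated directly; the numbers below are representative values quoted in the text, not fitted parameters.

```python
import numpy as np

# Slow-regime (Er << 1) oscillation amplitude of Eq. 2 (illustrative sketch).
eta, gamma1 = 0.1, 0.1            # average isotropic and rotational viscosities (Pa s)
c_B = 4.0 * np.pi * eta           # bubble drag coefficient
c_def = np.pi**2 * gamma1         # point-defect drag coefficient (Eq. 13)
eps = 0.17                        # equilibrium defect-offset constant
dR = 5e-6                         # pulsation amplitude (m)
dz = dR * (1.0 + eps) / (1.0 + c_B / c_def)
print(dz)                         # predicted in-phase amplitude, a few microns
```

With these numbers the predicted amplitude is of the order of the micron-scale oscillations observed experimentally.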
Now, extending the slow-pulsation model into the fast-pulsation one, we present a minimal model explaining the bubble's net propulsion. As discussed above for the one-dimensional director dynamics model with an oscillating-director boundary condition at finite Ericksen numbers, the nematic response to periodic modulation is not instantaneous. Thus, the propulsion force by the director fields can experience the phase delay with respect to \(\dot{R}(t)\). To construct a minimal model of bubble propulsion in this fast-pulsation regime, we impose a periodic sinusoidal propulsion force with a phase delay \(\psi\):
\[F_{\mathrm{fast}}=a\omega\lambda R_{0}^{2}\cos\left(\omega t-\psi\right), \tag{3}\]
where \(a\) is a proportionality coefficient that can depend on the system size and material parameters. In the low-Re regime, the motion is overdamped, and the propulsion force is counteracted by the Stokes drag on the bubble
\[F_{\rm drag}=c_{\rm B}R(t)\dot{z}_{\rm B}(t)=c_{\rm B}R_{0}\left(1+\lambda\sin \omega t\right)\dot{z}_{\rm B}(t). \tag{4}\]
Note that, in the spirit of a minimal model, we adopt only the sinusoidal propulsion force in Eq. 3 and the sinusoidal drag coefficient in Eq. 4, although higher Fourier modes are possible. As shown in the bottom panel of Fig. 2a, deviations from sinusoidal oscillations are negligible in the experiments, which supports this approximation.
Equating \(F_{\rm fast}=F_{\rm drag}\) gives the bubble velocity \(\dot{z}_{\rm B}(t)\), containing both an oscillation and a translation, consistent with our experimental observations. When expanded for \(\lambda\ll 1\),
\[\dot{z}_{\rm B}(t)=\frac{a\omega R_{0}}{c_{\rm B}}\left[\lambda\cos(\omega t- \psi)-\lambda^{2}\cos(\omega t-\psi)\sin(\omega t)\right]+\mathcal{O}(\lambda ^{3}). \tag{5}\]
We integrate the velocity over time to obtain the bubble position
\[z_{\rm B}(t)=z_{\rm const}+\frac{a\omega R_{0}}{c_{\rm B}}\left[\frac{\lambda} {\omega}\sin(\omega t-\psi)+\frac{\lambda^{2}}{4\omega}\cos(2\omega t-\psi)- \frac{\lambda^{2}}{2}t\sin\psi\right]+\mathcal{O}(\lambda^{3}). \tag{6}\]
The oscillation amplitude \(\Delta z\) of \(z_{\rm B}(t)=z_{0}(t)+\Delta z\sin(\omega t-\psi)\) corresponds to \(\lambda\frac{aR_{0}}{c_{\rm B}}\) which scales linearly with \(\lambda\) and supports the experimentally observed scaling in Fig. 3b, showing the oscillation ratio \(\frac{\Delta z}{R_{0}}\propto\lambda^{1.3\pm 0.1}\). Note that the additional oscillation contribution with a doubled frequency \(2\omega\) should have a minor effect on the bubble position compared to the \(\omega\) oscillation term, because of the prefactor of \(\lambda^{2}/4\) with the experimental \(\lambda<0.18\). Importantly, the phase delay \(\psi\) results in the net translation term, \(-\frac{\lambda^{2}}{2}t\sin\psi\), giving the time-averaged swimming speed \(\langle U\rangle=-\lambda^{2}R_{0}\omega\sin\psi\frac{a}{2c_{\rm B}}\). This result is in line with the experimentally observed scaling of the swimming speed shown in Fig. 3c, where the dimensionless translation swimming speed \(\frac{|\langle U\rangle|}{JR_{0}\sin\psi}\) is proportional to \(\lambda^{1.7\pm 0.1}\). The proportional relation between net displacement (\(|\langle U\rangle|T\)) and oscillation amplitude (\(\Delta z\)) shown in Supplementary Fig. 4 also support our model shown as Eq. 6. However, note that our experimental setup limits the ranges of \(R_{0}\), \(\lambda\), and \(f\); see Methods for the details. Additionally, the data points in Fig. 3b and c have only the limited range of \(R_{0}\), from 27.1 to 37.6 \(\mu\)m, because we want to exclude a confinement effect that will be discussed in the following paragraph. Thus, Fig. 3b and c do not validate the \(|\langle U\rangle|\propto R_{0}\) experimentally.
We also discover that an optimal confinement exists for the pulsating HHB's propulsion: the HHB can reach a maximum speed of approximately 1 \(\mu\)m/s, about one order of magnitude faster than the slowest observed bubble. The swimming speed of the bubble in Fig. 1d and Supplementary Movie 3 is relatively slow compared to other microswimmers [45], only reaching 0.2 \(\mu\)m/s, i.e., \(\frac{|\langle U\rangle|T}{2R_{0}}\sim 0.003\). However, regardless of the cell thickness \(H\), the dimensionless swimming speed \(\frac{|\langle U\rangle|}{fH\lambda^{2}}\) increases considerably as the bubble diameter \(2R_{0}\) approaches the cell thickness \(H\), reaching its maximum near \(\frac{2R_{0}}{H}\approx 1.1\), as shown in Fig. 4a. Because the spherical HHB is squeezed when its diameter reaches \(\frac{2R_{0}}{H}\approx 1\), the bubble in this range keeps transforming between a sphere and a disk during pulsation. We presume that this shape change enhances the time asymmetry of the pulsation process; the bubble experiences different environments during expansion and shrinkage. Supplementary Fig. 5 and Movies 6 and 7 show that the flow fields around the bubble indeed change when the spherical bubble becomes a disk. The detailed mechanism of the speed enhancement deserves further investigation.
Lastly, we report that the pulsating SRB can also swim when a bent SR breaks the symmetry [35], as shown in Fig. 4b and Supplementary Movie 8. The SRB can retain its bent SR, _i.e._, a ring displaced from the equator, possibly because of the cell boundary conditions (Supplementary Fig. 2). Fig. 4c shows how the bent angle \(\theta_{\text{SR}}\) and the centre position \(z_{\text{B}}\) change over the pulsation cycles. In contrast to the symmetric SRB with its SR at the equator (Fig. 1c), the asymmetric SRB achieves swimming through pulsation, and its translational and oscillatory motions are similar to those of the HHB (Fig. 4c). We find no strong correlation between the translational speed and \(\theta_{\text{SR}}\); \(\theta_{\text{SR}}\) changes spontaneously during the pulsation cycles. This observation demonstrates that centrosymmetry breaking in any form can lead to net propulsion when combined with the time-reversal-symmetry-breaking NLC relaxation.
## Conclusion
The main quest for propulsion in a low-Re environment is to break the symmetries. This work demonstrates that even a symmetric object exhibiting time-symmetric motion can swim through symmetry breaking that occurs solely in a structured fluid. Our findings could help us better understand and design microswimmers, from bacteria to artificial sperm, navigating complex environments. Specifically, relaxation in complex fluids, responsible for the time-reversal symmetry breaking in our case, could be exploited to increase swimming efficiency. Moreover, the observed existence of an optimal confinement for propulsion may shed light on unexpected roles of confinement, _e.g._, speed enhancement. Lastly, beyond the single-swimmer behaviour studied here, collective swimming resulting from the interactions and symmetry breaking in complex fluids
Figure 4: Effects of confinement and of the bent SR on the bubble propulsion. **a**, Cell-thickness-dependent propulsion speed. The data show the dimensionless swimming speed \(\frac{|\langle U\rangle|}{fH\lambda^{2}}\) for different cell thicknesses \(H\) and size ratios \(\frac{2R_{0}}{H}\). While the data at \(H=155\)\(\mu\)m are acquired from multiple bubbles of various sizes, the other data at each \(H\) are from a representative bubble whose central radius \(R_{0}\) decreases over time during pulsation. We split the data into equally spaced durations of nearly constant \(R_{0}\). Each data point represents the average value of each duration, with the error bands denoting the standard deviation. Optimal propulsion is achieved at \(\frac{2R_{0}}{H}\approx 1.1\), regardless of \(H\). **b**, Stroboscopic observation of the pulsating SRB with a bent SR according to the cycle number \(\kappa\). The SR, pointed to by the red arrows, does not lie at the equator of the SRB. The angle \(\theta_{\text{SR}}\) is the angle between the bent SR and the moving direction (\(z\)-axis). The scale bar is \(100\)\(\mu\)m. **c**, Representative data of an SRB's centre position \(z_{\text{B}}\) and \(\theta_{\text{SR}}\) as functions of \(\kappa\). **d**, Representative data of the radius \(R(t)\) (black solid) and the centre position \(z_{\text{B}}(t)\) (red dashed) of the pulsating SRB as functions of time \(t\).
would be an intriguing question to pursue.
## Method
### Materials, Sample Preparation, and Optical Microscopy
We conduct all experiments using 4-cyano-4'-pentylbiphenyl (5CB, Sigma-Aldrich) as the anisotropic viscoelastic medium at room temperature, \(22\pm 2^{\circ}\)C, at which 5CB is in the nematic phase. 5CB is practically incompressible because its volume change ratio is only \(10^{-6}\) when the pressure increases by 0.5 MPa from the ambient pressure [46].
A sample cell with a single bubble is prepared in three steps. First, we prepare an empty sandwich cell with two parallel polyimide-coated glass substrates [27]. They are rubbed along the same direction and assembled face-to-face to impose a uniaxial planar alignment of 5CB at the surface, as displayed in Supplementary Fig. 1a; the NLC directors align along the rubbing direction. The cell gap between the two substrates is controlled by film spacers of thickness 50, 80, and 155 \(\mu\)m, and the cell area is approximately 1 cm \(\times\) 1 cm. Only two opposite sides of the square cell are sealed with spacers and adhesive, to facilitate pressure propagation to the dispersed bubbles through the openings.
In the second step, we fill the sandwich cell with bubble-dispersed 5CB. Air bubbles are dispersed into 5CB in a vial by bubbling with a syringe needle, and the bubble volume fraction is controlled by varying the injection volume and speed. Subsequently, we fill the bubble-injected 5CB into the sandwich cell along the rubbing direction through the unsealed sides. Multiple bubbles exist immediately after filling the cell (Fig. 1a). The bubbles float toward the top substrate because of buoyancy but make no physical contact with it because of the elastic repulsion in the NLC [27, 28, 29, 31, 32].
Finally, we place the homogeneously aligned NLC cell with multiple bubbles in a custom pressure chamber, as shown in Supplementary Fig. 1a. The pressure chamber with the window allows the optical observation of the bubbles and is connected to a pressure controller (OB1 MK3, Elveflow) that controls the chamber pressure in the range of \(|\Delta P|<\) 0.5 MPa from the ambient pressure \(P_{0}\). We decrease the radii of the dispersed bubbles by applying the positive DC offset pressure \(P_{\text{offset}}\) in the pressure chamber, as shown in Supplementary Movie 1. Monitoring this shrinking process, we eliminate all but a single bubble in the whole cell to investigate the dynamics of a single bubble without interference from the other bubbles. The radius of a single bubble can be controlled by applying pressure. For example, we increase the radius of the small HHB in Fig. 1d after the SR-to-HH transformation to produce a large HHB (Fig. 1e) by applying negative \(P_{\text{offset}}\).
We use transmission light microscopy with polarised illumination to observe the bubble, as shown in Supplementary Fig. 1b. An inverted microscope (IX73, Olympus) with a 4\(\times\) and 10\(\times\) objective lenses and a CCD camera (STC-MC202USB, Omron Sentech) captures the motion of the bubble at a maximum acquisition rate of 60 frames per second. The polarised illumination is derived from the linear polariser **Pol** placed in front of the halogen lamp. When the **Pol** of the illumination is perpendicular to the far-field director \(\mathbf{n_{0}}\) (\(\textbf{Pol}\perp\mathbf{n_{0}}\)), the boundary of the bubble can be clearly identified, as shown in Figs. 1 and 4b. When **Pol**\(\parallel\mathbf{n_{0}}\), as shown in Fig. 2c, the transmitted light intensity reflects the non-uniform director field [40], which allows us to observe qualitatively how the director configuration responds to the pulsation of the bubble.
### Size Modulation of the Spherical Bubble and Its Measurement
We modulate the size of the bubble by controlling the pressure in the chamber. The bubble remains spherical because of the dominant surface energy, with the surface tension \(\sigma\sim 10^{-2}\) N/m [30]. For instance, when a bubble of radius \(R\) = 50 \(\mu\)m pulsates under pressure modulation at infrasound frequency (\(f<20\) Hz), the surface energy (\(\sigma R^{2}\sim 2.5\times 10^{-10}\) J) surpasses both the elastic energy (\(KR\sim 5\times 10^{-16}\) J) and the viscous energy (\(\gamma fR^{3}\sim 2.5\times 10^{-13}\) J), with the average elastic constant (\(K\sim 10^{-11}\) N) [47] and viscosity (\(\gamma\sim 10^{-1}\) Pa\(\cdot\)s) [40] of 5CB.
The bubble size oscillates almost sinusoidally upon sinusoidal pressure modulation. The infrasound frequency (\(f<20\) Hz), whose wavelength is considerably longer than the sample cell size, results in a uniform pressure across the entire cell. This simplifies the Rayleigh-Plesset equation, which describes the dynamics of a spherical bubble in an incompressible fluid, into the Young-Laplace equation \(P_{\rm bubble}=P_{\rm out}+\frac{2\sigma}{R}\). Since the large bubble size (\(R\sim 50\)\(\mu\)m) makes the Laplace pressure \(\frac{2\sigma}{R}\) sufficiently small compared with the applied pressure \(P_{\rm out}\), given the surface tension \(\sigma\sim 10^{-2}\) N/m [30], \(P_{\rm bubble}\approx P_{\rm out}\). The pressure \(P_{\rm out}(t)\) is \(P_{0}+P_{\rm offset}-\Delta P\sin 2\pi ft\), consisting of the ambient pressure \(P_{0}\), the DC offset pressure \(P_{\rm offset}\), and a sinusoidal modulation with amplitude \(\Delta P\) and frequency \(f\). Applying the isothermal volume change of the ideal gas under \(P_{\rm out}(t)\), we find that the radius \(R(t)\) follows \(R(t)=R_{0}\left(\frac{P_{\rm out}(0)}{P_{\rm out}(t)}\right)^{1/3}\), as displayed by the red solid line in Fig. 2a. \(R(t)\) can be linearly approximated as \(R(t)\approx R_{0}\left(1+\frac{1}{3}\frac{\Delta P\sin 2\pi ft}{P_{0}+P_{\rm offset }}\right)\) when \(|\frac{\Delta P}{P_{0}+P_{\rm offset}}|\ll 1\), which becomes \(R(t)\approx R_{0}(1+\lambda\sin 2\pi ft)\) with the deformation ratio \(\lambda=\frac{\Delta R}{R_{0}}\) and the pulsating amplitude \(\Delta R\). We experimentally confirm the proportional relationship between \(\lambda\) and \(\Delta P\), as shown in Supplementary Fig. 3.
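A short sketch of this isothermal radius response and its linearization follows (representative numbers, not the fitted experimental values).

```python
import numpy as np

# Isothermal ideal-gas radius response to the sinusoidal pressure modulation
# and its linearization.
P0, P_offset, dP, f = 1.0e5, 0.0, 2.0e4, 4.0   # Pa, Pa, Pa, Hz
R0 = 50e-6                                     # mean radius (m)

def P_out(t):
    return P0 + P_offset - dP * np.sin(2.0 * np.pi * f * t)

def R_exact(t):
    return R0 * (P_out(0.0) / P_out(t)) ** (1.0 / 3.0)

def R_linear(t):
    lam = dP / (3.0 * (P0 + P_offset))         # deformation ratio lambda
    return R0 * (1.0 + lam * np.sin(2.0 * np.pi * f * t))

t = np.linspace(0.0, 1.0 / f, 201)
print(np.max(np.abs(R_exact(t) - R_linear(t))) / R0)   # small for dP << P0
```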
We find the envelopes of the oscillating data under sinusoidal pressure modulation, _i.e._, the bubble's radius \(R(t)\) and the centre position \(z_{\rm B}(t)\), to estimate their oscillation centres and amplitudes. As shown in Supplementary Fig. 6, we apply the Envelope method provided by OriginPro 2020 (OriginLab) to determine the enveloping curves connecting the extrema of the oscillating data, _e.g._, \(R_{\rm Max}(t)\) and \(R_{\rm Min}(t)\). We then acquire the oscillation centre \(R_{0}=\frac{R_{\rm Max}(t)+R_{\rm Min}(t)}{2}\) and amplitude \(\Delta R=\frac{R_{\rm Max}(t)-R_{\rm Min}(t)}{2}\) as functions of time. \(R_{0}\) and \(\Delta R\) may change even under a constant pressure modulation amplitude \(\Delta P\) with \(P_{\rm offset}=0\) because 5CB has a finite gas solubility [32]. However, the deformation ratio \(\lambda=\frac{R_{\rm Max}(t)-R_{\rm Min}(t)}{R_{\rm Max}(t)+R_{\rm Min}(t)}\) remains constant, as we confirm experimentally. We apply the same method to retrieve \(z_{0}(t)\) and \(\Delta z(t)\) from \(z_{\rm B}(t)\).
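The same envelope analysis can be reproduced with standard peak detection; we used OriginPro, and the scipy-based sketch below only illustrates the idea.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import interp1d

# Envelope extraction for an oscillating signal R(t): connect the maxima and
# minima, then form the oscillation centre, amplitude, and deformation ratio.
def envelopes(t, R):
    hi, _ = find_peaks(R)
    lo, _ = find_peaks(-R)
    R_max = interp1d(t[hi], R[hi], fill_value="extrapolate")(t)
    R_min = interp1d(t[lo], R[lo], fill_value="extrapolate")(t)
    R0 = (R_max + R_min) / 2.0        # oscillation centre
    dR = (R_max - R_min) / 2.0        # oscillation amplitude
    return R0, dR, dR / R0            # deformation ratio lambda(t)
```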
Our experiment studies the scaling behavior within a limited range, as shown in Figs. 2 and 3, because of unavoidable experimental limitations and the system's nature. First, the optical resolution and the pressure range limit the pulsation ratio \(\lambda\). A very small \(\lambda\) results in an optically unresolvable oscillation amplitude and net displacement of the bubble; the typical net displacement observed after one pulsation cycle with no confinement effect is already sub-micron. On the other hand, because of the inverse relationship between the pressure and the volume (\(\propto\) length\({}^{3}\)), approximately ten times higher amplitude of pressure modulation than the current value is required to increase the \(\lambda\) range by a factor of two; we use the maximum pressure range covered by our pressure pump, _i.e._, \(\pm 1\) bar.
In a similar vein, the ranges of frequency and bubble size are limited. It is challenging to observe small bubbles because they dissolve quickly into the LC. Large spherical bubbles demand a homogeneously aligned thick LC cell of mm thickness, which is practically impossible to prepare because of the very long relaxation time. Moreover, as shown in Fig. 4, the propulsion is sensitive to the confinement, _i.e._, \(2R_{0}/H\); thus, in Fig. 3, we investigate bubbles of similar radii to exclude the confinement effect in understanding the swimming mechanism. In the case of frequency, the pressure pump's response time of \(\sim 100\) ms sets the maximum frequency to \(\sim 10\) Hz. In other words, when the pulsation frequency exceeds 10 Hz, the pump cannot follow the set frequency and fails to generate the sinusoidal pressure modulation; \(\lambda\) decreases at higher frequencies, as shown in Supplementary Fig. 3.
### Viscous loss in pulsating flow
We estimate the magnitude of two propulsion mechanisms of pulsating bubbles: (i) the anisotropic viscosity of a dipolar director field structure and (ii) the drag force of a moving point defect in the director field. A pulsating bubble generates a radial flow that is subject to the anisotropic viscosity of the surrounding nematic liquid crystal, described by the nematic viscous stress tensor [42]
\[\sigma_{ij}^{\rm viscous}=\alpha_{1}n_{i}n_{j}n_{k}n_{l}A_{kl}+\alpha_{2}n_{j} N_{i}+\alpha_{3}n_{i}N_{j}+\alpha_{4}A_{ij}+\alpha_{5}n_{j}n_{k}A_{ik}+\alpha_{6}n_{ i}n_{k}A_{jk}, \tag{7}\]
where \(\alpha_{i}\) are Leslie viscosity coefficients, \(A_{ij}=\left(\partial_{i}v_{j}+\partial_{j}v_{i}\right)/2\) is the symmetric shear tensor, and \(N_{i}=\dot{n}_{i}-\left((\nabla\times\vec{v})\times\vec{n}\right)_{i}/2\) is the corotational time derivative of the director. The dipolar director structure around the bubble breaks the symmetry and allows for a net force due to the bubble's radial expansion. To estimate the propulsion force at \(\rm Er\ll 1\), we take a stationary dipolar director field ansatz [41]
\[\vec{n}(\vec{r})=\left(\frac{R_{0}^{2}}{r^{3}}x,\frac{R_{0}^{2}}{r^{3}}y, \sqrt{1-\frac{R_{0}^{4}}{r^{6}}(x^{2}+y^{2})}\right) \tag{8}\]
and a radial flow
\[\vec{v}(\vec{r},t)=\frac{\omega\Delta RR_{0}^{2}}{r^{2}}\cos(\omega t)\hat{ \vec{\bar{e}}}_{r}. \tag{9}\]
Force density is computed from the divergence of the stress tensor \(f_{i}=\partial_{j}\sigma_{ij}^{\rm viscous}\) for the viscosity parameters of 5CB, \(\alpha_{1}=-0.011\) Pa\(\cdot\) s, \(\alpha_{5}=0.102\) Pa\(\cdot\) s, \(\alpha_{6}=-0.027\) Pa\(\cdot\) s [42]. Other viscosity components do not contribute to a net force in a stationary director field and a radial flow. Due to the director field symmetry, the net force has a component only in the \(z\) direction and equals
\[F_{z}^{\rm shear}=\int_{r>R_{0}}\mathrm{d}V\,\partial_{j}\sigma_{zj}^{\rm viscous }\approx 0.48\,\pi R_{0}\omega\Delta R\alpha_{5}\cos(\omega t). \tag{10}\]
We now estimate the viscous force due to reorientation of the director field by considering a point defect moving with a constant velocity \(v_{\rm def}\) in analogy to the two-dimensional case [42]. The director field of a moving hyperbolic defect at small velocities (\(\rm Er\ll 1\)) has the shape of
\[\vec{n}(\vec{r},t)=(x,y,v_{\rm def}t-z)/\sqrt{x^{2}+y^{2}+(v_{\rm def}t-z)^{2}}. \tag{11}\]
Drag force on a moving point defect is estimated from the energy dissipation rate
\[\Sigma=\gamma_{1}\int\mathrm{d}V\,\dot{\vec{n}}^{2}=\gamma_{1}v_{\rm def}^{2} \pi^{2}R_{\rm max}, \tag{12}\]
where the integration is performed over a spherical region with radius \(R_{\rm max}\). Taking the defect velocity to be equal to the speed of the bubble surface and estimating the size of the point defect region with \(R_{\rm max}\approx R(t)\), the force can be directly estimated from the dissipation rate [42]
\[F_{z}^{\rm drag}=\Sigma/v_{\rm def}=c_{\rm def}R(t)v_{\rm def}=\pi^{2}R(t)\omega\Delta R \gamma_{1}\cos(\omega t). \tag{13}\]
Comparing Eq. 10 to Eq. 13, we observe that both mechanisms can produce a force in the direction from the defect towards the bubble. The force due to the displacement of the point defect is stronger in magnitude, and we use it in the derivation of the swimming dynamics.
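A rough numerical comparison of the two mechanisms supports this; in the sketch below, \(\alpha_{5}\) is taken from the text above, while \(\gamma_{1}\approx 0.08\) Pa\(\cdot\)s for 5CB is an assumed literature value:

```python
import numpy as np

alpha5 = 0.102   # Pa*s, Leslie coefficient of 5CB (from the text above)
gamma1 = 0.08    # Pa*s, rotational viscosity of 5CB (assumed literature value)

# Ratio of Eq. (13) to Eq. (10) at R ~ R0; prefactors pi^2 and 0.48*pi.
ratio = (np.pi ** 2 * gamma1) / (0.48 * np.pi * alpha5)
print(ratio)     # ~5: the defect-drag force dominates the anisotropic-shear force
```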
### Slow pulsation dynamics
Here we calculate the dynamics of the spherical bubble and the topological point defect in the slow pulsation regime at \(\mathrm{Er}\ll 1\). In the main text, we introduce an effective dumbbell-like description of the bubble and the defect due to a quadratic potential \(\mathcal{F}=\frac{k}{2}(d-\varepsilon R)^{2}\) between them, where \(d\) is a bubble surface-to-defect distance with the constant \(\varepsilon=0.17\) and \(k\) is the effective spring constant [41]. For slow pulsation, the bubble surface-to-defect distance \(d=z_{\rm B}(t)-R(t)-z_{\rm def}(t)\) equals the equilibrium distance \(\varepsilon R(t)\), which is proportional to the bubble radius. Here, \(z_{\rm B}\) and \(z_{\rm def}\) are the bubble and the defect positions, respectively. From the sinusoidal oscillation of the bubble radius \(R(t)=R_{0}+\Delta R\sin(\omega t)\), it follows that
\[z_{\rm B}(t)-z_{\rm def}(t)=R_{0}(\varepsilon+1)(1+\lambda\sin\omega t), \tag{14}\]
where \(\lambda=\Delta R/R\). The spring transmits a force \(\vec{F}\) that drives the overdamped motion of the point defect and the bubble:
\[F=-c_{\rm def}R(t)\dot{z}_{\rm def}(t)=c_{\rm B}R(t)\dot{z}_{\rm B}(t), \tag{15}\]
where \(c_{\rm def}\) and \(c_{\rm B}\) are the drag coefficients for the defect and the bubble, respectively. Combining Eq. 15 and the time derivative of Eq. 14, we can express the propulsion force and the bubble position in the slow pulsation regime as Eqs. 1 and 2, respectively.
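A sketch of the resulting kinematics, under the assumption of constant, illustrative drag coefficients: eliminating \(z_{\rm def}\) via \(\dot{z}_{\rm def}=-(c_{\rm B}/c_{\rm def})\dot{z}_{\rm B}\) from Eq. 15 and differentiating Eq. 14 gives an oscillatory bubble trajectory whose amplitude can be evaluated directly.

```python
import numpy as np

eps, lam, R0, f = 0.17, 0.05, 50e-6, 1.0   # geometry and pulsation (illustrative)
c_def = np.pi ** 2 * 0.08                  # defect drag, c_def = pi^2 * gamma_1
c_B = 6 * np.pi * 0.1                      # Stokes-like bubble drag (assumed value)
w = 2 * np.pi * f
t = np.linspace(0, 3 / f, 3000)

# z_B' = c_def/(c_def + c_B) * d/dt[R0*(eps+1)*(1 + lam*sin(w*t))], integrated:
zB = c_def / (c_def + c_B) * R0 * (eps + 1) * lam * np.sin(w * t)
print(zB.max() - zB.min())   # peak-to-peak oscillation of the bubble centre
```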
## Data availability
The data that support the findings of this study are mostly available within the main text and the Supplementary Information; further data are available from the corresponding author upon request.
## Acknowledgements
The authors gratefully acknowledge financial support from the National Research Foundation (NRF) of Korea. S.-J.K. acknowledges NRF-2018R1A6A3A01010921 and IBS-R020-D1. E.U. acknowledges NRF-2022R1A2C1010700. J.J. acknowledges NRF-2020R1A4A1019140 and NRF-2021R1A2C101116312. Z. K. acknowledges funding from Slovenian Research Agency (ARRS) under contracts P1-0099 and N1-0124. The authors would like to thank Hyuk Kyu Pak and Simon Copar for valuable discussions and feedback.
## Author contributions
S.-J.K. conceived the idea and performed experiments. Z.K. developed the theoretical model. E.U. and J.J. designed and supervised the research. S.-J.K., Z.K., E.U., and J.J. analyzed the data and wrote the manuscript.
## Competing information
The authors declare no competing interests.
|
2306.10208 | Learning Space-Time Semantic Correspondences | We propose a new task of space-time semantic correspondence prediction in
videos. Given a source video, a target video, and a set of space-time
key-points in the source video, the task requires predicting a set of keypoints
in the target video that are the semantic correspondences of the provided
source keypoints. We believe that this task is important for fine-grain video
understanding, potentially enabling applications such as activity coaching,
sports analysis, robot imitation learning, and more. Our contributions in this
paper are: (i) proposing a new task and providing annotations for space-time
semantic correspondences on two existing benchmarks: Penn Action and Pouring;
and (ii) presenting a comprehensive set of baselines and experiments to gain
insights about the new problem. Our main finding is that the space-time
semantic correspondence prediction problem is best approached jointly in space
and time rather than in their decomposed sub-problems: time alignment and
spatial correspondences. | Du Tran, Jitendra Malik | 2023-06-16T23:15:12Z | http://arxiv.org/abs/2306.10208v1 | # Learning Space-Time Semantic Correspondences
###### Abstract
We propose a new task of space-time semantic correspondence prediction in videos. Given a source video, a target video, and a set of space-time key-points in the source video, the task requires predicting a set of keypoints in the target video that are the semantic correspondences of the provided source keypoints. We believe that this task is important for fine-grain video understanding, potentially enabling applications such as activity coaching, sports analysis, robot imitation learning, and more. Our contributions in this paper are: (i) proposing a new task and providing annotations for space-time semantic correspondences on two existing benchmarks: Penn Action and Pouring; and (ii) presenting a comprehensive set of baselines and experiments to gain insights about the new problem. Our main finding is that the space-time semantic correspondence prediction problem is best approached jointly in space and time rather than in their decomposed sub-problems: time alignment and spatial correspondences.
## 1 Introduction
**What are space-time semantic correspondences?** Consider two videos \(\mathcal{V}_{1}\) and \(\mathcal{V}_{2}\) that are assumed to have similar semantic content, _e.g_., two videos of people performing the same actions. Two space-time keypoints \(p:(x_{p},y_{p},t_{p})\) in \(\mathcal{V}_{1}\) and \(q:(x_{q},y_{q},t_{q})\) in \(\mathcal{V}_{2}\) are defined as the space-time semantic correspondence of each other when they are semantically aligned in both space and time. More specifically, \(p\) and \(q\) are semantically aligned in time when \(t_{p}\) and \(t_{q}\) are the correct alignment of each other defined by the key moments [11] in \(\mathcal{V}_{1}\) and \(\mathcal{V}_{2}\). And \(p\) and \(q\) are semantically aligned in space when \((x_{p},y_{p})\) and \((x_{q},y_{q})\) are the true visual semantic correspondence of each other at the \(t_{p}\)-th frame of \(\mathcal{V}_{1}\) and the \(t_{q}\)-th frame of \(\mathcal{V}_{2}\).
**Space-time semantic correspondence prediction**. Given a pair of videos: a source video \(\mathcal{V}_{S}\) and a target video \(\mathcal{V}_{T}\), and a set of space-time keypoints \(P_{S}\) in \(\mathcal{V}_{S}\), the problem of _space-time semantic correspondence prediction_ is to predict the set of keypoints \(P_{T}\) in \(\mathcal{V}_{T}\) that are the space-time semantic correspondences of \(P_{S}\). Figure 1 provides two examples of space-time semantic correspondence prediction in two pairs of videos: one includes "bowling" videos, and the other includes "pouring" videos. Ground truth space-time semantic correspondences are visualized with red markers in these videos. The ground truth keypoints are temporally aligned by key moments: "ball swung fully back" and "ball release" for bowling, or "liquid starts pouring" and "liquid stops pouring" for pouring videos. The ground truth keypoints are also spatially aligned at semantic keypoints: head, wrists, bowling ball (in bowling videos) and fingertips, cup corners, and hand (in pouring videos).
**Why's this problem important?** This problem, if solved, will enable various practical applications including activity coaching, sports analysis, and robot imitation learning. In activity coaching, a space-time semantic correspondence prediction model may help point out the differences between a professional golf player and a novice. The model can also be useful in assessing how well a person is performing the bowling swing compared with herself or himself one month ago. In sports analysis, a similar model can be used to analyze and compare different players and provide feedback. In robot imitation learning, a robot may watch the human teacher in an exo-view while it imitates the task in an ego-view. Space-time semantic correspondence prediction can also be adopted to solve correspondence matching across ego-exo views. In addition, we believe the problem of matching space-time keypoints semantically across videos and views is fundamental, as models are required to understand the key moments, objects, and their interactions to complete the task.
Our contributions in this paper are:
* We propose a novel task of space-time semantic correspondence prediction which is an essential task for video understanding with various practical applications.
* We provide **two** new datasets for this task by adding space-time semantic keypoint annotations to two existing datasets: Penn Action [39] and Pouring [32].
* We present a set of comprehensive baseline approaches and perform an in-depth analysis to gain insights about the new problem. All annotations, source code, and models will be released upon publication.
## 2 Related Work
**Visual semantic correspondences in images**. Visual semantic correspondence prediction in images is a fundamental and well-studied problem [36, 12, 29]. Early methods approached this problem by local descriptor matching [23, 19, 2, 5, 12, 36], normally with hand-crafted features, _e.g_., SIFT [26] or HOG [10]. With the advent of deep learning, CNN features have also been used for semantic correspondence matching [25, 8, 13, 20, 21]. More recently, visual semantic correspondence prediction has been approached by various architecture-based methods including Hyperpixel [28], Neighbourhood Consensus Networks [31], Multi-scale Matching Networks [40], the Optimal Transport formulation [24], Dynamic Hyperpixel Flow [30], Convolutional Hough Matching [27], Cost Aggregation Transformers (CATs) [6, 7], and Volumetric Aggregation with Transformers (VATs) [16]. Inspired by the fundamental and practical nature of this problem in the image domain, we extend it into space-time and study the extended problem in videos.
**Time alignment in videos**. Although time alignment is well studied in time-series analysis [1, 9], there are not many works on video time alignment. Cao _et al_. [3] used video time alignment for few-shot video classification. Yi _et al_. [4] utilized video transcript alignment for weakly-supervised learning. More recently, dense temporal alignment in videos has been used for self-supervised learning [11, 14]. The latter works [11, 14] are closely related to ours; however, their problem setup is dense temporal alignment, which ignores the spatial details. In contrast, our problem is set up to perform space-time alignment and only at sparse space-time keypoints, _e.g_., at semantic keypoints in key-moment frames.
**Space-time correspondences in videos**. Space-time correspondences have been previously studied in videos. Wang _et al_. [38] proposed cycle consistency in time for visual image representation learning. Jabri _et al_. [17] further employed random walks and cycle consistency for self-supervised learning. More recently, Son [34] proposed a contrastive learning approach using self-cycle consistency for self-supervised representation learning. We note that these works are self-supervised learning methods that utilize space-time correspondences within the same video to learn visual representations. In contrast, our work is a supervised-learning approach that predicts space-time correspondences across two different videos and for predicting semantic correspondences as opposed to learning visual representations.
**Cross-video semantic prediction**. The Action Similarity Labeling Challenge (ASLAN) [22] is also related to our work in terms of cross-video semantic labeling where models have to predict if two input videos contain the same semantic action or not, _e.g_., both videos of playing soccer. Different from ASLAN, our problem requires models to predict semantic correspondences across videos at the keypoint level, not just at the action level.
## 3 Benchmark Construction
In this work, we adopt two existing benchmarks: Penn Action [39] and Pouring [32] for our new task of space-time semantic correspondence prediction. The following subsections describe the process for annotating these benchmarks.
### Penn Action
**Data selection**. Penn Action was proposed for action recognition which contains 2,326 videos of 15 human actions, and all video frames are provided with 2D human keypoints. Penn Action is suited for our study because it was previously used in time alignment problem [11] and provided with 2D human keypoint annotations which we
can use as space-time keypoints. Since our problem requires aligning the keypoints both in space and time, we also adopt the definition of key moments for Penn Action used in [11] for "semantic" time alignment. As we are interested in aligning space-time keypoints that capture both the subjects and the objects involved in the actions, _e.g_., bowling requires interacting with a bowling ball or playing golf requires using a golf stick and a ball, we eliminate actions involving no object such as "jumping jacks", "Pushups", "Situps". We also eliminate actions that have only one key moment, _e.g_., "bench press" and "Pullups" as it requires less time alignment. Table 1 presents the selected actions with their associated key moments and the objects involved during these actions.
**Annotating space-time keypoints**. We define our space-time semantic keypoints as 2D semantic keypoints happening at the key moments (the second column of Table 1). A 2D semantic keypoint is a spatial location in the image which has its own semantic meaning that can be matched with a similar 2D semantic keypoint in another image. Examples of semantic keypoints can be human or object keypoints such as the left knee, the right wrist, the head of a person, a golf stick, or a bowling ball, etc. By this definition, we can leverage the human keypoints from Penn Action at the key-moment frames as our space-time semantic keypoints. Since we also need semantic keypoints on the objects, we annotate the keypoints for the involved objects (the last column of Table 1) at the key-moment frames whenever they are visible. To ensure consistency in the object keypoints, we explicitly define the object keypoints as follows. For circular objects such as tennis balls, golf balls, baseballs, bowling balls, and gym discs, the keypoints are the centers of these objects. For bar-shaped objects such as baseball bats and gym bars, the keypoint is at the center of the bar. For golf sticks, the keypoint is at the club-head, and for tennis rackets, the keypoint is at the center of the racket head.
**Constructing pairs of correspondences**. Because we have all human and object semantic keypoints annotated at the key moments in each video, in theory, any pair of videos with the same action label can be used to form a pair for our task. This can be done by selecting one video as the source and the other as the target, and using space-time semantic keypoints in these two videos as space-time semantic correspondences of each other. In practice, not all key-points at key-moment frames are visible, thus we can only form a pair when two videos (with the same action label) share a minimum number of visible keypoints (_e.g_., 3). We also present two different benchmark setups for our problem (Table 2). The "13+3" setup uses all semantic keypoints available which could be up to 13 human keypoints and up to 3 object keypoints per frame. The "3+3" setup is designed to balance between human keypoints and object keypoints.
**Benchmark size and split**. We annotated object keypoints for all 1,482 videos of the actions listed in Table 1. We use the same training and validation splits defined in the original Penn Action dataset, meaning we use only videos in the training split to form the pairs for our training split (similarly for the validation split). Even though the number of annotated videos is moderate, the number of pairs is much larger. The numbers of training and validation pairs are shown in Table 2 (the 3+3 setup has a slightly smaller number of pairs as a result of removing pairs with fewer than 3 visible keypoints). An example of bowling in our dataset (3+3 setup) is visualized with both ground truth and predicted keypoints in Figure 1.
### Pouring
The Pouring dataset is proposed and used in robotics research [32, 33] and includes 17 (11 training, 6 testing) videos of a human hand pouring liquid from a container into a cup. We define key moments in pouring as the times when the liquid starts and stops pouring. Since this dataset is quite small, we do not need to pre-define a fixed set of spatial semantic keypoints for annotating. Indeed, we can annotate each video pair independently to maximize the keypoint diversity. Our annotation process is described as follows. First, we annotate the pre-defined key moments for each video. Next, for each pair of videos, we annotate each pair of frames at the key moments independently. The keypoints are normally selected at the center of the hand, the fingertips, the corners of the liquid container, and the corners of the cup. This annotation process provides us with 55 training and 15 testing pairs of pouring videos. Figure 2 shows one example from the Pouring dataset with annotations.
**Verification and correction**. For both Penn Action and Pouring, after the annotation finishes, all key-moment frames, visualized with annotated keypoints, are shown to
Figure 2: **An example of pouring**. The upper row shows the source video, and the lower row shows the target video. The two key moment frames are visualized with key-points. Each pair of corresponding key-points is visualized by markers with the same type and color.
annotators for quality assessment and potential corrections.
## 4 Approaches
### Space-time baselines
**Overview of the approaches**. Given a pair of videos size t\(\times\)h\(\times\)w (where t is the number of frames, and h\(\times\)w is the frame size), the dense correspondence prediction problem can be formulated as finding a matching tensor size (t\(\times\)h\(\times\)w)\({}^{2}\)1 which encodes the matching likelihood for all pairs of pixels in the two videos. Since matching in the pixel space is costly and also less robust to semantic content, the matching is preferably done at the feature level, _e.g_., videos are fed into a feature extraction backbone to produce a feature map of T\(\times\)H\(\times\)W (where T, H, W are much smaller than t, h, w), to predict a smaller matching flow of (T\(\times\)H\(\times\)W)\({}^{2}\). At inference, upsampling is used to render predictions at the pixel level. Figure 3 presents our two baseline approaches which follow the paradigm of feature extraction followed by matching.
Footnote 1: for simplicity we denote (t\(\times\)h\(\times\)w)\({}^{2}\) instead of its full notation of t\(\times\)h\(\times\)w\(\times\)t\(\times\)h\(\times\)w
**Feature extraction and correlation volume construction**. One common practice in the visual semantic correspondence problem in images is to extract features at different layers and then up- or down-sample the features into the same size (_e.g_., hyperpixel [28]). We follow the same practice but instead we use a 3D CNN backbone for videos. As shown in Figure 2(a), features at different layers of a 3D CNN backbone are extracted and then up- or down-sampled into the same size of T\(\times\)H\(\times\)W. The two feature maps from both source and target videos are used to construct a correlation cost volume with a size of (T\(\times\)H\(\times\)W)\({}^{2}\). We note that the correlation volume is computed independently per feature map, thus if we have M selected layers (for feature extraction), then M correlation volumes are constructed and concatenated into M\(\times\)(T\(\times\)H\(\times\)W)\({}^{2}\). This correlation volume and both source and target feature maps are fed into an aggregator network to produce matching predictions.
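A sketch of this construction in PyTorch; the per-position L2 normalization (i.e., cosine similarity) and the function name are our assumptions rather than stated details:

```python
import torch
import torch.nn.functional as F

def correlation_volumes(src_feats, tgt_feats):
    """src_feats, tgt_feats: lists of M tensors, each (B, C_i, T, H, W),
    already resampled to a common T x H x W grid.
    Returns correlation volumes of shape (B, M, T*H*W, T*H*W)."""
    vols = []
    for fs, ft in zip(src_feats, tgt_feats):
        fs = F.normalize(fs.flatten(2), dim=1)             # (B, C_i, THW)
        ft = F.normalize(ft.flatten(2), dim=1)
        vols.append(torch.einsum('bcs,bct->bst', fs, ft))  # cosine similarities
    return torch.stack(vols, dim=1)
```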
**The space-time CATs**. The **C**ost **A**ggregation **T**ransformers (CATs) [6] are the state-of-the-art for the image visual correspondence prediction problem. Here we adopt CATs for our problem. We extend CATs to work on space-time feature maps of T\(\times\)H\(\times\)W instead of H\(\times\)W. The networks perform linear projections on the feature maps (both source and target), and sequentially concatenate the projected feature maps (source, then target) with the correlation volume. The transformer (multi-head attention) blocks and transpose are applied after each concatenation. Skip connections are also employed for stabilizing the training. The st-CATs predict a flow map of size (T\(\times\)H\(\times\)W)\({}^{2}\) followed by an L2-loss w.r.t. the ground truth sparse flow. Figure 3(b) visualizes the architecture of our st-CATs baseline. For further details, readers are referred to [6]. We name this baseline space-time CATs to differentiate it from our later baselines where we use CATs in the space-only problem.
**The simple Aggregation NeTworks**. Besides adopting CATs, we also introduce ANTs (simple **A**ggregation **NeT**works) as another baseline. Similar to CATs, ANTs also take both feature maps of the source and the target videos and the correlation volume as input and predict a small flow map at the feature map size. In contrast, instead of using transformers, ANTs employ a few layers of 3D convolutions for aggregation and prediction. Our ANTs baseline is shown in Figure 3(c). ANTs first reshape the correlation volume, then concatenate it with both source and target feature maps. The concatenated maps are fed into a few-layer 3D CNN. ANTs use appropriate padding and no striding in their 3D convolution layers, thus the feature map
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Action** & **Key moments** & **Involving objects** \\ \hline Baseball pitch & Knee fully up, Arm fully stretched out, Ball release & ball \\ Baseball swing & Bat swung back fully, Bat hits ball & bat, ball \\ Bowling & Ball swung fully back, Ball release & ball \\ Golf swing & Stick swung fully back, Stick hits ball & golf stick, ball \\ Squats & Hips at knees (going down), Hips at floor, Hips at knee (going up) & bar (center), left- and right discs \\ Tennis forehand & Racket swung fully back, Racket touches ball & racket, ball \\ Tennis serve & Ball released from hand, Racket swung fully back, Ball touches racket & racket, ball \\ \hline \end{tabular}
\end{table}
Table 1: **Selected actions with key moments and objects**. We select a subset of actions from Penn Action [39] for annotating our benchmark. Besides adopting the key moment definitions from [11], we define the set of objects that are involved in each action for annotating.
\begin{table}
\begin{tabular}{|c|l|c|c|c|} \hline setup & selected human keypoints & objects & \# train & \# val \\ & & pairs & pairs \\ \hline
13+3 & All 13 human keypoints & all & 39.9k & 21.6k \\
3+3 & head, left and right wrists & all & 39.9k & 21.5k \\ \hline \end{tabular}
\end{table}
Table 2: **Benchmark setup & statistics**. We experiment with two benchmark setups for space-time semantic correspondences. The 13+3 setup uses all human keypoints from Penn Action (up to 13) and all object keypoints (up to 3 per frame). The 3+3 setup uses only 3 keypoints on the human (head, left and right wrists) and all object keypoints.
size remains the same as T\(\times\)H\(\times\)W. For the final layer or prediction layer, ANTs map it back into \(THW\) channels, then reshape the prediction into the size of (T\(\times\)H\(\times\)W)\({}^{2}\). The same L2 loss that was used for training CATs is applied for ANTs.
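A minimal sketch of this baseline; the layer width, the two-layer default, and all names are illustrative, not the exact choices in the paper:

```python
import torch
import torch.nn as nn

class ANTs(nn.Module):
    """Sketch: reshape the correlation volume onto the feature grid,
    concatenate with both feature maps, aggregate with a few Conv3d layers,
    and predict THW matching scores per source grid cell."""

    def __init__(self, m, c_src, c_tgt, grid=(8, 8, 8), width=128, n_layers=2):
        super().__init__()
        self.grid = grid
        s = grid[0] * grid[1] * grid[2]            # S = T*H*W
        ch = m * s + c_src + c_tgt                 # volume channels + both feature maps
        body = []
        for _ in range(n_layers):
            body += [nn.Conv3d(ch, width, 3, padding=1), nn.ReLU(inplace=True)]
            ch = width
        self.body = nn.Sequential(*body)
        self.head = nn.Conv3d(ch, s, 1)            # prediction layer: THW channels

    def forward(self, corr, f_src, f_tgt):
        # corr: (B, M, S_src, S_tgt); f_src, f_tgt: (B, C, T, H, W)
        B, M, S, _ = corr.shape
        T, H, W = self.grid
        corr = corr.permute(0, 1, 3, 2).reshape(B, M * S, T, H, W)
        x = torch.cat([corr, f_src, f_tgt], dim=1)
        x = self.head(self.body(x))                # (B, S_tgt, T, H, W)
        return x.flatten(2).transpose(1, 2)        # (B, S_src, S_tgt) scores
```

Training would then apply the L2 loss between these predictions and the sparse ground-truth flow, as described above.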
**The space-time MATCH baseline**. Besides st-CATs and ANTs, we also provide a simplified version of both st-CATs and ANTs. The st-MATCH is a non-trainable version of feature matching in space-time. It takes the correlation volume (Figure 3a) and performs mean pooling over the M channels to obtain a prediction of size (T\(\times\)H\(\times\)W)\({}^{2}\).
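A sketch of this non-trainable matching (the index bookkeeping and the larger st-MATCH grid are taken from the experimental section below):

```python
import torch

def st_match(corr, grid=(32, 16, 16)):
    """Average the M per-layer correlation volumes, then take the nearest
    neighbour in the target for every source grid cell."""
    T, H, W = grid
    best = corr.mean(dim=1).argmax(dim=-1)            # (B, S_src) flat indices
    t, hw = best // (H * W), best % (H * W)
    return torch.stack((t, hw // W, hw % W), dim=-1)  # (B, S_src, 3) as (t, h, w)
```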
### Sequential baselines
One may wonder if we can decompose this problem into two sub-problems: time alignment [11] and then spatial alignment (aka visual semantic correspondence). We present here some baselines for approaching this problem sequentially: time, then space alignment.
**Time alignment options**. Visual features are extracted at each frame. We can use Nearest Neighbor (NN) search or Dynamic Time Warping (DTW) for alignment. Time-Cycle Consistency (TCC) [11] is also a strong alternative for time alignment.
**Space alignment options**. This step assumes that time alignment has already been done. Thus, any frame from the source video now has exactly one frame from the target video matched to it using one of the time alignment options. The problem is now reduced to space alignment or visual correspondence prediction: given two frames (one source and one target) and a set of keypoints in the source image, the model must predict the set of correspondences of those keypoints in the target image. Since many of the keypoints are human keypoints, one may wonder if a simple pose estimator can solve the problem. We present a pose-based baseline for space alignment. The same pose estimator [35] is first employed to detect human poses in both source and target images. Each source keypoint is assigned to the closest detected human pose in the source image. If there is more than one detected human pose in the target image, a simple pose descriptor is used to find the most similar pose. Finally, the corresponding keypoint on the matched pose in the target image is returned as the predicted correspondence of the source keypoint. Besides the pose-based baseline, we also employ CATs [6] as another baseline for space alignment.
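A sketch of the two classical time-alignment options above, assuming per-frame feature vectors are given (our own minimal NumPy variants, not the exact implementations used in the experiments):

```python
import numpy as np

def nn_align(src, tgt):
    """Nearest-neighbour alignment from per-frame features (T_s, D), (T_t, D)."""
    return (src @ tgt.T).argmax(axis=1)     # target frame index per source frame

def dtw_align(src, tgt):
    """Classic dynamic time warping; returns the warping path."""
    cost = np.linalg.norm(src[:, None] - tgt[None], axis=-1)
    Ts, Tt = cost.shape
    acc = np.full((Ts + 1, Tt + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, Ts + 1):
        for j in range(1, Tt + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j - 1],
                                                 acc[i - 1, j], acc[i, j - 1])
    path, (i, j) = [], (Ts, Tt)             # backtrack from the end
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)],
                   key=lambda p: acc[p])
    return path[::-1]
```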
Figure 3: **Baseline Approaches**. (a) both source and target videos are fed into a 3D CNN backbone for feature extraction. All selected feature maps (from different layers) are up- or down-sampled into the same feature size of T\(\times\)H\(\times\)W with their specific number of channel \(C_{i}\). Correlation volumes are computed per feature map, then concatenated to form a tensor of M\(\times\)(T\(\times\)H\(\times\)W)\({}^{2}\) where M is the number of selected feature maps. (b) A space-time Cost Aggregation Transformer [6] (st-CAT) takes both source and target feature maps and the correlation volume then applies a few transformer blocks to predict a space-time displacement flow of size (T\(\times\)H\(\times\)W)\({}^{2}\). (c) A simple Aggregation NeTwork (ANT) reshapes the correlation volume, concatenates it with the source and target features, applies a few 3D convolution layers, then predicts a space-time displacement flow of size (T\(\times\)H\(\times\)W)\({}^{2}\).
## 5 Experiments
### Implementation details
**Setup**. We train our baseline models on the training set and evaluate them on the validation set of Penn Action using the two benchmark setups described in Section 3.
**Input & augmentation**. Each video (either source or target) undergoes independent augmentation. Given an input video, we randomly select a clip of 64 frames (\(\sim\)2 seconds at 30 fps) such that it covers all key moments of that video (i.e., no keypoints are cropped out). Standard image augmentations such as grayscale, posterize, equalize, brightness/contrast adjustment, and solarize are all applied with a probability of \(0.2\). After standard augmentations, random cropping is also applied with a probability of \(0.5\). We note that all frames of the clip need to go through the same set of augmentations, otherwise the clip is no longer temporally coherent. The cropped (or uncropped) clip is then scaled to a frame size of 128\(\times\)128, making the input clip of size 64\(\times\)128\(\times\)128. Note that when random cropping and/or scaling is used, the keypoints are shifted and/or scaled accordingly.
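A sketch of such clip-level augmentation, where one sampled set of parameters is applied to every frame so the clip stays temporally coherent (the op subset and parameter ranges are illustrative, not the exact training configuration):

```python
import random
import torchvision.transforms.functional as TF

def augment_clip(frames):
    """frames: list of (C, H, W) tensors; all frames share the same
    sampled augmentation parameters to preserve temporal coherence."""
    do_gray = random.random() < 0.2
    do_bright = random.random() < 0.2
    factor = random.uniform(0.6, 1.4)       # brightness factor (assumed range)
    out = []
    for f in frames:
        if do_gray:
            f = TF.rgb_to_grayscale(f, num_output_channels=3)
        if do_bright:
            f = TF.adjust_brightness(f, factor)
        out.append(TF.resize(f, [128, 128]))
    return out
```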
**Backbone architectures**. We experiment with different 3D CNN backbones including R3D with 18 layers and R(2+1)D with 34 layers [37], both pre-trained on Kinetics-400 [18]. The feature maps at different layers are either up- or down-sampled into the same size of 8\(\times\)8\(\times\)8. Due to the larger memory required for video input, it is not possible to increase this feature map size even with 32G-memory GPUs. For the st-MATCH baseline, since there is no trainable parameter, the feature maps can be larger to compensate for the lack of learning capacity. We find a feature map size of 32\(\times\)16\(\times\)16 is best for st-MATCH within the current 32G GPU limit.
**Training details**. Training is distributed across 8 nodes with 8 Volta GPUs each (32G memory). A mini-batch size of 2 per GPU is used, making an effective batch size of 128. We follow the training schedule provided in [6]: training is done in 100 epochs with a step learning rate schedule, where the rate is reduced (divided by 2) at epochs 70, 80, and 90. The initial learning rate is set to \(1.2\times 10^{-4}\). When full backbone finetuning is used, the initial learning rate for the backbone is \(1.2\times 10^{-5}\).
**Evaluation metrics**. A predicted space-time keypoint is classified as correct when it is within close proximity to the expected ground truth, both in space and time. Formally, a predicted keypoint \((x_{pr},y_{pr},t_{pr})\) is regarded as a correct prediction w.r.t. the ground-truth keypoint \((x_{gt},y_{gt},t_{gt})\) when \(|t_{pr}-t_{gt}|\leq k\) and \(\|(x_{pr},y_{pr})-(x_{gt},y_{gt})\|_{2}\leq\alpha\times b\), where \(\alpha\) is normally 0.1, \(b\) is the larger side of the smallest bounding box covering the keypoints in that frame, and \(k\) is the number of frames the model is allowed to misalign, e.g., \(k\)=1,3,5. The spatial metric is standard in visual correspondence prediction, known as [email protected] (percentage of correct keypoints). Our metrics are the augmented version of PCK with an added time-misalignment allowance, denoted as T@[email protected].
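A sketch of this metric for a single video pair (the array layout is our assumption):

```python
import numpy as np

def tk_pck(pred, gt, box_size, k=1, alpha=0.1):
    """T@k [email protected]: pred, gt are (N, 3) arrays of (x, y, t) keypoints;
    box_size is the larger side of the tightest box covering the
    ground-truth keypoints of the corresponding frame."""
    ok_t = np.abs(pred[:, 2] - gt[:, 2]) <= k
    ok_s = np.linalg.norm(pred[:, :2] - gt[:, :2], axis=1) <= alpha * box_size
    return (ok_t & ok_s).mean()
```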
### Baseline results
Table 3 presents the space-time semantic correspondence prediction results for all baselines on the two benchmark setups. All the space-time baselines use the same backbone of R(2+1)D-34. The upper table presents the results of the sequential baselines while the lower reports the space-time baselines' performance. In addition to the sequential baselines, we also provide an upper bound for time alignment with CATs, _i.e_., ground truth time alignments are given and CATs are used for spatial matching.
**Sequential baselines perform poorly**. Some observations from the sequential baselines include: (i) the pose-based baselines perform poorly, indicating that the problem should be addressed directly instead of using poses as intermediate predictions, even though many keypoints are human
\begin{table}
\begin{tabular}{|l l|r r r|r r r|} \hline \multicolumn{2}{|c|}{Benchmark} & \multicolumn{3}{c|}{3+3} & \multicolumn{3}{c|}{13+3} \\ \multicolumn{2}{|c|}{Metric} & T@1 & T@3 & T@5 & T@1 & T@3 & T@5 \\ \multicolumn{2}{|c|}{in \%} & \multicolumn{3}{c|}{[email protected]} & \multicolumn{3}{c|}{[email protected]} \\ \hline \multicolumn{8}{|c|}{sequential baselines} \\ \hline NN & Pose- & & & 3.2 & & & 8.2 \\ DTW & based & & & 3.0 & & & 7.7 \\ TCC [11] & & & & 4.2 & & & 10.7 \\ \hline NN & & & & 5.9 & & & 13.5 \\ DTW & CATs & & & 5.6 & & & 12.9 \\ TCC [11] & [6] & & & 8.1 & & & 17.0 \\ groundtruth & & & & 31.0 & & & 58.9 \\ \hline \multicolumn{8}{|c|}{joint space-time baselines} \\ \hline st-MATCH & & 4.2 & 11.6 & 15.9 & 6.2 & 17.2 & 24.7 \\
**st-CATs** & & 19.4 & 34.7 & 37.7 & 22.7 & 48.2 & 55.8 \\
**ANTs** & & **19.9** & **35.1** & **38.1** & **24.3** & **49.9** & **57.1** \\ \hline \end{tabular}
\end{table}
Table 3: **Comparison between baselines**. Space-time correspondence prediction on two benchmark setups: 3+3 and 13+3 of pose and object keypoints, respectively. The upper table presents sequential baselines in which the problem is approached by time alignment, then spatial correspondence prediction. The lower table presents the joint space-time baselines. Our proposed baselines, st-CATs and ANTs, significantly outperform all other baselines. st-CATs and ANTs outperform the baseline of CATs (with ground-truth time alignment provided) on the 3+3 setup while being comparable with this baseline on the 13+3 setup with the T@[email protected] metric. Our experimental results suggest that it is more advantageous to approach this problem jointly in space-time rather than solving the decomposed sub-problems. Sequential baselines perform poorly on T@1 and T@3 due to challenging temporal alignment using global features; for simplicity, we omit them from the table.
man keypoints (_e.g._, in 13+3 setup); (ii) TCC [11] is consistently better than NN and DTW as expected; and (iii) TCC [11] with CATs [6] performs best among sequential baselines as expected but still far below space-time baselines.
**The problem should be approached jointly in space and time**. It is interesting that even the simple st-MATCH (with no learning capacity) outperforms all sequential baselines. This indicates that the problem should be approached jointly in space and time rather than as decomposed sub-problems. This intuitively makes sense, as the decomposed problems are harder, offering limited context for making predictions. On the one hand, for the spatial correspondence sub-problem, models have limited temporal context and no notion of motion, so it is harder for them to predict space-time keypoints. On the other hand, for the temporal alignment sub-problem, models often give up spatial modeling because the long sequence inputs force them to focus on dense temporal predictions. In contrast, our space-time semantic correspondence prediction requires only sparse predictions, normally at salient space-time keypoints.
**Simple convolutions are better than transformers on small feature maps**. When comparing learning-based methods, ANTs slightly outperform st-CATs. This can be explained by the fact that the transformers' capacity to model larger receptive fields, used in CATs, is not crucial for small feature maps, i.e., of size \(8^{3}\), while a few 3D convolution layers (with \(3^{3}\) kernels) can cover such a small receptive field. At the same time, the larger parameter count of st-CATs can cause more overfitting. Last but not least, even though both st-CATs and ANTs predict low-resolution displacement flows, e.g., at \(8^{3}\), and then upsample back to \(64\times 128^{2}\), these models still perform reasonably well. Future work on this problem should explore the trade-off of increasing the feature-map and prediction sizes for higher accuracy at the cost of more memory and computation.
### Model generalization
**Different activities and keypoint types bring in different challenges**. Table 4 presents the detailed performance of our ANTs on different activities and across different types of keypoints (human vs. object). When we look at the "all" keypoint columns, "golf swing" and "bowling" are among the easiest while "squats" is the hardest activity. This can be understood by the fact that both "golf swing" and "bowling" have quite distinctive poses at key moments, while in "squats" the poses at the key moments are very similar to those in nearby frames (before and after the key-moment frames), making time alignment harder. We note that for the first and third key moments of "squats", the motion directions and patterns are also similar to nearby frames. When we look at the "obj" columns, "squats", "tennis forehand", and "baseball pitch" are among the most challenging activities. While the "squats" category inherits its hardness from time alignment (it has low performance across all three keypoint types), "tennis forehand" and "baseball pitch" struggle with object keypoints mainly due to the presence of small objects, e.g., the ball, and fast motions.
**ANTs fairly generalize across keypoints**. Table 5 presents the performance of our ANTs trained on 3+3 or 13+3 keypoints and tested on the 3+3, 13+3, and r10 setups. The r10 setup denotes the keypoints in 13+3 but not in 3+3, which is equivalent to the remaining 10 types of human keypoints other than the head and the left and right wrists. First, when evaluated on 13+3, the model trained on 13+3 is 7.5% higher than the one trained on 3+3, but this is no surprise because the model trained on 13+3 has much more supervision. Second, when a model is trained on 3+3 but evaluated on 13+3 and r10, performance drops by 3.1% and 7.9%, respectively. As 3+3 and r10 are two non-overlapping sets of keypoint types, an accuracy of 24.4% on T@[email protected], when trained on 3+3 and tested on r10, is good compared with the other baselines (see Table 3). Third, when comparing the performance on the different evaluation setups (3+3, r10, 13+3) with the model trained on 13+3, we observe that the results are fairly similar except that the performance on 3+3 is lower than on r10, indicating that object keypoints are more challenging than human keypoints.
**ANTs and CATs generalize across datasets**. We investigate whether our models also work on another dataset such as Pouring. We use the models pre-trained earlier on Penn Action and further fine-tune them on Pouring. Table 6 presents the results of ANTs and CATs on the Pouring dataset. Both CATs and ANTs consistently
\begin{table}
\begin{tabular}{|l|c c c|c c c|} \hline Metric & \multicolumn{3}{c|}{T@ [email protected]} & \multicolumn{3}{c|}{T@[email protected]} \\ Keypoint type & hum & obj & all & hum & obj & all \\ \hline Baseball pitch & 20.6 & 9.5 & 18.4 & 44.1 & 14.4 & 37.9 \\ Baseball swing & 25.2 & 17.4 & 21.7 & 54.5 & 32.3 & 44.7 \\ Bowling & 27.5 & 44.2 & 32.0 & 48.2 & 70.9 & 54.2 \\ Golf swing & 45.6 & 26.9 & 37.6 & 84.2 & 52.5 & 69.9 \\ Squats & 9.9 & 6.3 & 7.2 & 25.7 & 14.8 & 17.9 \\ Tennis forehand & 21.0 & 7.7 & 14.8 & 43.5 & 12.4 & 29.0 \\ Tennis serve & 19.0 & 16.1 & 18.0 & 39.3 & 26.1 & 34.9 \\ \hline All & 21.7 & 17.4 & 19.9 & 43.9 & 30.7 & 38.1 \\ \hline \end{tabular}
\end{table}
Table 4: **Detailed prediction on different activities and keypoint types**. Our ANTs model is trained and evaluated on the 3+3 setup with the T@1 and T@5 at [email protected] metrics. For activities, squats is the hardest while golf swing is the easiest. For keypoint types, object keypoints are hard in “Baseball pitch” and “Tennis forehand” due to small objects, e.g., the ball, and fast motions. Object keypoints in “Bowling” are the easiest due to the large object size and predictable context, e.g., the human pose at key moments.
outperform the st-MATCH baseline. Due to the small size of the Pouring dataset, we repeat 3 runs of CATs and ANTs and report their mean accuracy with standard deviation. For st-MATCH, there is no learning, thus repeating experiments is not needed. We note that pre-training on Penn Action is crucial due to the small size of the Pouring dataset. For example, CATs without Penn Action pre-training, _e.g_., with an R(2+1)D-34 backbone pre-trained only on K400, achieves \(15.1\pm 1.6\), \(32.1\pm 3.3\), and \(42.8\pm 3.0\) for T@1, T@3, and T@5, respectively. These are significant performance drops of 12-20%.
### Ablation
**Different backbones**. Table 7 presents the performance of CATs and ANTs using different backbones. Both baselines benefit from a deeper and stronger backbone when we replace R3D-18 with R(2+1)D-34.
**ANTs components**. Table 8 presents the ablation of our ANTs components. Our observation is that increasing the number of layers in ANTs slightly improves the results at the expense of more parameters and computation. For simplicity, we set the number of layers to 2 for all other experiments with ANTs. We found that it is very important to finetune the whole backbone instead of keeping it frozen.
**Feature layers for hyperpixel**. In the image problem, most recent works used the hyperpixel combination provided in [28] with a ResNet-101 backbone [15]. The selection is done via beam search [28]. Since our problem involves video and a different backbone, e.g., R(2+1)D-34, we conduct an ablation to find a good combination for our hyperpixel. Here we summarize the main findings (details in the appendix). As a ResNet-style architecture, R(2+1)D-34 has the following components: conv1, followed by 4 groups of resnet blocks. We ablate with only conv1 and the last layer of each resnet block. Our findings are: (i) adding conv1 or feature maps from group 1 hurts performance, while adding feature maps from groups \(2\), \(3\), and \(4\) helps; (ii) using the last two feature maps from groups \(2\), \(3\), and \(4\) provides a good trade-off of memory and computation vs. accuracy.
## 6 Conclusions
We have proposed a new task of space-time semantic correspondence prediction which requires matching and aligning semantic keypoints across videos. The problem is essential in various practical applications, from activity coaching and sports analysis to robot imitation learning. We introduced two new benchmarks for this problem by adding annotations to the existing Penn Action [39] and Pouring [32] datasets. Our experiments with a set of comprehensive baselines and ablations help us gain useful insights about the problem. Some potential future directions include, but are not limited to, interesting applications of space-time semantic correspondence prediction, single-shot video retrieval with explainability, and self-supervised learning with space-time cycle consistency.
## Acknowledgement
The authors would like to thank Xitong Yang for helping with the distributed training setup and experiments.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline Model & backbone & pretrain & T@1 & T@3 & T@5 \\ \hline st- & R3D-18 & K400 & 3.9 & 15.2 & 24.1 \\ MTCH & R(2+1)D-34 & K400 & 3.6 & 20.2 & 28.6 \\ \hline CATs & R3D-18 & PenAct & \(18.6\pm 0.8\) & \(37.8\pm 1.0\) & \(55.8\pm 0.7\) \\ \cline{2-6} & R(2+1)D-34 & PenAct & \(27.7\pm 1.6\) & \(53.0\pm 1.2\) & \(62.2\pm 1.9\) \\ \hline ANTs & R3D-18 & PenAct & \(21.3\pm 0.4\) & \(45.9\pm 0.7\) & \(63.4\pm 1.5\) \\ \cline{2-6} & R(2+1)D-34 & PenAct & \(27.6\pm 1.3\) & \(57.8\pm 2.8\) & \(64.2\pm 1.0\) \\ \hline \end{tabular}
\end{table}
Table 6: **Results on Pouring dataset**. Both ANTs and CATs consistently outperform the st-MATCH baseline on different backbones and evaluation metrics.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Model & backbone & 3+3 & 13+3 \\ \hline CATs & R3D-18 & 31.9 & 51.7 \\ & R(2+1)D-34 & 37.7 & 55.8 \\ \hline ANTs & R3D-18 & 31.9 & 50.9 \\ & R(2+1)D-34 & 38.1 & 57.1 \\ \hline \end{tabular}
\end{table}
Table 7: **CATs and ANTs with different backbones**. ANTs slightly outperform CATs on the two backbones tested, R3D-18 and R(2+1)D-34. All reported results use the T@5 and [email protected] metric.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline \multicolumn{1}{|c|}{Evaluate (\(\rightarrow\))} & \multicolumn{1}{c|}{3+3} & \multicolumn{1}{c|}{r10} & \multicolumn{1}{c}{13+3} \\ Train (\(\downarrow\)) & T@1 & T@5 & T@1 & T@5 & T@1 & T@5 \\ \hline
3+3 & 19.9 & 38.1 & 12.0 & 24.4 & 16.8 & 34.2 \\ \hline
13+3 & 22.8 & 49.4 & 25.2 & 61.1 & 24.3 & 57.1 \\ \hline \end{tabular}
\end{table}
Table 5: **Cross keypoint type evaluation**. ANTs models are trained on the 3+3 or 13+3 setup, then evaluated on the 3+3, 13+3, and r10 setups. The r10 setup includes all keypoints in 13+3 but excludes those in 3+3 (r10 denotes the remaining 10 human keypoints).
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Model & finetuned & \# of layers & 3+3 \\ \hline ANTs & ✓ & 1 & 37.6 \\ & ✓ & 2 & 38.1 \\ & ✓ & 3 & 38.5 \\ & ✗ & 2 & 21.0 \\ \hline \end{tabular}
\end{table}
Table 8: **ANTs architecture ablation**. Results of the ANTs model with an R(2+1)D-34 backbone using the T@5 and [email protected] metric. The number of 3D convolutional layers (not including the final prediction layer) and the option of fully fine-tuning the backbone are ablated. While a significant difference is observed when the backbone is frozen, the results are less sensitive to the number of layers.
2304.06213 | Fluid mode spectroscopy for measuring kinematic viscosity of fluids in
open cylindrical containers | On a daily basis we stir tee or coffee with a spoon and leave it to rest. We
know empirically the larger the stickiness, viscosity, of the fluid, more
rapidly its velocity slows down. It is surprising, therefore, that the
variation, the decay rate of the velocity, has not been utilized for measuring
(kinematic) viscosity of fluids. This study shows that a spectroscopy
decomposing a velocity field into fluid modes (Stokes eigenmodes) allows us to
measure accurately the kinematic viscosity. The method, Fluid Mode Spectroscopy
(FMS), is based on the fact that each Stokes eigenmode has its inherent decay
rate of eigenvalue and that the dimensionless rate of the slowest decaying mode
(SDM) is constant, dependent only on the normalized shape of a fluid container,
obtained analytically for some shapes including cylindrical containers. The FMS
and major conventional measuring methods supplement each other; the FMS is
particularly useful for measuring relatively low kinematic viscosity and for a direct
measurement of viscosity at zero shear rate without extrapolation. The method
is validated by the experiments of water poured into an open cylindrical
container, as well as by the corresponding numerical simulations. | Hideshi Ishida, Masaaki Horie, Takahiro Harada, Shingo Mizuno, Seita Hamada, Haruki Imura, Shoma Ashiwake, Naoya Isayama, Ryomei Saeki, Ryotaro Kozono, Daichi Taki, Asuka Kurose | 2023-04-13T01:42:18Z | http://arxiv.org/abs/2304.06213v2 | # Fluid mode spectroscopy for measuring kinematic viscosity of fluids in open cylindrical containers
###### Abstract
On a daily basis we stir tea or coffee with a spoon and leave it to rest. We know empirically that the larger the stickiness, viscosity, of the fluid, the more rapidly its velocity slows down. It is surprising, therefore, that the variation, the decay rate of the velocity, has not been utilized for measuring (kinematic) viscosity of fluids. This study shows that a spectroscopy decomposing a velocity field into fluid modes (Stokes eigenmodes) allows us to measure accurately the kinematic viscosity. The method, Fluid Mode Spectroscopy (FMS), is based on the fact that each Stokes eigenmode has its inherent decay rate of eigenvalue and that the dimensionless rate of the slowest decaying mode (SDM) is constant, dependent only on the normalized shape of a fluid container, obtained analytically for some shapes including cylindrical containers. The FMS and major conventional measuring methods supplement each other; the FMS is particularly useful for measuring relatively low kinematic viscosity and for a direct measurement of viscosity at zero shear rate without extrapolation. The method is validated by the experiments of water poured into an open cylindrical container, as well as by the corresponding numerical simulations.
kinematic viscosity; Stokes eigenmodes; (dimensionless) decay rate; cylindrical container |
2305.11446 | Some results on the solubility graph of a finite group | Let $G$ be a finite insoluble group with soluble radical $ R(G)$. The
solubility graph $\Gamma_{\rm S}(G)$ of $G$ is a simple graph whose vertices
are the elements of $G\setminus R(G) $ and two distinct vertices $x$ and $y$
are adjacent if and only if they generate a soluble subgroup of $G$. In this
paper, we investigate the several properties of the solubility graph
$\Gamma_{\rm S}(G)$. | Mina Poozesh, Yousef Zamani | 2023-05-19T05:56:09Z | http://arxiv.org/abs/2305.11446v2 | # Some results on the solubility graph of a finite group
###### Abstract.
Let \(G\) be a finite insoluble group with soluble radical \(R(G)\). The solubility graph \(\Gamma_{\mathrm{S}}(G)\) of \(G\) is a simple graph whose vertices are the elements of \(G\setminus R(G)\) and two distinct vertices \(x\) and \(y\) are adjacent if and only if they generate a soluble subgroup of \(G\). In this paper, we investigate the several properties of the solubility graph \(\Gamma_{\mathrm{S}}(G)\).
Key words and phrases:Finite insoluble group, solubility graph, solubility degree, solubilizer 2010 Mathematics Subject Classification: Primary: 05C25; Secondary: 20D05, 20D99, 20P05 \({}^{*}\) Corresponding author
## 1. **Introduction**
Let \(G\) be a finite insoluble group with soluble radical \(R(G)\). The solubility graph of \(G\) is a simple graph whose vertices are the elements of \(G\setminus R(G)\) and two distinct vertices \(u\) and \(v\) are adjacent if and only if the subgroup \(\langle u,v\rangle\) is soluble. We denote this graph by \(\Gamma_{\mathrm{S}}(G)\). For any \(v\in G\setminus R(G)\), we denote the degree of \(v\) in \(\Gamma_{\mathrm{S}}(G)\) by \(\deg(v)\). Notice that \(\deg(v)=|\mathsf{Sol}_{G}(v)|-|R(G)|-1\). Here, \(\mathsf{Sol}_{G}(v)\) is the solubilizer of \(v\) in \(G\), which consists of the elements \(g\) in \(G\) such that the subgroup generated by \(v\) and \(g\), i.e., \(\langle v,g\rangle\), is soluble. For further exploration of the arithmetic and structural properties of the solubilizer of an element in a finite group \(G\), we refer the reader to the references [1, 2, 17, 19]. Denote by \(\deg(\Gamma_{\mathrm{S}}(G))\) the vertex degree set of \(\Gamma_{\mathrm{S}}(G)\). According to the results presented in [7], the solubility graph \(\Gamma_{\mathrm{S}}(G)\) exhibits certain properties. It is proven that \(\Gamma_{\mathrm{S}}(G)\) is not a star graph, a tree, an \(n\)-partite graph for any positive integer \(n\geq 2\), or a regular graph. Additionally, the girth of \(\Gamma_{\mathrm{S}}(G)\), which is the length of the shortest cycle in the graph, is determined to be \(3\). Furthermore, the clique number of \(\Gamma_{\mathrm{S}}(G)\), which represents the size of the largest complete subgraph within the graph, is proven to be at least \(4\).
In a separate study conducted by Burness et al. [10], it is established that the solubility graph \(\Gamma_{\mathrm{S}}(G)\) is a connected graph, meaning that there exists a path between any pair of vertices in the graph. Additionally, it is shown that the diameter of \(\Gamma_{\mathrm{S}}(G)\), which is the maximum distance between any pair of vertices in the graph, is at most \(5\).
It is indeed noteworthy to mention that the solubility graph \(\Gamma_{\mathrm{S}}(G)\) can be viewed as the complement of the non-solvable graph of the group \(G\) as discussed in [17, 8]. The non-solvable graph focuses on the relationships among the elements of \(G\) that do not generate soluble subgroups. Therefore, by taking the complement, we obtain the solubility graph, which emphasizes the soluble relationships between the elements of \(G\).
Moreover, the solubility graph can be seen as an extension of the commuting and nilpotent graphs of finite groups that have been extensively studied in various research works such as [3].
**Proposition 2.2**.: _Let \(G\) be a finite insoluble group and let \(x\in G\setminus R(G)\). Put \(s=|\mathsf{Sol}_{G}(x)|\) and \(r=|R(G)|\), so that \(\deg(x)=s-r-1\). Then \(\deg(x)\geq 8\); moreover, if \(R(G)\neq 1\), then \(\deg(x)\geq 17\)._

Proof.: By [1, Corollary 4.7], \(s\geq 10\). So if \(r=1\), then \(\deg(x)=s-2\geq 8\). Assume that \(r\neq 1\). Since
\[|\mathsf{Sol}_{G/R(G)}(xR(G))|=\frac{|\mathsf{Sol}_{G}(x)|}{|R(G)|}=\frac{s}{r }\geq 10,\]
so \(s\geq 10r\) and \(\deg(x)\geq 9r-1\). Now \(r\geq 2\) implies that \(\deg(x)\geq 17\).
**Remark 2.3**.: _The bounds in Proposition 2.2 are sharp. There exist insoluble groups \(G\) with \(R(G)\neq 1\) containing an element \(x\in G\setminus R(G)\) with \(\deg(x)=17\); moreover, for \(G=A_{5}\) and \(x=(1\ 2\ 3\ 4\ 5)\in A_{5}\), we have \(\deg(x)=8\)._
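The \(A_{5}\) value can be checked directly; the following is a minimal computational sketch using SymPy's permutation groups (with zero-indexed points, so \((1\ 2\ 3\ 4\ 5)\) corresponds to \((0\ 1\ 2\ 3\ 4)\)):

```python
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import AlternatingGroup

G = AlternatingGroup(5)              # R(A5) = 1, so deg(x) = |Sol_G(x)| - 2
x = Permutation([1, 2, 3, 4, 0])     # the 5-cycle (0 1 2 3 4)
sol = [g for g in G.generate()
       if PermutationGroup([x, g]).is_solvable]
print(len(sol), len(sol) - 2)        # |Sol_G(x)| = 10, deg(x) = 8
```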
**Proposition 2.4**.: _In the solubility graph \(\Gamma_{\mathrm{S}}(G)\) of a finite insoluble group \(G\), the maximum vertex degree satisfies \(\Delta_{s}(G)\leq n-7\), where \(n\) is the number of vertices of \(\Gamma_{\mathrm{S}}(G)\)._
Proof.: There are \(n=|G|-|R(G)|\) vertices in \(\Gamma_{\mathrm{S}}(G)\). Let \(v\) be an arbitrary vertex of \(\Gamma_{\mathrm{S}}(G)\). Applying Lemma 2.1, we have
\[\deg(v)=|\mathsf{Sol}_{G}(v)|-|R(G)|-1=(|\mathsf{Sol}_{G}(v)|-|G|)+n-1\leq n-7,\]
so the result holds.
**Proposition 2.5**.: _Let \(G\) be a finite insoluble group and \(\Gamma_{\mathrm{S}}(G)\) be its solubility graph. If, for some prime number \(p\), the degree of a vertex of \(\Gamma_{\mathrm{S}}(G)\) is \(p-1\), then \(R(G)=1\)._
Proof.: Assume that \(x\) is a vertex of \(\Gamma_{\mathrm{S}}(G)\) with degree \(p-1\). Then \(|\mathsf{Sol}_{G}(x)|-|R(G)|=p\). Since \(|R(G)|\) divides \(|\mathsf{Sol}_{G}(x)|\), we get \(|R(G)|=1\) or \(p\). If \(|R(G)|=p\), then \(|\mathsf{Sol}_{G}(x)|=2p\) and \(|\mathsf{Sol}_{G/R}(xR)|=\frac{|\mathsf{Sol}_{G}(x)|}{|R(G)|}=2\), a contradiction, since \(|\mathsf{Sol}_{G/R}(xR)|\geq 10\) by [1, Corollary 4.7]. So \(|R(G)|=1\) and the result holds.
**Proposition 2.6**.: _Let \(G\) be a finite group such that for all \(x\in G\setminus R(G)\), \(|\mathsf{Sol}_{G}(x)|\geq\frac{|G|+|R(G)|}{2}+1\). Then \(\Gamma_{\mathrm{S}}(G)\) is Hamiltonian._
Proof.: Let \(v\in G\setminus R(G)\). Then
\[\deg(v) =|\mathsf{Sol}_{G}(v)|-|R(G)|-1\] \[\geq\frac{|G|+|R(G)|}{2}-|R(G)|\] \[=\frac{|G|-|R(G)|}{2},\]
so the result holds by Dirac's theorem (see [9]).
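For completeness, the criterion applied here is Dirac's theorem: a graph on \(n\geq 3\) vertices in which every vertex has degree at least \(n/2\) is Hamiltonian. In our case \(\Gamma_{\mathrm{S}}(G)\) has \(n=|G|-|R(G)|\) vertices, and the computation above gives

\[\deg(v)\geq\frac{|G|-|R(G)|}{2}=\frac{n}{2}\quad\text{for every vertex }v.\]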
**Proposition 2.7**.: _Let \(G\) be a finite insoluble group. Then for every \(v\in G\setminus R(G)\), we have_
\[\frac{1+\deg(v)}{1+\deg(vR(G))}=|R(G)|.\]
_Furthermore, \(|\deg(\Gamma_{\mathrm{S}}(G))|=|\deg(\Gamma_{\mathrm{S}}(G/R(G)))|\), i.e. the two solubility graphs have the same number of distinct vertex degrees._
Proof.: We have
\[1+\deg(v) = |\mathsf{Sol}_{G}(v)|-|R(G)|\] \[= |R(G)||\mathsf{Sol}_{G/R(G)}(vR(G))|-|R(G)|\] \[= |R(G)|(|\mathsf{Sol}_{G/R(G)}(vR(G))|-1)\]
Also we have
\[1+\deg(vR(G))=|\mathsf{Sol}_{G/R(G)}(vR(G))|-1.\]
By combining the two previous relations, the result is obtained.
## 3. **The number of edges and the solubility degree**
The solubility degree of a finite group \(G\) (see [7]) is the probability that two randomly chosen elements of \(G\) generate a soluble group. It is given by \(P_{s}(G)=\frac{|\mathbb{S}|}{|G|^{2}}\), where
\[\mathbb{S}=\{(x,y)\in G\times G\ |\ \langle x,y\rangle\text{ is soluble}\}.\]
It is not difficult to see that \(|\mathbb{S}|=\sum_{x\in G}|\mathsf{Sol}_{G}(x)|\). Thus
\[P_{s}(G)=\frac{1}{|G|^{2}}\sum_{x\in G}|\mathsf{Sol}_{G}(x)|.\]
Notice that \(G\) is soluble if and only if \(P_{s}(G)=1\). It is known (see [16]) that if \(G\) is insoluble, then \(P_{s}(G)\leq\frac{11}{30}\). By using GAP [15], we can see that \(P_{s}(A_{5})=\frac{11}{30}\), which shows that the bound is sharp. In [7], it is proved that if \(G\) is a finite group, then \(P_{s}(G)\geq Pr(G)\), with equality if and only if \(G\) is soluble, where \(Pr(G)\) is the commutativity degree of \(G\). However, we provide a counterexample showing that this equality condition fails, and we then give the correct condition for equality.
**Example 3.1**.: _Consider \(G=S_{3}\). Then \(Pr(G)=\frac{k(G)}{|G|}=\frac{3}{6}=\frac{1}{2}\), where \(k(G)\) is the number of the conjugacy classes of \(G\). Since \(G\) is soluble, we have \(P_{s}(G)=1\neq Pr(G)\)._
**Proposition 3.2**.: _Let \(G\) be a finite group. Then \(P_{s}(G)\geq Pr(G)\) and the equality holds if and only if \(G\) is an abelian group, where \(Pr(G)\) is the commutativity degree of \(G\)._
Proof.: We know that \(Pr(G)=\frac{1}{|G|^{2}}\sum_{x\in G}|\mathcal{C}_{G}(x)|\). Now \(\mathcal{C}_{G}(x)\subseteq\mathsf{Sol}_{G}(x)\) implies that \(P_{s}(G)\geq Pr(G)\). If \(P_{s}(G)=Pr(G)\), then \(|\mathcal{C}_{G}(x)|=|\mathsf{Sol}_{G}(x)|\), and hence \(\mathcal{C}_{G}(x)=\mathsf{Sol}_{G}(x)\), for every \(x\in G\). This implies that for every \(x\in G\), \(\mathsf{Sol}_{G}(x)\) is a subgroup of \(G\). Thus \(G\) is soluble by [17, Proposition 2.22]. Therefore
\[G=R(G)=\bigcap_{x\in G}\mathsf{Sol}_{G}(x)=\bigcap_{x\in G}\mathcal{C}_{G}(x) =Z(G),\]
and the result holds. The converse is obvious.
**Proposition 3.3**.: _Let \(G\) be an insoluble finite group and \(N\) a normal subgroup of \(G\). Then \(P_{s}(G)\leq P_{s}(G/N)\). Furthermore, if \(N\) is soluble, then the equality holds._
Proof.: By [19, Lemma 2.3], we have
\[|G/N|^{2}P_{s}(G/N) =\sum_{xN\in G/N}|\mathsf{Sol}_{G/N}(xN)|\] \[=\frac{1}{|N|}\sum_{x\in G}|\mathsf{Sol}_{G/N}(xN)|\] \[\geq\frac{1}{|N|}\sum_{x\in G}\frac{|\mathsf{Sol}_{G}(x)|}{|N|}\] \[=\frac{|G|^{2}}{|N|^{2}}P_{s}(G),\]
so the result holds. If \(N\) is soluble, then by [19, Lemma 2.3] we have \(|\mathsf{Sol}_{G/N}(xN)|=\frac{|\mathsf{Sol}_{G}(x)|}{|N|}\), so the equality holds.
Applying Proposition 3.3, we deduce the following corollary.
**Corollary 3.4**.: _If \(G/R(G)\cong H/R(H)\), then \(P_{s}(G)=P_{s}(H)\)._
**Proposition 3.5**.: _If \(G\) and \(H\) are two finite groups, then \(P_{s}(G\times H)\geq P_{s}(G)P_{s}(H)\). If \(G\) or \(H\) is soluble (in particular, if \((|G|,|H|)=1\), since then one of the groups has odd order and is soluble by the Feit–Thompson theorem), then the equality holds._
Proof.: For any \((g,s),(x,t)\in G\times H\), we have
\[\langle(g,s),(x,t)\rangle\subseteq\langle g,x\rangle\times\langle t,s\rangle.\]
So
\[\mathsf{Sol}_{G}(g)\times\mathsf{Sol}_{H}(s)\subseteq\mathsf{Sol}_{G\times H} (g,s)\]
Thus
\[P_{s}(G)P_{s}(H) = \frac{1}{|G|^{2}}\sum_{x\in G}|\mathsf{Sol}_{G}(x)|\frac{1}{|H|^ {2}}\sum_{t\in H}|\mathsf{Sol}_{H}(t)|\] \[= \frac{1}{|G|^{2}|H|^{2}}\sum_{x\in G}\sum_{t\in H}|\mathsf{Sol}_{ G}(x)||\mathsf{Sol}_{H}(t)|\] \[\leq \frac{1}{|G\times H|^{2}}\sum_{(x,t)\in G\times H}|\mathsf{Sol}_ {G\times H}(x,t)|\] \[= P_{s}(G\times H).\]
If \(H\) is soluble, then by Proposition 3.3, we have
\[P_{s}(G\times H)\leq P_{s}(G\times H/H)=P_{s}(G)=P_{s}(G)P_{s}(H).\]
Therefore we obtain the equality.
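As a quick illustration of the equality case (\(\mathbb{Z}_{2}\) is soluble, and \(P_{s}(A_{5})=\frac{11}{30}\) was noted above):

\[P_{s}(\mathbb{Z}_{2}\times A_{5})=P_{s}(\mathbb{Z}_{2})P_{s}(A_{5})=1\cdot\frac{11}{30}=\frac{11}{30}.\]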
Let \(|E(\Gamma_{\mathrm{S}}(G))|\) be the number of edges of the graph \(\Gamma_{\mathrm{S}}(G)\). The next proposition gives a relation between \(|E(\Gamma_{\mathrm{S}}(G))|\) and \(P_{s}(G)\). Also, it gives a sufficient condition for the solubility graph \(\Gamma_{\mathrm{S}}(G)\) to be Hamiltonian.
**Proposition 3.6**.: _Let \(G\) be a finite insoluble group. Then_
* \(2|E(\Gamma_{\mathrm{S}}(G))|=|G|^{2}P_{s}(G)+|R(G)|^{2}+|R(G)|-|G|(2|R(G)|+1).\)__
* _If_ \(P_{s}(G)\geq 1-\frac{2}{|G|}+\frac{2|R(G)|}{|G|^{2}}+\frac{6}{|G|^{2}}\)_, then_ \(\Gamma_{\mathrm{S}}(G)\) _is Hamiltonian._
Proof.: **(i)**
\[2|E(\Gamma_{\mathrm{S}}(G))| =\sum_{v\in G\setminus R(G)}\deg(v)\] \[=\sum_{v\in G\setminus R(G)}(|\mathsf{Sol}_{G}(v)|-|R(G)|-1)\] \[=\sum_{v\in G}|\mathsf{Sol}_{G}(v)|-|G||R(G)|-(|G|-|R(G)|)(|R(G)|+1)\] \[=|G|^{2}P_{s}(G)+|R(G)|^{2}+|R(G)|-|G|(2|R(G)|+1)\]
**(ii)** By part \((i)\), we have
\[2|E(\Gamma_{\mathrm{S}}(G))| =|G|^{2}P_{s}(G)+|R(G)|^{2}+|R(G)|-|G|(2|R(G)|+1)\] \[\geq|G|^{2}(1-\frac{2}{|G|}+\frac{2|R(G)|}{|G|^{2}}+\frac{6}{|G|^{2}})+|R(G)|^{2}+|R(G)|-2|R(G)||G|-|G|\] \[=(|G|-|R(G)|)^{2}-3(|G|-|R(G)|)+6\] \[=2\binom{|G|-|R(G)|-1}{2}+4,\]
therefore
\[|E(\Gamma_{\mathrm{S}}(G))|\geq\binom{|G|-|R(G)|-1}{2}+2.\]
Then the Ore–Bondy corollary [9] implies that the solubility graph is Hamiltonian.
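For reference, the edge-count criterion meant here states that any graph on \(n\geq 3\) vertices with

\[|E|\geq\binom{n-1}{2}+2\]

edges is Hamiltonian; in our setting \(n=|G|-|R(G)|\) is the number of vertices of \(\Gamma_{\mathrm{S}}(G)\).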
**Proposition 3.7**.: _Let \(G\) be an insoluble finite group with \(R(G)=1\). Then_
\[|E(\Gamma_{\mathrm{S}}(G))|\geq\frac{|G|}{2}(k(G)-3)+1,\]
_and the equality holds if and only if \(G\) is abelian._
Proof.: By using Proposition 3.6 and Proposition 3.2, we have
\[2|E(\Gamma_{\mathrm{S}}(G))| =|G|^{2}P_{s}(G)+|R(G)|^{2}+|R(G)|-|G|(2|R(G)|+1)\] \[\geq|G|^{2}Pr(G)+2-3|G|\] \[=|G|^{2}\frac{k(G)}{|G|}+2-3|G|\] \[=|G|(k(G)-3)+2.\]
The equality holds if and only if \(P_{s}(G)=Pr(G)\), which is equivalent to the condition that \(G\) is an abelian group, as stated in Proposition 3.2.
In the following we give some lower and upper bounds for the number of edges of the graph \(\Gamma_{\mathrm{S}}(G)\).
**Proposition 3.8**.: _Let \(G\) be an insoluble finite group with \(R(G)=1\). Then \(|E(\Gamma_{\mathrm{S}}(G))|>|G|+1\)._
Proof.: Since \(G\) is insoluble, we have \(k(G)\geq 5\). Thus, \(|E(\Gamma_{\mathrm{S}}(G))|\geq|G|+1\) by Proposition 3.7. If \(|E(\Gamma_{\mathrm{S}}(G))|=|G|+1\), then again by Proposition 3.7, we would have \(k(G)=5\). According to [11, Note A], this would imply \(G\cong A_{5}\), which is impossible because \(|E(\Gamma_{\mathrm{S}}(A_{5}))|=571\neq|A_{5}|+1=61\) by Proposition 3.6.
**Proposition 3.9**.: _Let \(G\) be a finite insoluble simple group. Then \(|E(\Gamma_{\mathrm{S}}(G))|>4|G|+1\)._
Proof.: Assume that \(|E(\Gamma_{\mathrm{S}}(G))|\leq 4|G|+1\). Then, by Proposition 3.7, we have \(k(G)\leq 11\). The groups with such a property have been classified (see [18, Tables 1-3]). Since \(G\) is simple, it follows that \(G\) is isomorphic to one of the following groups:
\[A_{6},A_{7},\mathrm{PSL}(2,q)\ (q=7,11,13,17),\mathrm{PSL}(3,4),M_{11}, \mathrm{Sz}(8).\]
Using GAP [15], we can check that for any of the above groups, \(|E(\Gamma_{\mathrm{S}}(G))|>4|G|+1\), which leads to a contradiction.
Applying the upper bound \(\frac{11}{30}\) for the solubility degree and Proposition 3.6, we immediately obtain the following corollary.
**Corollary 3.10**.: _Let \(G\) be a finite insoluble group with \(R(G)=1\). Then_
\[|E(\Gamma_{\mathrm{S}}(G))|\leq\frac{11}{60}|G|^{2}-\frac{3}{2}|G|+1.\]
By Proposition 3.6, we see that if \(G=A_{5}\), then \(|E(\Gamma_{\mathrm{S}}(G))|=571\). Therefore the upper bound in the previous corollary is sharp.
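For the record, here is the computation behind both values for \(G=A_{5}\), where \(|G|=60\), \(|R(G)|=1\), and \(P_{s}(A_{5})=\frac{11}{30}\). Proposition 3.6(i) gives

\[2|E(\Gamma_{\mathrm{S}}(A_{5}))|=60^{2}\cdot\frac{11}{30}+1+1-60\cdot 3=1320+2-180=1142,\]

so \(|E(\Gamma_{\mathrm{S}}(A_{5}))|=571\), while the upper bound of Corollary 3.10 evaluates to \(\frac{11}{60}\cdot 60^{2}-\frac{3}{2}\cdot 60+1=660-90+1=571\).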
## 4. **Groups with isomorphic solubility graphs**
Let \(G\) and \(H\) be two finite insoluble groups. A graph isomorphism between the solubility graphs \(\Gamma_{\mathrm{S}}(G)\) and \(\Gamma_{\mathrm{S}}(H)\) is a one-to-one correspondence \(\phi:G\setminus R(G)\to H\setminus R(H)\) that preserves adjacency in both directions. In other words, for \(x,y\in G\setminus R(G)\), the subgroup \(\langle x,y\rangle\) is soluble if and only if \(\langle\phi(x),\phi(y)\rangle\) is soluble. We denote the isomorphism by \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(H)\).
It is worth noting that if \(G\cong H\), then \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(H)\). However, there are cases where groups are not isomorphic, yet their solubility graphs are isomorphic. For example, we have \(\mathrm{SL}(2,5)\not\cong\mathbb{Z}_{2}\times A_{5}\), but by applying GAP [15], we can observe that \(\Gamma_{\mathrm{S}}(\mathrm{SL}(2,5))\cong\Gamma_{\mathrm{S}}(\mathbb{Z}_{2}\times A_{5})\).
**Proposition 4.1**.: _There is no finite insoluble group \(G\) with an insoluble proper subgroup \(H\) such that \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(H)\)._
Proof.: Contrarily, let us assume that there exists a finite insoluble group \(G\) with an insoluble proper subgroup \(H\) such that \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(H)\). This implies that the vertex sets of \(\Gamma_{\mathrm{S}}(H)\) and \(\Gamma_{\mathrm{S}}(G)\) have the same cardinality. Therefore, we have
\[|G|-|R(G)|=|H|-|R(H)|. \tag{1}\]
Since \(G\) is insoluble, \(R(G)\) is a proper subgroup of \(G\), which implies that \([G:R(G)]\geq 2\). Consequently, we have \(|R(G)|\leq\frac{|G|}{2}\). Moreover, since \(H\) is a proper subgroup of \(G\), we have \([G:H]\geq 2\). Now, Equation (1) implies
\[\frac{|G|}{2}\leq|H|-|R(H)|\leq\frac{|G|}{2}-|R(H)|,\]
which is impossible.
**Proposition 4.2**.: _There is no finite insoluble group \(G\) with a non-trivial normal subgroup \(N\) such that the quotient group \(G/N\) is insoluble and \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(G/N)\)._
Proof.: Contrarily, let us assume that there exists a finite insoluble group \(G\) with a non-trivial normal subgroup \(N\) such that the quotient group \(G/N\) is insoluble and \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(G/N)\). This implies that the vertex sets of \(\Gamma_{\mathrm{S}}(G/N)\) and \(\Gamma_{\mathrm{S}}(G)\) have the same cardinality. Therefore, we have
\[|G|-|R(G)|=|G/N|-|R(G/N)|. \tag{2}\]
Since \(N\) is a non-trivial subgroup of \(G\), we have \(|G/N|\leq\frac{|G|}{2}\). Similar to the previous proposition, we also have \(|R(G)|\leq\frac{|G|}{2}\). Now, using Equation (2), we obtain
\[\frac{|G|}{2}\leq|G/N|-|R(G/N)|\leq\frac{|G|}{2}-|R(G/N)|,\]
which leads to a contradiction. Thus, the proof is complete.
**Proposition 4.3**.: _If \(G\) and \(H\) are two finite insoluble simple groups such that \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(H)\) then \(|G|=|H|\)._
Proof.: Since \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(H)\), the two graphs have the same number of vertices, so \(|G|-|R(G)|=|H|-|R(H)|\). Since \(G\) and \(H\) are insoluble simple groups, \(R(G)=R(H)=1\). Consequently, \(|G|=|H|\).
**Theorem 4.4**.: _Let \(G\) be a finite insoluble group. Assume that \(n\geq 5\) is a natural number such that \(p=\frac{n!}{2}-1\) is a prime number. If \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(A_{n})\), then \(|G|=|A_{n}|\)._
Proof.: Suppose \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(A_{n})\). Then the two graphs have the same number of vertices, and since \(R(A_{n})=1\) and \(|A_{n}|=\frac{n!}{2}\), we have \(|G|-|R(G)|=p\). Since \(|R(G)|\) divides \(|G|=p+|R(G)|\), we find that \(|R(G)|\ \mid\ p\), implying \(|R(G)|=1\) or \(p\). If \(|R(G)|=p\), then \(|G|=2p\); but a group of order \(2p\) is soluble, contradicting the insolubility of \(G\). Hence, we conclude that \(|R(G)|=1\), and thus \(|G|=|A_{n}|\).
**Corollary 4.5**.: _Let \(G\) be a finite insoluble group such that \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(A_{5})\). Then \(G\cong A_{5}\)._
**Corollary 4.6**.: _Let \(G\) be a finite insoluble group such that \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(A_{6})\). Then \(|G|=|A_{6}|\) and \(G\) is isomorphic with one of the following groups:_
\[A_{6},\ C_{3}\rtimes S_{5},\ C_{3}\times S_{5},\ S_{3}\times A_{5},\ C_{6} \times A_{5},\ \text{or}\ C_{3}\times\mathrm{SL}(2,5).\]
_Furthermore, if \(G\) is a quasisimple or an almost simple group, then \(G\cong A_{6}\)._
**Proposition 4.7**.: _Let \(G\) be a finite insoluble group such that \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(A_{7})\). Then_
\[|G|=|A_{7}|\]
Proof.: Suppose \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(A_{7})\). Then the two graphs have the same number of vertices, so \(|G|-|R(G)|=2519\). Note that \(|R(G)|\) must divide \(|G|=2519+|R(G)|\), so \(|R(G)|\mid 2519=11\cdot 229\). This implies that \(|R(G)|\) can only be equal to \(1\), \(11\), \(229\), or \(11\cdot 229\). Therefore, there are four possible values for \(\frac{|G|}{|R(G)|}\): \(2520\), \(230=2\cdot 5\cdot 23\), \(12=2^{2}\cdot 3\), or \(2\). Groups of order \(230\), \(12\), or \(2\) are soluble, and a soluble quotient \(G/R(G)\) would make \(G\) soluble; hence, as \(G\) is insoluble, \(|G|=|A_{7}|=2520\). Therefore, the result holds.
**Remark 4.8**.: _Similar to the proof of the previous theorem and using GAP, it has been observed that if \(G\) is a finite insoluble group such that \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(A_{n})\) for some \(5\leq n\leq 12\), then \(|G|=|A_{n}|\)._
Now the following question arises.
**Question 4.9**.: _Let \(G\) be a finite insoluble group such that \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(A_{n})\). Is it true that \(|G|=|A_{n}|\)?_
We can pose the following question in general:
**Question 4.10**.: _Let \(H\) be a finite insoluble simple group. Let \(G\) be a finite insoluble group such that \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(H)\). Is it true that \(|G|=|H|\)?_
By using Artin's theorem ([4, 5]) and the classification of finite simple groups, we have the following theorem.
**Theorem 4.11**.: _[_13_, Lemma 2.3]_ _Let \(G\) and \(H\) be finite simple groups with \(|G|=|H|\). Then the following hold:_
1. _If_ \(|G|=|A_{8}|=|\operatorname{PSL}(3,4)|\)_, then_ \(G\cong A_{8}\) _or_ \(G\cong\operatorname{PSL}(3,4)\)_;_
2. _If_ \(|G|=|B_{n}(q)|=|C_{n}(q)|\)_, where_ \(n\geq 3\) _and_ \(q\) _is odd, then_ \(G\cong B_{n}(q)\) _or_ \(G\cong C_{n}(q)\)_;_
3. _If_ \(H\) _is not in the above cases, then_ \(G\cong H\)_._
**Corollary 4.12**.: _Let \(G\) and \(H\) be finite simple groups such that \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(H)\). The following statements hold:_
1. _If_ \(H\in\{A_{8},\operatorname{PSL}(3,4)\}\)_, then_ \(G\cong A_{8}\) _or_ \(G\cong\operatorname{PSL}(3,4)\)_._
2. _If_ \(H\in\{B_{n}(q),C_{n}(q)\}\)_, where_ \(n\geq 3\) _and_ \(q\) _is odd, then_ \(G\cong B_{n}(q)\) _or_ \(G\cong C_{n}(q)\)_._
3. _If_ \(H\) _does not fall into the above cases, then_ \(G\cong H\)_._
Proof.: The result immediately follows from Proposition 4.3 and Theorem 4.11.
As an immediate consequence of Corollary 4.12, we get the following corollary.
**Corollary 4.13**.: _Let \(G\) be a finite insoluble simple group such that \(\Gamma_{\mathrm{S}}(G)\cong\Gamma_{\mathrm{S}}(A_{n})\), where \(n\) is a natural number, \(n\geq 5\), \(n\neq 8\). Then \(G\cong A_{n}\)._
## 5. **Degree pattern**
Now we introduce the degree pattern of the solubility graph and use it to pose some questions. _The degree pattern_ of the solubility graph of \(G\), denoted as \(\mathrm{D}_{s}(G)\), is represented by an \(n\)-tuple:
\[\mathrm{D}_{s}(G)=(d_{1},d_{2},\cdots,d_{n}),\]
where \(d_{1}\geq d_{2}\geq\cdots\geq d_{n}\), and these values correspond to the degrees of vertices in the solubility graph \(\Gamma_{\mathrm{S}}(G)\). Here, \(n\) is determined by \(n=|G|-|R(G)|\).
**Proposition 5.1**.: _Let \(G\) be a finite insoluble group, and let \(\mathrm{D}_{s}(G)=(d_{1},d_{2},\cdots,d_{n})\) denote the degree pattern of the solubility graph of \(G\). Then, there exist indices \(i\) and \(j\) such that \(d_{i}\neq d_{j}\)._
Proof.: First, we prove the assertion for the case \(|R(G)|=1\). Assume that for all \(1\leq i\leq n\), \(d_{i}=d\). Then
\[d(|G|-1) = \sum_{i=1}^{|G|-1}d_{i}\] \[= \sum_{x\in G\setminus\{1\}}(|\mathsf{Sol}_{G}(x)|-2)\] \[= \sum_{x\in G\setminus\{1\}}|\mathsf{Sol}_{G}(x)|-2|G|+2\] \[= \sum_{x\in G}|\mathsf{Sol}_{G}(x)|-3|G|+2.\]
Since \(|G|\ \big{|}\ \sum_{x\in G}|\mathsf{Sol}_{G}(x)|\), we have \(|G|\ \big{|}\ 2+d\). As \(d+2=|\mathsf{Sol}_{G}(x)|\) for every \(x\in G\setminus\{1\}\), this implies that \(|G|\) divides \(|\mathsf{Sol}_{G}(x)|\) for each \(x\in G\). As a result, we have \(G=\mathsf{Sol}_{G}(x)\) for all \(x\in G\). However, this contradicts the assumption that \(G\) is an insoluble group. Therefore, the assumption that all \(d_{i}\) are equal leads to a contradiction.
Now, let \(R=R(G)\neq\{1\}\) and \(\mathrm{D}_{s}(G/R)=(d_{1}^{\prime},d_{2}^{\prime},\cdots,d_{m}^{\prime})\) be the degree pattern of the solubility graph of \(G/R\). Suppose \(\deg_{G/R}(x_{i}R)=d_{i}^{\prime},\ 1\leq i\leq m\). Since \(R(G/R)=1\), by the first case there exist indices \(1\leq i<j\leq m\) such that \(d_{i}^{\prime}\neq d_{j}^{\prime}\). Then, we have
\[|\mathsf{Sol}_{G/R}(x_{i}R)|-2\neq|\mathsf{Sol}_{G/R}(x_{j}R)|-2,\]
which implies \(\frac{|\mathsf{Sol}_{G}(x_{i})|}{|R|}\neq\frac{|\mathsf{Sol}_{G}(x_{j})|}{|R|}\). Consequently, we obtain \(|\mathsf{Sol}_{G}(x_{i})|\neq|\mathsf{Sol}_{G}(x_{j})|\), and thus \(\deg_{G}(x_{i})\neq\deg_{G}(x_{j})\). This implies that there exist indices \(1\leq k<l\leq n\) such that \(d_{k}=\deg_{G}(x_{i})\neq\deg_{G}(x_{j})=d_{l}\), and the result holds.
As an immediate consequence of Proposition 5.1 we have the following corollary.
**Corollary 5.2**.: _Let \(G\) be a finite insoluble group. Then \(\Gamma_{\mathrm{S}}(G)\) is not regular._
We will now determine the structure of a finite group \(G\) based on the degrees of vertices in its solubility graph \(\Gamma_{\mathrm{S}}(G)\). We pose the following question:
**Question 5.3**.: _Let \(G\) and \(H\) be two insoluble groups such that \(\mathrm{D}_{s}(G)=\mathrm{D}_{s}(H)\). Is it true that \(|G|=|H|\)?_
In the following proposition, we demonstrate that the question has a positive answer when one of the vertex degrees in \(\mathrm{D}_{s}(G)=\mathrm{D}_{s}(H)\) is of the form \(p-1\) for a prime \(p\).
**Proposition 5.4**.: _Let \(G\) and \(H\) be two insoluble groups such that \(\mathrm{D}_{s}(G)=\mathrm{D}_{s}(H)\). Furthermore, let one of the vertex degrees be \(p-1\), where \(p\) is a prime. Then \(|G|=|H|\)._
Proof.: By assumption there are two elements \(x\in G\) and \(y\in H\) such that \(\deg(x)=\deg(y)=p-1\). Then using Proposition 2.5 we deduce that \(R(G)=R(H)=1\). Since the degree patterns coincide, the two graphs have the same number of vertices, so \(|G|-1=|H|-1\), and hence \(|G|=|H|\).
Finally, we utilize Proposition 5.4 and Theorem 4.11 to deduce the following corollary:
**Corollary 5.5**.: _Let \(G\) be a finite insoluble simple group, excluding \(A_{8}\), \(\mathrm{PSL}(3,4)\), \(B_{n}(q)\), and \(C_{n}(q)\), where \(q\) is odd and \(n\geq 3\). Additionally, let \(H\) be an insoluble group such that \(\mathrm{D}_{s}(G)=\mathrm{D}_{s}(H)\). If one of the degrees of vertices is \(p-1\), where \(p\) is a prime, then \(H\cong G\)._
|
2305.16329 | A simple protocol to automate the executing, scaling, and reconfiguration of Cloud-Native Apps | We propose a simple protocol for Service Mesh management. The protocol specification consists of the formats of messages, and the actions taken by senders and recipients. The idea is that microservices of Cloud-Native Application should be also involved in configurations of their communication sessions. It does not interfere with the business logic of the microservices and requires only minor and generic modifications of the microservices codebase, limited only to network connections. Thus, sidecars are no longer needed, which is in line with the current trends, e.g. Cilium Service Mesh. This article presents the full formal specification of the proposed protocol SSMMP/v1.1. | Stanislaw Ambroszkiewicz, Waldemar Bartyna | 2023-05-16T18:47:16Z | http://arxiv.org/abs/2305.16329v3 |
###### Abstract
We propose a simple protocol for Service Mesh management. The protocol specification consists of the formats of messages, and the actions taken by senders and recipients. The idea is that microservices of Cloud-Native Application should be also involved in configurations of their communication sessions. It does not interfere with the business logic of the microservices and requires only minor and generic modifications of the microservices codebase, limited only to network connections. Thus, sidecars are no longer needed, which is in line with the current trends, e.g. Cilium Service Mesh [1]. This article presents the full formal specification of the proposed protocol.
Cloud-Native Applications, abstract architecture, management protocols, Service Mesh
## I Introduction
This work is a continuation of [2], our research into the foundations of the Cloud-Native paradigm.
Microservices (as a software architecture) were first developed from the service-oriented architecture (SOA) and the concept of Web services (HTTP and WSDL) by Amazon in the early 2000s. Hence the name AWS, short for Amazon Web Services. Perhaps Amazon did not invent microservices alone at that time; however, AWS has become the most successful application of microservices to Cloud computing.
Microservice architecture comprises services that are fine-grained and protocols that are lightweight. The architecture inherited the HTTP protocol (in the form of REST) from Web services as the basic means of data transport between microservices. Cloud Native Application (CNApp) is a distributed application composed of microservices and deployed in the Cloud.
A RESTful API is a gate for transferring data between microservices. The meaning (business logic) of the actual communication protocols between microservices is hard-coded into the microservices. This is why no special requirements are imposed on a microservice codebase when deploying a CNApp, except this very one: every microservice also needs to be an HTTP server, i.e. it needs to include a web server as part of its codebase. The same applies to the HTTP client side.
To run a microservice, we need to start an HTTP server dedicated only to that microservice. The server must run all the time, even if there are no client requests. The microservice also needs to run all the time, even when it is not needed.
HTTP is an application layer protocol, implemented over TCP, except for HTTP/3, which runs over UDP. What is so special and unique about HTTP that it must be used to transfer data between microservices? Why can't the transfer be done over raw TCP?
HTTP is a key component of the Web. A CNApp usually has one interface to the Web: its API Gateway. It seems that there is no reason to maintain the Web structure behind the API Gateway and inside the Cloud cluster, especially for data transfer between microservices. This view is gaining more and more attention; see, for example, Butcher 2022 [3].
### _The problem and related work_
How to automate the executing, scaling and reconfiguration of Cloud-Native Apps in a general way, but not at the software level? Following Mulligan 2023 [4], can this automation be accomplished by implementing a generic protocol that extends the networking stack, on the top of TCP/IP?
The solution we propose is the Simple Service Mesh Management Protocol (SSMMP) as a specification to be implemented in a Cloud cluster. The specification consists of the formats of the messages exchanged between the parties to the conversation (actors) of the protocol, and the actions taken by the senders and receivers of the messages. The actors are: Manager, agents (residing on the nodes that make up the cluster), and instances of microservices running on these nodes. All these actors are almost the same as in Kubernetes clusters. The main difference is the abstract architecture of CNApps (introduced in Section II below) and simple, general rules that allow for automation.
Let us take a brief look at the current work on this topic. A Service Mesh is an infrastructure for CNApps that makes it possible to transparently add security, observability, and management of network traffic between the microservices without interfering with the codebase of the microservices.
Usually, Service Mesh Managers are built on the top of Kubernetes and Docker. For an extensive overview, see, e.g. _Service Mesh Comparison_[5], and _8 Best Service Mesh Managers to Build Modern Applications_[6].
Each microservice is equipped with its own local proxy (called a sidecar). Sidecars can be automatically injected into Kubernetes pods and can transparently capture all microservice traffic. The sidecars form the data plane of the Service Mesh.
The control plane of Service Mesh is (logically) one manager responsible for configuring all proxies in the data plane to route traffic between microservices and load balancing, and to provide resiliency and security.
Linkerd [7] and Istio [8], both extending Kubernetes, are the best known and most popular open source software platforms for realizing a Service Mesh. Istio uses the Envoy proxy [9], while Linkerd uses its own specialized micro-proxies.
Cilium [10] is also an open source software platform for cloud native environments such as Kubernetes clusters. It is claimed that by exploring and applying eBPF (a new revolutionary Linux kernel technology) and WebAssembly, Cilium can challenge Docker and Kubernetes, see [11] and [12]. Envoy proxies are not necessary as eBPF in the kernel can replace them. Istio Ambient Mesh (see [13] and [14]) also follows this idea.
While all modern Service Meshes are on the open source software level, the recent idea (see, e.g., Mulligan 2023 [4]) that the service mesh is now becoming part of the networking stack is extremely interesting. It should be emphasized that the networking stack is primarily based on protocol specifications, not software.
Open Application Model [15] is "_a platform-agnostic open source documentation that defines cloud native applications. OAM is a new layer (abstraction) on top of Kubernetes. Designed to solve how distributed apps are composed and transferred to those responsible for operating them. Focused on application rather than container or orchestrator, Open Application Model brings modular, extensible, and portable design for defining application deployment with higher level API._"
While interesting in its intent, it is still just an idea suggesting that the operational behaviors of a CNApp need to be a part of its definition, independent of its deployment. A modern application should include management, monitoring, and observability components. Moreover, this behavior should be defined in the codebase of the CNApp by the developer; see also Toffetti et al. 2017 [16].
The topic of self-management of CNApps in service mesh has been studied for quite a long time. Before the rise of the Cloud, it was called management of component-based distributed systems. There are many interesting and important works in the literature on the subject, e.g. Di Cosmo et al. 2014 [17], Duran and Salaun 2016 [18], Toffetti et al. 2017 [16], Etchevers et al. 2017 [19], Brogi et al. 2018 [20], Brogi et al. 2019 [21], Kosinska et al. 2020 [22], Wojciechowski et al. 2021 [23], Hadded et al. 2022 [24], Brogi et al. 2022 [25], CNCF 2022 [26], Sedhpour and Townend 2022 [27], Alboqmi et al. 2022 [28], Gawel 2022 [29] to mention only some of them.
To complete this short review, NGINX Modern Reference Architectures [https://github.com/nginxinc/kit-reference-architectures/](https://github.com/nginxinc/kit-reference-architectures/) should also be mentioned. It is an interesting idea, but still far from a formal specification.
Let's present the idea of our SSMMP. There are no sidecars and no proxies. Each microservice instance communicates (according to SSMMP) directly with the agent running on the same host.
Execution of microservices, as well as their replication and shutdown, is controlled and monitored by Manager via its agents. A similar idea appears in Duran and Salaun 2016 [18].
Communication sessions between microservices (determined by the CNApp business logic) are controlled and monitored by Manager through its agents.
Each communication session is (like in TCP) connection-oriented. A connection between client and server needs to be established before data can be sent during the session. The server is listening for clients.
Dynamic management of such communication sessions is the essence of the proposed protocol.
A rough description of the protocol is provided in the next two Sections II and III. Then, the generic functionality of the protocol is presented in Section IV. The Section V provides the formal specification of protocol messages and the corresponding actions to be performed. The final Section VI is a short summary.
## II Microservices
Microservices constitute an architectural pattern where a complex and sophisticated application (CNApp) is made up of a collection of fine-grained, self-contained microservices that are developed and deployed independently of each other. They communicate over the network using protocols in accordance with the business logic of the application.
Note that CNApp is a network application where microservices communicate with each other by exchanging messages (following CNApp's business logic) using dedicated, specific protocols implemented on top of the network protocol stack. This is usually TCP/UDP/IP. Due to its ubiquity, HTTP, implemented on the top of TCP/IP, can also be used as a transport protocol for these messages.
Each of these protocols is based on the client-server model of communication. This means that the server (as part of a running microservice on a host with a network address) is listening on a fixed port for a client that is a part of another microservice, usually running on a different host. Since client initiates a communication session with the server, this client needs to know the address and port number of the server.
A single microservice can implement and participate in many different protocols, acting as a client and/or as a server.
Thus, a microservice can be roughly defined as a collection of servers and clients of the protocols it participates in, and its own internal functionality (business logic).
Usually, communication protocols (at application layer) are defined as more or less formal specifications independently of their implementation.
Let a _protocol_ be represented by its two closely related parties to the conversation: the server \(S\) and the client \(P\), which are to be implemented in two microservices.
Formally, let protocol be denoted as a pair of \((P,S)\) with appropriate superscripts and/or subscripts if needed. After implementation, they are integral parts (modules) of microservices that communicate using this protocol.
_Abstract inputs_ of a microservice can be defined as a collection of the servers (of the protocols) it implements:
\[IN:=(S_{1},S_{2},\ldots S_{k})\]
_Abstract outputs_ of a microservice is defined as a collection of the clients (of the protocols) it implements:
\[OUT:=(P_{1}^{\prime},P_{2}^{\prime},\ldots P_{n}^{\prime})\]
To avoid confusion, the server part and the client part of a protocol will be renamed. Components of abstract input will be called _abstract sockets_, whereas components of abstract output will be called _abstract plugs_.
An abstract plug (of one microservice) can be associated to an abstract socket (of another microservice) if they are two complementary parties of the same communication protocol. There can be multiple abstract plugs for the same abstract socket.
Fig. 1 presents a directed acyclic graph representing a workflow of microservices that comprise a simple CNApp. The edges of the graph are of the form (abstract plug \(\rightarrow\) abstract socket). They are directed, which means that a client (of a protocol) can initiate a communication session with a server of the same protocol. These directions do not necessarily correspond to the data flow. This means that if a communication session is established, data (protocol messages) can also flow in the opposite direction, i.e. from an abstract input (abstract socket) to an abstract output (abstract plug).
Let us formalize the concept described above.
_Microservice_ is defined as
\[A:=(IN,\mathcal{F},OUT)\]
where \(IN\) is the abstract inputs of the microservice, \(OUT\) is the abstract outputs, and \(\mathcal{F}\) denotes the business logic of the microservice.
Incoming messages, via abstract sockets of \(IN\) or/and via abstract plugs of \(OUT\), invoke (as events) functions that comprise the internal functionality \(\mathcal{F}\) of the microservice. This results in outgoing messages sent via \(IN\) or/and \(OUT\).
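To make this abstract view concrete, here is a minimal Java sketch of a microservice as \(A:=(IN,\mathcal{F},OUT)\); all type and method names are illustrative and not part of any specification:

```java
import java.util.List;

// Abstract socket: the server side of a communication protocol; it listens
// on a port assigned by Manager when the instance is executed.
record AbstractSocket(String protocolName) {}

// Abstract plug: the client side of a communication protocol; the name of
// the destination service is supplied as a configuration parameter.
record AbstractPlug(String protocolName) {}

// A microservice A = (IN, F, OUT).
interface Microservice {
    List<AbstractSocket> in();  // IN  = (S_1, ..., S_k)
    List<AbstractPlug> out();   // OUT = (P'_1, ..., P'_n)

    // F: an incoming message, via an abstract socket of IN or an abstract
    // plug of OUT, invokes internal functions and may result in outgoing
    // messages sent via IN or OUT.
    void onMessage(String endpointName, byte[] message);
}
```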
The proposed definition of microservice is at a much higher level of abstraction than the OASIS standard TOSCA [30]. Generally, we distinguish three kinds of such microservices.
1. The first one is for API Gateways. They are entry points of CNApp for users. Usually, \(IN\) of API Gateway has only one element. Its functionality comprises in forwarding users requests to appropriate microservices. Therefore, API Gateway is supposed to be stateless.
2. The second kind consists of regular microservices. Their \(IN\) and \(OUT\) are not empty. These microservices are also supposed to be stateless. Persistent data (states) of these microservices should be stored in backend storage services (BaaS).
3. The third kind is for backend storage services (BaaS) where all data and files of CNApp are stored. Their \(OUT\) is empty.
From now on, all of them are also called _services_ of CNApp.
Fig. 1 illustrates a CNApp composed of one API Gateway, five stateless regular microservices, and two backend storage services (BaaS).
Note that the edges denote abstract connections and can also be seen as abstract compositions of services within a workflow.
## III Abstract architecture of CNApp
_Abstract graph of CNApp_ is defined as the following directed labeled multi-graph.
\[\mathcal{G}:=(\mathcal{V},\mathcal{E})\]
where \(\mathcal{V}\) and \(\mathcal{E}\) denote respectively Vertices and Edges.
* Vertices \(\mathcal{V}\) is a collection of names of services of CNApp, i.e. elements denoted in Fig. 1 as: A (the API Gateway); regular microservices: service B, service-1, service-2, service-3, and service-4; and BaaS services: BaaS-1 and BaaS-2.
* Edges \(\mathcal{E}\) is a collection of labeled edges of the graph. Each edge is of the form: \[(A,\ (P,S),\ B)\] where \(A\) and \(B\) belong to \(\mathcal{V}\), and \((P,S)\) denotes a protocol. That is, \(P\) belongs to \(OUT\) of \(A\), and \(S\) belongs to \(IN\) of \(B\). Hence, the edges correspond to _abstract connections_ between microservices. The direction of an edge represents the client-server order of establishing a concrete connection. There may be multiple edges (abstract connections) between two vertices.
The above graph is an abstract view of a CNApp. Vertex is a service name, whereas an edge is an abstract connection consisting of names of two services and a communication protocol name.
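For illustration, the abstract graph of Fig. 1 could be encoded as a plain list of labeled edges in Java; only the edges named explicitly later in the text are listed, and the connections into BaaS-1 and BaaS-2 are omitted:

```java
import java.util.List;

// An edge (A, (P, S), B): the plug P of service A may connect to the
// socket S of service B.
record AbstractConnection(String source, String plug, String socket, String dest) {}

class Fig1Graph {
    static final List<AbstractConnection> EDGES = List.of(
            new AbstractConnection("A", "P",  "S",  "B"),
            new AbstractConnection("A", "P2", "S2", "service-1"),
            new AbstractConnection("A", "P3", "S3", "service-2"),
            new AbstractConnection("B", "P4", "S4", "service-4"),
            new AbstractConnection("B", "P5", "S5", "service-4"),
            new AbstractConnection("service-1", "P6", "S6", "service-4"),
            new AbstractConnection("service-1", "P7", "S7", "service-3"));
}
```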
Fig. 1: Abstract graph of CNApp - a simple example

An implementation of abstract connection \((A,\ (P,S),\ B)\) in a running CNApp may result in multiple concrete plugs (in an instance of service \(A\)) corresponding to this abstract plug \(P\). Each of the concrete plugs is connected to a concrete socket of an instance of service B. This connection is called a communication session and will be explained in detail in Section IV-A.
Abstract graphs have been defined as acyclic. This is relevant to the proposed protocol.
Initial vertices of the abstract graph correspond to API Gateways (entry points for users), whereas the terminal vertices correspond to backend storage services (BaaS) where all data and files of the CNApp are stored.
The vertices representing regular microservices are between the API gateways and the backend storage services (BaaS).
Scaling through replication and reduction (closing replicas) of a service forces it to be stateless. The reason is that if the service is stateful, then closing (crashing) a replica causes it to lose its state. We assume that API Gateways and regular microservices are stateless and can be replicated, i.e. multiple instances of such a service can run simultaneously.
To run CNApp, instances of its services must first be executed, then abstract connections can be configured and established as real connections, and finally protocol sessions (corresponding to theses connections) can be started.
Some services and/or connections may not be used by some executions. Temporary protocol sessions can be started for already established connections (and then closed along with their connections) dynamically at runtime. Multiple service instances may be running while others are shutting down. This requires dynamic configuration (network addresses and port numbers) of the plugs and sockets of the instances. The novelty of SSMMP lies precisely in the smart use of these configurations.
## IV Simple service mesh management protocol - SSMMP
Before we get to the formal specification of SSMMP in Section V, let's introduce the protocol in an intuitive, slightly informal way. The main actors of the protocol are: Manager, agents, and running instances of services (API Gateways, regular microservices, and BaaS services), see Fig. 2.
There may be two (or more) running instances of the same service. Hence, the term service refers to the bytecode rather than to a particular running instance.
Manager communicates only with the agents. Agent, on a node, communicates with all service instances running on that node. A service instance (running on a node) can only communicate (via SSMMP) with its agent on that node.
Agent has a service repository at its disposal. It consists of bytecodes of services that can be executed (as service instances) on this node by the agent. The agent (as an application) should have operating system privileges to execute applications and to kill application processes.
In general, each agent acts as an intermediary in performing the tasks assigned by the Manager. All service instance executions as well as shutting down running instances are controlled by the Manager through its agents.
The key element of SSMMP is the concept of _communication session_ understood jointly as establishing a connection and then starting a protocol session on this connection.
The process of establishing and closing such sessions is controlled by the Manager through its agents. This is explained in detail below.
Fig. 2: Simple protocol to automate the executing, scaling, and reconfiguration of Cloud-Native Apps
### _Communication sessions_
A communication session is an implementation of an abstract connection \((A,(P,S),B)\), that is, an edge of the abstract graph of the CNApp. The service name \(B\) (as a parameter of the abstract connection) is not encoded explicitly in service \(A\). It must be given as a configuration parameter (according to the abstract graph) for the execution of an instance of service \(A\). Thus, service \(A\), like all services, is supposed to be generic, i.e. \(A\) may be used as a component of another CNApp for a connection, say \((A,(P,S),C)\), where \(C\) is different from \(B\).
Let an instance \(i\) of service \(A\) and an instance \(j\) of service \(B\) already be running. To implement \(P\) (as a client) in instance \(i\), this instance needs a translation of the parameter \(B\) into the network address of the host where instance \(j\) of service \(B\) is running, and the port number on which the socket \(S\) (as a server) of instance \(j\) listens for clients.
The port number of this socket, and generally the port numbers of all sockets of \(B\), are configured by Manager as parameters dedicated to that very instance execution. This allows multiple instances of the same service to run on the same node (same network address but different port numbers for the sockets).
Once the translation of \(B\) into the network address and the port number is done, the communication session can be established. This translation is a part of the protocol we are going to introduce and is explained in detail below.
Note that the above establishing of a communication session for this abstract connection and this instance of \(A\) can be done multiple times if \(P\) is used in a thread constructor in the codebase of \(A\). Each time a new thread is started, a new communication session must be established for that particular plug \(P\) in that new thread.
In Java, the constructor of plug \(P\) (for the TCP connection) is Socket(InetAddress address_B, int port).
At the request of instance \(i\) of \(A\) (for establishing a communication session with an instance of \(B\)), the values of the parameters address_B and port are sent by Manager to instance \(i\) via its agent.
The address_B is IP address of the node where the instance \(j\) of \(B\) is already running, and port is the port number of the socket \(S\) of \(j\).
In Java, the constructor of socket \(S\) (for the TCP connection) is of the form ServerSocket(int port). The port number has been assigned to \(S\) by Manager and sent to the agent residing on the node (with the IP address address_B) as a parameter to execute the instance \(j\) of \(B\).
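Putting the two constructors together, a minimal sketch of the endpoints of one communication session over TCP (the variable names follow the text; error handling is omitted):

```java
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

class TcpSessionSketch {
    // Instance j of B: socket S listens on the port k assigned by Manager.
    static ServerSocket socketS(int k) throws Exception {
        return new ServerSocket(k);
    }

    // Instance i of A: plug P can be constructed only after Manager (via
    // the agent) has translated the service name B into (address_B, k).
    static Socket plugP(InetAddress address_B, int k) throws Exception {
        return new Socket(address_B, k);
    }
}
```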
Fig. 1 may serve as a working example. It is assumed that the CNApp abstract graph is known to Manager before an execution of the CNApp. Manager starts (through its agent) with an instance of API Gateway (called service A in Fig. 1). To execute the instance (denoted \(i\)) of service \(A\), the agent needs configuration (assignment) of the plugs of \(i\) according to the abstract connections (edges) of the abstract graph depicted in Fig. 1. Manager sends to the agent the following abstract connections of the graph: \((A,(P,S),B)\), \((A,(P_{2},S_{2}),\mathrm{service-1})\) and \((A,(P_{3},S_{3}),\mathrm{service-2})\).
Then, Manager can run (via its agents) instances of subsequent services, i.e. service B, service-1, service-2, service-3, service-4, BaaS-1 and BaaS-2; (according to the partial order of the abstract graph) only as needed.
In order to execute an instance (denoted \(j\)) of \(B\), Manager needs to send to its agent residing on a node (with a network address) the following configuration: the port number for socket \(S\), and the abstract connections \((B,(P_{4},S_{4}),\mathrm{service-4})\) and \((B,(P_{5},S_{5}),\mathrm{service-4})\).
Then, the agent can execute the instance \(j\) of \(B\). The network address of \(S\) is the address of this very node where the instance \(j\) is running.
If the instance \(i\) of service A needs to establish a communication session for abstract connection \((A,(P,S),B)\), then an instance of B, say \(j\), must be already running. On the request of \(i\), the network address of the node where \(j\) is running and the port number of \(S\) (both assigned to \(j\) by Manager), are sent (via the agent) to instance \(i\) that initializes the session.
If the business logic of instance \(j\) of service B needs to establish communication session for abstract connection \((B,~{}(P_{4},S_{4}),~{}\mathrm{service-4})\), then an instance, say \(l\), of service-4 needs to be already running with appropriate configuration of its plugs and sockets. The network address of the node where \(l\) is already running, and the port number of \(S_{4}\) (both assigned by Manager to \(l\)), are sent by Manager (via the agent of \(j\)) to instance \(j\) that initializes the session.
### _More details about SSMMP_
At a request of Manager, an agent can execute instances of services whose bytecodes are available in its repository and shut down these instances. The agent can monitor the functioning of service instances running on its node (in particular, their communication sessions) and report their status to Manager.
Manager can also shut down (via its agent) a running instance that is not being used, is malfunctioning, or is being moved to another node.
Manager controls the execution of CNApps in accordance with the policy defined by the Cloud provider.
Current state of the control as well as its history are stored in a dedicated database DB of Manager. Manager knows the service repositories of all its agents.
The Knowledge-base of Manager consists of abstract graphs of CNApps, i.e. the CNApps that can be deployed on the cluster comprising all the nodes.
The current state of any running instance of service is stored in Manager's database, and consists of:
1. open communication sessions and their load metrics;
2. observable (healthy, performance and security) metrics, logs and traces.
If a service instance is running and not being used, there was no reason to start it. Therefore, service instances should only be started when needed and shut down when no longer needed. The same applies to establishing communication sessions. These two aspects are closely related, i.e. if all sessions of
a running service instance (which is not an API Gateway) are closed or have not been established for some predefined time period, then the instance should be closed.
_Current state_ of a running CNApp is defined by running service instances, and already established (and not closed) communication sessions between the instances. It is the basis for management decisions made by Manager. There are four kinds of such decisions:
* execution and shutdown of service instances,
* load balancing by multiple instance executions of stateless services, and closing some of them,
* reconfiguration by invoking instances of needed services, and shutting down instances that are no longer in use,
* and establishing or closing communication sessions.
These decisions (mutually interrelated) are forwarded to appropriate agents as tasks to be accomplished.
Usually, the API Gateway is the Web interface for users of a CNApp. In Fig. 1 socket \(S_{1}\) implements an HTTP server listening at the default port number 80. Multiple instances of an API Gateway can be executed (on the basis of DNS load balancing controlled by Manager) so that user requests are distributed across many instances of the API Gateway. The alias (the name) of an API Gateway and the port number (by default it is 80) are supposed to be known to all users of the CNApp. Execution of an instance of the API Gateway is done in the following way.
1. Manager sends a request to an agent residing on a node to execute an instance of the API Gateway. The request includes a configuration of the plugs of that API Gateway. In Fig. 1 the plugs are \(P\), \(P2\), and \(P_{3}\). It is supposed that the bytecode of that API Gateway belongs to the agent's repository.
2. The agent executes the instance, and sends the confirmation to Manager.
3. Manager stores the instance identifier (as a canonical name) and its network address in its database, and adds two records to its DNS: a record of type A, and a record of type CNAME. Thus, a request to resolve the API Gateway name (alias) via DNS is answered by sending the network address of one of the running API Gateway instances.
Note that no fixed port numbers for sockets are a priori assigned to any regular service or any BaaS service. The exceptions are API Gateways, where port numbers are fixed and should be well known to users.
Execution of an instance of a regular service or a backend storage service is done as follows. Let us consider service-\(1\) from Fig. 1 as an example.
1. Manager sends a request to an agent residing on a node to execute an instance of service-\(1\). The request includes the configuration of the port numbers (assigned by Manager) for all sockets of service-\(1\); in this case, there is one socket, \(S_{2}\). The configuration of the plugs of service-\(1\) (i.e. plugs \(P_{6}\) and \(P_{7}\)) consists in assigning to each such plug a service name according to the abstract graph of the CNApp. In Fig. 1, plug \(P_{6}\) is assigned to service-\(4\) according to the edge \((\text{service-}1,(P_{6},S_{6}),\text{service-}4)\). Similarly, the plug \(P_{7}\) is assigned to service-\(3\) according to the edge \((\text{service-}1,(P_{7},S_{7}),\text{service-}3)\). The bytecode of service-\(1\) belongs to the agent's repository.
2. The agent executes an instance of service-\(1\) for these configurations.
3. Agent sends to Manager the confirmation of the successful execution.
4. Manager stores in its database DB the following items: the name of the service, identifier of the instance, the network address of the instance, the configuration of port numbers of the sockets, and the configuration of the plugs. This is crucial for establishing new communication sessions that this instance will participate in.
Establishing a communication session for abstract connection \((A,(P,S),B)\) is done in the following way.
1. An already running instance \(i\) of service \(A\) sends a request to its agent for: the network address of a node where an instance of \(B\) is running, and the port number of the socket \(S\) of this instance.
2. The request is forwarded to Manager that is responsible either to choose (from its DB) a node where an instance of \(B\) is already running, or to execute new instance of service \(B\) on a node.
3. Manager sends the network address and the port number of \(S\) of a running instance (denoted \(j\)) of \(B\) to the agent. Then, the agent forwards it to instance \(i\) of \(A\), where a concrete plug for the abstract plug \(P\) can now be constructed, and the communication session can be established to socket \(S\) of \(j\). For the TCP/IP networking stack, this is exactly the establishment of a TCP connection.
Closing an existing communication session (between running instance \(i\) of \(A\) and a running instance \(j\) of \(B\)) for abstract connection \((A,(P,S),B)\) can be initiated by one of the instances, like for the TCP connection. Then, the services must report the successful closing to their agents. These reports are forwarded to Manager.
The session closing may be also requested by Manager via the agents. The agents forward the request to the instances. The instances close the session, and report this to their agents. The agents forward these reports to Manager.
Manager can request the agent to shut down a running service instance. For a graceful shutdown of a running instance, all its communication sessions should be closed beforehand. Then, an internal method (like System.exit() in Java) can be invoked to shut down the instance.
Manager can enforce (via its agent) a hard shutdown of a running instance of a service by killing the instance process. This can also be done when a running instance fails, e.g. when it does not respond to its agent. After that, all communication sessions in which that instance participated should be closed by the participants on the opposite sides of the sessions at the request of Manager via its agents. If that instance was
an instance of an API Gateway, Manager removes the records (related to that instance) from its DNS.
Let us consider the case of a failure of an agent or a node. Since Manager cannot communicate (via its agent) with the service instances running on that node, the malfunctioning node must be isolated from the running CNApp. This is done by closing all communication sessions in which the instances running on that node have participated. This closing can be done only partially, by the opposite sides of the sessions, at the request of Manager via its agents. Then, the node is isolated and all instances that were running on that node are regarded as closed by Manager. An isolated node requires intervention of the Cloud provider to recover and to return to the cluster.
Each agent must register with Manager so that the network address of its node and its service repository are known to Manager.
After a service instance is executed, it initiates an SSMMP communication session with its agent. The network address of the agent is, of course, localhost for the all service instances running on the same node (host). The port number of the agent (to communicate with its service instances) is fixed for SSMMP, and is the same for all the agents.
## V Specification of the protocol messages
From now on, we will use normal (not italic) letters to denote services, their instances, plugs, sockets and connections, like A, P, S, B, i, and j.
There are two general kinds of messages: a request, and a response to this request. All messages are strings. A message consists of a sequence of lines. A line is of the form:
line_name: contents
The first line is for message type. The second line is for message identifier (an integer). The identifier is unique, and is the same for request and its response.
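Since the message format is line-oriented, parsing is straightforward; here is a minimal Java sketch (the class name is illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

class SsmmpMessage {
    // Split a message into its "line_name: contents" pairs, keeping line
    // order (the first line is the type, the second the message identifier).
    static Map<String, String> parse(String message) {
        Map<String, String> fields = new LinkedHashMap<>();
        for (String line : message.split("\n")) {
            int colon = line.indexOf(':');
            if (colon > 0) {
                fields.put(line.substring(0, colon).trim(),
                           line.substring(colon + 1).trim());
            }
        }
        return fields;
    }
}
```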
### _Initialization of the protocol_
Let a CNApp be fixed. The abstract graph of CNApp is known to Manager. For each service of CNApp, there are agents that can execute instances of this service, i.e. the bytecode of this service belongs to the agents repositories.
Each entry of the repository is of the form (service-name, list of socket names, list of plug names, bytecode of service).
An agent registers with Manager and sends the list of service names of its repository. Request for the registration from agent to Manager is of the following form.
type: initiation_request
message_id: [integer]
agent_network_address: [IPv6]
service_repository: [service name list]
In square brackets of message_id: [integer] there is an element of a datatype, in this case it is a positive integer determined by the agent. In the case of agent_network_address: [IPv6], it is a concrete IPv6 network address. For the line service_repository: [service name list], the contents denotes a sequence of the form (name_1; name_2;...name_k).
Registration response from Manager to agent is as follows.
type: initiation_response
message_id: [integer]
status: [status code]
Universal HTTP response status codes are proposed to be adapted to SSMMP. Their meaning depends on response types. Each code consists of three digits, and is of the form:
1xx informational response,
2xx successful - the request was successfully received, understood, and accepted,
3xx redirection - further action needs to be taken in order to complete the request,
4xx requester error - the request contains bad syntax or cannot be fulfilled,
5xx respondent error.
### _Execution of service A_
Manager assigns a unique identifier, say i (a positive integer), to a new instance of A to be executed. Manager also determines the port numbers for all sockets of the instance. This is called the socket configuration, and it is a sequence of pairs:
(socket_name, port_number)
Configuration of plugs is a sequence of pairs:
(plug_name, service_name)
assigned to instance i by Manager according to the CNApp abstract graph. This means that each abstract plug is assigned the name of the service where the corresponding abstract socket is located. Manager also determines the network address (denoted NA_i) of the node where the instance i of A is to be executed by the agent residing on that node.
Request from Manager to the agent to execute the instance i of service A is as follows.
type: execution_request
message_id: n
agent_network_address: NA_i
service_name: A
service_instance_id: i
socket_configuration: [configuration of sockets]
plug_configuration: [configuration of plugs]
Action of the agent: execution of instance i of the service A for these configurations of sockets and plugs.
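One plausible realization of this action, which SSMMP itself does not prescribe: the agent starts the instance from its local repository and passes the Manager-assigned configurations as command-line arguments. The jar layout and the flag names below are hypothetical:

```java
class AgentExecutor {
    // Launch instance i of service A from the local repository; SSMMP fixes
    // only the message formats, not how the agent starts a process.
    static Process execute(String serviceName, String instanceId,
                           String socketConfig, String plugConfig)
            throws java.io.IOException {
        return new ProcessBuilder(
                "java", "-jar", "repository/" + serviceName + ".jar",
                "--instance-id", instanceId,
                "--sockets", socketConfig,  // (socket_name, port_number) pairs
                "--plugs", plugConfig)      // (plug_name, service_name) pairs
                .inheritIO()
                .start();
    }
}
```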
Response from agent to Manager:
type: execution_response
message_id: n
status: [status code]
### _Communication session establishment_
Establishing a communication session for abstract connection \((A,(P,S),B)\) between instance i of A, and instance j of B.
Let us suppose that instance i of service A is already running on the node that has network address NA_i.
Request from the instance i of service A to its agent:
type: session_request
message_id: n
sub_type: service_to_agent
source_service_name: A
source_service_instance_id: i
source_plug_name: P
dest_service_name: B
dest_socket_name: S

The request is forwarded to Manager by the agent:
```
type: session_request
message_id: n
sub_type: agent_to_Manager
agent_network_address: NA_i
source_service_name: A
source_service_instance_id: i
source_plug_name: P
dest_service_name: B
dest_socket_name: S
```
If there is no instance of service B already running, then Manager sends a request to an agent to execute an instance j of service B. Otherwise, i.e. if an instance j of service B (on the node with network address NA_j and the port k for S) is already running, then Manager sends the following response to the agent:
```
type: session_response
message_id: n
sub_type: Manager_to_agent
status: [status code]
dest_service_instance_network_address: NA_j
dest_socket_port: k
```
Response from agent to instance of A:
```
type: session_response
message_id: n
sub_type: agent_to_service
status: [status code]
dest_service_instance_network_address: NA_j
dest_socket_port: k
```
Action of instance i of A: initialize \((P,S)\) session to instance j of B. The port number of plug P is determined; let it be denoted by m.
By default, the socket S of instance j of B accepts the session establishment. This acceptance will be known to Manager if the session acknowledgment is sent by instance i to Manager via the agent.
Action of instance j of B: accept the establishment of the \((P,S)\) session to instance i of A. A new socket port number (say l) is assigned to this session; this is exactly the same as for a TCP connection.
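The establishment step maps naturally onto TCP sockets. A sketch of both endpoints on one host follows; note that plain TCP hands the acceptor a new connected socket on the same port, which here stands in for SSMMP's dedicated new port l. All port values are invented.

```python
# Sketch of the (P, S) session establishment over plain TCP on localhost.
import socket, threading

K = 7001  # listening port assigned by Manager to socket S (invented value)

srv = socket.create_server(("127.0.0.1", K))  # socket S listens on port k

def socket_S():
    conn, (peer, plug_port_m) = srv.accept()  # accept the (P, S) session
    with conn:
        conn.sendall(b"accepted")

t = threading.Thread(target=socket_S)
t.start()

# Plug P of instance i initializes the session; the OS picks its port m.
with socket.create_connection(("127.0.0.1", K)) as plug_P:
    m = plug_P.getsockname()[1]  # the plug port reported in the session_ack
    assert plug_P.recv(16) == b"accepted"
t.join()
srv.close()
```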
Instance j of B gets to know the values of the parameters:
```
source_service_instance_network_address: NA_i
source_plug_port: m
```
Instance i of A gets to know the value of the parameter:

dest_socket_new_port: l

The acknowledgment of the established session is sent by instance i of A to its agent.
```
type: session_ack
message_id: n
sub_type: service_to_agent
status: [status code]
source_plug_port: m
dest_socket_new_port: l
```
Note that message_id is n (determined by instance i of A), and is the same for all the above request, response and acknowledgment messages.
The complete list of the parameters of the session is as follows.
```
source_service_name: A
source_service_instance_network_address: NA_i
source_service_instance_id: i
source_plug_name: P
source_plug_port: m
dest_service_name: B
dest_service_instance_network_address: NA_j
dest_service_instance_id: j
dest_socket_name: S
dest_socket_port: k
dest_socket_new_port: l
```
The number k is the port number (assigned by Manager) of the socket S (of instance j of B) for listening to clients. The new port l of the socket S is dynamically assigned by instance j of B solely for the communication session with P of instance i of A.
The instance i of A knows the above parameters except dest_service_instance_id: j.
The instance j of B knows the above parameters except:

source_service_instance_id: i
source_service_name: A
Manager knows all parameters of the session.
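For reference, the session record can be captured in a small data structure; the field names follow the listing above, and the concrete values are invented.

```python
# The full session parameter record, as a dataclass for illustration.
from dataclasses import dataclass

@dataclass
class Session:
    source_service_name: str                       # A
    source_service_instance_network_address: str   # NA_i
    source_service_instance_id: int                # i
    source_plug_name: str                          # P
    source_plug_port: int                          # m
    dest_service_name: str                         # B
    dest_service_instance_network_address: str     # NA_j
    dest_service_instance_id: int                  # j
    dest_socket_name: str                          # S
    dest_socket_port: int                          # k (listening)
    dest_socket_new_port: int                      # l (dedicated to session)

s = Session("A", "::1", 1, "P", 50001, "B", "::2", 2, "S", 7001, 50002)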
### _Closing a communication session_
Suppose that there is an already established session of the protocol \((P,S)\) between running instance i of A and instance j of B.
#### V-D1 Session closing by instance i of A, or by instance j of B
Each of the instances can initiate session closing, as in a TCP connection, according to its own business logic. If instance i does so, it informs instance j of B, which does the same; and vice versa. This is a regular closing of the session.
A session may also be closed by only one party of the communication, due to a failure of the other party or a broken link making the communication between the two parties impossible. In any of these cases, the running instance sends a message to its agent informing it that the session was closed. The agent then forwards this message to Manager.
The message from instance i of A to its agent is as follows.
```
type: source_service_session_close_info
message_id: n
sub_type: source_service_to_agent
source_service_name: A
source_service_instance_network_address: NA_i
source_service_instance_id: i
source_plug_name: P
source_plug_port: m
dest_service_name: B
dest_service_instance_network_address: NA_j
dest_socket_name: S
dest_socket_port: k
dest_socket_new_port: l
```

Note that the instance i of A sends all the parameters of the session known to it. The agent forwards the info to Manager:
```
type: source_service_session_close_info
message_id: n
sub_type: agent_to_Manager
source_service_name: A
source_service_instance_network_address: NA_i
source_service_instance_id: i
source_plug_name: P
source_plug_port: m
dest_service_name: B
dest_service_instance_network_address: NA_j
dest_socket_name: S
dest_socket_port: k
dest_socket_new_port: l
```
The value of the parameter message_id: n is the same for both messages above, and is determined by instance i of A. Manager can determine the identifier j of the instance of B on the basis of the port numbers m, k and l.
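This lookup suggests that Manager indexes its sessions by the port triple; a minimal sketch of such an index (the structure is our own illustration):

```python
# Manager resolves the missing instance identifier from the port triple
# (m, k, l) using an index over the sessions it knows about.
sessions = {
    # (source_plug_port m, dest_socket_port k, dest_socket_new_port l)
    (50001, 7001, 50002): {"source_instance_id": 1, "dest_instance_id": 2},
}

def instance_j_from_ports(m: int, k: int, l: int) -> int:
    return sessions[(m, k, l)]["dest_instance_id"]

assert instance_j_from_ports(50001, 7001, 50002) == 2
```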
Session closing by instance j of B is similar. The message from instance j of B to its agent is as follows.
```
type: dest_service_session_close_info
message_id: o
sub_type: dest_service_to_agent
source_service_instance_network_address: NA_i
source_plug_name: P
source_plug_port: m
dest_service_instance_network_address: NA_j
dest_service_instance_id: j
dest_socket_name: S
dest_socket_port: k
dest_socket_new_port: l
```
The instance j of B sends all the parameters of the session known to it. The agent forwards the info to Manager.
```
type: dest_service_session_close_info
message_id: o
sub_type: agent_to_Manager
source_service_instance_network_address: NA_i
source_plug_name: P
source_plug_port: m
dest_service_name: B
dest_service_instance_network_address: NA_j
dest_service_instance_id: j
dest_socket_name: S
dest_socket_port: k
dest_socket_new_port: l
```
The value of the parameter message_id: o is the same for both messages above and is determined by instance j of B. Manager can determine the identifier i of the instance of A on the basis of the port numbers: m, k and l.
#### V-D2 Session closing on the request of Manager
The initiation of a communication session for abstract connection \((A,(P,S),B)\) between instance i of A and instance j of B is done by instance i according to its business logic. Upon the request of instance i, the configuration for such a session is sent to instance i by Manager via the agent of instance i.
By default, the socket S of instance j of B accepts the session establishment. This acceptance is known to Manager.
Manager's request to close this session is only sent to the instance i of service A via the agent of instance i. The request contains all parameters of the session known to the instance i of A, i.e. except dest_service_instance_id: j.
The request from Manager to instance i of A, via its agent, to close a session is as follows. The value o of the parameter message_id is determined by Manager.
```
type: source_service_session_close_request
message_id: o
sub_type: Manager_to_agent
source_service_name: A
source_service_instance_network_address: NA_i
source_service_instance_id: i
source_plug_name: P
source_plug_port: m
dest_service_name: B
dest_service_instance_network_address: NA_j
dest_socket_name: S
dest_socket_port: k
dest_socket_new_port: l
```
The agent forwards the request to instance i of A:
```
type: source_service_session_close_request
message_id: o
sub_type: agent_to_source_service
source_service_name: A
source_service_instance_network_address: NA_i
source_service_instance_id: i
source_plug_name: P
source_plug_port: m
dest_service_name: B
dest_service_instance_network_address: NA_j
dest_socket_name: S
dest_socket_port: k
dest_socket_new_port: l
```
Action of instance i of A: closing P.
Response of instance i of A to its agent:
```
type: source_service_session_close_response
message_id: o
sub_type: source_service_to_agent
status: [status code]
```
The agent forwards the response to Manager:
```
type: source_service_session_close_response
message_id: o
sub_type: agent_to_Manager
status: [status code]
```
After the successful session closing, the instance i may need a new communication session (for the same connection) to complete the task interrupted by the enforced closing. In order to do so, the instance i can send a request to Manager for a configuration needed to establish such a session. The message format of this request is given in Section V-C.
In the case of failure of instance i of service A, or of its agent or node, a similar request must be sent to instance j of service B to close the session. This requires only minor modifications to the message sequence above.
### _Shutdown of service instances_
A graceful shutdown of a running service instance is performed by the instance itself, on a request of Manager forwarded by the agent, after all of its communication sessions have been closed on the request of Manager via the agent. The appropriate requests and responses are as follows. From Manager, via the agent, to the service instance:
type: graceful_shutdown_request
message_id: o
sub_type: agent_to_service_instance
service_name: A
service_instance_id: i
Instance i of service A invokes an internal method to shut itself down, and sends the following response to the agent.
type: graceful_shutdown_response
message_id: o
sub_type: service_instance_to_agent
status: [status code]
The agent forwards the response to Manager:
type: graceful_shutdown_response
message_id: o
sub_type: agent_to_Manager
status: [status code]
A hard shutdown of instance i of service A is done by the agent on the request of Manager.
type: hard_shutdown_request
message_id: n
sub_type: Manager_to_agent
service_name: A
service_instance_id: i
Action of the agent: kill the process of service instance i.
The response is as follows.
type: hard_shutdown_response
message_id: n
sub_type: agent_to_Manager
status: [status code]
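The agent's hard-shutdown action is an ordinary process kill; a sketch, assuming the agent keeps a table of the processes it has spawned (the table and the status values are our own illustration):

```python
# Sketch of the agent's action for a hard shutdown: kill the OS process of
# service instance i. A sleeping child stands in for a running instance.
import subprocess, sys

running = {("A", 1): subprocess.Popen([sys.executable, "-c",
                                       "import time; time.sleep(60)"])}

def hard_shutdown(service_name: str, instance_id: int) -> str:
    proc = running.pop((service_name, instance_id), None)
    if proc is None:
        return "404"   # no such instance
    proc.kill()        # terminate without letting it clean up
    proc.wait()
    return "200"

assert hard_shutdown("A", 1) == "200"
```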
### _Simple monitoring of service instances by agent_
Agent's request for observable metrics from a service instance is as follows.
type: health_control_request
message_id: o
sub_type: agent_to_service_instance
service_name: A
service_instance_id: i
Service instance response:
type: health_control_response
message_id: o
sub_type: service_instance_to_agent
service_name: A
service_instance_id: i
status: [status code]
The status codes are used to express the metrics. The agent forwards the response to Manager only in the case of abnormal behavior (marked with a status code) of instance i.
type: health_control_response
message_id: o
sub_type: agent_to_Manager
service_name: A
service_instance_id: i
status: [status code]
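The forwarding rule can be expressed as a small polling loop; in the sketch below, poll_instance and forward_to_manager stand in for the exchanges above, and "abnormal" is taken to mean any non-2xx status (an assumption).

```python
# Sketch of the agent's polling loop: query each local instance and forward
# the response to Manager only when the status code signals abnormal
# behavior.
def is_abnormal(status: str) -> bool:
    return not status.startswith("2")  # anything but 2xx counts as abnormal

def monitor(instances, poll_instance, forward_to_manager):
    for service_name, instance_id in instances:
        response = poll_instance(service_name, instance_id)
        if is_abnormal(response["status"]):
            forward_to_manager(response)

# Toy run: one healthy and one failing instance; only the latter is forwarded.
forwarded = []
monitor([("A", 1), ("B", 2)],
        poll_instance=lambda s, i: {"service_name": s, "service_instance_id": i,
                                    "status": "200" if s == "A" else "500"},
        forward_to_manager=forwarded.append)
assert len(forwarded) == 1 and forwarded[0]["service_name"] == "B"
```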
### _Failures_
Since the execution, scaling and reconfiguration of a CNApp can be done by SSMMP, it seems reasonable to include SSMMP as an integral part of the CNApp. Hence, failures of the CNApp also include failures of SSMMP. It seems that SSMMP can be used to implement management of a CNApp in general and recovery from failures in particular.
The graph of the CNApp and the states of its running instances are stored by Manager in its KB and DB. Failures of agents and service instances can be handled as long as Manager is running properly. Replication of cluster nodes, agents and their service repositories is a sufficient means for recovery from such failures.
The central Manager is the weakest point here; its failure results in an irreversible failure of the running CNApp. However, if the Manager's current state is kept securely at all times, the Manager process can be recovered from that state. An additional supervising manager is needed for this.
Status codes can be used to handle failures. This is left to SSMMP implementations.
## VI Summary
SSMMP is simple, as the specification above shows. The concept of an abstract connection between services (in the abstract graph of CNApp) and its implementation as communication sessions is crucial. The abstract definition of a service of CNApp is also important here, as is the separation of these abstract notions from deployment.
The novelty of SSMMP consists in dynamically establishing and closing communication sessions at runtime based on configurations assigned to sockets and plugs by the Manager.
The complete list of the parameters of a communication session for abstract connection \((A,(P,S),B)\), between instance i of A and instance j of B, is worth mentioning again here:
* _Name A of service, and identifier i of the instance of A._
* _IP address of the node where the instance i of A is running._
* _Name P of the plug of A, and port number of the plug P._
* _Name B of service, and identifier j of the instance of B._
* _IP address of the node where the instance j of B is running._
* _Name S of the socket of B, and its port number for listening._
* _New port number of socket S dedicated to the session with P._
Although SSMMP was designed to be independent of the transport and network protocol stack, TCP/IP is the default stack for communication sessions. That is, network addresses are IP addresses, whereas TCP serves for establishing and closing connections. It would be interesting to implement communication sessions on Named Data Networking.
Requirements for developing services of a CNApp participating in SSMMP are as follows. Each instance of a service of the CNApp is obliged to close its communication session at the request of Manager. This can interfere with the business logic of the instance. For this reason, the current state of the communication session (until it is closed) must be stored in a BaaS service. To continue a task interrupted by the closing, the instance (at the client side of the connection) can establish a new session for the same abstract connection and possibly complete the task. Retrieval of the current state from the BaaS service may also be needed. This is the most complex requirement to be implemented in the codebase of each service participating in SSMMP. It can also be seen as a standard recovery mechanism (independent of SSMMP) for handling failures of communication sessions, e.g. those resulting from broken network connections. It seems reasonable to implement these recovery mechanisms in each CNApp service, regardless of SSMMP. The rest of the implementation requirements are relatively simple and can be completely separated from the business logic of the services.
A number of prototype SSMMP implementations for e-commerce and social-media CNApps have already been completed and will soon be available on GitHub.
|
2307.00332 | Homomorphism of independent random variable convolution and matrix
multiplication | A map is given showing that convolutions of independent random variables over
a finite group and matrix multiplications of doubly stochastic matrices are
homomorphic. As an application, a short proof is given to the theorem that the
limiting distributions of stochastic processes with stationary independent
increments over a finite group are always uniform. | Yue Liu | 2023-07-01T13:03:48Z | http://arxiv.org/abs/2307.00332v1 | # Homomorphism of independent random variable convolution and matrix multiplication
###### Abstract
A map is given showing that convolutions of independent random variables over a finite group and matrix multiplications of doubly stochastic matrices are homomorphic. As an application, a short proof is given to the theorem that the limiting distributions of stochastic processes with stationary independent increments over a finite group are always uniform.
_AMS classification:_ 60G50, 15B51
keywords: convolution; doubly stochastic matrix; random walk; finite group
**The Result**
Convolution of random variables is a basic operation in probability theory, especially the convolutions of independent ones. In this paper, attention is restricted to convolutions of independent random variables over finite groups.
Let \(G=\{g_{1},\ldots,g_{n}\}\) be a finite group. By Cayley's Theorem, let
\[\phi:\ G\rightarrow\Sigma,\ g_{k}\mapsto\sigma_{k}\]
be an isomorphism, where \(\sigma_{k}\) is the left translation of \(G\) by the element \(g_{k}\); thus \(\Sigma=\{\sigma_{1},\ldots,\sigma_{n}\}\) can also be regarded as a permutation subgroup of the symmetric group \(S_{n}\) over \([n]\). For each \(k\), there is an order-\(n\) permutation matrix \(P_{\sigma_{k}}=\left(p_{ij}^{(k)}\right)_{n\times n}\) corresponding to \(\sigma_{k}\), where
\[p_{ij}^{(k)}=\left\{\begin{array}{ll}1,&i=\sigma_{k}(j),\\ 0,&\text{otherwise.}\end{array}\right.\]
Let \(X\) be a random variable over \(G\). Write \(p_{k}=\mathbb{P}[X=g_{k}],\ k=1,\ldots,n\). Define the _convolution matrix_ of \(X\), denoted by \(Con(X)\), as
\[Con(X)=\sum_{k=1}^{n}p_{k}P_{\sigma_{k}}.\]
It is easy to see that \(Con(X)\) is a doubly stochastic matrix. The following lemma shows that convolutions of independent random variables over a finite group and matrix multiplications of doubly stochastic matrices are homomorphic.
**Lemma 1**.: _Let \(X,Y\) be two independent random variables over a finite group \(G\). Then_
\[Con(X\cdot Y)=Con(X)Con(Y),\]
_where \(X\cdot Y\) is the convolution of \(X\) and \(Y\)._
Proof.: Write \(p_{i}=\mathbb{P}[X=g_{i}]\) and \(q_{j}=\mathbb{P}[Y=g_{j}]\). Since \(\phi\) is an isomorphism, \(P_{\sigma_{i}}P_{\sigma_{j}}=P_{\sigma_{i}\sigma_{j}}=P_{\sigma_{k}}\), where \(g_{k}=g_{i}g_{j}\). By independence,

\[Con(X)Con(Y)=\sum_{i,j}p_{i}q_{j}P_{\sigma_{i}}P_{\sigma_{j}}=\sum_{k}\Big(\sum_{g_{i}g_{j}=g_{k}}p_{i}q_{j}\Big)P_{\sigma_{k}}=\sum_{k}\mathbb{P}[X\cdot Y=g_{k}]P_{\sigma_{k}}=Con(X\cdot Y).\]
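Lemma 1 can be checked numerically; the following is a minimal sketch for the cyclic group \(\mathbb{Z}_{n}\) (identifying \([n]\) with \(\{0,\ldots,n-1\}\)), where left translation by \(a\) sends \(j\) to \((a+j)\bmod n\). The distributions are randomly generated.

```python
# Numerical check of Lemma 1 for the cyclic group Z_n. P(a) is the
# permutation matrix of left translation by a, i.e. p_{ij} = 1 iff
# i = (a + j) mod n, matching the definition above.
import numpy as np

n = 3

def P(a):
    M = np.zeros((n, n))
    for j in range(n):
        M[(a + j) % n, j] = 1.0
    return M

def Con(p):  # convolution matrix of a distribution p on Z_n
    return sum(p[a] * P(a) for a in range(n))

rng = np.random.default_rng(0)
p = rng.random(n); p /= p.sum()   # law of X
q = rng.random(n); q /= q.sum()   # law of Y

# law of X·Y by direct convolution over Z_n
pq = np.array([sum(p[a] * q[(c - a) % n] for a in range(n)) for c in range(n)])

assert np.allclose(Con(pq), Con(p) @ Con(q))   # Lemma 1
```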
Random walk is a typical stochastic process. Random walks with stationary and independent increments over Euclidean spaces are called _Lévy processes_ (see [1] for example). A more general framework for the study of such processes is proposed in [2], where the random variables are assumed to take values in arbitrary topological semigroups, rather than just Euclidean spaces. It was shown that the asymptotic behaviors of the processes over compact and noncompact spaces are quite different (see Chapters 2 and 3 in [2]).
As an application of Lemma 1, together with the well-known Perron Theorem (Theorem 1), we give a short proof of the following Theorem 2, which characterizes the asymptotic behavior of random walks with stationary and independent increments over finite groups.
**Theorem 1**.: _(Perron Theorem, [3, Theorem 8.4.4]) Let \(A\in M_{n}\) be irreducible and nonnegative, and suppose that \(n\geq 2\). Then_
1. \(\rho(A)>0\)_._
2. \(\rho(A)\) _is an algebraically simple eigenvalue of_ \(A\)_._
3. _there is a unique real vector_ \(x=[x_{i}]\) _such that_ \(Ax=\rho(A)x\) _and_ \(x_{1}+\cdots+x_{n}=1\)_; this vector is positive._
4. _there is a unique real vector_ \(y=[y_{i}]\) _such that_ \(y^{T}A=\rho(A)y^{T}\) _and_ \(x_{1}y_{1}+\cdots+x_{n}y_{n}=1\)_; this vector is positive._
Let \(X\) be a random variable. \(\mathcal{L}(X)\) is used to denote the distribution law of \(X\).
**Theorem 2**.: _Let \(G=\{g_{1},\ldots,g_{n}\}\) be a finite group, and let \((X_{m})_{m\geq 1}\) be a \(G\)-valued stochastic process such that \(X_{1}=\xi_{1}\) and \(X_{m+1}=X_{m}\xi_{m+1},\ m\geq 1,\) where \((\xi_{m})_{m\geq 1}\) are i.i.d. random variables whose support is \(G\). Then_
\[\lim_{m\to\infty}\mathcal{L}(X_{m})=\mathcal{L}\]
_exists, and \(\mathcal{L}\) is convolution invariant, i.e., \(\mathcal{L}\) is the uniform distribution over \(G\)._
Proof.: Let \(P_{\sigma_{1}},\ldots,P_{\sigma_{n}}\) be the permutation matrices as in the definition of convolution matrices. Write \(P_{\sigma_{k}}=(p_{ij}^{(k)})\). By the definitions, \(p_{ij}^{(k)}=1\) means \(g_{i}=g_{k}\cdot g_{j}\), i.e., \(g_{k}=g_{i}\cdot g_{j}^{-1}\). Thus, for every pair of \(i,j\in[n]\), there exists exactly one \(k\in[n]\) such that \(p_{ij}^{(k)}=1\). Then \(P_{\sigma_{1}},\ldots,P_{\sigma_{n}}\) are linearly independent, and \(\sum_{k=1}^{n}P_{\sigma_{k}}\) is the matrix \(J\) whose entries are all \(1\).
Since \((\xi_{m})_{m\geq 1}\) are i.i.d. random variables, their convolution matrices are the same. Write \(A=Con(\xi_{m})\). Since the support of \(\xi_{m}\) is \(G\), every \(p_{k}=\mathbb{P}[\xi_{m}=g_{k}]\) is positive for \(k=1,\ldots,n\), which means \(A\) is a positive matrix.
By Lemma 1, \(Con(X_{m})=A^{m}\). Since \(P_{\sigma_{1}},\ldots,P_{\sigma_{n}}\) are linearly independent, the existence of \(\lim_{m\to\infty}\mathcal{L}(X_{m})\) is equivalent to the existence of \(\lim_{m\to\infty}A^{m}\), and the limiting distribution is uniquely determined by \(\lim_{m\to\infty}A^{m}\).
Since \(A\) is a positive doubly stochastic matrix, by Perron Theorem, the spectral radius \(\rho(A)=1\) is an algebraically simple eigenvalue, and \(x=\frac{1}{n}[1,\ldots,1]^{T}\), \(y^{T}=[1,\ldots,1]\) are the unique right and left eigenvectors corresponding to \(\rho(A)\) as described in Theorem 1, respectively. Then
\[\lim_{m\to\infty}A^{m}=x\cdot y^{T}=\frac{1}{n}J=\frac{1}{n}P_{\sigma_{1}}+ \cdots+\frac{1}{n}P_{\sigma_{n}},\]
yielding that the limiting distribution of \((X_{m})_{m\geq 1}\) is the uniform distribution.
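The convergence in the proof is easy to observe numerically; a sketch for \(\mathbb{Z}_{3}\), where \(A\) is the convolution matrix of the (invented) law \((0.5,0.3,0.2)\):

```python
# Numerical illustration of Theorem 2 on Z_3: powers of a positive,
# doubly stochastic convolution matrix A converge to J/n, so the law of
# X_m tends to the uniform distribution.
import numpy as np

n = 3
A = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])  # Con of the law (0.5, 0.3, 0.2) on Z_3

Am = np.linalg.matrix_power(A, 50)
assert np.allclose(Am, np.full((n, n), 1.0 / n), atol=1e-10)
```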
## Acknowledgments
The author would like to thank Prof. Jian Wang of Fujian Normal University for the helpful discussions and suggestions, especially for providing the background of random walks. |